Comparative Analysis of the Oxygen Supply and Viability of Human Osteoblasts in Three-Dimensional Titanium Scaffolds Produced by Laser-Beam or Electron-Beam Melting Synthetic materials for bone replacement must ensure a sufficient mechanical stability and an adequate cell proliferation within the structures. Hereby, titanium materials are suitable for producing patient-individual porous bone scaffolds by using generative techniques. In this in vitro study, the viability of human osteoblasts was investigated in porous 3D Ti6Al4V scaffolds, which were produced by electron-beam (EBM) or laser-beam melting (LBM). For each examination, two cylindrical scaffolds (30 mm × 10 mm in size, 700 µm × 700 µm macropores) were placed on each other and seeded with cells. The oxygen consumption and the acidification in the center of the structures were investigated by means of microsensors. Additionally, the synthesis of pro-collagen type 1 was analyzed. On the LBM titanium scaffolds, vital bone cells were detected in the center and in the periphery after 8 days of cultivation. In the EBM titanium constructs, however, vital cells were only visible in the center. During the cultivation period, the cells increasingly produced procollagen type 1 in both scaffolds. In comparison to the periphery, the oxygen content in the center of the scaffolds slightly decreased. Furthermore, a slight acidification of the medium was detectable. Compared to LBM, the EBM titanium scaffolds showed a less favorable behavior with regard to cell seeding. Introduction Segmental bone defects can be a result of fractures, traumas, tumors or endoprosthetic loosening. Currently, autologous and allogenic bone grafts are used for the treatment of these defects [1,2]. However, such grafts can be used to a very limited extent only, which is due to their limited availability, risks associated with extraction from the donor site, infections and the risk of immunological reactions to allogenic grafts [3,4]. Therefore, it is necessary to find alternatives in the form of synthetic, porous three-dimensional (3D) bone substitute materials that can be inserted into bone defects. For this purpose, the focus of research is on calcium phosphates as well as metals like titanium and its alloys. These materials can be used in clinical applications, are available in sufficient quantities and can be produced patient-individually by means of various manufacturing techniques (e.g., 3D printing, additive production methods). A lot of mechanical and biological conditions must be taken into account to ensure that the bone can grow into the synthetic materials. The different mechanical properties of bone substitute materials decide whether the materials are used in load-bearing or non-load-bearing areas, because these areas are subjected to different levels of mechanical stress. Mismatching between bone substitute material and the surrounding bone tissue can lead to a change in the mechanical load distribution within the tissue, as a result of which tissue growth into the material will be inhibited or the implant will loosen [5,6]. For this reason, the mechanical properties of the materials have been adapted to the biomechanical properties of the bone [7]. The mechanical compressive strength of scaffolds made of titanium alloys are comparable to that of human cortical bone [8]. Implants made of titanium are widely used in both orthopedic surgery and in the dental sector [9]. 
Porous structures are especially suitable for the management of large bone defects, since they show a high degree of mechanical stability [10]. Furthermore, good bone cell integration was already demonstrated for such structures in vitro [9,11,12]. The porosity of implants can either be provided by a foam-like structure with irregular pore size or by a lattice structure with regular pores. The latter are produced by additive manufacturing methods [8]. Porosity plays an important part in reducing stiffness mismatching between implants and the surrounding bone tissue. Furthermore, porosity, pore size and interconnected pores play an important biological role in ensuring bone ingrowth into the structures and hence in creating a lasting and stable bonding of the implant within the bone stock [13]. To ensure sufficient cell distribution, the structures of the 3D bone substitute materials have to offer an artificial surface on which the cells can migrate, proliferate and differentiate [14]. It should be kept in mind, however, that with increasing implant size gradients in cellular seeding and differentiation may occur between the internal and external structures [15][16][17]. This is mainly due to the fact that the cells in the interior are insufficiently supplied with nutrients and oxygen [17]. In living tissues, nutrients, oxygen and waste products are transported by the blood flow. Due to the proximity of the cells to a blood vessel, all cells are sufficiently supplied with nutrients [15]. However, the implantation of a bone substitute material leads to a temporary interruption of the blood flow, so that oxygen and nutrients have to be transported over several millimeters or centimeters by diffusion processes [15]. Since an adequate oxygen and nutrient supply to the cells is limited to a maximum of 200 µm in vitro [17], a higher porosity of the bone substitute materials should accelerate vascularization within the structures in order to ensure oxygen and nutrient supply as well as the removal of metabolic end products [18]. It takes several days to months for blood vessels to grow into cell-seeded scaffolds, so that an initial insufficient oxygen supply within the structures after implantation can be assumed [15,17]. Additionally, larger distances, in both native tissue and bone substitute materials, can cause imbalances between oxygen supply and oxygen consumption [19]. The objective of this in vitro study was to examine the oxygen supply and viability of human osteoblasts within 3D titanium scaffolds by using an established test setup [20]. For this purpose, scaffolds of the same size and porosity were produced by additive manufacturing processes using electron-beam (EBM) or laser-beam melting (LBM) techniques. Both titanium constructs were thus to be assessed for their biological suitability to draw conclusions about the different manufacturing processes and the design of pore size and pore arrangement with respect to bone cell viability and distribution. Isolation and Cultivation of Human Primary Osteoblasts Human primary osteoblasts were isolated and cultivated under standard conditions [20]. The cells were isolated under sterile conditions from femoral head spongiosa of patients who underwent implantation of total hip endoprosthesis. The femoral heads were made available after obtaining written consent of the patients and prior approval of the local ethics committee (registration number: A 2010-10). 
Bone cells of a total of 14 living donors (7 female, 62 ± 11 years; 7 male, 66 ± 9 years) were used for the in vitro tests. Human osteoblasts in the third cell passage were seeded on the scaffolds (see Chapter 2.3). For this purpose, supernatant liquid of the culture medium was removed, the cells were rinsed with PBS (PAA, Coelbe, Germany) and subsequently detached from the bottom of the cell culture flask by means of trypsin/EDTA (Gibco ® Invitrogen, Darmstadt, Germany). After a centrifugation step, the cell pellet was resuspended in a defined medium volume and a cell count was performed using a Thoma counting chamber. Titanium Scaffolds The titanium scaffolds used in the tests were produced by additive manufacturing methods based on a CAD model and selective laser-beam melting (SLM Solutions GmbH, Lübeck, Germany). In addition, this study included the use of titanium scaffolds that were produced by selective electron-beam melting (Institute for Materials Science, University of Erlangen-Nuremberg, Erlangen, Germany). Titanium powder (Ti6Al4V) was used for both manufacturing processes [10]. As described by Koike et al. [21], the additive manufacturing of the constructs comprised three steps: distribution of the titanium powder, heating and melting by laser or electron beam. The steps were repeated until the constructs were completely in accordance with the specific CAD design [21]. The scaffolds were 5 mm in height and 30 mm in diameter and had a pore size of 700 × 700 µm in all three spatial directions (Figure 1c,d,f,g). Test Setup and Seeding of the Titanium Scaffolds The test setup consisted of two scaffold discs which were placed on each other to form a 10-mm-high overall construct (Figure 1a,b,e). The lowest plane (plane 4) was in direct contact with the bottom of the cell culture plate. The upper disc had a central hole for inserting the microsensors. Figure 1. Presentation of the test setup with titanium scaffolds (a); (b-d) laser-beam melted titanium; (e-g) electron-beam melted titanium. Two scaffold discs were placed on each other (b, e), and each of these double constructs was then inserted into one well of a 6-well cell culture plate and seeded with cells. Pore arrangement in the titanium double modules (c, f). Scanning electron microscopy (SEM) of the two titanium surfaces (d, g). Two scaffold discs were positioned over each other in one well of a 6-well culture plate, so that plane 1 could be seeded with cells. Prior to seeding, the titanium scaffolds were covered with complete medium to eliminate air bubbles. Subsequently, a total of 4 × 10 5 cells were pipetted on the surface of plane 1 point by point in 10 µl drops. After an adherence period of 45 minutes, the cells were overlaid with complete medium containing osteogenic additives and then incubated under standard conditions for eight days. The cell culture medium was changed three times a week. Viability Testing and Quantification of Procollagen Type 1 To analyze the viability of cells on the two bone substitute materials, a metabolic activity test (WST-1, Roche, Penzberg, Germany) and live/dead staining (LIVE/DEAD ® viability/cytotoxicity kit; Invitrogen) were performed. The WST-1 test is used to determine the mitochondrial dehydrogenase activity of cells. The cells turn over the tetrazolium salt WST-1 into formazan, which resulted in a color change. This change can subsequently be quantified in a microplate reader (Dynex Technologies, Denkendorf, Germany) at 450 nm (reference: 630 nm). 
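The WST-1 readout described above reduces to a background-corrected absorbance per scaffold. Purely as an illustration (the column handling, blank subtraction and duplicate averaging shown here are assumptions, not details taken from the paper), the processing of the raw plate-reader values could be sketched as follows:

```python
import numpy as np

def wst1_activity(a450, a630, blank450, blank630):
    """Reduce duplicate WST-1 readings to one activity value.

    a450, a630       : duplicate sample absorbances at 450 nm and at the 630 nm reference
    blank450/blank630: cell-free blank included in each test series
    Returns the mean blank- and reference-corrected absorbance.
    """
    a450 = np.asarray(a450, dtype=float)
    a630 = np.asarray(a630, dtype=float)
    corrected = (a450 - a630) - (blank450 - blank630)  # subtract reference wavelength and blank
    return corrected.mean()

# Hypothetical duplicate measurement of one scaffold
print(wst1_activity([0.82, 0.85], [0.11, 0.12], 0.09, 0.08))
```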
To evaluate the metabolic activity of cells on the scaffolds, the WST-1 test was performed both after 24 hours (day 1) and at the end of the test (day 8). The overall construct was overlaid with a defined volume of the WST-1/medium reagent (ratio 1:10) and incubated at 37 °C and 5% CO 2 for 60 min. A blank value was used in each test series. Subsequently, 200 µL aliquots were transferred into a 96-well cell culture plate for double measurement, and the absorption in the plate reader was determined. The live/dead staining reagent contains the two fluorescence dyes calcein AM and ethidium homodimer 1. Calcein AM is a membrane-permeable acetoxymethyl ester of calcein, which is hydrolysed intracellularly to calcein by endogenous esterases. As calcein is membrane-impermeable, it will remain within the intact cells, which are therefore fluorescent green (ex/em 495/515 nm). Ethidium homodimer is a nuclear stain which emits red fluorescence after DNA binding (ex/em 495/635 nm). It enters cells through damaged membranes and can therefore be used for identifying dead cells. Both fluorescent dyes were dissolved in PBS according to the manufacturer's instructions. Then, the scaffolds were incubated at room temperature in a darkened environment for 30 minutes. Unless stated otherwise, the cells were examined under a microscope with an objective lens with four-fold magnification. The respective images of live and dead cells were taken separately but in the same position, using a fluorescence microscope (Nikon ECLIPSE TS100, Nikon GmbH, Duesseldorf, Germany). Subsequently these images were superimposed by means of freely available image editing software (GIMP 2.6.6, GIMP-Team), so that a composite image of vital and dead cells was developed. The synthesis of type 1 pro-collagen by human osteoblasts was determined using an enzyme-linked immunosorbent assay (ELISA) (C1CP; Quidel, Marburg, Germany). For this purpose, supernatant medium was collected during each medium change and then stored at −20 °C. The test was carried out according to the manufacturer's instructions. To determine the protein quantity, standard curves with defined protein concentrations and a defined optical density were generated. The absorption of the samples was determined at 405 nm using a microplate reader (Dynex Technologies, Denkendorf, Germany). Monitoring of Oxygen and pH Value To measure the oxygen concentration and the pH value within the different titanium scaffolds, special microsensors were used (oxygen: Oxygen Micro-Optode, type PSt1; pH value: pH Microsensor; both manufactured by: Presens, Regensburg, Germany). These sensors consist of an optical fibre with a tip that is less than 150 µm in diameter. To protect the fragile sensors, they were sheathed in a hollow needle that was 0.4 mm in diameter. These hollow needles were placed in other hollow needles (1.02 mm in diameter) to increase their stability during measurements. Therefore, we could distinguish between two different regions: (a) the center within the scaffold (between plane 2 and 3); and (b) the periphery above plane 1 (Figure 1a). Before each test series, the oxygen sensors were calibrated in oxygen-free water and in a water-saturated environment according to the manufacturer's instructions. The pH sensors were also calibrated before each test series, using pH buffer solutions in ascending order from pH 4 to pH 7 (all manufactured by: Roth, Karlsruhe, Germany) according to the manufacturer's instructions. 
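The oxygen micro-optodes are calibrated against two known points, oxygen-free water (0% air saturation) and a water-saturated environment (100%). A minimal sketch of such a two-point calibration is given below; it assumes a linear sensor response between the two calibration readings, which is a simplification, since the manufacturer's software typically applies a Stern-Volmer-type model to the raw optode signal.

```python
def two_point_calibration(raw_zero, raw_full):
    """Return a function mapping a raw sensor reading to % air saturation,
    assuming a linear response between the 0% and 100% calibration readings."""
    span = raw_full - raw_zero
    def to_saturation(raw):
        return 100.0 * (raw - raw_zero) / span
    return to_saturation

# Hypothetical calibration readings recorded before one test series
cal = two_point_calibration(raw_zero=3.1, raw_full=57.4)
print(round(cal(41.0), 1))  # a reading taken in the scaffold center, in % air saturation
```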
Over a period of eight days, the oxygen content and acidification were measured daily for 30 min both in the center and on the periphery. Statistical Evaluation Human bone cells from 14 separate donors were used for the respective analyses. The data obtained were presented as mean values ± standard deviations. The statistical significance levels of the differences between mean values were calculated using a one-way ANOVA (post-hoc LSD). All statistical calculations were conducted using SPSS 15.0 for Windows (SPSS Inc., Chicago, IL, USA). The level of significance was p < 0.05. Cell Viability and Collagen Synthesis The metabolic activity of cells was determined by a WST-1 assay on both titanium scaffolds after 24 hours and after eight days of cultivation. Compared to the metabolic activity after 24 hours, a decrease in metabolic activity by 37% was measured after eight days in the EBM titanium constructs, whereas metabolic activity in the LBM titanium constructs showed an insignificant increase by 92%. Hereby, at the end of cultivation, both scaffolds showed the same level of cell metabolic activity. In addition to the cell viability tests, live/dead staining was performed at both time points. After 24 hours, a large number of vital cells were detected on planes 1 and 3 on both titanium scaffolds. Nevertheless, the cell distribution on both planes was inhomogeneous because of the pointwise cell seeding procedure. After eight days of incubation, plane 1 and 3 of the LBM titanium bodies were densely seeded with vital cells. Isolated dead cells were only identified on the first plane. In contrast, the EBM titanium showed only few spots with vital cells on plane 1 and many live cells on plane 3 after the end of the test. Additionally, on plane 3 a good cell distribution could be shown. However, a lot of dead cells were detected on both planes. After the end of testing period, on both titanium scaffolds, the second plane also showed initial seeding (Figure 2). For measuring the procollagen type 1 content during the cultivation period, the medium supernatant was collected on day 2, 4 and 7. For this purpose, the supernatants were removed with a standard syringe through the hollow needles of the center and the periphery and afterwards analyzed by ELISA. Both scaffolds showed an increase in procollagen type 1 levels in the course of cultivation, with higher levels being observed in the LBM titanium constructs (Table 1). Oxygen Supply and Acidification in Titanium Scaffolds In addition to the viability measurements, oxygen concentration and acidification were measured. The oxygen measurements in the titanium constructs showed slight differences in oxygen concentration from day 0 to day 4. At day 7, a significant difference (p = 0.002) between periphery and center was determined in the EBM titanium scaffold (Figure 3a). In the LBM titanium, a significant decrease (p = 0.013) was only detected on the eighth day (Figure 3b). Oxygen concentration in the center of the scaffolds decreased from 17.5% to 11.76% (EBM titanium) and from 16.56% to 15.26% (LBM titanium) (Figure 3). In the course of cultivation, both titanium constructs also showed a slight acidification both on the periphery and in the center. Discussion The use of synthetic materials is limited by the insufficient nutrient and oxygen supply to the cells seeding on such implants [17]. In particular with large bone substitute materials, oxygen is the limiting factor due to its low solubility and diffusion capacity in aqueous solution [15]. 
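The statistical evaluation described earlier in this section (one-way ANOVA followed by an LSD post-hoc test, significance at p < 0.05) was run in SPSS on the donor data, which are not reproduced here. A rough Python equivalent on fabricated example values is sketched below; note that the classical LSD test uses the pooled ANOVA error term, whereas plain unadjusted pairwise t-tests are used here for brevity.

```python
from itertools import combinations
from scipy import stats

# Fabricated metabolic-activity readings (arbitrary absorbance units) for three groups
groups = {
    "EBM_day1": [0.61, 0.58, 0.66, 0.63],
    "EBM_day8": [0.40, 0.37, 0.44, 0.41],
    "LBM_day8": [0.72, 0.69, 0.75, 0.70],
}

f, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")

if p < 0.05:  # LSD-style follow-up: unadjusted pairwise comparisons after a significant ANOVA
    for (n1, x1), (n2, x2) in combinations(groups.items(), 2):
        t, p_pair = stats.ttest_ind(x1, x2)
        print(f"{n1} vs {n2}: p = {p_pair:.4f}")
```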
Since such scaffolds are not vascularized from the outset, oxygen supply gradients between the inside and the outside develop after a short time, which can have far-reaching consequences for cell survival in the center. Therefore, it is necessary to analyze the oxygen partial pressures within bone substitute materials in order to draw conclusions for optimizing the pore design. In this study, an established test setup was used, which made it possible to examine different bone substitute materials in a static cell culture with regard to their seeding with human bone cells and the oxygen supply to these cells [20]. The bone substitute materials used had the same dimensions and were made of a titanium alloy by applying EBM and LBM techniques. Influence of the Bone Substitute Materials on Cell Survival and Synthesis Capacity Depending on the production methods, the titanium scaffolds showed different levels of cell survival. Initially, good cell seeding was demonstrated on both surfaces. In the course of cultivation, however, the EBM titanium scaffolds showed worse cell-seeding characteristics than LBM titanium. These results were confirmed by the metabolic activity tests. Although we determined an initially higher metabolic activity of cells in the EBM scaffolds compared to the LBM ones, a clear decrease was observed after eight days. In contrast, the metabolic activity in the LBM constructs increased. In accordance with Hollander et al., better biological compatibility was thus demonstrated for LBM titanium [12]. In comparison to previously published data on tricalcium phosphate (TCP) scaffolds, which had the same dimensions, pore arrangement and pore size [20], the results of the present in vitro study demonstrate that, regardless of the manufacturing method, the biocompatibility of the titanium used was clearly lower. The titanium alloy Ti6Al4V, which was used as base material for the titanium constructs, could be the main reason for these results. In the literature, this material is already widely discussed with regard to its toxic properties. In this context, it is assumed that vanadium can induce the release of reactive oxygen species (ROS), which adversely affect cell survival [22,23]. In the production process, the entire titanium surface is covered with a natural oxide coating to reduce the corrosion potential of the metal [24]. However, this coating can be destroyed by chemical substances or abrasion particles [25], which may lead to a release of ROS. Furthermore, cells that have come into contact with the titanium surface can produce larger quantities of ROS, such as H2O2. These ROS, in turn, react with the oxide coating, resulting in the production of additional free radicals. The cells are therefore exposed to permanent oxidative stress, which can have a negative effect on the viability and thus the survival of cells [25]. Due to the different biocompatibility of the titanium surfaces, the additive manufacturing methods can have a major influence on the survival of cells. At the macroscopic level, selective LBM produced mainly smooth surfaces and edges (Figure 1d). In contrast, the surfaces of the EBM scaffolds were significantly more uneven and showed many small particle inclusions, which could also be detected by scanning electron microscopy (Figure 1g). These particle inclusions resulted from titanium powder residues that had not been melted down into the smooth titanium surface during the melting process. 
These particles could destroy the oxide coating of the titanium surface and accelerate corrosion processes. Moreover, we have demonstrated that abrasion particles from TiO2 can have a negative impact on cell viability [26]. The EBM titanium scaffolds thus seem to have less favorable material properties, which may adversely affect biocompatibility in the static cell culture. To actively eliminate possible adverse factors (e.g., ROS, particles) in vitro, the titanium scaffolds should be integrated into a dynamic cell culture system in further studies. Another important aspect in the evaluation of bone substitute materials is their influence on the differentiation potential of cells. On the one hand, the materials should constitute an artificial extracellular matrix and, on the other hand, they should support endogenous matrix synthesis by the cells cultivated on them. In the course of cultivation, an increase in the synthesis of collagen type 1 was demonstrated in both materials examined. Although we attempted to compare the concentrations between the center and the periphery, it should be noted that this comparison is methodically limited. Immunohistological staining could therefore be performed in further studies to corroborate these results. Nevertheless, an increase in osteoblast differentiation was detected on both titanium scaffolds. Oxygen Supply and Acidification within the Bone Substitute Materials Currently, knowledge of optimal oxygen conditions in human bone tissue is insufficient, but it is known that the mean tissue oxygen levels are between 1% and 9% [27]. The data obtained in this study confirm that sufficient oxygen supply to human bone cells within the titanium constructs could be ensured over the test period. The oxygen content within the titanium scaffolds showed a smaller decrease between day 0 and day 8 (EBM titanium: −5.74% and LBM titanium: −1.3%, in comparison with TCP: −7.55% [20]). In the EBM titanium, a significant difference between periphery and center was only observed after seven days of cultivation. In the LBM titanium, a significant difference was not observed until the end of the test. This divergent oxygen supply within the scaffolds is mainly due to their different colonization with bone cells. In the course of the test period, it was shown that the sintered TCP scaffolds [20] allowed significantly better seeding of human bone cells than the titanium scaffolds. Accordingly, oxygen consumption in those scaffolds increased with the growing number of cells during the cultivation period. However, it can be concluded from the results that the evenly distributed macropores allow an adequate oxygen supply by diffusion in a static cell culture. Tissue hypoxia leads to local acidification, which is caused by the anaerobic cell metabolism and the lactic acid production associated with it. The exposure to hypoxia as well as reduced pH values can stimulate osteoclasts and their resorptive properties [28]. Microsensor-based pH monitoring revealed a slight acidification in the bone substitute materials investigated. Slight pH deviations can already inhibit the mineralization of organic matrix by osteogenic cells [28]. However, the tests performed within this study showed an increase in the synthesis of extracellular matrix in both bone substitute scaffolds. The acidification within the scaffolds thus had no apparent adverse effect on collagen synthesis. 
In addition, the diffusion of nutrients and oxygen can be reduced by increasing collagen deposition in the pores of the scaffolds [15], which favors hypoxia within the structures. Since no hypoxia occurred in the center of the titanium constructs, it can be assumed that cell seeding and the deposition of extracellular matrix did not reduce the size and regular arrangement of macropores to such an extent that diffusion processes in the static cell culture were measurably affected. Conclusions In conclusion, the results of the present study demonstrate that monitoring of the oxygen concentration and cell viability in large-area bone substitute materials is an essential requirement for in vitro assessment of materials, pore design and pore size. However, further studies are required to verify the biomechanical and biological suitability of the bone substitute materials in vivo in an adequate animal model.
Occurrence characteristics of two sibling species, Pseudodiaptomus inopinus and Pseudodiaptomus poplesia (Copepoda, Calanoida, Pseudodiaptomidae), in the Mankyung River estuary, South Korea Abundances of two closely related Pseudodiaptomus species, Pseudodiaptomus inopinus and Pseudodiaptomus poplesia, and salinity, temperature, and chlorophyll (Chl) a levels were measured monthly at a station in the Mankyung River estuary, South Korea, through a spring tide flood-ebb series. Both species occurred mostly under mesohaline to polyhaline conditions throughout the year. P. poplesia was abundant under winter polyhaline conditions and reached its peak abundance under mesohaline conditions in spring, when the Chl a concentration was highest. P. inopinus had lower densities than P. poplesia at all salinities in spring and had peak densities under mesohaline and polyhaline conditions in November, when a second Chl a peak concentration occurred. Egg-bearing females of both P. poplesia and P. inopinus were present in spring and fall, but the ratio of gravid females of the former was higher under mesohaline and polyhaline conditions in April and May, while that of the latter was higher under polyhaline conditions in March. These facts indicate that abundances of P. poplesia and P. inopinus may be controlled by Chl a concentrations and salinity conditions. Background Estuaries are dynamic and variable environments, and the spatiotemporal distribution of estuarine species is affected by environmental factors such as temperature and salinity. These fluctuations in environmental factors may also result in a low level of species diversity and promote the coexistence of congeneric species (Jeffries 1962;Wooldridge and Melville-Smith 1979;Sullivan and McManus 1986). Most studies on the coexistence of zooplankton species in estuarine and coastal environments addressed the mechanisms responsible for interspecific competition, spatial segregation, and reproductive isolation (Greenwood 1981;Ueda 1987;Laprise and Dodson 1993). Strategies by which congeneric species may avoid or reduce direct competition include maintaining different temporal and spatial distributions and partitioning available food resources by selective feeding (Greenwood 1981;Ueda 1987;Laprise and Dodson 1993). The mesozooplankton of many estuaries are dominated by calanoid copepods, particularly the demersal calanoid family Pseudodiaptomidae, which accounts for 70% of the abundance of calanoid copepods in the Mankung River estuary, South Korea. In Korean estuaries, five species of the genus Pseudodiaptomus have been recorded, and two species, Pseudodiaptomus inopinus and Pseudodiaptomus poplesia, are predominant (Suh et al. 1991;Soh et al. 2001). Of these two species, P. inopinus is more common and widespread in brackish and/or freshwaters of northeast Asia in general and Korea in particular (Chen and Zhang 1965;Shen and Song 1979;Chang and Kim 1986;Oka et al. 1991;Uye et al. 2000;Lee et al. 2007;Chang 2009;Sakaguchi et al. 2011). In contrast, P. poplesia has only been documented on the Yellow Sea side of the Korean Peninsula and in estuaries of the South China Sea (Shen and Song 1979;Soh et al. 2001;Tan et al. 2004;Lee et al. 2007;Shang et al. 2007;Chang 2009). In addition to its wide occurrence in Asia, P. inopinus was introduced into the Columbia River estuary on the Pacific coast of North America between 1980 and 1990, presumably by way of ballast water of ships, and has rapidly expanded its range since then (Cordell et al. 
1992(Cordell et al. , 2008Cordell and Morrison 1996;Bollens et al. 2002). In estuaries where it was introduced, P. inopinus tends to seasonally dominate the mesozooplankton and may have resulted in changes in the food webs of those estuaries (Cordell and Morrison 1996; Cordell et al. 2007). Several studies focused on the biology of both native and introduced populations of P. inopinus (Ueda et al. 2004;Cordell et al. 2007), while little or no information of this type exists for P. poplesia. In this study, we examined the seasonal occurrence patterns of P. inopinus and P. poplesia, the dominant copepods in the Mankyung River estuary, in order to document patterns of occurrence in relation to temperature, salinity, food, and specifically chlorophyll (Chl) a concentrations. The goal of this study was to better understand how these two closely related species coexist in the estuary and provide more information regarding their biology. Methods The Mankyung River estuary is located on the central portion of the west coast of Korea ( Figure 1). The estuary is shallow and wellmixed, with semidiurnal tides that occur over a range of about 6 m. Annual rainfall in the area is ca. 1,371 mm, mainly occurring during the summer rainy season. Zooplankton were collected monthly between January and December 2000 at one station in the Mankyung River estuary (Figure 1). Sampling was done during one spring tide flood-ebb cycle at approximately 1-h intervals. Zooplankton samples were obliquely towed from near the bottom to the surface using a weighted conical net (with a mouth diameter of 45 cm and a mesh size of 200 μm). Samples were immediately preserved in a 5% neutralized formalin/seawater solution. In the laboratory, Pseudodiaptomus species were sorted and counted under a dissecting microscope. Counts were converted to individuals per cubic meter of seawater (ind./m 3 ). Water temperature and salinity were measured using a T-S meter (Model 30, YSI, Yellow Springs, OH, USA) from the surface to the bottom at 1-m intervals. To measure the Chl a concentration, 1,000 ml of seawater was collected from the surface layer once during each sampling. Chl a was extracted by grinding the filter paper in a dark room and placing it in 90% acetone, as recommended by SCOR-UNESCO (1980). The extracted sample was centrifuged, and the absorbance of the supernatant was measured at 750, 664, 647, and 630 nm using a spectrophotometer (UNICAM Helios Alpha, Gloucester, UK). Water column conditions for each sample were designated as oligohaline (0 to 5 practical salinity unit (psu)), mesohaline (5 to 18 psu), and polyhaline (>18 psu) on the basis of Ekman's classification system (Day et al. 1989). To evaluate correlations between abiotic factors and the abundances of the two Pseudodiaptomus species, data were log (x+1)-transformed, and a multivariate regression model analysis (Afifi et al. 2004) was conducted using SAS version 9.2 (Cary, NC, USA). For correlations between the abundances of P. poplesia and P. inopinus, Pearson's correlation coefficient was used. Water temperature and salinity The mean temperature ranged from a high of 29.6°C ± 1.6°C (standard deviation) in August 2000 to a low of 2.3°C ± 3.1°C in February 2000 ( Figure 2A). 
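The Chl a determination described in the Methods above converts the extract absorbances measured at 750, 664, 647 and 630 nm into a concentration using a trichromatic equation. The sketch below uses the widely cited Jeffrey and Humphrey coefficients for 90% acetone as an illustration; the exact coefficients of the SCOR-UNESCO procedure, the extract volume and the cuvette path length are assumptions and would need to be taken from the original protocol.

```python
def chl_a_ug_per_l(a750, a664, a647, a630,
                   extract_ml=10.0, filtered_l=1.0, path_cm=1.0):
    """Trichromatic Chl a estimate (Jeffrey & Humphrey-type equation, 90% acetone).

    a750 is subtracted from each band as a turbidity correction.
    extract_ml, filtered_l and path_cm are hypothetical defaults.
    """
    e664 = a664 - a750
    e647 = a647 - a750
    e630 = a630 - a750
    chl_extract = 11.85 * e664 - 1.54 * e647 - 0.08 * e630   # µg per mL of extract per cm path
    return chl_extract * extract_ml / (filtered_l * path_cm)  # µg per litre of seawater

# Hypothetical spring-bloom sample
print(round(chl_a_ug_per_l(0.005, 0.210, 0.080, 0.030), 1))
```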
Water temperature varied seasonally: winter (December, January, and February) temperatures ranged from 2.3°C to 3.8°C; spring (March, April, and May) temperatures ranged from 9.1°C to 22.9°C; summer (June, July, and August) temperatures remained above 26°C; and fall (September, October, and November) temperatures ranged from 10.7°C to 22.4°C. Differences between the surface and bottom temperatures were <2°C, except in June, when they were <4°C. Salinity ranged from 0.8 to 27.7 psu, but there were seasonal differences in this parameter. In particular, salinities were lower during the rainy season. In July, conditions were oligohaline to mesohaline, with salinities remaining below 18 psu, while in August, only oligohaline conditions occurred (<5 psu). During the dry season in October to December, salinities increased again and, for the most part, remained in the mesohaline to polyhaline ranges regardless of the tide or sampling interval. Differences between the surface and bottom layers were also <1 psu ( Figure 2B). Chl a concentrations Chl a concentrations ranged from 6.8 to 342.7 μg/L during the study period and were more than three times greater during the spring phytoplankton bloom than during the other seasons ( Figure 2C). In oligohaline conditions, concentrations ranged from 16.0 to 200.1 μg/L, with the highest concentration in April and the lowest in January. In mesohaline conditions, concentrations ranged from 21.5 to 272.9 μg/L and, as in oligohaline conditions, were highest in April and lowest in January. Under polyhaline conditions, concentrations of Chl a were highest in May (94.0 μg/L) and lowest in November (7.1 μg/L). Overall mean Chl a concentrations were found under mesohaline conditions. Seasonal occurrence patterns of P. inopinus and P. poplesia P. inopinus and P. poplesia occurred throughout the salinity range and during the entire study period, with the exception of the summer rainy season. The density of P. poplesia ranged from 30 to 8,022 ind./m 3 under oligohaline conditions in spring, when the Chl a concentration was >162 μg/L, but decreased to 7 to 895 ind./m 3 in June and July with a decline in Chl a concentrations (<42.3 μg/L) and was lowest (<1 ind./m 3 ) in August ( Figure 3A). After that, the density increased (<62 ind./m 3 ) in October, when the Chl a concentration was >86.8 μg/L, but then declined to <17 ind./m 3 during the winter months. Densities of P. inopinus were somewhat higher than those of P. poplesia under oligohaline conditions, except in spring (Figures 4A and 5A) and October through December, when neither species occurred. Under mesohaline conditions, the density of P. inopinus increased after spring, with a peak density of 704,676 ind./m 3 in November ( Figure 4B). Under mesohaline and polyhaline conditions, P. inopinus densities remained high during the fall months, when Chl a concentrations were lower than in spring ( Figures 4B,C and 5B,C). In contrast, P. poplesia did not occur under mesohaline conditions in November and December or in polyhaline conditions in December ( Figures 3B,C and 5B,C). Results of the multivariate regression analysis of correlations of environmental factors, such as temperature, salinity, and Chl a, with densities of the twoPseudodiaptomus species showed that densities of P. poplesia and P. inopinus were significantly affected by both salinity (p < 0.05) and Chl a concentrations (p < 0.001), but were not significantly affected by temperature (p > 0.05) ( Table 1). In particular, densities of P. 
poplesia were higher than those of P. inopinus at high Chl a concentrations, while the latter species was more abundant under relatively low Chl a concentrations. Sex ratio and gravid females of P. poplesia and P. inopinus Males of P. poplesia were more abundant than females across the entire salinity range throughout the study period, except under oligohaline conditions in April, and the ratio of males to females reached 50% under mesohaline and polyhaline conditions in spring ( Figure 6). Males of P. inopinus were more abundant than females throughout the study period and across the entire range of salinity, but the ratio of males to females exceeded 50% under oligohaline conditions in January, mesohaline conditions in late fall and winter, and polyhaline conditions in early winter (Figure 7). Gravid females of P. inopinus appeared in oligohaline conditions only in March, July, and September and accounted for >50% of all females present in September (Figure 8). Under mesohaline conditions, they occurred in June to October, and under polyhaline conditions, they occurred only in June and October, and >50% of all females were gravid in October. Gravid females of P. poplesia were present in March to October, comprising 3% to 72% of the female population ( Figure 9). Under oligohaline conditions, >50% of the female population was gravid in July and also under mesohaline and polyhaline conditions in April and May. Percentages of gravid females of P. poplesia were very low during fall months, and they did not occur under oligohaline conditions. Discussion In the Mankyung River estuary of South Korea, two sibling species, P. inopinus and P. poplesia, co-occurred throughout the year under all salinity categories except in August, when most estuarine zooplankton are swept downstream because of increased freshwater flows during the rainy season (see Suh et al. 1991). In addition, water temperatures in the Mankyung River estuary were similar during spring and fall, being within the range of 9.0°C to 23.6°C in both seasons, while Chl a concentrations were higher in spring than fall ( Figure 2). However, this study showed that the abundance of P. inopinus was affected by the presence or absence of P. poplesia and/or low water temperatures, with no relation to salinity. However, abundances of the two species significantly differed with salinity and Chl a concentrations. Our finding that the abundances of the two species significantly differed on the basis of Chl a concentrations could be due to their distinct feeding strategies. In the case of Pseudocalanus minutus, non-living particles, mainly dead organisms or simply detritus, are supplementary food sources and serve as a basic food source (Poulet 1976). Likewise, the consumption of particles by the two Pseudodiaptomus species may be associated with changes in both the total concentration and the composition of suspended particulate materials, and nonoverlapping food niches are possible if two copepod species differ in size (Hutchinson 1967;Maly and Maly 1974). Sandercock (1967) suggested that the coexistence of species depends upon the additive effects of two factors or mechanisms. Therefore, the coexistence of P. poplesia and P. inopinus could be controlled by dietary differences and the salinity gradient. P. 
inopinus was introduced (probably via ballast water) to a number of estuaries along the Pacific coast of the USA, where it has become the dominant brackisholigohaline mesozooplankton species and has probably altered estuarine food webs (Cordell and Morrison 1996;Cordell et al. 2007). In northeastern Pacific estuaries, it reaches peak abundances in late summer/early autumn (17.4°C to 20.8°C) over a salinity range of 0 to10 psu (Cordell et al. 2007(Cordell et al. , 2010. Pseudodiaptomus koreanus from the Seomjin River estuary in south-central Korea is also restricted to oligohaline and mesohaline waters and differs from P. inopinus in that it mainly occurs in oligohaline conditions (Park et al. 2005;Soh et al. 2012). Around the Japanese mainland, P. inopinus and Pseudodiaptomus nansei coexist, but the latter species is restricted to Nansei Islands (Sakaguchi and Ueda 2010;Sakaguchi et al. 2011). Recently, Sakaguchi and Ueda (2010) distinguished a separate species, P. nansei, from P. inopinus on Kyushu Island, Japan, and also identified the presence of P. inopinus in the eastern Sea of Japan; the population on the Pacific side of Japan consists of a complex of species that are morphologically similar but genetically distinct. Populations of P. inopinus also differ genetically among the western and southern/eastern parts of Korea and from Japanese populations (Soh et al. 2012). Accordingly, it would be interesting to genetically characterize populations of the putative species, P. inopinus, introduced into the USA in order to provide clues as to the origins of the introductions and to establish whether or not it was introduced more than once. Unlike P. inopinus, P. poplesia has not been reported as an introduced species in any location. This is interesting since the two species nearly always co-occur in the Mankyung River estuary, and P. poplesia co-occurs in Chinese estuaries with Pseudodiaptomus forbesi and/or P. inopinus (Shen and Song 1979;Tan et al. 2004). Conclusions In this study, the annual maximal peak abundance of P. poplesia occurred during spring, when Chl a concentrations were highest (>150 μg/L). Food conditions at that time may be sufficient to allow the coexistence of P. poplesia and P. inopinus. However, there may be more competition for food between the two species when Chl a concentrations decrease in fall (<30 μg/L). Under these conditions, differences in their body sizes and shapes would likely be a significant factor in partitioning their food niches. In addition, P. poplesia has an enlarged 'naupliar eye', which could be more effective in food selectivity, while P. inopinus has two small, ordinary naupliar eyes. P. poplesia occurs under stenohaline conditions and is adapted to a narrower salinity range than species adapted for euryhaline conditions, such as P. inopinus. This provides a plausible reason why P. poplesia has not been introduced to estuaries of the Pacific coast of the USA, unlike other Asian estuarine species (Cordell and Morrison 1996;Orsi and Ohtsuka 1999).
Laterality of Ovulation and Presence of the Embryo Do Not Affect Uterine Horn Blood Flow During the First Month of Gestation in Llamas We determined if laterality of ovulation and intrauterine embryo location differentially induces changes in the mesometrial/endometrial vascularization area (MEVA) between uterine horns, during and after embryo migration, elongation and implantation in llamas. Adult, non-pregnant and non-lactating llamas (n = 30) were subjected to daily B-mode ultrasound scanning of their ovaries. Llamas with a growing follicle ≥8 mm in diameter in the left (n = 15) or right (n = 15) ovary were assigned to a single mating with an adult fertile or vasectomized male. Power-doppler ultrasonography was used to determine the MEVA in a cross section of the middle segment of both uterine horns. MEVA was determined by off-line measurements using the ImageJ software. MEVA measurements were performed before mating (day 0) and on days 5, 10, 15, 20, 25, and 30 after mating in pregnant [llamas with left- (n = 6) or right-sided (n = 6) ovulations] and non-pregnant [llamas with left- (n = 6) or right-sided (n = 6) ovulations] females. 
Ovulation was confirmed by the disappearance of a follicle (≥8 mm) detected previously. Pregnancy was confirmed by the presence of the embryo proper. MEVA was analyzed by one-way ANOVA for repeated measures using the MIXED Procedure in SAS. If significant (P ≤ 0.05) main effects or interactions were detected, Tukey's post-hoc test for multiple comparisons was used. Ovulation rate did not differ (P = 0.4) between females mated to an intact or vasectomized male and between right-or left-sided ovulations. Three females mated to the intact and 3 to the vasectomized male did not ovulate and were excluded of the study. First observation of fluid inside the gestational sac and of embryo proper, were made exclusively in the left uterine horn, on day 15.8 ± 3.8 and 22 ± 2.7, and 16.7± 2.6 and 27.5 ± 2.8 for pregnant llamas ovulating in the right and left ovary, respectively. Although the MEVA of both uterine horns was affected by time (P < 0.05), it was not affected by physiological status (pregnant vs. non-pregnant; P = 0.9) or laterality of ovulation (P = 0.4). Contrary to expectations, regardless of the laterality of ovulation, in pregnant llamas the left horn did not display a greater MEVA before or after embryo arrival, a trend that was observed during the first 30 days of gestation. Keywords: llamas, ovulation, embryo, gestation, uterine vascularization INTRODUCTION Llamas and alpacas have several unique reproductive characteristics, one of which is the establishment of embryo implantation and gestation exclusively in the left uterine horn, regardless of laterality of ovulation (1)(2)(3). Females from both species have a bicornate uterus that presents a clear asymmetry between uterine horns, with the left horn being larger than its right counterpart (4,5). This asymmetry is not only observed in pluriparous and pregnant females but also in nulliparous and even in female fetuses, therefore it is not induced by pregnancy (5). Also, the arterial irrigation and venous drainage differ between both uterine horns in llamas. The presence of a prominent cross-over arterial branch extending from the right uterine artery to the left horn suggests that this is irrigated with a greater blood flow (4). Besides, llamas, and alpacas present a peculiar pattern of intrauterine embryo migration. Although ovulation occurs with the same frequency in the left and right ovary (2, 6), embryos originated from right-ovary ovulations must migrate into the left uterine horn before the day of the beginning of luteolysis (Day 9 after ovulation) for the pregnancy to be successfully established (3,7). In horses and cattle (10,20) the establishment of pregnancy gradually increases uterine blood flow in close relationship with embryo/fetal growth during gestation. These hemodynamic changes begin before embryo implantation occurs (11,16,17) and exponentially increase thereafter (10). Interestingly, the increase in uterine blood flow begins before an intimate contact between the embryo and the endometrium is established (17), and is closely influenced by embryo location (11,16,17). Embryo location induces significant differences in blood flow between both uterine horns in cows (10) and mares (17,20), generating an asymmetrical blood flow in the former and a symmetrical blood provision in the latter before embryo fixation/implantation as a consequence of different intrauterine embryo migration patterns. 
As mentioned before, more than 98% of gestations in llamas are carried out in the left uterine horn, therefore embryos originated from right ovulations must migrate to the left horn in order to achieve a successful pregnancy. The striking features of embryo migration and the special uterine vascular arrangement make this species an interesting model to study uterine vascular perfusion and pregnancy development. Therefore, the goal of this study was to determine if intrauterine embryo location differentially induces changes in mesometrial/endometrial vascularization (MEVA) between the right and left uterine horn, during embryo migration, elongation and implantation in llamas. Since an adequate endometrial blood supply is essential for a successful embryo implantation and survival (8,21,22), studies on the spatial relationship between the location of the early embryo/conceptus and the degree of uterine vascular perfusion in llamas may shed some light into the mechanisms controlling embryo implantation in the left uterine horn. MATERIALS AND METHODS The present study was conducted during the breeding season (November-January) at the Universidad Católica de Temuco, Temuco, Chile (38 • 45 ′ S−72 • 40 ′ W and 122 m above sea level). All procedures were reviewed and approved by the University Bioethics Committee and were performed in accordance with the animal care protocols established by the same institution. Animals Adult non-pregnant, non-lactating llamas [n = 30; age: 5-8 y; weight: 120.5 ± 14.1 Kg; mean Body Condition Score: 3.5 out of 5 (range: 3.0-4.0); parity: 3 ± 2] were maintained on pasture supplemented with hay and water ad libitum. Llamas were housed indoors at night and offered 250 g/animal of a commercial diet supplement containing 140 g/kg crude protein and 150 g/kg crude fiber. Also, one intact fertile and one vasectomized adult male (ages: 3 and 5 y; weight: 147.5 ± 8.1 Kg; Body Condition Score: 4 and 5, respectively) were kept under similar conditions as the females, but separate at all times from the female herd. Malefemale contact was only allowed during the supervised matings. Vasectomy was performed by a standard surgical procedure 1 year before the start of the present experiment in the context of a previous study. Experimental Design Females were examined once daily by transrectal ultrasonography to monitor follicular growth and then by simple randomization were assigned to the following treatment groups: (a) presence of a growing follicle ≥8 mm in diameter in the right ovary and mating with an intact male (n = 8), (b) presence of a growing follicle ≥8 mm in diameter in the left ovary and mating with an intact male (n = 7), (c) presence of a growing follicle ≥8 mm in diameter in the right ovary and mating with a vasectomized male (n = 8), or (d) presence of a growing follicle ≥8 mm in diameter in the left ovary and mating with a vasectomized male (n = 7). Mating was validated only if the receptive female adopted the prone position soon after contact with the male and if intromission and copula lasted more than 5 min. After mating, females were examined using B-mode transrectal ultrasonography every 12 h until ovulation or 48 h, whichever came first. Ovulation was confirmed by the sudden disappearance of a follicle (≥8 mm) detected during previous examinations and only ovulated females were incorporated for the transrectal Power-doppler ultrasound examination. 
Power-Doppler Ultrasonographic Evaluation The area of mesometrial/endometrial vascularization of both uterine horns was evaluated by Power-doppler ultrasonography in all ovulated females using a 5.0 MHz lineal array transducer coupled to a ultrasound monitor (Sonosite M-Turbo, USA) before mating (Day 0 = Day of mating) and on days 5, 10, 15, 20, 25, and 30 between 08:00 a.m. and 12:00 p.m. as described previously (11,16,17). In brief, the transducer was placed over a cross section of the middle segment of each uterine horn where a 10 s video-clip was registered. The area of mesometrial/endometrial vascularization was objectively assessed by off-line measurements of the number of colored pixels as an indicator of blood flow area. Three still images of each horn were selected by a blind procedure, and then used for the determination of the number of colored pixels, and the average was used for the statistical analyses. Power Doppler images were selected based on two criteria: (a) proper cross section of the uterine horn and, (b) absence or minimal presence of Powerdoppler noise interference. Then, images were recorded, edited, and analyzed using the ImageJ software (NIH open access, USA). A female was considered pregnant when the gestational sac and the embryo proper were detected by ultrasonography. Statistical Analyses Statistical analyses were performed using the Statistical Analysis System software package SAS Learning Edition, version 4.1 (SAS Institute Inc., Cary, NC, USA, 2006). Serial data were compared by analysis of variance for repeated measures (Procmixed procedure) to determine the effects of female physiological status (pregnant vs. non-pregnant), laterality of ovulation (right or left ovary), time and treatments-by time interaction on left and right uterine horn MEVA. If significant (P ≤ 0.05) main effects or interactions were detected, Tukey's post-hoc test for multiple comparisons was used to locate differences. All data are reported as mean ± SEM, and probabilities ≤ 0.05 were considered significant. RESULTS There was not a significant difference (P = 0.4) in ovulation rate between llamas mated with an intact fertile or vasectomized male. Six out of 8 and 6/7 llamas with a preovulatory follicle ≥8 mm diameter located in either the right or left ovary ovulated and became pregnant after mating with the intact fertile male. Similarly, 6/8 and 6/7 llamas with a preovulatory follicle ≥8 mm diameter located either in the right or left ovary ovulated after mating with the vasectomized male. In pregnant females the earliest ultrasound signs of gestations were observed exclusively in the middle section of the left uterine horn. First observations of fluid inside the gestational sac (i.e., embryonic vesicle) and the embryo proper were recorded on day 15.8 ± 3.8 and 22 ± 2.7, and 16.7± 2.6 and 27.5 ± 2.8, for pregnant llamas ovulating in the right and left ovary, respectively. Representative images of MEVA in gravid uterus (i.e., left uterine horn) 30 days after mating for 6 different females are shown in Figure 1. There was an effect of time (P < 0.05) on the MEVA of both uterine horns, but this parameter was not affected by physiological status of the female (pregnant vs. non-pregnant; P = 0.9), laterality of ovulation (P = 0.4), nor by interactions between any of the variables measured. In pregnant and nonpregnant llamas with left-ovary ovulations the mean MEVA of right uterine horn displayed a significant (P < 0.05) decrease, compared to basal value, on day 10. 
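MEVA was quantified above by counting colored (Power-Doppler) pixels in selected still frames with ImageJ. The fragment below is only a schematic Python analogue of that idea: it flags a pixel as "colored" when its RGB channels diverge from neutral gray by more than a threshold and would be applied to each of the three selected stills per horn, averaging the counts. The threshold, the synthetic test frame and the color criterion are assumptions, not the authors' ImageJ workflow.

```python
import numpy as np

def colored_pixel_count(rgb, tol=20):
    """Count pixels whose RGB channels differ from neutral gray by more than tol,
    i.e. pixels carrying Power-Doppler color signal rather than B-mode grayscale."""
    rgb = np.asarray(rgb, dtype=int)
    spread = rgb.max(axis=2) - rgb.min(axis=2)  # 0 for pure gray pixels
    return int((spread > tol).sum())

# Synthetic 100 x 100 grayscale frame with a small "perfused" colored patch
frame = np.repeat(np.random.randint(0, 255, (100, 100, 1)), 3, axis=2)
frame[40:50, 40:60] = [200, 30, 30]      # colored (vascularized) region
print(colored_pixel_count(frame))        # -> 200 pixels

# In practice each of the three stills per uterine horn (loaded e.g. with Pillow)
# would be passed through this function and the counts averaged per examination day.
```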
On the contrary, in non-pregnant llamas with right-ovary ovulations the MEVA of the left uterine horn displayed a significant (P < 0.05) increase on day 20. The mean MEVA for both uterine horns, in pregnant and non-pregnant llamas with right- or left-ovary ovulations, during the entire period of evaluation are shown separately in Figure 2 (in that figure, x and x′ mark, within a uterine horn, the first significant increase from basal MEVA, and y the first significant decrease; P < 0.01).
DISCUSSION
In the present study, regardless of laterality of ovulation, intrauterine embryo location did not induce changes in mesometrial/endometrial vascularization area between the right and left uterine horn during the phases of embryo migration, elongation and implantation in llamas. The measurement of MEVA has been reported to be a reliable and sensitive tool to evaluate uterine blood flow during early gestation in mares and heifers (11,16,17). Also, using this ultrasonographic method our research group has demonstrated in previous studies (23,24) that significant changes in uterine blood flow and vascularization area occur in llamas during the follicular growth phase or after mating. However, in the present study MEVA was similar for pregnant and non-pregnant females and between right and left uterine horns during the evaluation period. A macroscopic anatomical study of uterine vascularization in llamas (4) has described the presence of a peculiar arrangement involving a prominent cross-over arterial branch extending from the right uterine artery to the left uterine horn, which could suggest that the left uterine horn is irrigated with a greater blood flow. However, the results of the present study were not able to detect a differential vascularization between uterine horns, regardless of the female's physiological status or laterality of ovulation, therefore not supporting the previously cited study (4). Moreover, a significant individual variation regarding basal MEVA (i.e., pre-mating) was observed among female llamas; however, no trend favoring a greater vascularization toward the left uterine horn was established. This great individual variation in uterine hemodynamic parameters has been reported by other studies in mares and was not related to the stage of the cycle, age or parity (18,20). Although several studies have described the hemodynamic changes during gestation in a variety of farm animal species, only a few (11,16,17) were conducted to evaluate hemodynamic changes during the embryonic peri-implantation period. In mares and cows (10,20) the establishment of pregnancy gradually increases uterine arterial blood flow in accordance with embryo/fetal growth during the entire length of the gestation period. These modifications seem to begin during the pre-implantation phase of embryo development (11,16,17) and increase exponentially thereafter (10). Interestingly, these modifications in uterine and endometrial vascular irrigation already begin during the histotrophic phase of embryo nutrition (17), when there is no intimate contact between the embryo and the endometrium, and are closely related to embryo location (11,16,17). In the bovine, a species that does not present intrauterine embryo migration (i.e., the embryo remains in the uterine
horn ipsilateral to the ovary from which ovulation occurred) compared to the equine, there is a clear increase in blood flow in the uterine artery ipsilateral to the uterine horn containing the embryo during the first weeks of gestation (11,25). In heifers, the increase in uterine blood flow is directly correlated to subtler changes in endometrial vascular perfusion, and it begins as early as Day 13 (25) or 18 (11) of pregnancy. This last study demonstrated a temporal synchrony between the increase in uterine/endometrial vascular irrigation and embryo elongation, which in turn is closely related to the beginning of adhesiveness of the chorion to the endometrium (Day 20; 11), suggesting that the direct contact of the embryo with the endometrium induces local changes in uterine/endometrial blood flow. On the contrary, in mares, in which the pre-implantation embryo undergoes an intense intrauterine migration before fixation occurs (26), endometrial vascular irrigation increased in an alternate manner between uterine horns, tightly synchronized with embryo location. Accordingly, during the period of intense intrauterine migration, even the presence of the embryo in one location for periods of 7 min or longer determined a localized increase in endometrial vascular perfusion (16); thus, during the pre-implantation phase, embryo-induced changes in endometrial vasculature parallel embryo migration between uterine horns (16,17). However, shortly after embryo fixation the increase in endometrial blood flow was only observed in the endometrium surrounding the fixed embryonic vesicle (16). Moreover, from fixation day onwards, the blood flow of the uterine artery ipsilateral to the horn containing the embryo increased drastically compared to its contralateral counterpart (20). However, in the present study the MEVA was similar in all the categories evaluated. There was no change in uterine vascularization between uterine horns in pregnant and non-pregnant females or between those whose embryos originated from left- or right-sided ovulations. Furthermore, MEVA did not increase significantly over time during the first 30 days of gestation in the pregnant group, as was described for the bovine, where it increased from Day 13 or 18 of pregnancy (11,25). These differences from observations made in other species could be due to a slower rate of llama embryo/fetus development during the first 3 months of gestation, as measured by crown-rump length, compared to cattle, sheep and horses (27,28). The vast differences in uterine and endometrial vascular irrigation during the early phase of embryo development, between species that display different embryonic strategies to signal their presence to the dam, could be related to the secretion of vascular stimulants into the uterine lumen by the embryo (17). In this regard, several studies have demonstrated that the Day 16 bovine embryo (29), and especially the equine embryo, as early as Day 12 (30,33), produce and secrete estrogen, a molecule involved in inducing uterine contractility (26) and significant increases in uterine blood flow (31). Thus, during preimplantation embryo development, in the bovine this molecule would be secreted into just one uterine horn, while in the mare it would be evenly distributed between both horns and the uterine body, inducing the previously described vascular changes.
Despite the fact that estradiol has also been suggested as the most probable signaling candidate responsible for maternal recognition of pregnancy and intrauterine migration for the llama blastocyst (32), our results do not show an effect of embryo signaling on uterine blood flow. Larger quantities of estradiol secreted by the equine blastocyst compared to the llama embryo (32,33) could explain the described effect on mare uterine blood flow and its absence in llamas. Although in the present study embryo location did not induce changes in MEVA between the right and left uterine horn during the first month of gestation in llamas, there was an effect of time on uterine horn blood flow. Considering the slower rate of development of the llama embryo/fetus during the first months of gestation, future investigations should consider a longer observational period to determine potential interactions between embryo and uterine blood flow and should increase the number of animals per group. Finally, similar to our results, Travassos-Beltrame et al. (13) did not find differences in hemodynamic parameters between the left and right uterine horns in pregnant sheep. Even though they started the Doppler ultrasonographic evaluation after pregnancy diagnosis was made on Day 28, hemodynamic variables were affected neither by uterine horn nor by single vs. multiple gestations. Contrary to expectations, based on our results we can conclude that, regardless of laterality of ovulation, in pregnant llamas the left horn did not display a greater MEVA before or after embryo arrival, a pattern that held throughout the first 30 days of gestation.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The animal study was reviewed and approved by Universidad Católica de Temuco.
AUTHOR CONTRIBUTIONS
MS and MR designed the experiment and wrote the manuscript. MS and FU developed the field work and analyzed the data. All authors contributed to the article and approved the submitted version.
FUNDING
This study was supported by the Chilean National Science and Technology Research Council (Fondecyt 11140396) awarded to MS.
Changes in the anteroposterior position of the femur relative to the tibia impact patient satisfaction in total knee arthroplasty Background In this study, we aimed to investigate the preoperative and postoperative anteroposterior position (AP) of the femur relative to the tibia in total knee arthroplasty (TKA) and assess the influence of change in the AP position on clinical outcomes. Methods We evaluated 49 knees that underwent bi-cruciate-substituted TKA using a navigation system. The preoperative and postoperative AP position of the femur relative to the tibia at maximum extension, 15°, 30°, 45°, 60°, 90°, 105°, and 120° and maximum flexion angles were calculated. The 2011 Knee Society Score was evaluated preoperatively and 1 year postoperatively. The Wilcoxon signed rank and Spearman’s rank correlation tests were performed, with statistical significance set at P < 0.05. Results The postoperative AP position was significantly correlated with the preoperative AP position at each measured angle. The postoperative AP positions were statistically more anterior than those preoperatively. Furthermore, the changes in the AP position after TKA negatively correlated with the symptom (P = 0.027 at 30°, P = 0.0018 at 45°, P = 0.0003 at 60°, P = 0.01 at 90°, and P = 0.028 at 105°) and patient satisfaction (P = 0.018 at 60° and P = 0.009 at 90°) scores at 1 year postoperatively. Conclusion The postoperative AP position of the femur relative to the tibia was strongly influenced by the preoperative those in TKA. Postoperative anterior deviation of the femur relative to the tibia from mid-flexion to deep flexion could worsen clinical outcomes. Background Over the past decades, improving postoperative patient satisfaction following total knee arthroplasty (TKA) has been a significant challenge for knee surgeons and researchers [1].Although several elements have been found to be critical factors in patient satisfaction [2], a perfect solution for achieving excellent results in all cases remains unknown.With progress in evaluation technology, assessing pre-, intra-, and postoperative knee status using various technical tools has become possible [3,4].Numerous studies have reported the influence of intraoperative elements on clinical outcomes [5,6].For instance, Nishio et al. reported that an intraoperative medial pivot motion improved postoperative patient satisfaction [5].In addition, medial joint laxity and excessive tibial external rotation have been reported as unfavorable factors for clinical outcomes [6].However, almost all previous studies have focused on the relationship between postoperative knee status and postoperative clinical results.Exploring changes in knee status and kinematics throughout TKA can present a solution to the problem of patient satisfaction after TKA. 
Additionally, in almost all previous studies on knee kinematics, the central concern was the rotational kinematics of the medial pivot motion [5,7,8].In addition to reporting normal knee rotational kinematics, some studies have proposed the occurrence of anterior paradoxical motion after TKA [9][10][11].Moreover, the degree of preoperative varus deformity has been related to preoperative anterior paradoxical motion [12].In these studies, the knees of many patients exhibited non-anatomical anteroposterior (AP) movement pre-and postoperatively that affected clinical outcomes.However, little attention has been paid to the AP kinematics and AP position.To restore normal knee kinematics, surgeons should be familiar with AP movement and understand changes in the AP knee position of the femur relative to the tibia in patients undergoing TKA. In this study, we aimed to investigate the pre-and postoperative AP positions of the femur relative to the tibia using a navigation system and assess the influence of changes in the AP position of the femur relative to the tibia on clinical results.We hypothesized that the preoperative AP position of the femur could be related to the postoperative AP position of the femur and that postoperative fixation of the AP position of the femur would lead to better clinical results. Methods This study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of Ehime University (identification number: 1,411,020).Additionally, written informed consent was obtained from all patients.This study evaluated 56 knees of 55 Japanese patients with osteoarthritis who underwent bicruciate-stabilized TKA (Journey II BCS: Smith & Nephew, London, UK).To accurately assess and minimize the influence of clinical variables, patients with preoperative flexion contracture > 15° (n = 5) and severe flexion restriction < 120° (n = 2) were excluded.The patient population comprised 43 female and 6 male, with a mean age of 75.9 ± 6.4 years (61-87 years).All patients presented with a varus deformity. 
A navigation system (version 4.0, Precision Knee Navigation Software, Stryker, Kalamazoo, MI, USA) was used to evaluate the preoperative knee status.The air tourniquet was inflated to 250 mmHg when the patients were under general anesthesia.Furthermore, specific anatomical reference points were located by anchoring infrared signal transducers to the femur and tibia using pins.A midline skin incision was made to expose the subcutaneous tissue.Then, a knee joint was exposed using a medial parapatellar approach.Registration was performed using osteophytes and soft tissues, and the anterior cruciate ligament was preserved.The AP and rotational axes of the femur and tibia were identified based on the anatomical landmarks.In cases where it was difficult to determine the femoral axis due to deformity, Whiteside's line was primarily used for the registration of the navigation system.The tibial rotational axis was set parallel to the line connecting one-third of the tibial tubercle to the center of the transverse diameter.After registration, the joint capsule was temporarily closed using four suture strands.Mild passive knee flexion was manually performed without angular acceleration while moving the leg from full extension to deep flexion.Then, the AP and compressiondistraction status of the tibia center relative to the femur center at 0° (or maximum extension angle), 15°, 30°, 45°, 60°, 90°, 105°, and 120° and maximum flexion angles were automatically measured using the navigation system.Data were measured every 0.5° or 1 mm.Regarding the AP position of the femur relative to the tibia, we evaluated femoral center movement relative to the tibia as previously described [12,13].We calculated the AP position of the femur relative to the tibia using the status of the tibia relative to the femur obtained using a navigation system (Figs. 1 and 2).For the anteroposterior position, positive values indicated the anterior, whereas negative values represented the posterior position of the tibia relative to the femur.For the compression-distraction position, positive values indicated the compression, whereas negative values indicated the distraction position of the tibia relative to the femur.Therefore, the positive and negative signs of the AP and compression-distraction values changed depending on the position of the femur and tibia. Subsequently, the distal femur was cut using a measured resection technique.To determine the rotational angle of the femoral component, we utilized the surgical transepicondylar axis as the index of femoral rotation.Before the surgery, we calculated the angle gap between the surgical transepicondylar axis and the posterior condylar axis on the axial view of computed tomography to determine the rotational angle.Concerning bone resections, the distal femoral cut was made perpendicular to the mechanical axis of the femur, and the proximal tibial cut was made perpendicular to the mechanical axis of the tibia based on the concept of mechanical alignment.The posterior tibial slope was set at 3° in all cases in this study.After removing the osteophytes, we placed trial components and a trial insert.We typically began with a 9-mm insert (the thinnest insert thickness in this Fig. 
2 Measurement of the anteroposterior position of the femur relative to the tibia.The left picture shows the lateral view of the knee joint after bicruciate-stabilized total knee arthroplasty, and the right image shows a schematic representation of the navigation monitor evaluating the knee status of the same knee joint.The depicted equation was used to calculate the anteroposterior distance of the femoral center relative to the tibial center based on the parameters obtained from the navigation system.θ, knee flexion angle; AP, anterior-posterior distance of the tibial center relative to the femoral center; CD, distraction-compression distance of the tibial center relative to the femoral center Fig. 1 Measurement of the status of each knee using a navigation system.The left picture shows the varus-valgus and compression-distraction position of the tibia center relative to the femoral center.The middle picture shows the knee flexion angle and anteroposterior position of the tibia center relative to the femoral center.The right picture shows the rotational and medio-lateral position of the tibia center relative to the femoral center.Min, minimum; Max, maximum; Med, medial; Lat, lateral TKA procedure).Then, we evaluated the knee stability using the manual varus-valgus test throughout extension to deep flexion in that condition.Finally, we performed the POLO test to confirm the stability in the 90° flexion position [14].We increased the size of the insert in cases showing excessive medial laxity and excessive extension and flexion laxity.Conversely, in cases that showed flexion contracture or inappropriate soft-tissue balance, the posterior knee capsule, medial collateral ligament, or other tissues were carefully and selectively released to achieve intraoperative full extension and correct soft-tissue balance throughout the range of motion [15].After the trial, the components and inserts of the proper thickness were placed in the appropriate position with cement.Thereafter, the surgical incision was closed.Subsequently, we assessed the AP position of the femur relative to the tibia using the status of the tibia relative to the femur obtained by a navigation system, similar to procedures performed before TKA.The same surgeon performed all surgeries. The test-retest reliability of each status obtained using the navigation system was calculated to confirm the accuracy of measurements.The test-retest reliability of the AP and compression-distraction status was evaluated, yielding sufficiently high interclass and intraclass correlation coefficients (> 0.9 at each measured angle of knee flexion).In addition, the range of motion of the knee joint was assessed preoperatively and 1 year postoperatively.The 2011 Knee Society Score (KSS) was used to evaluate clinical outcomes [16].This questionnaire was used for all patients preoperatively and 1 year postoperatively.For radiographic evaluation, PTS was evaluated preoperatively and 1 year postoperatively using short knee radiographs.The PTS was measured using short knee lateral radiographs and evaluated as the angle between the medial tibial plateau and the posterior cortical line of the proximal tibia.The changes in patient characteristics are presented in Table 1. 
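The equation referred to in the Figure 2 caption is not reproduced in the text. Purely as an illustration of the kind of transformation involved, the sketch below rotates the tibia-relative AP and compression-distraction readouts by the flexion angle to obtain an AP coordinate of the femoral center relative to the tibia. The formula and the sign conventions are assumptions for illustration only and would have to be replaced by the equation actually depicted in the paper's Figure 2.

```python
import math

def femoral_ap_position(ap_tibia_mm: float, cd_tibia_mm: float, flexion_deg: float) -> float:
    """Illustrative planar transformation of the navigation readouts.

    ap_tibia_mm : AP distance of the tibial center relative to the femoral center
    cd_tibia_mm : compression(+)/distraction(-) distance of the tibial center
    flexion_deg : knee flexion angle (theta)

    Returns an AP coordinate of the femoral center relative to the tibia
    (positive = anterior), under assumed sign conventions.
    """
    theta = math.radians(flexion_deg)
    # Rotate the tibia-relative vector by the flexion angle and flip the sign,
    # so the result is expressed as "femur relative to tibia".
    return -(ap_tibia_mm * math.cos(theta) + cd_tibia_mm * math.sin(theta))

# Example with made-up readouts at 60 degrees of flexion.
print(femoral_ap_position(ap_tibia_mm=3.0, cd_tibia_mm=-1.5, flexion_deg=60.0))
```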
Statistical analysis Statistical analyses were performed using JMP (version 14.0, SAS Institute, Tokyo, Japan).Non-parametric tests were performed in this study because the data were found to be non-normally distributed using the Shapiro-Wilk test.The non-parametric Wilcoxon signed-rank test was performed to determine the differences between the anteroposterior position of the femur relative to the tibia before and after TKA.Spearman's rank correlation coefficient (ρ) was used to evaluate the relationship among the AP position of the femur relative to the tibia, PTS, and KSS.A power analysis was conducted based on the mean and standard deviation calculated from three preliminary consecutive measurements.The required minimum sample size of 34 was determined to achieve a correlation of δ = 5 and σ = 5, with 80% power and α = 0.05, accounting for the results of the mean difference in AP position of the femur relative to the tibia before and after TKA.Accordingly, we assessed 49 participants to compensate for the small sample size in this study.Statistical significance was set at a P value of < 0.05. AP position of the femur relative to the tibia The postoperative AP position of the femur relative to the tibia correlated with the preoperative AP position of the femur relative to the tibia at each measured angle (Table 2).Tables 3 and 4 show the relationship between the AP position of the femur relative to the tibia and PTS.The changes in PTS after TKA did not correlate with those in the AP position of the femur relative to the tibia, except during extension to early knee flexion (Table 5).Figure 3 shows the preoperative and postoperative AP position of the femur relative to the tibia throughout the range of motion.The postoperative AP positions of the femur relative to the tibia at all measured angles were statistically more anterior than they were preoperatively (ρ = 0.46, 0.47, 0.61, 0.58, 0.46, 0.41, 0.35, 0.36, 0.64 at Clinical outcomes No significant correlation was observed between the KSS and postoperative AP position of the femur relative to the tibia at each measured angle.Table 6 summarizes the correlation coefficients between the KSS and change in the AP position of the femur relative to the tibia.The postoperative changes in the AP position of the femur relative to the tibia at 30°,45°, 60°, 90°, and 105° were negatively correlated with the symptom score (ρ=-0.33,-0.46, -0.52, -0.38, -0.33, respectively; Figs. 4 and 5).Moreover, the postoperative change in the AP position of the femur relative to the tibia at 60° and 90° was negatively correlated with the patient satisfaction score (ρ=-0.35,-0.39, respectively; Figs. 6 and 7).However, no statistically significant correlation was observed between the KSS and change in the PTS after TKA. 
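The statistical workflow just described (Wilcoxon signed-rank comparison of pre- and postoperative AP positions and Spearman correlation against the KSS subscores) can be sketched as below. The arrays are placeholders; the study itself used JMP, so this SciPy version is only an equivalent illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder data: AP position (mm) of the femur relative to the tibia at 60 degrees
ap_pre = rng.normal(-2.0, 3.0, size=49)           # preoperative values, 49 knees
ap_post = ap_pre + rng.normal(1.5, 2.0, size=49)  # postoperative values (more anterior on average)
satisfaction = rng.normal(28, 5, size=49)         # 2011 KSS patient-satisfaction score

# Paired, non-parametric comparison of pre- vs postoperative AP position
w_stat, p_wilcoxon = stats.wilcoxon(ap_pre, ap_post)

# Spearman correlation between the change in AP position and the KSS subscore
delta_ap = ap_post - ap_pre
rho, p_spearman = stats.spearmanr(delta_ap, satisfaction)

print(f"Wilcoxon: W={w_stat:.1f}, p={p_wilcoxon:.4f}")
print(f"Spearman: rho={rho:.2f}, p={p_spearman:.4f}")
```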
Discussion The most important finding of this study was that the change in AP position of the femur relative to the tibia was associated with the clinical results after TKA.Postoperative anterior deviation of the femur relative to the tibia during mid-flexion led to unfavorable clinical outcomes.Moreover, the preoperative AP position of the femur relative to the tibia was strongly related to the postoperative AP position of the femur relative to the tibia.To the best of our knowledge, this is the first study to demonstrate that a change in the AP position of the femur relative to the tibia impacts patient satisfaction after TKA.These results may aid in alleviating persistent issues regarding TKA.Previous studies on the AP knee status focused on three factors: position, kinematics, and stability [17][18][19][20][21][22][23][24][25][26][27].A previous study demonstrated the influence of the implant design on the AP position of the femur relative to the tibia [14].The unique implant design in patients undergoing BCS-TKA induced proper positioning of the femur, resulting in a lower offset ratio closer to that of the normal knee, at knee extension [17].Another study demonstrated the relationship between the intraoperative factor and the AP position of the femur and found that the PTS was correlated with the AP position of the femur in patients undergoing TKA with cruciate-substituting inserts, but not in those undergoing TKA with cruciateretaining inserts [18].In the present study, which utilized BCS-TKA, the AP position of the femur relative to the tibia during mid-flexion was negatively correlated with the PTS before and after TKA.In addition, postoperative anterior deviation of the femur during knee flexion has been shown to lead to poor clinical outcomes.Thus, drastic postoperative changes in the AP position of the femur relative to the tibia should be avoided in TKA.However, further validation is needed to establish a technique to Studies on AP kinematics [19][20][21] have demonstrated the non-physiological anterior femoral movement during knee flexion after TKA.Moreover, in knees with osteoarthritis, the degree of deformity has been shown to contribute greatly to such an anterior paradoxical motion in knees with preoperative osteoarthritis [3].Such non-physiological anterior femoral movement impacts the postoperative clinical outcomes [22].Sakai et al. researched the influence of the AP position of the femur and AP kinematics in cruciate retaining TKA, and demonstrated that anterior position of the femur during mid-flexion correlated with postoperative functional activities score [23].These results might be derived from PCL tension and patella-femoral pressure.Konno et al. demonstrated the normal knee rotational kinematics reduced the patella-femoral pressure [24].From the point of view, the restore of the normal knee kinematics after TKA has a possibility to resolve these problems including our results.Furthermore, postoperative AP stability in TKA has been reported in previous studies [25][26][27][28][29]. Mochizuki et al. 
reported that excessive postoperative AP instability at mid-flexion directly led to anxiety during daily movement [26].Although the influence of AP kinematics and AP stability on clinical results was not assessed in the present study, changes in the AP position of the femur relative to the tibia before and after surgery were observed to be related to the pain and satisfaction scores after TKA.This may be attributed to the soft tissue strain due to drastic changes in the AP position.In this study, the preoperative AP position of the femur was correlated with postoperative AP positions of the femur.The results of this study suggest that surgeons should pay attention to the preoperative AP position, which varies considerably across patients.To address various AP factors such as position, stability, and kinematics, further research is needed to determine the appropriate surgical method to avoid anterior deviation of the femur relative to the tibia. This study has some limitations.First, the evaluation was not performed in a weight-bearing state due to intraoperative evaluation.Although knee kinematics have been reported to show the same pattern under weightbearing and non-weight-bearing conditions [30], further research is required.Second, the preoperative conditions of cartilage wear, the anterior cruciate ligament, and the posterior cruciate ligament, which may influence the AP position of the femur relative to the tibia, were not investigated.Third, this study did not clarify the specific Conclusions The postoperative AP position of the femur relative to the tibia was strongly influenced by the preoperative AP position of the femur relative to the tibia in TKA.Postoperative anterior deviation of the femur relative to the tibia from mid-flexion to deep flexion could worsen clinical outcomes. Table 5 Fig. 3 Fig. 3 Preoperative and postoperative anteroposterior position of the femur relative to the tibia.The mean anteroposterior position of the femur relative to the tibia at each knee flexion angle.The graph shows changes in the anteroposterior position of the femur relative to the tibia throughout the range of motion.The horizontal line shows the knee flexion angle, and the vertical line shows the anteroposterior position of the femur relative to the tibia (a positive value indicates the anterior position of the femur relative to the tibia).Asterisks: P < 0.01; dagger: P < 0.05.AP, anteroposterior position; pre-OP, preoperative; post-OP, postoperative; maximum extension, maximum knee extension angle; maximum flexion, maximum knee flexion angle Fig. 4 Fig. 4 Correlation between the symptom score and the change in the anteroposterior position of the femur at 60°.The graph shows the scatterplots of the symptom scores of the KSS and changes in the anteroposterior position of the femur relative to the tibia at 60°.Symptoms, symptom score of KSS; KSS, 2011 Knee Society Score Fig. 5 Fig. 5 Correlation between the symptom score and the anteroposterior position of the femur at 90°.The graph shows scatterplots of the symptom score of the KSS and the change in the anteroposterior position of the femur relative to the tibia at 90°.Symptoms, symptom score of KSS; KSS, 2011 Knee Society Score Fig. 6 Fig. 
6 Correlation between patient satisfaction and the change in the anteroposterior position of the femur at 60°.The graph shows the scatter plots of the patient satisfaction score of the KSS and the change in the anteroposterior position of the femur relative to that of the tibia at 60°.Patient satisfaction, patient satisfaction score of KSS; KSS, 2011 Knee Society Score Fig. 7 Fig. 7 Correlation between the satisfaction score and the change in the anteroposterior position of the femur at 90°.The graph shows the scatter plots of the patient satisfaction score of the KSS and the change in the anteroposterior position of the femur relative to that of the tibia at 90°.Patient satisfaction, patient satisfaction score of KSS; KSS, 2011 Knee Society Score Table 2 Correlation coefficients between preoperative and postoperative AP positions at each angle Table 3 Correlation coefficients between the preoperative posterior tibial slope and AP position at each angle AP position, anteroposterior position of the femur relative to the tibia; n.s., non-significant * P < 0.05; ** P < 0.01 Table 4 Correlation coefficients between the postoperative posterior tibial slope and AP position at each angle AP position, anteroposterior position of the femur relative to the tibia; n.s., non-significant * P < 0.05; ** P < 0.01 Table 6 Correlation coefficients between changes in the AP position and KSS
Effects of Collagen Nanofibrils on Turbid Water Having clean water is a problem throughout the whole world. The purpose of this paper is to test different amounts and concentrations of collagen dispersions in turbid water and use different methods to see which is more efficient in cleaning the water. We studied two different experiments, one using a shake and settle method and one using a refrigerated centrifuge. We found that the refrigerated centrifuge produced lower NTU values than the shake and settle method. This conclusion means that the refrigerated centrifuge cleaned the turbid water more effectively than shaking a sample cell and letting it settle on its own. INTRODUCTION According to Maffia's research (Maffia), collagen is made up of nanofibrils that can hold almost 500 times its weight in water. These fibers promote the agglomeration of suspended particles in water. As the particles agglomerate, flocs are formed. When compared to smaller particles, these flocs have a faster settling time, thus making it easier to filter out of water. The experiment was conducted using previously prepared collagen dispersions following Maffia's patent. Before performing this experiment we realized that obtaining clean water is an ongoing issue throughout the world. Additionally, past methods of cleaning turbid water prove to be detrimental to humans and the environment. According to Maffia's patent, it was discovered that, "settling rates are more than ten times faster with collagen than with conventional chemical treatment (Maffia).." Not only does collagen prove to be more effective in settling particulates, it is also a safer alternative than chemical treatment as it is a biological material found in the human body in connective tissue and not harmful to humans or the environment. A. Collagen Research Group The collagen research began in the early 1980s at Dartmouth College (ref). It began as a response to the need within the cell culture scientific community for platforms, which were being utilized for immobilizing Chinese Hamster Ovary cells (CHO) among others. A group of interested researcher organized to share collagen nanofibrils and develop new applications. Collagen nanofibrils are about 50 nm in diameter and several hundred nm long. An atomic force micrograph is shown in Fig. 1 above. Thus the Collagen Research Group, CRG, was formed. It continued in industry at Verax, Inc. a spin-off of the original Dartmouth work., until about 1995. Concurrently, the CRG The focus of this current work is to use collagen nanofibrils to disrupt the colloidal nature of contaminated water, causing flocculation and settling of contaminants. II. MATERIALS AND METHODS In moving forward in this experiment we wanted to find the most efficient way of decreasing the turbidity in contaminated water. Two different methods, shake and settle and centrifuging were devised for the purposes of this experiment. For the shake and settle method, we tested four different percentages of collagen dispersions (0%, 0.3%, 0.6%, 0.9%) with different dosing amounts in the turbid water. The formula for the collagen dispersions is generically stated as: Collagen nanofibrils X % Weak Organic Acid 5% in the case of acetic acid Water (deionized) 100% -5% -X Blending of these materials requires about 10 minutes in a high shear blender. The resulting material exhibits non-Newtonian behavior as well as time dependency. The rheology of this material is the subject of a recent report by Sandy Shivakumar. A. 
Current Experiment A stock solution of "mock" contaminated water was prepared using kaolin. Every ten minutes, we recorded a reading from the turbidimeter of each sample cell. This was repeated for an hour only shaking the sample cell initially and letting it settle on its own. For the centrifuge, we tested the same four percentages of collagen at different amounts of drops in the turbid water just like the shake and settle method. We used a refrigerated centrifuge at 5 degrees Celsius and a speed of 3000 rpm to force the collagen and kaolin to interact and eventually settle out. We recorded a reading off the turbidimeter of each sample cell again, but for five different amounts of time instead of every ten minutes for an hour. This experiment was performed to test which dispersions of collagen would cause better settling and in the end a lower turbidity. The two methods previously explained were chosen to test the most efficient and cost effective way to distribute the collagen. If collagen dispersions are added to the water then the kaolin will conglomerate and settle out, causing the turbidity to decrease. Additionally adding more drops of collagen with a higher concentration will yield a lower turbidity. Centrifuging will cause too much disruption in the water, stopping the collagen fibers from interacting well with the kaolin particles. However, centrifuging for longer periods of time will increase the contact time between the kaolin particles and collagen fibers, yielding a lower turbidity reading. IV. METHODS To set up this experiment, gather premade collagen dispersions formulated based on Dr. Maffia's patent. Then, use the Hach 2100 Q IS Portable Turbidity Meter and Denver Instrument Company TR Series Toploading Balance to find out how much kaolin needs to be added to 15mL of deionized water to produce a target of around 1000 NTU. A. Shake and Settle: Initially, one needs to find a reading for the kaolin/water solution without adding any collagen. To do this, add 15mL of deionized water and the sample weight of kaolin to a sample cell and, using the shake and settle method, shake ten times before checking the reading off of the turbidity meter. Then, leave the sample cell on the table and check the reading every ten minutes for an hour to see how much the kaolin settles. Record the turbidity value at ten-minute increments. Now, one must test different amounts of different percentages of collagen mixed with the kaolin/water solution using the shake and settle method. First, start with the 0.6% collagen and test 1, 2, 3, 4, and 5 drops for an hour. Measure the weight of each drop and find the ppm. To find the ppm, multiply the weight of the drop by the percentage of collagen one is using, then divide by 10 and multiply by 10^6. Let the solution in the sample cell sit on the table and check the reading every ten minutes for an hour using the turbidity meter. Next, use the 0.9% collagen and test 1, 2, 3, 4, and 5 drops for an hour. Measure the weight of each drop and find the ppm using the method mentioned before. Let the solution in the sample cell sit on the table and check the reading every ten minutes for an hour using the turbidity meter. Then, use the 0.3% collagen and test 1, 2, 3, 4, and 5 drops for an hour. Measure the weight of each drop and find the ppm using the method mentioned before. Let the solution in the sample cell sit on the table and check the reading every ten minutes for an hour using the turbidity meter. B. 
Refrigerated Centrifuge: As in the shake and settle method, initially, one needs to find a reading for the kaolin/water solution without adding any collagen. To do this, add 15mL of deionized water and the sample weight of kaolin to a sample cell and, using the refrigerated centrifuge method, test at 0, 3, 5, 10, and 15 minutes before checking the reading off of the turbidity meter. Now, one must test different amounts of different percentages of collagen mixed with the appropriate weight of kaolin and use a refrigerated centrifuge at 5 degrees Celsius to mix the kaolin/collagen solution at 3000 rpm. First, start with 0.3% collagen and test 1, 2, 3, 4, and 5 drops at 0, 3, 5, 10, and 15 minutes in the refrigerated centrifuge. Measure the weight of each drop and find the ppm. To find the ppm, multiply the weight of the drop by the percentage of collagen one is using, then divide by 10 and multiply by 10^6. Pour the mixture from the polypropylene centrifuge tubes into the sample cells and record the NTU reading using the turbidity meter. Next, use 0.6% collagen and test 1, 2, 3, 4, and 5 drops at 0, 3, 5, 10, and 15 minutes in the refrigerated centrifuge at the same temperature and rpm as used Initial Solution: No Collagen before. Measure the weight of each drop and find the ppm using the method mentioned before. Pour the mixture from the polypropylene centrifuge tubes into the sample cells and record the NTU reading using the turbidity meter. Then, use 0.9% collagen and test 1, 2, 3, 4, and 5 drops at 0, 3, 5, 10, and 15 minutes in the refrigerated centrifuge at the same temperature and rpm as used before. Measure the weight of each drop and find the ppm using the method mentioned before. Pour the mixture from the polypropylene centrifuge tubes into the sample cells and record the NTU reading using the turbidity meter. V. RESULTS The following is a selection of the turbidity readings and percent recovery from both the shake & settle method and centrifuge method: VI. DISCUSSION This research was conducted to experiment with alternate ways of cleaning turbid water since clean water is not easily found throughout the world. The methods used in this experiment are safer for humans as well as the environment. The results show that centrifuging rapidly decreases the NTU values as you increase the amount of time being centrifuged. After letting shake and settle test tubes sit for a few days, they still did not get as low of NTU values as centrifuged test tubes recorded. In both experiments, the amount of drops did not affect the NTU values greatly at the initial times, however, as time went on, the more drops added, the greater the settling rates. We did not expect our initial NTU values to vary as much as they did between percentage of collagen and amount of drops. There was no consistent NTU value decrease as the percentages and amount of drops increased, as one would expect. The NTU value fluctuated or leveled out as time went on, which yielded inconsistent results. These inconsistencies could be due to human error, in which case, the experiment must be performed more than once to obtain accurate results. When centrifuged, there was a dramatic decrease in NTU values between zero and three minutes no matter the percentage or amount of drops. In the shake & settle method, there was also a decrease between zero minutes of settling time and the first ten minutes of settling, but the initial decrease was not as dramatic as it was for the centrifuged trials. 
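The ppm figure quoted in the Methods follows directly from the drop weight and the dispersion concentration. The snippet below simply encodes the calculation exactly as stated in the text (weight of the drop times the percentage of collagen, divided by 10, times 10^6); the drop weights are invented for illustration.

```python
def collagen_ppm(drop_weight_g: float, collagen_percent: float) -> float:
    """Dose in ppm, applying the formula exactly as written in the Methods.

    collagen_percent is the percentage value as quoted in the text
    (e.g., 0.6 for the 0.6% dispersion).
    """
    return drop_weight_g * collagen_percent / 10 * 1e6

# Example: three drops of the 0.6% dispersion, each weighing about 0.05 g (hypothetical values)
drops_g = [0.051, 0.049, 0.050]
total_ppm = sum(collagen_ppm(w, 0.6) for w in drops_g)
print(f"Total dose: {total_ppm:.0f} ppm")
```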
The results show that adding more drops of collagen at a higher concentration yields a lower turbidity. It was also found that centrifuging can cause too much disruption in the water, preventing the collagen fibers from interacting well with the kaolin particles. However, centrifuging for longer periods (at least an hour) increases the contact time between the kaolin particles and the collagen fibers, yielding a lower turbidity reading.
VII. CONCLUSION
It was shown that, when using a kaolin/water solution, different amounts and concentrations of collagen had different effects on floc settling. This experiment demonstrated environmental as well as cost benefits. The shake and settle method costs less than centrifuging, but the lower NTU values show that it is worth buying a refrigerated centrifuge. Both methods are more effective and cost-efficient than the chemical treatment and machinery currently in use.
Dead-Beat Control Cooperating with State Observer for Single-Phase Electric Springs Aiming at improving the performance of the existing for single-phase electric springs (ESs), such as the fastness of the voltage stabilization and the mitigation of the voltage harmonics across the critical loads (CLs), the dead-beat control cooperating with state observer is proposed in this paper. First, the δ control is reviewed, outlining its features of regulation of the CL voltage while keeping the ES operation stable. After describing the operation of an ES in the continuous-time domain by the state-space technique, its discrete-time model is formulated using the zero-order-hold (ZOH) algorithm. Then, the control system for an ES is designed around the dead-beat control cooperating with a state observer and implementing the two typical compensation functions achievable with the δ control, namely the pure reactive power compensation and the power factor correction. Results obtained by simulation demonstrate that the control system is able to both properly drive an ES and to implement the two functions. The results also show that the proposed control system has the advantage of eliminating harmonic components in CL voltage when grid voltage distorts. Introduction Electric springs (ESs) have been proposed five years ago as a new solution to fully exploit the unpredictable power generated from intermittent renewable energy sources (RESs) [1].Compared to the traditional operating way of the power system, they carry out the paradigm that load demand matches power generation automatically [2,3].From the topology point of view, an ES is built up by connecting a circuit which behaves like a voltage source inverter (VSI), in series to the non-critical loads (NCLs) of a user with the purpose of keeping constant the magnitude of the voltage across its critical loads (CLs).Consequently, the control approaches for VSI have been initially introduced to drive the ES systems [4,5].Subsequently, the control approaches for power converters [6,7] and microgrids [8,9] have been adopted.With the increasing power generation from RESs [10], the ESs have gained an increasing interest and many papers have appeared, reporting on system modeling [11], reactive power compensation [2], power decoupling [12,13], voltage and frequency control [14], power balance control for a three-phase system [15], and so on. 
Among the compensation functions that the control of an ES can implement, the pure reactive power compensation is the key one because it stabilizes the voltage across the CLs without exchanging (absorbing or delivering) active power.The basic control of an ES implementing such compensation was reported together with the ES concept in Reference [1].After that, the δ control was proposed in Reference [2]; it imposes the instantaneous phase angle δ by which the CL voltage lags the line voltage to ensure that ES does not exchange any active power at the steady state.Once calculated the angle δ, the control scheme uses a proportional resonant (PR) controller for the outer-loop regulation of the CL voltage and a proportional (P) controller for the inner-loop adjustment of the ES current.This setup of the δ control exhibits some shortcomings.For instance, its application requires the design and tuning of the three parameters of the PR and P controllers.Another shortcoming is that the PR controller is not suitable for nonlinear systems so that the CL voltage may be severely distorted when the line voltage is affected by harmonics.What is more, system modeling in [2] is not enough accurate since the line voltage is regarded as a disturbance which results in unexpected dynamic performance.Considering the benefits of the δ control and side effects of the PR controller, the goal of this paper is to find out a more practical controller that facilitates the application of the δ control and, at the same time, improves the performance of the existing setup. The dead-beat control uses the discrete-time model of a system to predict the amplitude of the controlled variable one or more sampling times in advance.By forcing this amplitude to track that one of the reference, the error between the controlled variable and the reference is zeroed [16].The characteristics of the relevant controller are the low complexity and a high-performance system [17].Therefore, it is utilized as the control tool that allows the achievement of the goal of this paper. As explained in Reference [2], an ES is a multi-input and multi-output system where only two control objectives can be achieved at the same time.The main objective is the regulation of the CL voltage that is expected to have a sinusoidal shape with a preset magnitude.The other objective cis to implement a specific compensation function by imposing the phase angle δ.In this paper, the two most significant functions, i.e., the pure reactive power compensation and the power factor correction (PFC), are chosen as objective.The effectiveness of the resulting control system is highlighted by comparing its performance to that obtained with the existing PR and P controllers. In detail, this paper is organized as follows.In Section 2, the operating principle of a single-phase ES driven by the δ control is introduced.In Section 3, system modeling and design of the dead-beat controller are provided; the design of a state observer to reduce the control period of the dead-beat control is also given.In Section 4, two controllers based on the same δ control, one using the dead-beat control cooperating with state observer and the other one using the PR and P controllers, are investigated by simulation under implementation of both the compensation functions.The simulation results reveal the better performance attained by the proposed controller such as the fast dynamics and the harmonic suppression.Finally, the conclusions are drawn in Section 5. 
The Topology of Single-Phase ES
The typical topology of a single-phase ES embedded in a power system is shown in Figure 1. In the figure, the ES is drawn within the dashed-line box and consists of a current-controlled single-phase VSI that impresses the voltage vES across capacitor C with help from the inductance L. Moreover, in the figure, Z2 is the CL with a limited operating voltage range and Z3 is the NCL with a wide operating voltage range; the series of the ES and Z3 constitutes the so-called smart load (SL) and is connected in parallel with Z2 at the point of common coupling (PCC). Other variables in the figure are the voltages across Z2 and Z3, denoted with vS and vNC, the output voltage of the VSI, denoted with vi, the currents through Z3 and the line, denoted with i3 and i1, respectively, and the output current of the ES, denoted with iL. Variable vG represents the voltage at the injection point of the RES, whereas R1 and L1 are the line resistance and inductance. Vector subtraction of the voltage drop across the line impedance from vG gives vS.
As explained in Reference [1], the ES is an electrical circuit that generates an ac voltage intended to regulate the CL voltage while passing the voltage fluctuations of the RESs to the NCLs.
δ Control of Single-Phase ES
The scheme of the existing setup of the δ control for a single-phase ES is depicted in Figure 2a. In the figure, a double-loop control is arranged with a PR controller in the outer CL voltage loop and a P controller in the inner ES current loop. The purpose of the δ control is to impose the phase lag of vS with respect to vG. The calculation of δ is based on the vector diagram of the circuit in Figure 1 and is executed for the ES to implement a certain compensation function.
The details of the control scheme in Figure 2a are as follows. Signal vS_ref is the sinusoidal voltage reference for vS. Its magnitude is fixed by the user, while its phase angle lags that of vG by δ. The error between vS and vS_ref is fed into the PR controller to generate the current reference signal iref for the ES. In turn, the error between the actual ES current iL and iref is fed into the P controller to generate, via a limiter, the signal v_comp for the pulse width modulation (PWM) generator. The latter delivers the four commands for the VSI switches. The calculation of δ, schematized within the dashed-line box in Figure 2a, depends on the compensation function to be achieved and, accordingly, determines the voltage reference signal. As explained in Reference [2], θ0 denotes the phase of the grid, φ1 is the impedance angle of the line impedance, a, b and m denote the coefficients used to calculate φ4, and θ and φ3 are intermediate results of the δ calculation.
Figure 2b shows the vector diagram of the power system in Figure 1 when the ES operates in the so-called capacitive mode, which occurs when the CL voltage is lower than its rated value. Details on the δ calculation can be found in Reference [2]. The inductive mode can be explained in a similar way. It is worth remarking that the δ control assumes that the length of the transmission line is known, so that the line resistance and inductance are available, since their values are necessary to calculate δ; the way to derive the line impedance can be found in Reference [1].
Issues in the δ Control with PR and P Controllers
A single-phase ES is satisfactorily controlled by the existing setup of the δ control under ideal grid conditions. However, when vG is distorted to some extent, the total harmonic distortion (THD) of the CL voltage can be somewhat high and, sometimes, is out of the specifications. Although the THD value can be improved by properly tuning the parameters of the PR and P controllers, there are still limitations in real applications. For instance, it is convenient that the parameter kp of the PR controller does not exceed 3. Referring to the study case reported below, when the parameters kp and kr of the PR controller are selected as 2 and 20, and the gain of the P controller is selected as 0.5, the THD value of the CL voltage is up to 4.48% for a THD value of vG of 22.9%, as pointed out in Figure 2c. This calls for an improvement of the performance of an ES by developing a different solution for the setup of the δ control.
Issues in the δ Control with PR and P Controllers A single-phase ES is satisfactorily controlled by the existing setup of the δ control under ideal grid conditions.However, when vG is distorted to some extent, the total harmonic distortion (THD) of the CL voltage can be somewhat high and, sometimes, is out of the specifications.Although the THD value can be enhanced by properly tuning the parameters of the PR and P controllers, there are still limitations in the real application.For instance, it is convenient that the parameter kp of the PR controller does not exceed 3. Referring to the study case reported below, when the parameters kp and kr of the PR controller are selected as 2 and 20, and the gain of the P controller is selected as 0.5, the THD value of the CL voltage is up to 4.48% for a THD value of vG of 22.9%, as pointed out in Figure 2c.This calls for an improvement of the performance of an ES by developing a different solution for the setup of the δ control.Figure 2b shows the vector diagram of the power system in Figure 1 when ES operates in the so-called capacitive mode which occurs when the CL voltage is lower than its rated value.Details on the δ calculation can be found in Reference [2].The inductive mode can be explained in a similar way.It is worth to remark that the δ control assumes that the length of transmission line is known in order to get acquainted of the line resistance and inductance since their values are necessary to calculate δ; the way to derive the line impedance can be found in Reference [1]. Issues in the δ Control with PR and P Controllers A single-phase ES is satisfactorily controlled by the existing setup of the δ control under ideal grid conditions.However, when v G is distorted to some extent, the total harmonic distortion (THD) of the CL voltage can be somewhat high and, sometimes, is out of the specifications.Although the THD value can be enhanced by properly tuning the parameters of the PR and P controllers, there are still limitations in the real application.For instance, it is convenient that the parameter k p of the PR controller does not exceed 3. Referring to the study case reported below, when the parameters k p and k r of the PR controller are selected as 2 and 20, and the gain of the P controller is selected as 0.5, the THD value of the CL voltage is up to 4.48% for a THD value of v G of 22.9%, as pointed out in Figure 2c.This calls for an improvement of the performance of an ES by developing a different solution for the setup of the δ control. System Modeling of Single-Phase ES As explained in Reference [11], by neglecting the dynamics of the ES DC bus, the equations of an ES are linear and time-invariant as explicated hereafter to simplify the ES modeling, both CL and NCL are taken of resistive type. Applying Kirchhoff's Current Law (KCL) to the circuit in Figure 1 yields Applying Kirchhoff's Voltage Law (KVL) yields Solving Equations ( 1)-( 4) yields Applying again KVL to the circuit of Figure 1 also yields Solving Equations ( 5)-( 7) yields Based on the equations above, the state-space model of the ES can be written as follows: . where, Appl.Sci.2018, 8, 2335 6 of 15 Dead-Beat Control for Single-Phase ES The control system proposed in this paper is still based on the δ control but replaces the PR and P controllers with a dead-beat controller.The scheme of the control system is drawn in Figure 3; its effectiveness, as well the comparison of its performance with that of the existing setup, are presented in the next sections. 
Dead-Beat Control for Single-Phase ES

The control system proposed in this paper is still based on the δ control but replaces the PR and P controllers with a dead-beat controller. The scheme of the control system is drawn in Figure 3; its effectiveness, as well as the comparison of its performance with that of the existing setup, are presented in the next sections.

The operating principle of a dead-beat controller is illustrated in Reference [16]. Since the dead-beat control is applied here for the first time to an ES, the design of the relevant controller is exposed step by step.

The first step is to discretize Equation (9). By using the zero-order-hold (ZOH) algorithm, Equation (9) is turned into a discrete-time state-space model. The system output at the next control period then follows, where G = C*A = [a1 a2 a3] and H = C*B = [b1 b2]. By making y(k + 1) equal to the reference value at the next control period, designated as r(k + 1), Equation (12) follows. Equation (12) holds on condition that the system is driven with a suitable value of the input vi(k); let this condition be satisfied. Equation (12) states that the output of the controlled system equals the reference value at each control period, which means that the tracking error of the control system is zero.

Substituting the respective coefficients into Equation (12) yields Equation (13), and solving Equation (13) gives the required input vi(k).

The working equation of the dead-beat controller is given by Equation (14). Once vi(k) is calculated, it is fed to the PWM generator at each control period. Note that r(k + 1) in Equation (14), which is defined as vS_ref in Figure 3, is calculated by the δ control.
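A minimal sketch of the resulting control law is shown below, assuming the discrete placeholder model of the previous sketch with input vector [vi, vG]: from y(k+1) = C·Ad·x(k) + b1·vi(k) + b2·vG(k), setting y(k+1) = r(k+1) and solving for vi(k) gives the dead-beat input. The matrices and the ordering of the two inputs are assumptions carried over from that sketch, not the actual coefficients of Equation (14).

```python
import numpy as np

def deadbeat_input(Ad, Bd, C, x_k, vG_k, r_next):
    """Dead-beat input: choose vi(k) so the predicted output hits r(k+1).

    y(k+1) = C Ad x(k) + C Bd [vi(k), vG(k)]^T = G x(k) + b1 vi(k) + b2 vG(k)
    """
    G = C @ Ad                       # 1 x 3 row vector [a1 a2 a3]
    H = C @ Bd                       # 1 x 2 row vector [b1 b2]
    b1, b2 = H[0, 0], H[0, 1]
    return (r_next - (G @ x_k)[0] - b2 * vG_k) / b1

# Example call with the placeholder matrices Ad, Bd, C from the previous sketch:
# vi_k = deadbeat_input(Ad, Bd, C, x_k=np.zeros(3), vG_k=144.0, r_next=150.0)
```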
Dead-Beat Control Cooperating with State Observer for Single-Phase ES

The execution of the dead-beat control could be quite demanding. To avoid the shortcoming of significantly increasing the maximum width of the voltage pulses at the VSI output, it is convenient to calculate vi(k) one control period ahead. This implies that vi(k) must be calculated in the (k − 1) control period by resorting to an estimation of the associated variables. A good tool to estimate the variables of a system is a state observer [18], as shown in Figure 4a,b. Here, a full-order observer is used, where H is the observer gain matrix and the error vector of the state observer is e(k) = x(k) − x̂(k).

The dynamic properties of the error vector depend on the eigenvalues of the matrix (A − HC). For a stable (A − HC), the error vector tends to zero for any initial error vector e(0). In other words, x̂(k) will converge to x(k) regardless of the values of x(0) and x̂(0). If the control system is completely observable, it can be proven that H can be chosen so that (A − HC) is asymptotically stable and has the desired dynamics, so that the error vector moves toward zero (the origin) at a speed that is fast enough. This outcome is achieved by a suitable placement of the eigenvalues of (A − HC). In practice, the poles of the observer are placed three to five times farther away from the imaginary axis than the poles of the control system.
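A minimal numerical sketch of this observer design is given below, reusing the discrete-time placeholder model from the earlier sketches. It maps the rule quoted above (observer poles three to five times farther from the imaginary axis than the control-system poles) into the z-domain through z = exp(s·Ts); the speed-up factor of 4, the control period and the pole values are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import place_poles

def design_observer_gain(Ad, C, controller_poles_ct, Ts, speedup=4.0):
    """Choose H so that the eigenvalues of (Ad - H C) correspond to the controller
    poles pushed 'speedup' times farther into the left half-plane, mapped to
    discrete time via z = exp(s * Ts). For a single output the desired poles must
    be distinct, with complex ones given in conjugate pairs."""
    desired_z = np.exp(np.asarray(controller_poles_ct) * speedup * Ts)
    placed = place_poles(Ad.T, C.T, desired_z)   # observer design by duality
    return placed.gain_matrix.T                  # H has shape (n_states, 1)

def observer_step(Ad, Bd, C, H, x_hat, u, y_meas):
    """One full-order observer update:
    x_hat(k+1) = Ad x_hat(k) + Bd u(k) + H (y(k) - C x_hat(k))."""
    innovation = y_meas - (C @ x_hat)[0]
    return Ad @ x_hat + Bd @ u + H[:, 0] * innovation
```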
Simulation and Discussions

To test the arranged control system, simulations are conducted in the Matlab/Simulink environment for the study case of an ES with the data reported in Table 1. Both of the compensation functions mentioned above, i.e., the pure reactive power compensation and the PFC, have been implemented in the control system for a thorough evaluation of its performance. The control specifications are: (1) the CL voltage is regulated to 110 V; (2) for the pure reactive power compensation, the phase angle between the ES current and the ES voltage is 90°, while for PFC the voltage vG is in phase with the line current.

Flowchart of Dead-Beat Control for δ Control

In the Matlab function block, the program flowchart for the dead-beat control and the δ control is depicted in Figure 5, which can be summarized as follows.
• Execute vi(k − 1) to the VSI within the single-phase ES

Pure Reactive Power Compensation Mode

In this part, the simulations are divided into two steps, of which one is under ideal grid conditions and the other is with grid distortions.

(1) Ideal Grid Conditions: The parameters for the simulation are the same as in Table 1 and there is no distortion on the line voltage. Figure 6 shows the simulation waveforms under ideal grid conditions when the dead-beat and δ control are both applied to the single-phase ES. As explained in Reference [6], three typical values, 102 V, 115 V, and 123 V, are selected to simulate the capacitive mode, the resistive mode, and the inductive mode, respectively. In each subfigure, four channels are recorded: the line voltage, CL voltage, ES voltage, and NCL voltage. Figure 6a is an overview of the full time range including the three different modes. In Figure 6b, when VG is set to 102 V, the ES current, which is in phase with the NCL voltage, leads the ES voltage by 90°, as can be observed at 0.1832 s, meaning that the ES operates in the capacitive mode. In Figure 6c, when VG is set to 115 V, the ES voltage is very low and the NCL voltage is almost the same as the CL voltage, meaning that the ES operates in the resistive mode. In Figure 6d, when VG is set to 123 V, the ES current lags the ES voltage by 90° at 0.5873 s, meaning that the ES operates in the inductive mode. It can also be observed from Figure 6b–d that the CL voltages are well regulated to be sinusoidal and that the RMS values stay very close to 110 V. The results above validate that the two control objectives under the pure reactive power compensation mode and under ideal grid conditions have been achieved with the proposed dead-beat and δ control.

(2) Grid Voltage Distorted: The parameters for the simulation are the same as in Table 1. The only difference is that the grid voltage is set as follows. From 0 to 0.3 s, the nominal value of VG is set to 102 V without any distortion. From 0.3 s to 0.6 s, the 3rd, 5th, and 7th harmonic components are added to the fundamental element, with amplitudes of 20 V, 10 V, and 5 V, respectively. As a result, the THD value of vG is up to 22.46%, which is the same as that in Figure 2c.

Figure 7a shows the simulation waveforms before and after the grid distortion. Before 0.2 s, the ES operates in the capacitive mode and both the ES voltage and the CL voltage are sinusoidal. However, after 0.3 s, it is seen that the distortion on the line voltage has been passed to the NCL voltage by the ES. It can also be seen that the CL voltage is regulated well during the full time range. Figure 7b shows that the THD value of the CL voltage is controlled to 1.54%, which is far smaller than that obtained with the PR and P controllers. It should be noticed that when the grid voltage is distorted, the RMS value of its fundamental component is used for the δ calculation. If δ is not accurate, the RMS value of the PCC voltage will not be affected; only the ES will deviate a little from the pure reactive power compensation mode.
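The THD figure quoted above can be checked directly from the harmonic content used in the simulation: with a 102 V fundamental and 3rd, 5th and 7th components of 20 V, 10 V and 5 V (all taken in the same units), the defining ratio gives 22.46%. The short sketch below verifies this; the 50 Hz fundamental frequency assumed for the illustrative waveform synthesis is not stated in the text.

```python
import numpy as np

V1, V3, V5, V7 = 102.0, 20.0, 10.0, 5.0               # fundamental and harmonic magnitudes
thd = np.sqrt(V3**2 + V5**2 + V7**2) / V1
print(f"THD of vG = {100 * thd:.2f} %")                # -> 22.46 %

# Illustrative synthesis of the distorted grid voltage applied from 0.3 s to 0.6 s
f0 = 50.0                                              # assumed fundamental frequency
t = np.arange(0.0, 0.06, 1e-5)
vG = np.sqrt(2) * (V1 * np.sin(2 * np.pi * f0 * t)
                   + V3 * np.sin(2 * np.pi * 3 * f0 * t)
                   + V5 * np.sin(2 * np.pi * 5 * f0 * t)
                   + V7 * np.sin(2 * np.pi * 7 * f0 * t))
```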
PFC Mode

The purpose of this part is to double-check the performance of the proposed dead-beat control cooperating with the observer and the δ control. The parameters are the same as in Table 1 and the flowchart is also the same as in Figure 5. The difference between the PFC mode and the pure reactive power compensation mode lies in the code for the δ calculation, which is explained in detail in Reference [2]. The simulation is divided into two time intervals. In the first time interval, from 0 to 0.2 s, VG is set to 108 V. In the second time interval, from 0.2 s to 0.4 s, VG is set to 112 V. The simulation results are shown in Figure 8, where four channels are recorded. The line voltage and the line current are recorded together in the first channel to show the effectiveness of the PFC.

In Figure 8, it is clearly seen that the zero-crossing points in the first channel coincide and are in the same direction, which means that the line current is controlled in phase with the line voltage, even if VG changes. The CL voltage in the second channel is controlled at 110 V. The ES voltage and NCL voltage are recorded in the third and fourth channels, respectively. It is noticed that the phase angle between the ES voltage and the ES current is not strictly 90°. For instance, at 0.105 s, when VG is 108 V, the ES current, which is in phase with the NCL voltage, leads the ES voltage by more than 90°, meaning that the ES provides some active power. At 0.365 s, when VG is 112 V, the ES current leads the ES voltage by less than 90°, meaning that the ES absorbs some active power. The reason is that only two control objectives can be achieved at the same time, of which one is the CL voltage and the other is the PFC function.

The effectiveness of the proposed dead-beat and δ control has been validated by the simulation results above.

Sensitivity Analysis of Circuit Parameters

In order to verify the influence of the circuit parameters on the controller, a sensitivity analysis is carried out in Matlab/Simulink, as shown in Figure 9, where only one parameter is scanned in each subfigure.
Discussions

It should be noticed that the CL voltage is not the only control objective of the ES. It is easy to regulate the RMS value of the CL voltage to follow the predefined value. However, the key point is the compensation mode of the ES, which should be monitored. Another finding is that the proposed dead-beat controller, with the help of an observer, has the obvious advantage of eliminating harmonic components in the CL voltage compared to the PR and P controllers. To increase the control precision and to make the δ control more practical, it is necessary to use the proposed control to replace the existing PR and P controls.

Conclusions

In this paper, dead-beat control cooperating with a state observer for the state variables is proposed to work with the existing δ control for single-phase ESs. The system modeling is carried out and the discrete-time state-space model is obtained. The operating principle and the design process of the dead-beat control together with the observer are illustrated. Pure reactive power compensation and power factor correction, which represent two typical operating modes of the ESs, are selected as examples to validate the proposed control and the related analysis. By comparing the proposed control with the existing controllers based on the same δ control algorithm, it is revealed that the proposed control has the obvious advantage of eliminating harmonic components in the CL voltage during grid voltage distortion.

Figure 1. The typical topology of a single-phase ES embedded in a power system. ES: electric springs; PCC: point of common coupling.

Figure 2. The existing δ control setup of the single-phase ES. (a) Control scheme. (b) Vector diagram of the δ control in the capacitive mode as an example. (c) Fast Fourier Transformation (FFT) analysis of grid voltage and critical load (CL) voltage under distorted conditions. PR: proportional resonant; THD: total harmonic distortion; SPWM: sinusoidal pulse width modulation; RMS: root mean square.
Figure 3. The proposed dead-beat control based on the δ control.

Figure 4. The proposed dead-beat control cooperating with state observer. (a) Full control diagram; (b) Diagram of the state observer adopted in the control.

Figure 5. The program flowchart of dead-beat control and δ control.

Figure 6. The simulation waveforms based on the dead-beat and δ control under the ideal grid conditions and pure reactive power compensation mode. (a) Three operating modes at full time ranges; (b) Capacitive mode @ VG = 102 V; (c) Resistive mode @ VG = 115 V; (d) Inductive mode @ VG = 123 V.

Figure 7. The simulation waveforms based on the dead-beat with observer and δ control under the pure reactive power compensation mode. (a) Comparison between ideal grid condition and distorted condition; (b) FFT analysis of line voltage and critical load voltage under distorted grid condition.

Figure 8. The simulation waveforms based on dead-beat and δ control under the power factor correction (PFC) mode.

Figure 9. The sensitivity analysis based on dead-beat with the observer and δ control. (a) CL varies; (b) non-critical load (NCL) varies; (c) Line resistance varies; (d) Line inductance varies.

Figure 9a–d show the results of sensitivity analysis on CL, NCL, line resistance, and line inductance, respectively. It is seen that the variations of the CL, line resistance, and line inductance have negligible effects on the controller. Although the NCL has more effect compared with the others, it can be ignored since the values of NCLs are almost fixed in real applications. The results have verified that the proposed controller has good robustness, and also that the CL voltage can be regulated well by the ES.
Table 1. The study case data. smart load; ES: Electric springs; CL: critical load; NCL: non-critical load; DC: direct current.
v3-fos-license
2022-04-26T06:48:26.677Z
2022-04-25T00:00:00.000
248377627
{ "extfieldsofstudy": [ "Mathematics" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1112/jlms.12771", "pdf_hash": "bfb0ed85bb63f3076c0f52df14d75f81d0fb85be", "pdf_src": "Arxiv", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43917", "s2fieldsofstudy": [ "Mathematics" ], "sha1": "bfb0ed85bb63f3076c0f52df14d75f81d0fb85be", "year": 2022 }
pes2o/s2orc
Composition operators on weighted Hilbert spaces of Dirichlet series We study composition operators of characteristic zero on weighted Hilbert spaces of Dirichlet series. For this purpose we demonstrate the existence of weighted mean counting functions associated with the Dirichlet series symbol, and provide a corresponding change of variables formula for the composition operator. This leads to natural necessary conditions for the boundedness and compactness. For Bergman-type spaces, we are able to show that the compactness condition is also sufficient, by employing a Schwarz-type lemma for Dirichlet series. Introduction For a ≤ 1 we define the weighted Hilbert space D a of Dirichlet series as The space D 0 coincides with the Hardy space H 2 of Dirichlet series with square summable coefficients, which was systematically studied in an influential article of Hedenmalm, Lindqvist, and Seip [13].For a < 0 we refer to D a as a Bergman space and for a > 0 as a Dirichlet space, see [18].By the Cauchy-Schwarz inequality, D a is a space of analytic functions in the half-plane C 1 2 , where C θ = {s ∈ C : Re s > θ}.Therefore, if ψ : C 1 2 → C 1 2 is an analytic function, the composition operator C ψ (f ) = f •ψ defines an analytic function in C 1 2 for every f ∈ D a .Gordon and Hedenmalm [12] determined the class G of symbols which generate bounded composition operators on the Hardy space H 2 .The Gordon-Hedenmalm class G consists of all functions ψ(s) = c 0 s + ϕ(s), where c 0 is a non-negative integer, called the characteristic of ψ, and ϕ is a Dirichlet series such that: (i) If c 0 = 0, then ϕ(C 0 ) ⊂ C 1 2 .(ii) If c 0 ≥ 1, then ϕ(C 0 ) ⊂ C 0 or ϕ ≡ iτ for some τ ∈ R. We will use the notation G 0 and G ≥1 for the subclasses of symbols that satisfy (i) and (ii), respectively.In either case, the mapping properties of ϕ and Bohr's theorem imply that the Dirichlet series ϕ necessarily has abscissa of uniform convergence σ u (ϕ) ≤ 0, see [23,Theorem 8.4.1]. By what is essentially the original argument of Gordon and Hedenmalm, the condition that ψ ∈ G is necessary for a composition operator C ψ : D a → D a to be bounded.In the Bergman case a < 0, this is also known to be sufficient [2,3].When ψ ∈ G 0 , the proof of boundedness of C ψ : D a → D a , a < 0, due to Bailleul and Brevig [3], has a rather serendipitous flavor.In Section 3 we will supply a more systematic proof based on a Schwarz lemma for Dirichlet series, Lemma 3.4.Beyond this, we will focus on composition operators induced by symbols ϕ ∈ G 0 .The compact operators C ϕ : H 2 → H 2 were characterized only very recently in [10], in terms of the behavior of the mean counting function Re s, w = ϕ(+∞). The main purpose of this article is to explore analogous tools and results in the weighted setting.From Carlson's theorem [13,Lemma 3.2] one deduces the following formula of Littlewood-Paley type, (1) f valid for f ∈ D a such that σ u (f ) ≤ 0. 
From this point of view, the space D a is analogous to the weighted Hilbert space D α , consisting of those holomorphic functions g on the unit disk such that (2) where α = 1 − a ≥ 0 and dA(z) = dx dy, z = x + iy.By the results of [16,20,26], a holomorphic self-map of the unit disk φ : D → D induces a compact composition operator on D α , α > 0, if and only if (3) lim where for α = 1, N φ,1 is the classical Nevanlinna counting function and for α = 1, N φ,α is the generalized Nevanlinna counting function A key step in the disk setting is to introduce a non-injective change of variables in (2), resulting in what is known as a Stanton formula.In our setting, for ϕ ∈ G 0 , making the change of variables in (1) yields that For a Dirichlet series ϕ with abscissa of uniform convergence σ u (ϕ) ≤ 0, we therefore introduce the weighted mean counting functions if these limits exist. Jessen and Tornehave [15,Theorem 31] studied the unweighted counting function M ϕ,0 (w, σ) in the context of Lagrange's mean motion problem.They proved that the counting function exists for σ > 0 and w = ϕ(+∞), and that it satisfies On the basis of this and Littlewood's lemma, it was demonstrated in [10] that the weighted mean counting function M ϕ,1 (w, σ) also exists for σ > 0 and w = ϕ(+∞).Additionally, if ϕ belongs to the Nevanlinna class of Dirichlet series N u , that is, σ u (ϕ) ≤ 0 and In Section 4 we will investigate the existence of the weighted mean counting functions M ϕ,a . Theorem 1.1.For a ∈ R, let ϕ be a Dirichlet series such that σ u (ϕ) ≤ 0 and ϕ(+∞) = w.Then the counting function M ϕ,a (w, σ) exists and is right-continuous on σ > 0. Furthermore, For σ ∞ > 0 sufficiently large, depending on ϕ and w, we also have that In Theorem 4.8 we will furthermore obtain the integral representation of the weighted mean counting function, where dm ∞ denotes the Haar measure on the infinite polytorus T ∞ , and ϕ χ denotes the Dirichlet series ϕ twisted by the character χ ∈ T ∞ , see Section 2. In the case that a ≥ 1, we are from this formula able to deduce that That is, it is almost surely possible to interchange the T -and σ-limits in the definition of M ϕ,a (w).When a = 1, this partially resolves [10,Problem 1]. In Section 5 we then prove the analogue of the Stanton formula. We use Theorem 1.2 to characterize the compact composition operators in the Bergman setting. 
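For orientation, one natural way to write the weighted mean counting functions introduced above—chosen so that M_{ϕ,0} reduces to the unweighted counting function of Jessen and Tornehave and M_{ϕ,1} to the Hardy-space mean counting function of [10]; the precise normalization should be checked against those sources—is the following, with preimages counted according to multiplicity:

$$
M_{\varphi,a}(w,\sigma)=\lim_{T\to\infty}\frac{\pi}{T}\sum_{\substack{s\in\varphi^{-1}(\{w\})\\ \operatorname{Re} s>\sigma,\ |\operatorname{Im} s|<T}}(\operatorname{Re} s)^{a},
\qquad
M_{\varphi,a}(w)=\lim_{\sigma\to 0^{+}}M_{\varphi,a}(w,\sigma).
$$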
Theorem 1.3.Let ϕ ∈ G 0 .Then the induced composition operator C ϕ is compact on the Bergman space D −a , a > 0, if and only if (8) lim In addition to the change of variable formula, our Schwarz-type lemma, Lemma 3.4, is essential to proving the sufficiency of (8).In this context, we note that Bayart [5] recently showed that the condition lim Re s = ∞ is sufficient, but not necessary, for the operator Finally, we consider the Dirichlet-type spaces D a for 0 < a < 1.We prove that the analogue of (8) remains necessary for the composition operator to be compact, and we give an analogous necessary condition for boundedness.In Example 5.7 we observe that this condition is not sufficient for the operator to be bounded, at least not when a ≥ 1/2.Theorem 1.4.Suppose that 0 < a < 1 and let ϕ ∈ G 0 .If the operator C ϕ is bounded on the Dirichlet space D a , then for every δ > 0 there exists a constant C(δ) > 0 such that lim Re w→ In the special case where the symbol ϕ has bounded imaginary parts and the associated counting function is locally integrable, we can also prove that ( 9) is sufficient for the composition operator C ϕ to be bounded, and that (10) is sufficient for a bounded composition operator C ϕ to be compact. Notation.Throughout the article, we will employ the convention that C denotes a positive constant which may vary from line to line.When we wish to clarify that the constant depends on some parameter P , we will write that C = C(P ).Furthermore, if A = A(P ) and B = B(P ) are two quantities depending on P , we write A ≈ B to signify that there are constants c 1 , c 2 > 0 such that c 1 B ≤ A ≤ c 2 B for all relevant choices of P . Background material 2.1.The infinite polytorus and vertical limits.The infinite polytorus is defined as the (countable) infinite Cartesian product of copies of the unit circle T, It is a compact abelian group with respect to coordinate-wise multiplication.We can identify the Haar measure m ∞ of the infinite polytorus with the countable infinite product measure m × m × • • • , where m is the normalized Lebesgue measure of the unit circle. By the prime number theorem, T ∞ is isomorphic to the group of characters of (Q + , •).Given a point χ = (χ 1 , χ 2 , . . . ) ∈ T ∞ , the coresponding character χ : Q + → T is the completely multiplicative function on N such that χ(p j ) = χ j , where {p j } j≥1 is the increasing sequence of primes, extended to an n s is a Dirichlet series and χ(n) is a character.The vertical limit function f χ is defined as The name comes from Kronecker's theorem [7]; for any ǫ > 0, there exists a sequence of real numbers {t j } j≥1 such that f (s + t j ) → f χ (s) uniformly on C σu(f )+ǫ . 
If f ∈ D a , then the abscissa of convergence satisfies σ c (f χ ) ≤ 0 for almost every χ ∈ T ∞ .This is a consequence of the Rademacher-Menchov theorem [30, Ch.XIII], following an argument of [4].Finally, we note that if ψ(s) = c 0 s + ϕ(s) ∈ G, and we set then for every χ ∈ T ∞ we have that (11) ( 2.2.The hyperbolic metric and distance.The classical Schwarz-Pick lemma states that for every holomorphic self-map of the unit disk φ : D → D and for any z ∈ D, Equality holds in (12) for one point z 0 ∈ D, and consequently for all points, if and only if φ is a holomorphic automorphism of the unit disk.The hyperbolic metric and distance in the unit disk are defined respectively as where the infimum is taken over all piecewise smooth curves γ in D that join z and w.The Schwarz-Pick lemma implies that every holomorphic self-map of the unit disk is a contraction of the hyperbolic distance, (13) λ where z, w ∈ D. If equality holds in (13) for one point, or in (14) for a pair of distinct points, then φ is a holomorphic automorphism of the unit disk, and thus an isometry.Using the conformal invariance of the hyperbolic distance, one can prove that The Riemann mapping theorem allows us to transfer these notions to any simply connected proper subdomain Ω of the complex plane.More precisely, let f be a Riemann map from Ω onto the unit disk.Then where the infimum is taken over all piecewise smooth curves γ in Ω that join z and w.By the Schwarz lemma it is easy to prove that λ Ω and d Ω are independent of the choice of the Riemann map.In the case of the right-half plane, considering the Riemann map where z, w ∈ C 0 . The following Schwarz-Pick lemma for simply connected domains is a direct consequence of the definition and the ordinary Schwarz-Pick lemma. Theorem 2.1 ([6] ). Suppose that Ω 1 and Ω 2 are simply connected proper subdomains of the complex plane and that f : Ω 1 → Ω 2 is a holomorphic function.Then, for every z, w ∈ Ω 1 , Furthermore, equality holds in (15) for one point, or in (16) for a pair of distinct points, if and only if f is a biconformal map from Ω 1 onto Ω 2 . Bounded composition operators on Bergman spaces of Dirichlet series Consider the maps T β (z) = β 1−z 1+z , β > 0, and S θ (z) = z + θ, θ > 0, taking the unit disk D onto C 0 and the half-plane C 0 onto C θ , respectively.Following [12], the space We recall the following two lemmas. The Cauchy-Schwarz inequality shows that point evaluations in C 1 2 are bounded on D a .We record the following statement for easy reference.Lemma 3.3.Let a ≤ 1 and δ > 0. Then there exists a constant C = C(a, δ) such that for every Our next goal is to establish a kind of Schwarz lemma for Dirichlet series.Note that the Schwarz-Pick lemma for the hyperbolic distance implies that lim inf for any holomorphic self-map φ of D, see [8,Lemma 1.4.5].The corresponding inequality does not hold for all self-maps of the right half-plane.However, for a Dirichlet series ϕ ∈ G 0 we will prove that This implies a quantitative version of [12,Prop. 4.2].Namely, that for sufficiently small ǫ > 0, The key idea in proving ( 17) is to exploit the vertical translations of ϕ ∈ G 0 to restrict the limit to a half-strip, where the quantity in (17) can be shown to be uniformly bounded from below by virtue of Theorem 2.1. 
Proof.We consider the vertical translations ϕ(s + it) = ϕ χt (s), where By the Schwarz-Pick lemma and the triangle inequality for the hyperbolic distance, we find from here that The crucial step is to note that the quantity d C0 (ϕ χt (1) − 1 2 , 1) is uniformly bounded, since ϕ maps the line Re z = 1 into a compact subset of C 1/2 .Given s ∈ C 0 , we can therefore choose z = Re s and t = Im s to obtain that which is the desired inequality. We next recall Littlewood's subordination principle, which implies that any holomorphic self-map of the unit disk generates a bounded composition operator on the Hardy space H 2 (D).Lemma 3.5 ([17, 27]).Suppose φ is a holomorphic self-map of the unit disk D.Then, for every We also borrow the following lemma from [12]. Lemma 3.6 ([12] ).Let a ≤ 1 and let {p j } j≥1 be the increasing sequence of primes.Then, the function f (s) = j≥1 a pj p −s j , with coefficients a pj = √ p j log(p j ) 1+ a 2 −1 , satisfies the following: As promised in the introduction, we now provide a proof of the characterization of the bounded composition operators on the Bergman spaces D a , a ≤ 0, which is new for Dirichlet series symbols.To do so, we will combine the original argument of Gordon and Hedenmalm [12] with the Schwarz lemma for Dirichlet series.Proof.It was essentially already proven in [12] that it is necessary that ψ ∈ G in order for C ψ : D a → D a to be bounded.Indeed, by [23,Theorem 8.3.1],P • ψ is a Dirichlet series for every polynomial P if and only if the symbol ψ : C 1 2 → C 1 2 has the form ψ(s) = c 0 s + ϕ(s), where c 0 is a non-negative integer and ϕ is a Dirichlet series.The mapping properties of ψ are deduced from the composition rule (11) and Lemma 3.6, noting that ϕ where f σ = f (• + σ).First we will consider the case when ψ(s) = c 0 s + ϕ(s) ∈ G ≥1 .In this case the analogue of the Schwarz lemma is trivial: Re s ≤ Re ψ(s).For a Dirichlet polynomial f and a positive number β > 0, we define the functions where η = c 0 (β + σ) − σ.Note that g β (0) → 0 as β → ∞.By Lemma 3.1 and Lemma 3.5, we have which demonstrates that the composition operator is bounded in this case.Suppose next that ϕ ∈ G 0 .By a vertical translation of the argument f , there is no loss of generality in assuming that ϕ(+∞) > 1/2.By Lemma 3.4 there exists a constant λ = λ(ϕ) > 0 such that In this case, for a Dirichlet polynomial f and positive numbers β > 0, we define the functions . Then we again have that lim β→∞ g β (0) = 0, and By Lemma 3.2 we conclude that there is a constant such that For σ ≥ δ, we simply note that ϕ σ (C 0 ) ⊂ C 1/2+ε for some ε > 0, and therefore by the Cauchy-Schwarz inequality that sup Hence C ϕσ (f ) 0 ≤ C f −a , as can be seen for example from Carlson's theorem, see [13,Lemma 3.2].We conclude that Remark.Using the same argument one can prove that Theorem 3.7 holds for all Bergman-like spaces of Dirichlet series [18], assuming that the coefficients are of the form where µ is a probability measure on (0, ∞) with 0 ∈ supp(µ) and satisfying Every symbol ψ ∈ G ≥1 induces a contraction C ψ on D µ , even without the condition (19). Weighted mean counting functions In this section, we will investigate the properties of the weighted counting function M ϕ,a (w, σ), σ > 0, where ϕ is a Dirichlet series with abscissa of convergence σ u (ϕ) ≤ 0. 
Firstly, we will prove the existence of this function, generalizing [10,Theorem 6.2].Monotonicity then ensures the existence of the limit function M ϕ,a (w) (finitely or infinitely).Secondly, following the ideas of Aleman [1] and Shapiro [26] from the disk case, we will give a weak version of the submean value property for the weighted counting function M ϕ,a (w), a > 0. Note that the (strong) submean value property of M ϕ,1 (w) was proven in [10, Lemma 6.5]. 4.1.Existence.In [10], the existence of M ϕ,1 (w, σ) was established through Littlewood's lemma [28, Sec.9.9], which is a rectangular version of Jensen's formula [26,Sec. 10.2].We will replace Littlewood's lemma with the following theorem, which allows us to count the zeros of a non-zero holomorphic function in an arbitrary domain.In the special case that u = log |f |, where f ≡ 0 is a holomorphic function on the domain Ω, the measure 1 2π ∆u is the sum of Dirac masses at the zeros of f , counting multiplicity.The almost periodicity of the Dirichlet series ϕ in C σ0 , σ 0 > 0, implies an argument principle for the unweighted counting function M ϕ,0 , see [15].Lemma 4.2.Suppose that ϕ is a Dirichlet series with abscissa of uniform convergence σ u (ϕ) ≤ 0. If ϕ(+∞) = 0, {Re s = σ 0 } is a zero-free line for the function ϕ and {T j } j≥1 is an increasing sequence of positive real numbers, relatively dense in [0, +∞), such that Proof.Let σ ∞ > 0 be such that the equation ϕ(s) = 0 has no solution for Re s ≥ σ ∞ .We will denote by R j the rectangle with vertices at σ 0 ± iT j , σ ∞ ± iT j .By the argument principle, we then have that We observe that the first coefficient of the Dirichlet series f = ϕ ′ ϕ satisfies f (+∞) = 0. Thus, letting T j → ∞ and then σ ∞ → ∞ follows that We begin by proving a special case of Theorem 1.1. Theorem 4.3.Let ϕ be a Dirichlet series such that σ u (ϕ) ≤ 0, and let w = ϕ(+∞) be such that {Re s = σ 0 } is a zero free line for the function ϕ − w.Then, for every a ∈ R, the counting function M ϕ,a (w, σ 0 ) exists and satisfies Furthermore, for sufficiently large σ ∞ > 0, where J ϕ−w is the Jessen function (4). Proof.Without loss of generality we assume that w = 0.By almost periodicity there exists an increasing sequence {T j } j≥1 of positive real numbers, relatively dense in [0, +∞), such that for every σ ≥ σ 0 , Let σ ∞ > 0 be so large that that ϕ = 0 in C σ∞ 2 .Then ∆ log |ϕ| = 0 near the boundary of the rectangle R j with vertices at σ 0 ± iT j , σ ∞ ± iT j . By a C ∞ version of Urysohn's lemma [11,Theorem 8.18] there exists a function On the other hand, Green's theorem implies that where ζ = x + iy.For the first line integral on the right-hand side, we have that From Lemma 4.2, dividing through by 2T j and letting j → ∞, we obtain that Writing out the second line integral, we have that where J φ (σ ∞ ) = log |φ(+∞)| by [15,Theorem 31]. We apply Fubini's theorem to the area integral, We conclude that This finishes the proof of ( 21 Finally, the Jessen function J ϕ (σ) is convex and, as a consequence, absolutely continuous on every closed sub-interval of the positive semi-axis.Thus, we can integrate by parts, yielding that which is (20). Before proving Theorem 1.1, we extract the following technical lemma from the work of [10, Lemma 2.4].Lemma 4.4.Let ϕ be a Dirichlet series such that σ u (ϕ) ≤ 0 and ϕ(+∞) = 0. 
Then for every σ 0 > 0, and for T > 0 sufficiently large, there exists a constant C(σ 0 , ϕ) > 0 such that, (22) π T Proof.Let Θ denote the unique conformal map from the unit disk to the half-strip with Θ(0) = 1 and Θ ′ (0) > 0. We observe that , and that there exists absolute constants δ 1 , δ 2 > 0 such that Thus, there exists an absolute constant C 0 such that For T > 0 we will denote by S T the half-strip T S 1 and by Θ T : D → S T the map Θ T = T Θ.We consider the function ϕ σ 0 2 (s) = ϕ(s+ σ 0 ) M , where M = sup Then, for T so large that the equation ϕ(s) = 0 has no solutions for Re s > T , we have by ( 23) that where ψ 2T = ϕ σ 0 2 • Θ 2T and N ψ2T is the classical Nevanlinna counting function.Thus, the Littlewood inequality [26], We can now give the proof of Theorem 1.1. Proof of Theorem 1.1.Without loss of generality we can assume that w = 0. Let ω(σ) = σ a .Then, where σ ∞ > 0 is such that the equation ϕ(s) = 0 has no solutions in C σ∞ .By the dominated convergence theorem, which applies in light of (22), and then Fubini's theorem, we obtain that This proves the existence of the function M ϕ,a (0, σ 0 ) and ( 5).The right continuity of M ϕ,a (0, σ) is now a consequence of the right continuity of M ϕ,0 (0, σ), see [10,Lemma 5.1].Integrating by parts as in the proof of Theorem 4.3, we also obtain (6). Strictly speaking, this argument is independent of Theorem 4.3.However, we find the proof of Theorem 4.3 to be illuminating and interesting in its own right.Note that the proof for Theorem 1.1 can also be applied to the more general counting function induced by a twice continuously differentiable weight ω(s) = ω(Re s) on (0, ∞), By monotonicity we deduce the following.M ϕ,a (w, σ) exists, finitely or infinitely. The limit is not finite in general for a ≥ 0, as we now exemplify. Example 4.6.Applying the transference principle [24] to the example constructed by Zorboska in [29], we obtain a Dirichlet series ϕ such that the a-weighted counting function is finite if and only if a > 1 2 .More precisely, we consider the Dirichlet series ϕ(s) = g(2 −s ), where g(z) = e − 1+z 1−z , z ∈ D. We observe that ϕ is a periodic function (with period ip = 2πi log( 2) ) and abscissa of uniform convergence σ u (ϕ) ≤ 0. Let w ∈ g(D) \ {g(0)} = ϕ(C 0 ) \ {ϕ(+∞)}.The periodicity implies that where [x] is the integer part of the real number x.Note that Writing w = e −b e iθ , where b > 0 and θ ∈ [0, 2π), so that We thus have This shows that M ϕ,a (w) = ∞ for all w ∈ D and a ≤ 1 2 .4.2.Weighted mean counting functions as integrals.The purpose of this subsection is to replace the limiting processes in the definition of M ϕ,a with integration.For a ≥ 1, this allows us to show that it is almost always possible to directly take σ = 0 in the definition of the weighted mean counting function. Lemma 4.7.Let a ∈ R and let ϕ be a Dirichlet series such that σ u (ϕ) ≤ 0 and ϕ(+∞) = w.Then, for every σ > 0, the weighted mean counting function is invariant under vertical limits, that is, Proof.The statement holds for the Jessen function, see [14, Satz A] or [10, Lemma 4.1], Thus, for the unweighted counting function, we have that . By Theorem 1.1 it follows that every weighted mean counting function M ϕχ,a (w, σ) is invariant under vertical limits. 
Of course we may let σ → 0 + to obtain that M ϕ,a (w) = M ϕχ,a (w) for every χ ∈ T ∞ .Theorem 4.8.Let ϕ be a Dirichlet series such that σ u (ϕ) ≤ 0 and ϕ(+∞) = w.Then, for every a ∈ R the weighted mean counting function can be written as Proof.For fixed σ > 0, almost periodicity and Hurwitz's theorem imply that M ϕχ,a (w, σ, 1) is uniformly bounded in χ ∈ T ∞ , cf. [15,Theorem 3].Thus, we can apply the Birkhoff-Khinchin theorem: for almost every character χ ′ ∈ T ∞ , it holds that Interchanging the order of integration and summation yields that Applying Lemma 4.7 and then letting σ → 0 + with the monotone convergence theorem, we obtain (25). From this argument we are also able to give a partial solution to [10, Problem 1]. Theorem 4.9.Let ϕ be a Dirichlet series such that σ u (ϕ) ≤ 0 and ϕ(+∞) = w.Then, for a ≥ 1 and almost every χ ∈ T ∞ , Proof.By [10, Lemma 2.4], applied in conjunction with Lemma 3.4 for a > 1, we have that M ϕχ,a (w, 0, T 0 ) ∈ L ∞ (T ∞ ) for all sufficiently large T 0 > 0. Applying the Birkhoff-Khinchin theorem as in the proof of Theorem 4.8, it holds for almost every However, exactly as in Theorem 4.8, we also have that where |D(w, r)| = πr 2 is the area of the disk.Shapiro [26,Section 4] proved that for every holomorphic self-map of the unit disk φ, the Nevanlinna counting function N φ satisfies the submean value property in D \ {φ(0)}.Kellay and Lefevre [16,Lemma 2.3] proved that for α ∈ (0, 1), the generalized Nevanlinna counting function N φ,α satisfies the submean value property in D \ {φ(0)}.In fact, this result follows directly from the submean value property of the classical Nevanlinna counting function and the following formula due to Aleman. Theorem 4.10 ([1] ).Let 0 < α < 1 and φ : D → D be holomorphic and non-constant.Then where ω a (z) = 1 − |z| 2 α and τ z (w) = z−w 1−zw .In the Hardy space case the mean counting function M ϕ,1 satisfies the submean value property [10, Lemma 6.5], for every Dirichlet series ϕ that belongs to the Nevanlinna class.For periodic symbols ϕ(s) = g(2 −s ), where g is a holomorphic self-map of the unit disk, we also know that M ϕ,a (w) satisfies the submean value property for all a ∈ (0, 1), by an application of Theorem 4.10. This subsection is devoted to proving the following. Theorem 4.11.Let ϕ be a Dirichlet series with σ u (ϕ) ≤ 0.Then, for every positive a > 0, there exists a constant C = C(a) > 0 such that for every disk D(w, r) that does not contain ϕ(+∞). For a = 0 the (unweighted) counting function does not satisfy (27), as can be seen from the following example.M ϕρ,0 (z) dA(z) = 0. Therefore Theorem 4.11 could not be true for a = 0. First we need the following lemma. Lemma 4.13.Suppose that Ω is a bounded subdomain of C and let φ : D → Ω be holomorphic. Conversely, again by the Koebe quarter theorem, there exists an absolute constant whenever σ < Re s < T and | Im s| < 2T .Thus, for z ∈ D(w, r) In summary, we have shown that for all sufficiently large T > 0 and z ∈ D(w, r), Since N ϕ•Θσ,2T ,a satisfies the submean value property by Lemma 4.13, we conclude that M ϕ,a (z, σ, 2T )dA(z). By Theorem 1.1 and ( 22) we can apply the dominated convergence theorem to let T → ∞, and then let σ → 0 + with the monotone convergence theorem, to obtain the desired property, for a constant C > 0 that depends only on a ∈ (0, 1]. 
We assume now that a > 1.By Tonelli's theorem we have, for every T > 0, σ > 0, and Applying the submean value property for a = 1, we thus find that This concludes the proof, by letting T → ∞ and then σ → 0 + in the same way as before. 5.1. Reproducing kernels.To obtain necessary conditions for a composition operator to be compact, we will make use of reproducing kernels.The reproducing kernel k s,a of D a at a point s ∈ C 1 2 is given by the equation For fixed a < 1, we have that as Re s → 1 2 + .We will also require slightly more detailed information about the behavior of the reproducing kernel, cf.[19, Lemma 3.1]. The Stanton formula. The proof of the analogue of the Stanton formula for the weighted spaces D a , a ≤ 1, relies on the work of [10] and a generalized version of the dominated convergence theorem. Proof of Theorem 1.2.As explained on [10, p. 10], it is a consequence of Bohr's theorem that for any f ∈ D a , the abscissa of uniform convergence satisfies σ u (f • ϕ) ≤ 0. By making a non-injective change of variables in the Littlewood-Paley formula (1), we thus have that We extract the following equation from the proof of [10, Theorem 1.3], lim Since both M ϕ,1−a (w, σ, T ) and M ϕ,1 (w, σ, T ) converge pointwise for σ > 0, and additionally, since for σ 0 < Re s < σ 1 there is a constant C > 0 such that (Re s) )) dA(w). By the monotone convergence theorem, letting σ 0 → 0 + and then σ 1 → +∞, we conclude that When a ≤ 0, Theorem 1.2, the boundedness of the composition operator C ϕ : D a → D a , and Theorem 4.11 allow us to deduce that the weighted counting function is finite. We are ready to prove Theorem 1.3. Proof of Theorem 1.3.We first assume that the operator C ϕ is compact on D −a .Suppose {s n } n≥1 ⊂ C 1 2 is an arbitrary sequence such that Re s n → 1 2 .We observe that the induced sequence of normalized reproducing kernels {K sn,−a } n≥1 converges weakly to 0, as n → ∞, and therefore (30) lim We conclude, by (30), that lim Conversely, we suppose that (8) holds and argue as in the proof of [10,Theorem 1.4].Let {f n } n≥1 be a sequence in D −a that converges weakly to 0, such that f n −a ≤ 1 for all n ≥ 1. Fix δ ∈ (0, 1).By Proposition 5.4 and (8), there exists for every ǫ > 0 a θ, 1 2 < θ < Remark.We can extract an alternative proof for the boundedness of C ϕ : D −a → D −a from the second half of the proof of Theorem 1.3. Composition operators on Dirichlet-type spaces.The proof of Theorem 1.4 is completely analogous to the proof of necessity in Theorem 1.3.We leave the details to the reader. The following example illustrates that the necessary condition (9) of Theorem 1.4 is not sufficient for the composition operator to be bounded on the Dirichlet space D a , a ≥ 1 2 . Theorem 5.8.Let 0 < a < 1, and suppose that ϕ ∈ G 0 has bounded imaginary part.If the counting function M ϕ,1−a is locally integrable and satisfies (9), then C ϕ is bounded on D a .In addition, if we assume that ϕ satisfies (10), then C ϕ is compact on D a . Proof.We present the proof of the first part of the theorem only.Let δ > 0 be small.By the hypothesis of local integrability and Lemma 3. Applying this with (9), we find that \D(ϕ(+∞),δ) In light of Theorem 1.2, this shows that C ϕ : D a → D a is bounded. From the proof it is clear that Theorem 5.8 also holds under milder decay assumptions on M ϕ,1−a (w) as | Im w| → ∞. Theorem 3 . 7 ( [2,3]).For a > 0, the class G determines all bounded composition operators on the Bergman space of Dirichlet series D −a . Theorem 4 . 
1 ([25]). Let u ≢ −∞ be a subharmonic function on a domain Ω in C. Then there exists a unique Radon measure ∆u on Ω such that for every compactly supported function v ∈ C∞(Ω), it holds that ∫_Ω v ∆u = ∫_Ω u ∆v dA.
v3-fos-license
2019-07-25T13:03:56.922Z
2019-07-23T00:00:00.000
198190816
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00467-019-04293-9.pdf", "pdf_hash": "764f900bae5e190bbcfe8f41b34cacdb62a578a5", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43919", "s2fieldsofstudy": [ "Medicine" ], "sha1": "43a3c17b67f680ef03d1b50fe34ab541466fe026", "year": 2019 }
pes2o/s2orc
Kidney volume, kidney function, and ambulatory blood pressure in children born extremely preterm with and without nephrocalcinosis Background Reduced kidney volume (KV) following prematurity is a proxy for reduced nephron number and is associated with the development of hypertension and end-stage renal disease in adults. We investigated whether extreme prematurity affects KV, function, and blood pressure in school-aged children and if nephrocalcinosis (NC) developed during the neonatal period had additional effects. Methods We investigated 60 children at a mean age of 7.7 years: 20 born extremely preterm (EPT < 28 weeks gestational age with NC (NC+)), 20 born EPT without NC (NC−), and 19 born as full-term infants (control). We measured KV by ultrasound, collected blood and urine samples to evaluate renal function, and measured office and 24-h ambulatory blood pressure (ABPM). Results Children born EPT had significantly smaller kidneys (EPT (NC+ NC−) vs control (estimated difference, 11.8 (CI − 21.51 to − 2.09 ml), p = 0.018) and lower but normal cystatin C–based glomerular filtration rate compared with control (estimated difference, − 10.11 (CI − 0.69 to − 19.5), p = 0.035). KV and function were not different between NC+ and NC− groups. Change in KV in relation to BSA (KV/BSA) from the neonatal period to school age showed significantly more EPT children with neonatal NC having a negative evolution of KV (p = 0.01). Blood pressure was normal and not different between the 3 groups. Fifty percent of EPT had a less than 10% day-to-night decline in ABPM. Conclusions Kidney growth and volume is affected by EPT birth with NC being a potential aggravating factor. Circadian blood pressure regulation seems abnormal in EPT-born children. Introduction Extreme prematurity interferes with nephrogenesis, leaving each individual with a finite number of nephrons which might increase the risk for reduced functional capacity and the later development of impaired renal function and hypertension [1][2][3]. Since Brenner et al. introduced the concept of low nephron number at birth leading to further loss of nephrons by hyperfiltration and glomerulosclerosis, multiple studies have described the association between low birth weight and kidney size, kidney function and blood pressure, mostly in adolescents and adults but also in children [1,[4][5][6]. There is insufficient data to conclude whether prematurity itself is the single adverse event or if additional factors during the neonatal period, such as extra-uterine growth restriction, nephrocalcinosis (NC), nephrotoxic drugs, sepsis, and hypotension, have additional effects on kidney function and the risk for later renal failure, renal hypertension, and overall cardiovascular morbidity and mortality. NC is defined as the pathological deposition of calcium crystals in the renal parenchyma. The incidence of neonatal NC varies between 7 and 64% depending on ultrasonographic criteria and is highest in the most premature infants [7][8][9]. Ultrasound has been found to be a sensitive and reliable method to detect NC [10]. The etiology of NC in preterm neonates has not been fully clarified. Furosemide has frequently been implicated as a causative f a c t o r d u e t o i t s h y p e r c a l c i u r i c e f f e c t [ 11 ] . Aminoglycosides, corticosteroids, and xanthenes have also been identified as potential risk factors for NC [12]. 
Besides nephrotoxic drugs, the preterm infants often experience severe infections, hypotensive crisis, and hypoxia as well as hemodynamic impairment by a persistent ductus arteriosus and/or its treatment, all of which potentially lead to transient or even permanent renal failure [13]. Nutrition is suspected to have an important impact on early postnatal kidney development and health. Recent research suggests that high protein intake, advocated for better growth velocity for extremely preterm (EPT) infants, might have a disadvantageous effect on the kidney. It has been suspected that this high-risk group could have difficulties in metabolizing the amount of protein given, leading to mild metabolic acidosis and possibly to hypercalciuria [14]. Also, the improvements in support of micronutrients to enteral and parenteral feeds including additional calcium, phosphate, and vitamin D bear a potential risk for an imbalance towards stone-promoting factors [15]. The current evidence indicates that neonatal-acquired NC resolves by 50% during the first year of life and to 75% by school age without having an impact on kidney function [11,16]. However, from the few studies focusing on this subject, it can be suspected that NC has a detrimental effect on the kidneys and therefore cardiovascular health later in life [9,17]. In this study, we investigate whether NC developed during the neonatal period in children born EPT has an impact on kidney volume and function at school age. Eventual effects on blood pressure are evaluated by 24-h blood pressure monitoring (ABPM). Subjects The study was approved 5by the Ethical Committee at Karolinska University Hospital. Written and oral consent was obtained from all parents and children. We identified 213 infants born before 28 weeks gestational age (GA) between 2008 and 2011 at the Karolinska University Hospital, Stockholm, Sweden (Fig. 1). Neonatal renal ultrasound was performed in 105 infants, but only 68 had traceable results and images. All neonatal investigations were performed by pediatric radiologists. NC was defined as hyperechogenic reflections in cortex and or medulla visualized in longitudinal and transverse projections. Of the investigated 68 infants, 34 were diagnosed with NC (NC+) during their late neonatal period and 34 infants showed no signs of NC (NC−). There was no history of hyperoxaluria, cystinuria, or any type of renal tubular acidosis or a history of antenatal or postnatal diagnosis for urogenital malformation in any patient. Twentythree families refused to participate and 4 children died after discharge from the neonatal unit. Twenty children with NC and 21 without NC during their neonatal period consented to participate in the study. The 172 non-participants (children without ultrasound investigation and children with ultrasound investigation but lost to follow-up or declined to participate or those with incomplete images for review) were not different from the participants with regard to perinatal characteristics. A total of 19 healthy children born at term with appropriate birthweight, without any congenital abnormalities, and with no history of kidney diseases selected from delivery room records were recruited as controls. All children were in good health at the time of the visits. 
Follow-up visit Clinical data was collected from the neonatal charts with particular attention to factors that could influence renal function, such as nephrotoxic substances: aminoglycosides, vancomycin, loop diuretics, thiazide diuretics, and antenatal and postnatal steroids. GA, birth weight, Apgar scores, intrauterine growth retardation, respiratory distress syndrome (RDS), bronchopulmonary dysplasia (BPD) as defined by NIH, acute kidney injury (AKI) defined and staged by the KDIGO guidelines, patent ductus arteriosus (PDA) requiring treatment, sepsis episodes (clinical and/or culture verified), necrotizing enterocolitis (NEC) Bell stage II or more, surgical interventions for NEC, retinopathy of the premature (ROP) grade III or higher (and or any plus disease), and intraventricular hemorrhage (IVH) or parenchymal hemorrhage as defined by Papile were documented. Small for gestational age (SGA) was defined as a birth weight < − 2 standard deviations (SD) according to Swedish reference data for normal fetal growth [18]. At the visit, patient and parental medical histories, as well as maternal and paternal height and weight, were registered. The same research nurse performed all anthropometric measurements for weight, height, head circumference, and waist circumference on the children, who were wearing light indoor clothing. Height was measured using a wall-mounted stadiometer. Waist circumference was measured midway between the lower rib margin and the iliac crest using a normal measuring tape. Body mass index (kg/m 2 ), body surface area (BSA = 0.007184 × Height 0.725 × Weight 0.425 ) [19] and waistto-height ratio were calculated. Office blood pressure was measured in all patients using an automated oscillometric device (GE Healthcare Dinamap Carescape V100). After a 30-min rest, three consecutive measurements were taken on the child's non-dominant arm with an appropriate cuff size (bladder width that was at least 40% of the arm circumference [20]). We used a SPACELABS 90217A (SpaceLabs Medical Inc., Redmond, Washington, USA) device for ABPM, using the same cuff size as for the office measurements. The device was programmed to register blood pressure every 20 min between 07:00 AM and 21:00 PM and every 60 min during the night (21:00 PM-07:00 AM). Due to compliance difficulties in 21 children (12/20 NC+, 9/20 NC−) with neuropsychiatric disorder (ADHD, autism, and autism spectrum), we accepted > 50% of successful readings instead of the standard > 75% for statistical analysis [21]. The rest periods were described by the patient's parents, and day and night measurements were adjusted accordingly. The non-dipper pattern was defined as nocturnal BP Systolic or BP Diastolic < 10% relative to the diurnal mean value [22]. Hypertension was defined as > 95th percentile according to gender and length. BP > 90th but < 95th percentile was defined as high normal BP [22]. Renal function estimation Morning urine samples were collected from all study participants and analyzed for sodium, potassium, creatinine, albumin, calcium, phosphate, magnesium and protein HC (α-1microglobulin) and immunoglobulin G. Sodium and potassium were measured using potentiometry with ion-selective electrodes (Cobas 8000, Cobas C ISE2, Roche, Basel Switzerland). Urine phosphate, calcium, and magnesium were measured using photometric technique (Cobas 8000 Cobas CC 701 Roche, Basel Switzerland), and urine albumin was measured using immunochemical and turbodimetric method (Cobas 8000 Cobas CC 701 Roche, Basel Switzerland). 
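As a concrete illustration of the anthropometric and ABPM definitions above, the following minimal Python sketch computes BSA with the quoted Du Bois formula and classifies a non-dipper profile; the function names and example values are illustrative and are not study data.

```python
def body_surface_area(height_cm: float, weight_kg: float) -> float:
    """Du Bois formula as quoted above: BSA = 0.007184 * height^0.725 * weight^0.425 (m^2)."""
    return 0.007184 * height_cm ** 0.725 * weight_kg ** 0.425


def is_non_dipper(day_mean_bp: float, night_mean_bp: float, threshold: float = 0.10) -> bool:
    """Non-dipper pattern: nocturnal BP falls by less than 10% of the diurnal mean."""
    decline = (day_mean_bp - night_mean_bp) / day_mean_bp
    return decline < threshold


# Illustrative values only (not patient data)
print(round(body_surface_area(128.0, 26.0), 2))              # about 0.97 m^2
print(is_non_dipper(day_mean_bp=105.0, night_mean_bp=99.0))  # True: decline is below 10%
```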
Protein HC and immunoglobulin G were measured by immunochemical and nephelometric method (BN Pro Spec, Siemens Healthcare, Erlangen, Germany). Assessment of renal volume Ultrasound of the kidneys during the neonatal period (2008-2011) was performed by pediatric radiologists using a Siemens S 2000 with a 6C2 curved transducer (Siemens, Erlangen, Germany) at an average age of 36 weeks postmenstrual age following local guidelines. All images from the neonatal period for the 41 included children were reviewed by a senior pediatric radiologist in 2018. Diagnosis of NC was confirmed in 39 of the 41 cases. In each group (NC+/NC−), one patient was misdiagnosed and moved to the opposite group. Kidney volume for the neonatal period was calculated in 36 out of the 41 patients using the equation for an ellipsoid described elsewhere [25] and expressed as a ratio to BSA (KV/BSA) [26]. In the remaining 5 patients, the reviewed images were incomplete and volume calculation therefore was not possible. Ultrasound of the kidneys at school age visits was performed by a single experienced user. All investigations were performed with a Philips EPIQ 7G with SW1.5.2 software (Philips Ultrasound, Inc. 22100 Bothell Everett Hwy Bothell, WA 98021-8431, USA) using a C9-2 curved transducer. Multiple (at least two) measurements of kidney length, width, and depth were performed. An average of these measurements was entered in the equation for an ellipsoid, and KV was calculated [25]. Results from volumetric kidney measurements were adjusted for BSA by using linear regression analysis but also by using the ratio of kidney volume (KV) to BSA [26]. Predicted KV was calculated for each individual using the equation described by Dinkel et al. [25]. Statistics For descriptive statistics, continuous variables were presented with mean and standard deviation (SD). Continuous variables, approximately normally distributed, were analyzed with respect to the three defined groups, using analysis of variance (ANOVA). In order to adjust for continuous prognostic variables, covariance analysis was used (ANCOVA). Stepwise regression analysis was used to examine the impact of a set of prognostic variables. The coefficient of determination, R 2 was used to compare the precision of different models. Nonnormal continuous variables were analyzed with Kruskal-Wallis test. Dichotomous variables were analyzed with cross tables and Pearson's chi-square test. As additional analyses, the kidney volume was analyzed with a mixed effects model including right and left volume in the same analysis. A hierarchical model with the child as the main unit was set up, taking into consideration the correlation between right and left side, with the covariance structure unstructured (UN). In all statistical analyses, the relevant assumptions were checked. The significance level alpha was set to 0.05. Subject characteristics The neonatal characteristics and morbidities did not differ significantly between the NC+ and NC− groups; however, the NC+ group tended to be younger, smaller, and with more vancomycin, less prenatal steroid use, and more frequent IVH for all grades (Table 1). Children in the control group at visit were insignificantly older but significantly heavier, taller, and had larger head circumference, BSA, and LBM, but lower waist-to-height ratio. Between NC+ and NC−, only SDS for height and waist-to-height ratio were different at visit ( Table 2). None of the kidney ultrasound investigations showed signs of persistent NC. 
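To make the volume computation described above concrete, here is a minimal Python sketch assuming the standard ellipsoid approximation (length × width × depth × π/6) for the formula attributed to Dinkel et al. [25], combined with the KV/BSA ratio used in the analysis; the dimensions are illustrative, not patient data.

```python
import math


def ellipsoid_volume_ml(length_cm: float, width_cm: float, depth_cm: float) -> float:
    """Assumed ellipsoid approximation for [25]: V = L * W * D * pi / 6 (1 cm^3 = 1 ml)."""
    return length_cm * width_cm * depth_cm * math.pi / 6


def total_kv_per_bsa(right_dims, left_dims, bsa_m2: float) -> float:
    """Total kidney volume (right + left) expressed as the KV/BSA ratio used above."""
    total_kv = ellipsoid_volume_ml(*right_dims) + ellipsoid_volume_ml(*left_dims)
    return total_kv / bsa_m2


# Illustrative averaged measurements in cm (length, width, depth), not patient data
right, left = (8.1, 3.9, 3.4), (8.3, 4.0, 3.5)
print(round(total_kv_per_bsa(right, left, bsa_m2=0.97), 1))  # total KV/BSA in ml per m^2
```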
One child from the control group was referred for follow-up ultrasound because of mild unilateral pelvic dilatation (excluded). Kidney volume Unadjusted total kidney volumes (KV) were significantly lower for both preterm groups in comparison with controls (NC+ = 90.1 ml, NC− = 93.8 ml, control = 118.4 ml, p = 0.0004, ANOVA, Fig. 2). After adjusting KV for BSA, the analysis was no longer significant between NC+ and controls (p = 0.056) ( Table 3). The mixed effects model analysis where BSA, gender, and each kidney side were included showed a significantly lower right and left KV for girls in the NC+ group compared with girls in the control group (p = 0.016). The effects of the following factors on total KV by using stepwise linear regression were also tested: age at visit, AKI, PDA, NEC, BPD, sepsis, and treatment with furosemide (days) and antenatal steroids. None of the tested confounding factors seemed to have the potential to explain a difference in total KV between the two preterm groups. Total KV calculated as the ratio of KV and BSA (KV/BSA) [26] was significantly lower for the NC+ group of preterm-born children compared with controls (p = 0.016), but not reaching significance for the NC− group (p = 0.08) ( Table 4, Fig. 2). Both preterm groups taken together had significantly lower kidney volume in comparison with controls (Table 4). There was no significant difference between NC+ and NC− groups. Regardless of the method of the statistical analysis, there were differences in kidney volume between boys and girls (Fig. 3) as well as laterality of the kidney shown in Tables 3 and 4. Total KV calculated as a ratio to BSA (KV/BSA) for the neonatal period was not significantly different between infants with or without NC (NC+: 131.4 ml (SD 21.1); NC− 109.9 ml (SD 29.7), p = 0.07 using non-parametric Kruskal-Wallis test). The KV/BSA ratio from the neonatal period compared with the measurements at school age showed that children who had suffered from NC (NC+) during the neonatal period had significantly lower KV/BSA ratios than those without NC (NC−) (NC+ 81.05, NC− 103.4; p = 0.0036). Among NC+ children, only 2 of 18 (11%) had a rise in KV/BSA ratio from neonatal to school age, while 9 of 18 (50%) in the NC− group showed a rise in KV/BSA ratio (p = 0.01). Kidney function All groups had normal estimated glomerular filtration rate (eGFR) calculated by the cystatin C-based CAPA formula but showed lower values for the NC− group compared with controls as well as for both preterm groups taken together (NC+ 120, NC− 113, control 126.3 ml/min/1.73m 2 , p = 0.012, 0.035 respectively). The difference between the NC+ and NC− groups was not significant (p = 0.11). There was no difference between the groups when using the simplified Schwartz formula (eGFRcreatinine with k = 36) (NC+ 114.4, NC− 112.5, control 104.6 ml/min/1.73m 2 , p = 0.3). Although entirely normal, plasma creatinine was significantly higher in controls compared with both groups of preterm-born children (NC+ 38.5 (5.9); NC− 40.1 (7.9); control 46.2 μmol/l (8.6), p = 0.011). Urinary proteins and electrolytes were not significantly different between the three groups (data on request). Blood pressure Office blood pressure and systolic and diastolic standard deviation score (SDS) were not different between the three groups (mean (SD); NC+, 0.18 (0.94); NC−, − 0.07 (0.72); control, − 0.005 (0.72), p = 0.58). 
Four of the preterm-born children had high office blood pressure measurements: 3 had both systolic and diastolic, and 1 had isolated diastolic blood pressure above the 90th percentile. One child in the control group had isolated high diastolic blood pressure measurements above the 90th percentile. ABPM readings were successful in 34 of the 41 EPT-born children. Neurodevelopmental disorders of autism spectrum disorders and/or ADHD were present in 16 children born EPT (12/20 NC+ (60%); 4 of 21 NC− (19%)). In seven of these children, ABPM measurements were not possible due to compliance problems or had to be discarded because of low quality. However, none of the investigated children with ADHD or autism spectrum diseases was on medical treatment at the time of investigation. The results of ABPM in the 34 performed were in the normal range for all preterm-born children. ABPM could verify the high office blood pressure measurements in only one of the five patients mentioned above. The majority of the children had systolic values below or at the 50th percentile (Table 5). There was no difference between NC+ and NC− with regard to the distribution among the percentiles. Seventeen of 34 preterm-born children (50%) (NC+ 9; NC− 8) had a day-to-night decline (night dipping) of < 10% (Table 5). Discussion After adjusting KV for BSA, we can show that extremely preterm-born children already at early school age have significantly smaller kidneys than their peers born at term. We cannot prove that NC has a clear effect on those findings, but we can show that mainly preterm-born children exposed to NC are the ones with smaller kidneys in comparison with controls. We were also able to show that the evolution of the KV/BSA ratio from the neonatal period to school age seems to be negatively affected for those who suffered from NC during their neonatal period. It remains speculative whether NC during the neonatal period has an adverse effect on kidney growth. We fully appreciate that the number of patients included does not allow any causative relation. However, we regard this observation as important and interesting. Kidney function and office blood pressure and ABPM were normal and not different between the groups. However circadian blood pressure regulation seemed to be significantly altered in both groups of preterm-born children. With regard to NC, Giapros et al. followed children born under 32 weeks GA to the age of 24 months and demonstrated that those born preterm who had developed NC had shorter right-sided kidney length [9]. Kist-van Holthe et al. found smaller kidneys in preterm-born children at the age of 7.5 years but no effect of NC on kidney volume. The association between prematurity and smaller kidneys in infants and toddlers up to childhood has been shown in a number of follow-up studies [27,28]. Other long-term followup studies, including our own previous research, showed somewhat conflicting results [29]. Kwinta and later Stazec et al. could confirm reduced kidney volume measured by ultrasound in very low birth weight (VLBW) children at the age of 7 and 11 years [30,31]. A study on renal biopsy material in 31 children with focal segmental glomerulosclerosis (FSGS) or minimal change nephrotic syndrome (MCNS) at a mean age of 11 years, where 8 children also were born very preterm (mean: 25.4 GA), showed significantly lower glomerular density and greater glomerular volume in the children born preterm compared with those with normal birth weight [32]. 
The vicious circle of fewer nephrons in children born with low birth weight introduced by Brenner et al. might be particularly important in patients where there already exists severe renal pathology [33]. Our findings also show that school-aged girls born EPT have smaller kidneys than controls, a finding not present for boys. Keijzer-Veen et al. also found preterm born females to be more affected than males at young adulthood [34]. Kidney function was, by definition, normal among all three groups. Cystatin C-based eGFR, which has been advocated to be more appropriate for the age group investigated here, was lower in the group of children born prematurely in comparison with the control group (p = 0.036) [35]. Using the modified Schwartz formula to establish eGFR, four children in the preterm groups and one child from the control group had eGFR below 85 ml/min/1.73 m², which can be defined as mild renal insufficiency [36]. Serum creatinine levels were higher in the control group which might be explained by the higher muscle mass in that group. We could not detect any difference between preterm-born children with or without NC in regard to eGFR. This is in slight contrast to the findings in the study by Kist-van Holthe et al. who showed a higher number of children in their NC+ group with mild chronic renal insufficiency, but these results were not significantly different to the children born preterm without NC [17]. Giapros et al. could not detect an effect on GFR in his follow-up study focusing on children born preterm up to 24 months of age [9]. Follow-up studies of the same cohort at the age of 15, 20, and 30 years of age will be informative and necessary to answer the abovementioned statements. Early elevated blood pressure in children born very preterm, and even hypertension, has been reported in previous studies [37,38].
(Fig. 2 Total kidney volume presented as a ratio to body surface area (KV/BSA) for the three groups: children born preterm screened positive for nephrocalcinosis (NC+), screened negative for nephrocalcinosis (NC−), and healthy term controls without nephrocalcinosis. *Significant difference between NC+ and controls (p values < 0.05); results from the ANCOVA analysis models and planned comparisons.)
It is debatable whether office blood pressure measurements are capable of reflecting true elevation of blood pressure or if there is a risk for overestimation. When we measured 24-h blood pressure in the preterm-born children, only one of the 5 children with office blood pressure above the 90th percentile had also elevated blood pressure in the ABPM. The results of the ambulatory blood pressure measurements were within the normal limits for systolic and diastolic age and height related percentiles (< 90th) [22]. However, we were very surprised by the high number of "non-dippers" among the children born premature. In 17 out of 34 (50%) children with reliable 24-h readings, the blood pressure difference between day and night time was less than what is generally regarded as normal (10% difference) [39]. The relevance of "non-dippers" has been well described for adults and has been strongly associated with worse cardiovascular outcome and can be interpreted as a marker preceding the development of hypertension and microvascular complications [40,41].
There is very limited data available for children on night-dipping, but a few studies confirm that the 10% rule should be valid for children in the age group we investigated [22]. Night dipping seems to be related to age. Varda et al. observed in infants and toddlers from 2 to 30 months old a less pronounced night dip of only 5.4% on average [42]. A recent study observed a close relationship between non-dippers and BMI, with children with primary hypertension and overweight or obesity showing a lack of decline of nocturnal blood pressure values [43]. Unfortunately, that study lacks information on GA or birth weight. We are unable to explain the "non-dipping" phenomenon with overweight or obesity in our study as only 4 out of the 17 non-dipper children had a BMI at or over the 90th percentile. Among the scarce evidence available for preterm-born children, a study in 41 preterm-born children (26-36 weeks GA) examined at the age of 7 years found, in comparison with 27 healthy controls, insufficient night dipping in 73% of the preterm group compared with 41% in the control group [44]. It is rather unclear why the prevalence of non-dipping is so high among healthy control children in this study. Another recent and slightly larger study investigated 78 preterm-born children (27 weeks mean GA) and compared them with 38 healthy controls [45]. The variation of blood pressure over 24 h is regulated by the autonomic nervous system (ANS) via the hypothalamo-pituitary-adrenal axis [46]. It has recently been shown by us and others that the ANS might be altered in children born very premature [47–49]. However, the normal or even relatively low blood pressure during daytime in some of our preterm-born school children might reflect physical inactivity during the day, not allowing further dipping during nighttime. Although we are convinced that these data are of importance, they have to be interpreted with caution and further follow-up of this cohort is needed. A general weakness of this study is the size, as the numbers in all three groups are limited. We regard it as a strength that we have analyzed KV with different methods and have thoroughly tested adjusting for different variables. Our investigation and analysis allowed us to make clear differentiations for gender as well as for right and left kidney sides. It was also beneficial that all study subjects were recruited from the same hospital, which minimized confounding effects by different practices, and that all histories and ultrasound examinations were performed by the same investigator, as well as all anthropometric measurements being taken by the same research nurse. ABPM was only performed in the EPT group and thus correlated to population blood pressure references, which is another weakness.
(Table footnote: * indicates significant difference between EPT and controls (p values < 0.05); results from the ANCOVA analysis models and planned comparisons.)
Conclusion In this study, we showed that children at the age of 6-10 years born EPT have significantly smaller kidney volume and lower cystatin C-based GFR, but within normal limits. Our results do not entirely support our hypothesis that NC which developed during the neonatal period has a significant impact on reduced kidney volume at school age. However, the NC+ group of EPT-born children had significantly lower renal growth. Kidney function at school age has not been affected by NC.
The high number of non-dippers in preterm-born children at school age is a new observation and has potential implications for the development of hypertension. However, the clinical significance of this finding needs to be studied further. It is important to know when early morphological changes such as the reduced nephron endowment lead to clinical findings in children born prematurely. As more EPT infants are surviving, research describing the consequences is essential in order to be prepared for adequate support and to organize early preventive efforts for this high-risk group.
(Table footnote: differences between NC+ and NC− groups analyzed with Pearson's chi-square test; the normal percentiles are taken from reference [22].)
Development of an Efficient Analytical Method for the Extraction and Analysis of Biocide Contents from the Textile Test Specimens on LC-DAD Biocides are frequently used in the manufacturing of textiles that are in direct contact with human skin. Recently regulated biocides do not have validated test methods, so their presence in consumer products cannot be estimated. Hence, a rapid method was developed for the separation and quantitative analysis of biocide contents (2-methyl-4-isothiazolin-3-one (MIT), 5-chloro-2-methyl-4-isothiazolin-3-one (CIT), 2-octyl-4-isothiazolin-3-one (OIT), and 5-chloro-2-(2,4-dichlorophenoxy)phenol (triclosan)) from textile test specimens. Test specimens were extracted with methanolic sonication and purified by centrifugation and filtration. Biocide contents were separated on a C18 column with 0.4% acetic acid:methanol (1:1 v/v) under isocratic mode and detected at 280 nm wavelength. Pretreatment factors such as extraction solvent, extraction method, dilution ratio, and extraction time were optimized initially, and the plotted calibration curves showed good linearity (r² ≥ 0.9995) in the range of 1.0–5.0 mg L−1. Recoveries were between 95% and 108% with a relative standard deviation ≤ 4%. Limits of detection (LODs) were between 0.06 mg L−1 and 0.12 mg L−1 and limits of quantification (LOQs) were between 0.21 mg L−1 and 0.38 mg L−1. From the results, the conclusion was made that the method can achieve the purpose of quantitative detection, and the analysis of real test specimens verified the reliability of this method. Introduction Biocides are active substances that reduce harmful effects and are intended to destroy, prevent the action of, or otherwise exert a controlling effect on any harmful biological organism by a chemical or biological mode of action instead of just physical or mechanical action. The Biocidal Products Directive 98/8/EC was in force from May 13, 2000, until August 31, 2013. After that, it was reviewed and replaced with the Biocidal Products Regulation (BPR EU 528/2012) [1,2]. Biocides are used in a variety of consumer products like rubber, polymerized material, paper, textile, and leather finishes [3]. With the demand for product functionality such as wrinkle resistance, water repellence, fade resistance, and resistance to microbial invasion, interest in apparel technology has surged all over the world. Since garments are in direct contact with the human body, the development of antimicrobial textile finishes is highly indispensable and relevant [4–9]. Several challenges have been created for apparel researchers due to the increasing global demand in textiles, as cotton fabrics provide an ideal environment for microbial growth. There is an increasing demand on a global scale for textile fabrics with antimicrobial finishes. Several antimicrobial agents, such as quaternary ammonium compounds, triclosan, and recently nanosilver, are available for textile finishing [8,9]. They are synthetic in nature, which creates environmental problems [10]. Biocides are widely used in everyday life as active ingredients in a variety of textile, pharmaceutical, and personal care products; due to this reason, they have received increasing attention as emerging contaminants [11,12]. Contamination of the receiving environment is due to the extensive use and high emission of these biocides [13,14]. Biocides have also been reported in various environmental media, especially in wastewater treatment plants (WWTPs).
Triclosan has been reported in 19 Australian WWTPs at average concentrations of 142 ng L−1 in effluents and 5.58 mg/kg in biosolids. In the literature, a variety of extraction and instrumental analysis methods for biocide contents are present. However, the current literature shows that systematic screening of various classes of biocides in textile matrices has not been performed yet. From the available literature, it is revealed that only a few studies have explored screening methods for banned antimicrobial agents on textile materials [15–19]. There is a strong need for consolidated data and progressive research on screening and testing methods for antimicrobial-finished textile products. In the present work, we discuss the determination of biocide contents in different models of finished textile consumer goods for women, men, and children commercialized in Pakistan or produced for export. The developed method was applied to the analysis of these target biocide contents in the textile test specimens. Chemicals and Solvents. All the organic solvents and analytical grade solid chemicals were purchased from local suppliers. MIT, CIT, OIT, and triclosan were from Chem Service; HPLC grade methanol and analytical grade glacial acetic acid were from the Lab-Line supplier in Lahore city. Preparation of Stock and Working Standard Solutions. For the preparation of 1000 mg L−1 MIT, CIT, OIT, and triclosan stock standard solutions, 10.00 mg of each of the MIT, CIT, OIT, and triclosan standards was weighed, respectively, and made up to the mark in separate 10 mL volumetric flasks with methanol. Working standard solutions of 100 mg L−1 were prepared by pipetting 500 μL of the 1000 mg L−1 MIT, CIT, OIT, and triclosan stock solutions into separate 5 mL flasks and diluting each volumetric flask up to the mark with methanol. Note that the CIT/MIT stock solution is relatively more stable; so, for the preparation of stock solutions, the CIT/MIT stock solution was prepared collectively. Preparation of Calibration Standard Solutions. Five-point calibration standard solutions of 1.0, 2.0, 3.0, 4.0, and 5.0 mg L−1 of CIT, MIT, OIT, and triclosan each were prepared separately from the 100 mg L−1 working standard solutions. Test Specimen's Collection. Commercial textile products were collected from textile process industries and retail stores in Pakistan from April to June 2018. Instrument Conditions for MIT, CIT, OIT, and Triclosan. The chromatographic column Hypersil C18 (250 mm × 4.6 mm × 5 μm) was used. 0.4% acetic acid and methanol (50:50 v/v) were used as mobile phases with detection at 280 nm wavelength. The column oven temperature was 30°C and the injection volume was 30 μL at a 1.0 mL min−1 flow rate in isocratic mode. The total run time on the LC instrument was 12.0 min. Test Specimen Preparation. From the textile consumer goods, the test specimen was taken randomly from different parts of the product. If the textile test specimen was single colored and homogeneous, the test specimen was cut into pieces of approximately 5 mm × 5 mm and mixed. If the test specimen was multicolored or with a pattern, the test specimen was collected according to the proportion of color, cut into pieces of approximately 5 mm × 5 mm, and mixed well. Three test specimens, A, B, and C, were selected as blank matrix and spiked with 3.0 mg L−1 of each pure biocide standard. These three test specimens were analyzed using different extraction solvents, methods, and times. Recovery was best in methanol using the ultrasonic method at 30 min.
Hence, methanol was chosen as the extraction solvent, ultrasonication as the extraction method, and 30 min as the extraction time (results are summarized in Figure 1 and Tables "a to c" in the supplementary data). Test Specimen Extraction. We accurately weighed 5.00 ± 0.01 g of test specimen on an analytical balance (Mettler Toledo, model ML 204/01) and transferred the test specimen into a reagent bottle; 80 mL methanol was added to the 100 mL reagent bottle and sonicated at 50°C for 30 min. Test specimens were totally extracted within 30 min of sonication. After 30 min, the reaction vessel was cooled down to room temperature within 2 min. The extract was concentrated to about 2 mL at 55°C by rotary vacuum evaporator. The concentrated extract was diluted to 5 mL with methanol in a volumetric flask and filtered with a 0.45 μm glass wool filter into a 1.5 mL GC vial. Forced Degradation Study for MIT, CIT, OIT, and Triclosan. Accelerated degradation studies were performed on MIT, CIT, OIT, and triclosan. The acidic degradation study was performed by taking the 5 mL methanolic solution of biocide in 1.5 mL 0.1 N HCl at ambient temperature for 1 hour. The alkaline degradation study was performed by taking the 5 mL biocide solutions in 1.5 mL 0.1 M NaOH at ambient temperature for 1 hour. Thermal degradation was performed by exposing 5 mL biocide solutions to 80°C for three days. The oxidative degradation study was performed by taking the 5 mL biocide solutions in 30% v/v H2O2 at ambient temperature for 1 hour. A photolytic degradation study was also performed. Instrumental Analysis by LC/DAD. Prior to running any batch of test specimens on the instrument, the following checks were performed. LC/DAD instrument conditions were set as in (A) instrumental conditions for MIT, CIT, OIT, and triclosan. A calibration curve was established for each analyte using peak area vs. concentration. The coefficient of linear regression (r) was ≥0.995. A calibration standard check solution (CC) of 3.0 mg L−1 was injected into the instrument; the recovery of the calibration check was within the range of 80%-120%. Prior to running any batch of test specimens, a method blank and a specimen blank were injected into the instrument to check for any contamination. A sensitivity check of 1.0 mg L−1 was analyzed to examine the instrument response. A laboratory quality control of 3.0 mg L−1 and a specimen spike of 3.0 mg L−1 were injected into the instrument to check the experimental recovery. After all these interim checks, the test specimen extract was injected; the presence of the target analyte was identified based on retention time and comparison of the UV spectrum, with background correction, against the characteristic wavelength in a reference UV spectrum. The relative retention time of the test specimen component was within ±0.01 retention time units of the relative retention time of the standard components, and the peak maxima/minima of the test specimen component were within ±1 nm of those in the reference spectrum. A specimen spike was injected per batch of test specimens to check the experimental recovery. If the response ratio (RR) for any quantitation wavelength exceeded the initial calibration range of the LC/DAD instrument, the test specimen extract was diluted to the required range and reanalyzed. Result Calculation. The biocide contents of the test specimens were calculated according to the calibration equation and rounded off to two decimal places. Results and Discussion. The analytical method was validated prior to its introduction into routine analyses.
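Since the result-calculation equation itself is not reproduced in the extracted text, the following Python sketch illustrates one common form of the workflow referred to above: preparing a dilution by C1·V1 = C2·V2, fitting the external-standard calibration line, and back-calculating a specimen content in mg/kg from the extract concentration, final extract volume, dilution factor, and specimen mass. It is an assumption about the arithmetic, not the authors' exact equation, and the numeric values are illustrative only.

```python
import numpy as np


def aliquot_volume_ml(stock_mg_l, target_mg_l, final_volume_ml):
    """Dilution arithmetic C1*V1 = C2*V2 solved for the pipetted aliquot V1."""
    return target_mg_l * final_volume_ml / stock_mg_l


def fit_calibration(conc_mg_l, peak_areas):
    """External-standard calibration line: area = slope * concentration + intercept."""
    slope, intercept = np.polyfit(conc_mg_l, peak_areas, 1)
    return slope, intercept


def content_mg_per_kg(extract_conc_mg_l, final_volume_ml, dilution_factor, specimen_mass_g):
    """Assumed back-calculation: C (mg/L) * V (L) * D / m (kg) -> mg biocide per kg textile."""
    return extract_conc_mg_l * (final_volume_ml / 1000.0) * dilution_factor / (specimen_mass_g / 1000.0)


# Working solution: 0.5 mL (500 uL) of the 1000 mg/L stock in a 5 mL flask gives 100 mg/L, as in the text
print(aliquot_volume_ml(1000, 100, 5.0))
# Illustrative five-point calibration (1.0-5.0 mg/L) and detector areas
levels = [1.0, 2.0, 3.0, 4.0, 5.0]
areas = [10.2, 20.1, 30.4, 40.0, 50.3]
slope, intercept = fit_calibration(levels, areas)
extract_conc = (30.9 - intercept) / slope                       # back-calculated concentration of an unknown extract
print(round(content_mg_per_kg(extract_conc, 5.0, 1, 5.00), 2))  # 5 mL final extract from a 5.00 g specimen
```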
Specificity. Matrix blank, reagent blank, and pure standards were analyzed to observe the effect of possible interference of any matrix or reagent on the analytes and the chromatographic technique (results are summarized in Table f of the supplementary data). Accuracy. A blank and a test specimen were spiked with pure standard at concentrations of 1.0 mg L−1, 3.0 mg L−1, and 5.0 mg L−1, and these individually prepared replicates were analyzed at each concentration level. Recovery of these replicates was within the range of 100 ± 10% (results are summarized in Figure 1 and Table "d" in the supplementary data). Precision. A test specimen solution was prepared containing the target level of analyte. 10 replicates were made from this test specimen solution according to the final method procedure, and analysis was performed over the subsequent six days. The relative standard deviation was within the range of 1.8%-2.8% for the same day and 2.4%-3.9% over 6 days (results are summarized in Table 1). LOD and LOQ. The LOD and LOQ of the proposed method were calculated by preparing a blank solution and spiked solutions with progressively decreasing known concentrations of each analyte. The developed method was used to analyze these solutions. By evaluating the minimum concentrations of each analyte that can be detected and quantified with accuracy (signal-to-noise ratio of 3 : 1 for the LOD and 10 : 1 for the LOQ), the LOD and LOQ were determined for the proposed method. The limits of detection (LODs) were between 0.06 mg L−1 and 0.12 mg L−1 and the limits of quantification (LOQs) were between 0.21 mg L−1 and 0.38 mg L−1 for the target analytes (results are summarized in Table 1). Selectivity. The selectivity of the proposed LC-DAD method was observed by preparing a mixture of the analytes with commonly occurring interferences found in textile test specimens, and the percent recovery of each analyte in the presence of the interferences was calculated (results are summarized in Figure 1 and Table "d" in the supplementary data). Robustness. The robustness of the proposed method was evaluated by intentionally changing the chromatographic parameters. The mobile phase was changed from 0.4% acetic acid:methanol (50 : 50) to 45 : 55. The flow rate was changed from 1 mL/min to 0.9 mL/min and 1.1 mL/min. The column oven temperature was varied by ±3°C. The deviation in results was ±1% only. Hence, it was concluded that varying the conditions had no appreciable effect on the analytes. The results of the robustness study are summarized in Tables 2 and 3 (results are given for MIT and CIT only). Stability. In the presence of the other analytes in solution, the stability of each analyte was determined by calculating the percent deviation of the results obtained after a three-day period compared with the data at the start time. The deviation observed for each analyte was less than 2% over the three-day period. Degradation Study of Analytes. Accelerated degradation of the analytes in the mixture was performed to evaluate the specificity of the proposed method. The analytes were subjected to forced degradation under acidic, basic, thermal, and oxidative conditions. The test specimens treated with HCl showed considerable degradation of the analytes. The biocide contents were found to be degraded by up to 4-10% under the acidic condition, whereas in the case of alkaline degradation, around 3-12% of the biocide contents were degraded, and 6-14% of the biocide contents were degraded under the thermal degradation condition.
In oxidative degradation, it was found that around 7-13% of the biocide contents were degraded. Major degradation was observed under the photolytic condition, at 12-34%. The chromatographic peaks of the degradation products were in good condition and were well separated from the analyte peaks under all the stress conditions, and this separation showed the specificity of the method in the presence of the degradation products. Under the same conditions, a mixture of possible interfering substances (placebo) was also analyzed to evaluate their interfering effect. The absence of chromatographic peaks showed the specificity of the method. Peak resolution was good for all the analytes in the detected test specimens, and all the analytes eluted before 12 min: MIT at 2.75 min, OIT at 5.85 min, triclosan at 7.85 min, and CIT at 10.65 min (results are summarized in Figure 2). Quantification of the test specimens was performed by the external standard method. Biocide contents leaching out from the test specimens were in the range of 2.86 mg kg−1 to 75.56 mg kg−1, as summarized in Table 4 (only positively tested test specimens are summarized). A total of 135 test specimens were analyzed for biocide contents over the three-month period. Test specimens A, B, and C were used as the blank matrix for analyzing the parameters of extraction solvent, extraction method, and extraction time for biocide contents from textile test specimens. Test specimens A, B, and C were spiked with 3.0 mg L−1 of the biocide contents (CIT, OIT, MIT, and triclosan). Different solvents (methanol, acetonitrile, water, water/methanol 1 : 1, and acetonitrile/water 1 : 1) were used to extract these biocide contents. Recovery was better in methanol compared to the other solvents. An ultrasonic bath, a centrifuge, and a water bath with shaker were used as extraction methods. The ultrasonic method was better than the centrifuge and water bath methods. Extraction times of 10, 20, 30, 40, 50, and 60 min were used to extract the biocide contents. 30 min was adequate to extract the total amount of biocides from the textile test specimens (results are summarized in Figure 1 and Tables "a to c" in the supplementary data). Test Specimens Screened for MIT, CIT, OIT, and Triclosan. The lowest value for these biocides was 2.86 mg kg−1 and the highest value was 75.56 mg kg−1.
(Table 1: Regression equation, the limit of detection (LOD), the limit of quantification (LOQ), and relative standard deviation (RSD) for MIT, CIT, OIT, and triclosan.)
Conclusions The suitability of solvent extraction for the determination of four biocide contents (MIT, CIT, OIT, and triclosan) from textile test specimens was determined. The validated method was effectively used to analyze the textile test specimens, maintaining high sensitivity and selectivity for the target analytes. Test specimens were extracted with methanolic sonication, separated on a C18 column, and detected with a diode array detector (DAD). Under the optimized conditions, good linearity (r² ≥ 0.9995) and recovery (95%-108%) were observed for the target analytes. From the results, the conclusion was made that the method can achieve the purpose of quantitative detection. So, this method can be used for the quantitative detection of biocide contents in commercial textile test specimens. The proposed method could be a useful tool to control the safety of textile test specimens. This method will be helpful for biomonitoring and tracking of these chemicals associated with human exposure through direct contact and use.
Data Availability. The supplementary data is attached herewith. Conflicts of Interest. The authors declare there are no conflicts of interest.
UvA-DARE (Digital Academic The ARTICONF approach to decentralized car-sharing Social media applications are essential for next-generation connectivity. Today, social media are centralized platforms with a single proprietary organization controlling the network and posing critical trust and governance issues over the created and propagated content. The ARTICONF project funded by the European Union's Horizon 2020 program researches a decentralized social media platform based on a novel set of trustworthy, resilient and globally sustainable tools that address privacy, robustness and autonomy-related promises that proprietary social media platforms have failed to deliver so far. This paper presents the ARTICONF approach to a car-sharing decentralized application (DApp) use case, as a new collaborative peer-to-peer model providing an alternative solution to private car ownership. We describe a prototype implementation of the car-sharing social media DApp and illustrate through real snapshots how the different ARTICONF tools support it in a simulated scenario. Introduction Social media platforms are key technologies for next-generation connectivity [1,2]. They have the potential to shape and mobilize patterns of communication, practices of exchange and business, creation, learning and knowledge acquisition. Typically, social media are centralized platforms with a single proprietary organization controlling the network and posing critical issues of trust and governance over created and propagated content. This is particularly problematic when data breaches are a regular phenomenon at the hands of centralized intermediaries. Addressing this problem requires innovative solutions at the user level (i.e. consumers, prosumers, businesses) and the underlying social media environment. This facilitates global reach, improved trust, and decentralized control and ownership [3,4]. Decentralized social media The ARTICONF project [5] 1 funded by the European Union's Horizon 2020 program proposes to research and develop a novel decentralized social media platform based on a novel set of trustworthy, resilient, and globally sustainable tools. ARTICONF addresses issues of trust, time-criticality and democratization for a new generation of federated infrastructure with the following goals: Simplify the creation of decentralized applications (DApps) using a two stage permissioned blockchain architecture; Automatically detect interest groups and communities using semantic contextualization and abstraction [6] of dynamic, diverse social media facets; Elastically auto-scale time-critical social media DApps through an adaptive orchestrated cloud infrastructure meeting their runtime requirements; Enhance monetary inclusion in collaborative models through cognitive and interactive visualization. Car-sharing Car-sharing is a new collaborative model providing an alternative solution to private car ownership. This model allows customers to temporarily use a vehicle (on-demand) at a variable fee, charged depending on the distance traveled or time used. 
This sharing economy example, which can be business-to-business (B2B) or business-toconsumer (B2C), intends to satisfy transportation demand in a sustainable way by lowering the CO 2 emissions in cities through: Fewer vehicles thanks to a combined car pooling and car-sharing approach; The use of electric cars that require destinations closer to the charging stations, which facilitates planning; Reduced vehicle lifetime (below the 13-year average), which encourages the faster introduction of eco-friendly innovations on the market; Reduced traffic and parking congestions. Current car-sharing solutions rely on a central entity that operates the platform and charges customers (e.g. escrow, deposit) in a nontransparent manner. This entity unilaterally decides about the service level agreement (SLA), prices and penalties without considering other users' opinions, contents or reputation. In the absence of transparency and market competition, this can lead to high prices and even monopoly. We aim to overcome this problem by shifting the focus from a B2B and B2C model to a more traditional setting (resembling the "good old market" from centuries ago), when business happened in a social and direct consumer-to-consumer (C2C) fashion. We propose a new carsharing model based on a decentralized social network implemented as a DApp, which builds a community of car owners and fleet managers that jointly operate a pool or cars, rented to passengers in a transparent and peer-to-peer (P2P) fashion based on time, route and distance without central intermediaries. The success of the new model relies on a community per city, where users' trust and reputation are the basis for any successful business transaction, and the prices directly depend on these values. Our proposed model collects information from different sources, including vehicle devices, user ratings, user contents, and their past behavior to evaluate the user reputation. The platform evaluates the SLA by decentralized consensus, which has a major effect on the new interactions within the community. For instance, a user with a good reputation can rent a car without escrow or with a better price, and her content ratings in the social network have more trust than the ones coming from users with a poor reputation. The validity of the model relies on mutual trust and has an impact the community growth. Outline We describe the ARTICONF architecture applied to a car-sharing DApp use case, as a new collaborative model [7] and alternate software-as-a-service solution to private car ownership. The new car-sharing model allows customers to temporarily rent a vehicle on-demand and at a variable fee, charged depending on the travel distance or used time. This paper extends our initial work [8] with a related comparison and further implementation details of the car-sharing DApp and its benefits from using the ARTICONF social media platform. The paper has six sections. The next section outlines the ARTICONF decentralized social media platform, with focus on its modular tool architecture and its DApp use cases. Section 3 describes the design of the car-sharing DApp use case on top of the ARTICONF platform. Section 4 gives implementation details of the car-sharing DApp in a simulated usecase scenario. Finally, Section 5 summarizes the related work and Section 6 concludes the paper. 
ARTICONF architecture for DApp development To provide a high level of robustness and autonomy, while addressing issues of trust, time-criticality and democratization, the ARTICONF shares a decentralized architecture with four building block tools: Trust and Integration Controller (TIC); Co-located and Orchestrated Network Fabric (CONF); Semantic Model with self-adaptive and Autonomous Relevant Technology (SMART); Tools for Analytics and Cognition (TAC). Fig. 1 shows CONF as the initial entry tool, in charge of deploying and scaling the entire platform when necessary. On the other hand, the TIC tool is the backbone for use case blockchain services that instantiates a hyperledger fabric-based network [9] with two alternative blockchain access modes: a standard fabric client SDK or an enhanced adapter provided by TIC itself. SMART interactively guides social media consumers and providers to cooperatively support the behavior of the DApp, decision-making and infrastructure, while TIC provides the underlying permissioned blockchain that verifies and stores the pseudonymous of the user transactional activities and associated personal data in an encrypted form. Finally, TAC gives intelligent insights and relevant information about the platform to use case providers and end-users through an interactive dashboard. TIC tool initializes a SMART user with each instantiated hyperledger fabric-network to handle access permissions of end-user data required by the SMART and TAC tools. Essentially, the SMART user initiates a request to end-users on behalf of SMART and TAC tool for accessing encrypted data associated with their transactional activities. Upon getting the access permissions, SMART and TAC tools utilize the data for gathering and visualizing intelligent insights. This allows ARTICONF to analyze transactional activities of end users in a decentralized manner without violating privacy principles. ARTICONF integrated tools This section outlines the services of the four tools. TIC TIC provides support for creating and integrating fragmented DApp platforms with increased participation using an open application programming interface (API) based on a decentralized blockchain network. TIC allows customers, consumers, prosumers and businesses to engage in a safe, transparent and trustful interaction with monetization opportunities using four microservices. Permissioned consortium blockchain is a fundamental service that allows actors in the network to develop consistency, accountability and traceability of their shared data. Cloud storage of distributed large shared data items, such as copies of the blockchain transactions, ensures the same version of truth with efficient indexing and traversals. Relationship system is a Turing-complete programmable unit of the blockchain that allows users to define conditions on data sharing and use rights through smart contracts. This enables them complete control over their content. Certificate authority is a client software that manages user groups, securely shares the keys with their members, keeps records of shared keys, and encrypts shared data before broadcasting and storing it on the blockchain and the Cloud. CONF CONF provides effective customization and provisioning of virtual infrastructures based on specific DApp requirements. It also automates the DApp deployment, monitoring and adaption at runtime. The functional components of CONF follow a microservices design, connected via a message bus with a common interface to the other components. 
CONF includes four microservices. Infrastructure planner provides functionality for DApp microservices to handle the time-critical constraints based on algorithms like critical path. The planner selects virtual machines from available data centers, and customizes their capacity and topology based on the performance requirements and cost. Provisioning agent automates the infrastructure plans created by the planner onto the selected cloud data centers. The provisioning agent effectively decomposes the infrastructure description and provisions it in parallel across multiple cloud data centers with transparent configuration on the network topology. Deployment agent automates the deployment of DApp microservices onto the virtual infrastructure provisioned by the provisioning agent by considering the quality constraints of the DApp and of the blockchain services. Monitoring and control agent monitors the specific metrics of the cloud infrastructure and DApp network at runtime. In consequence, it takes effective actions to adapt them, such as scaling DApp microservices or cloud infrastructures, or adapting the network topologies based on the runtime status diagnosis. SMART SMART is a data-driven tool, capable of finding relevant interest groups through democratic and tokenized decision-making and reputation mechanisms, solving disputes in collaborative models and preserving the trustful and autonomous users-centric environment. SMART provides five microservices. Semantic framework for federated social media involves largescale entities at three levels of abstraction based on a conceptualization model: Concrete perception of the users and of the associated smart objects in a global domain; Structure of the perceived relationships; Communication among entities, exploiting decentralized reasoning and relevant communities with interest groups. Autonomous and adaptive user-centric model encompasses rolestage programming techniques and a human-agent collective model describing, reasoning and conceptualizing consumer, prosumer and business processes at model description and at runtime. Pseudonymous trace abstraction exploits the experiential pseudonymous activities embedded in the blockchain that take advantage of trace comparison and retrieval for effective and quick adaptation. Smart matching with community detection engages with the relevant audiences and communities based on semantic abstraction and DApp requirements. Decentralized decision making and reputation mechanisms apply to all entities irrespective of their role for enhancing the efficiency of collaborative business and prosumer models. This eradication of disputes and dissatisfaction through decentralized participation and incentivization opportunities. TAC TAC is a guided analytics and knowledge extraction tool for consumers, prosumers and businesses that aggregates contextualized data over spatial-temporal boundaries based on socio-cultural abstractions. TAC supports an analytic system for decentralized social media DApps, injecting additional information to improve operational tasks. This provides meaningful insights to DApp providers for improving their businesses and profits and to users for enhancing their experience and earning extra revenues. TAC consists of three services. 
Augmented cognition data model consists of two microservices: Geospatial microservice gathers, displays and manipulates the data consisting of longitude and latitude information and provides meaningful geolocation information to prosumers for boosting their experience and improving their economic benefits. Temporal microservice supports complex analyses of the ARTICONF social network and allows users to benefit from actionable insights from a large amount of data processed over a short period of time. Social-contextual model optimizes the platform functionality through empirical research that engages developers, end-users, digital right activists, non-governmental organizations, and scholars in relevant fields (e.g. social sciences and humanities, computer science). The goal is to understand how to promote different forms of community value, considering improvements in end-user engagement, transparency, fairness, trustworthiness, and sustainability. Guided analytics for collaborative economy uses an interactive interface to assist social media consumers, prosumers, and businesses in injecting intelligent insights in data aggregation and cognition. The visualization guides social network users by focusing on the analysis and visualization of the parameters of interest and allowing different configurations for each use case. This way, the visual analysis moves beyond reporting shallow summary data to provide strong and actionable insights from users. ARTICONF DApp use cases A large variety of individuals, social entrepreneurs, civil society organizations, research centers, small and medium enterprises, as well as startups can strongly benefit from the ARTICONF platform. The project gathered a selection of four complementary social media DApp use cases to validate its goals. Crowd journalism Crowd journalism with news verification is a DApp providing opportunities to independent journalists and the news broadcasting industry to create content outside the mainstream media by gathering crowdsourced news with public participation. Two challenges faced by DApp providers are to validate the crowdsourced news with precise and trustworthy participation, and to provision time-critical infrastructure resources closer to the news location. Car-sharing Car-sharing is a form of person-to-person lending or collaborative consumption in a sharing economy, where existing car owners rent their vehicles to other people for short periods of time. Two challenges faced by this DApp are low public awareness on shared mobility, and geographical constraints with detailed routes, flexible offering, precise planning, reliable execution, and optimized costs. Video opinion discussion Video opinion discussion is a collaborative platform for publishing and subscribing to online videos. It allows non-professional users to record videos, share them on platforms, and earn rewards from their viewers. Two challenges faced by this DApp are the contextualized, thematic search of audio-visual metadata in large video libraries, and the security of a scalable business model that rewards users for their interactions, including content generation. Energy marketplace Energy marketplace uses a P2P monetized utility platform to reduce the energy bill of the prosumers by stimulating their energy sharing and demand response. The DApp encourages consumers to become prosumers who generate and consume energy in response to the increasing distributed energy generation at the demand side. 
Such human-agent models face two challenges: lack of intelligent techniques to identify the behavioral prosumer decisions over a specific smart appliance, and lack of an efficient data management plan to track the energy produced by each user for efficient reward allocation. Decentralized car-sharing The last decade saw an increased popularity of car-sharing adoption resulting in an explosion of providers offering shared-mobility services in the market (e.g. Uber [10], ZipCar [11]). In contrast to Uber, ZipCar and other centralized shared-mobility providers, the ARTICONF's car-sharing model creates a new decentralized platform based on blockchain [12] and smart contracts to face this growing market and to meet the appropriate service requirements related to user trust, resilience and cost. The car-sharing use case does not cover a single business model, but allows direct user interaction (C2C), or with car-sharing companies and fleet providers (B2C). Social network for each city serves the users who interact, plan (where and when a vehicle is available), hire a service, or share contents like photos and short videos. Sharing contents has various purposes, such as showing operators vehicle damages, fuel level, or battery status, or asking other users for a route. Customers use the social network to reduce risks related to their data, protected by a democratic mechanism that controls the malicious use of the network and its contents. Users have a reputation score assessed based on their actions, such as contracts fulfillment without penalties, reporting of real network and car-sharing issues, or publishing contents and service policies. Car-sharing DApp architecture Blockchain network allows the users to easily create and deploy smart contracts using a contract generation service, automatically verified, resolved, and equipped with coded penalties in case of breaches. The customers can use the cars without financial worries since the money is available and visible only in the smart contracts, executed only upon triggering certain prerequisites. Geolocation monitoring service [13], installed on each operating vehicle, tracks and verifies the real-time location of each vehicle and user smartphone, and resolves the clauses of the smart contract. The smart contracts have an escrow that depends on the users' reputation, measured based on the service policies. Artificial intelligence (AI)-based rewarding and fleet allocation algorithms provide economic benefits for users and companies by eliminating fleet idle times, reducing the impact of external events, and enhancing the user experience [14,15]. Fig. 3 shows an architectural workflow for deploying and operating the car-sharing DApp using the ARTICONF tools. CONF represents the main tool for a DApp developer and provider to utilize the underlying cloud resources, which deploys the TIC backbone blockchain for the provisioning and management of different networks in different cities. Car-sharing clients use the TIC adapter wrapped in a mobile app to access, create, and manage smart contracts. TIC's identity and portability management allow the movement of users between cities without requiring the creation of new credentials. Moreover, TIC allows higher throughput by automating the smart contract generation process and the availability of the blockchain network. Additionally, CONF manages the deployment and the scalability of different car-sharing microservices by interacting with the SMART and TAC tools. 
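The escrow and reputation mechanics described above can be illustrated with a minimal, self-contained sketch. This is plain Python for illustration only, not actual chaincode deployed on the TIC blockchain; the class name, the escrow formula and the settlement conditions are assumptions introduced for the example.

```python
from dataclasses import dataclass

@dataclass
class RentalContract:
    """Simplified model of the car-sharing smart-contract flow (illustrative only)."""
    renter_reputation: float   # 0.0 (untrusted) .. 1.0 (fully trusted)
    price: float               # agreed rental price
    escrow: float = 0.0
    state: str = "created"

    def required_escrow(self) -> float:
        # Assumption: a lower reputation implies a higher escrow, as described above.
        return self.price * (1.0 + (1.0 - self.renter_reputation))

    def lock_funds(self, amount: float) -> None:
        if amount < self.required_escrow():
            raise ValueError("insufficient escrow for this reputation level")
        self.escrow = amount
        self.state = "funded"

    def settle(self, geolocation_ok: bool, next_user_check_ok: bool) -> str:
        # Funds are released only when the geolocation service and the next
        # user's vehicle check confirm that the contract clauses were met.
        if self.state != "funded":
            raise RuntimeError("contract not funded")
        self.state = "settled" if (geolocation_ok and next_user_check_ok) else "disputed"
        return self.state

contract = RentalContract(renter_reputation=0.8, price=40.0)
contract.lock_funds(contract.required_escrow())
print(contract.settle(geolocation_ok=True, next_user_check_ok=True))  # settled
```

In the deployed DApp, the equivalent logic lives in the smart contract itself, with the geolocation monitoring service and the next user's vehicle check providing the settlement inputs.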
Car-sharing DApp deployment and operation CONF provides a back-end service for the car-sharing provider to deploy the DApp microservices and to dynamically control them at runtime. CONF creates a resource configuration plan based on the carsharing DApp requirements, including the number of virtual machines and the detailed deployment steps. The DApp uses a specific API to integrate the client SDK for the TIC's hyperledger fabric and to instantiate the decentralized network, its storage database and the prescriptive system. CONF also deploys the required virtual machines and containers, and performs continuous monitoring to detect infrastructure and resource performance anomalies hindering the microservices scalability. TIC offers two communication methods upon deployment. On the one hand, TIC provides a RESTful API that uses the client SDK of NodeJS for the hyperledger fabric to communicate with the blockchain, grouping all the functions invoked in the chaincode. On the other hand, TIC utilizes a React Native 2 framework to integrate the TIC adaptor as a special client SDK that encrypts the information at the client side and allows data anonymization before sending it to the blockchain network. Moreover, this adapter allows calling every function in the chaincode and registering new users on the platform. TIC also registers diverse user activities and communicates to SMART with a RESTful API. SMART utilizes the experiential traces embedded within TIC's blockchain network to provide a semantic mapping that captures carsharing providers and user requirements originating from spatiotemporal mismatches between demands (e.g. trips, travelers, requests) and supplies (vehicles). Such a contextual mapping enables car-sharing providers to enhance customer escalation through personalized end-to-end travel experiences. Additionally, SMART's spatio-temporal community detection empowers car-sharing companies to identify semantically similar customers (e.g. based on lifestyle preferences) for optimizing fleet management and predicting travel demands with customized mobility options. Moreover, SMART provides trust metrics to calculate and query the user reputation through a RESTful API. An important objective of the reputation model is to detect lobbies or interest groups, which support malicious users in improving their reputation or ratings. Upon detecting such behavior in the community, SMART decreases the weight of its impact in the reputation model and further removes or mitigates it by learning from previous facts. TAC exploits the knowledge processed by the SMART tool for aggregation, prescriptive analytics and visualization, offered to providers, prosumers and consumers through a dynamic visualization dashboard (e.g. car rental heat-map zones). The insights offered by the TAC geospatial microservice include the coordinates and addresses of the parked cars, the places allowed for ending a trip (without penalties), the rewards for ending the trip to suggested destinations, the list of users who started or wait for joining a trip, and the users' locations when sending messages to the platform. The TAC temporal microservice supports complex analyses of the car-sharing social network, helping in avoiding traffic jams and using seasonality for accurate traffic prediction [16]. TAC's ultimate goal is to display guided analytics on the behavior and engagement of social media actors, which helps them diagnose performance risks and improves collaboration, successful sustainability and revenue growth. 
TAC also interacts with the car-sharing social media DApp to identify external factors (e.g. weather forecast) affecting the optimized rental of cars. Car-sharing workflow scenario Let us assume a customer called Alberta who asks for a car or to join travel. Her interaction with the DApp triggers the following workflow scenario: 1. The DApp first checks the certificate delivered by TIC, which entitles her to use the wallet, and handles the request afterwards. 2. CONF scales up the platform in case of a large number of concurrent requests and keeps the QoS by balancing the demand, replicating APIs or increasing the capacity of the database if the request comes along with the use of the social network. 3. SMART analyses Alberta's previous behavior and establishes a reputation rating before responding to the request. At the same time, it analyses the demand prediction at the destination and sets the price accordingly. 4. The DApp uses TAC to suggest the best price to Alberta and several options based on the analytics provided by SMART. If she decides to change her destination to a location closer to the more demanding zones, she will receive a better price. 5. Finally, Alberta decides the price and the travel, stores them in a smart contract together with the escrow for the vehicle and publishes the route for other users to join. However, she can modify the smart contract setup for the travel until the start time. 6. After Alberta starts the travel, the DApp collects geolocation from several sources, such as the car, the driver, and other passengers. 7. SMART analyses this data to detect potential inconsistencies or fake information. This data represents inputs to the smart contract executed on the blockchain network provided by TIC. 8. When Alberta finishes the travel, the platform evaluates the SLA through TIC and the smart contract. The escrow needs information from the next user of the same vehicle, who checks its status and validates it using a new entry for the smart contract. 9. Finally, SMART uses the new data stored on the blockchain for the next assessments of Alberta, affecting her future reputation and behavior. 1. Import or create a certificate pertaining to travel; 2. Import the travel details; 3. Display offers by different car owners specific to travel plan; 4. Provide user access for uploading content (e.g. images, videos, comments). Car-sharing DApp implementation The following sections describe the data models and the prototype implementation, preparation and integration of the car-sharing DAPP through its interaction with the ARTICONF tools. Car-sharing data model To validate the car-sharing prototype, we integrated the Mockaroo 3 random data generator and API mocking tool into its mobile application interface. Mockaroo enables the creation of realistic test data in CSV, JSON, and SQL formats. We used Mockaroo to generate 100 000 cars, 100 000 users, and 100 000 offers according to the schema presented in Fig. 4. The test data set is openly available in JSON format on the Zenodo archive 4 for reproducibility purposes. Car-specific data schema has eight fields: license plate, brand, model, color, owner identifier, number of seats, manufacturing year, and availability status. Car offer data schema comprises eight fields: offer identifier, car license plate, price per km, price per time, start coordinates (latitude and longitude), as well as start and end locations (addresses). 
User data schema contains four fields: user identifier, balance, payment source (generated using the PayPal RESTFul API 5 and reputation (i.e., numeric score). Travel data schema is a combination of car, offer and user data models, instantiated by the users, registered onto the blockchain, and used by the SMART and TAC tools for analytic purposes (see Fig. 5). TIC for car-sharing TIC facilitates a configurable blockchain platform with support for dynamic addition and removal of new organizations to the car-sharing consortium, as the business scales across different geographies. The TIC blockchain services described in the following sections provide to car-sharing DApp providers and developers an advanced environment to quickly deploy (Section 4.2.1) and configure (Section 4.2.2) a secure permissioned blockchain network for rapid DApp development and integration. The policy management in Section 4.2.4 provides an interface to add and update the rules that govern the access permissions of the participating nodes in the deployed permissioned network. The security model in Section 4.2.5 describes the pseudonymization of personal user data through encryption, while highlighting the user data sharing permissions in a two-stage permissioned blockchain architecture. The network management and blockchain explorer microservices in Section 4.2.6 manage and monitor the deployed blockchain network. Finally, the SDK and API described in Section 4.2.6 simplify the smart contract development process with rapid DApp integration. Blockchain deployment TIC uses Ansible [17] to manage the deployment of blockchain microservices. Ansible is an open-source configuration management, deployment, and orchestration tool that uses a secure shell to connect to the configured servers and run the tasks. Furthermore, Ansible is agentless and eliminates the need to install any additional software or open firewall ports on the client. This helps the car-sharing DApp providers to configure and deploy TIC microservices on remote public or private cloud infrastructures using secure shell access. Fig. 6 illustrates the key Ansible components used by the TIC tool for the blockchain deployment: Inventories are a list of host Internet protocol (IP) addresses managed by Ansible. Playbooks are simple files that contain the task descriptions executed by Ansible in YAML format. Modules control the system resources like packages, or execute system commands on the remote host machines using playbooks. Plugins execute Ansible tasks as a job build step. The car-sharing DApp provider starts the blockchain deployment by configuring the inventory with a list of IP addresses of host machines, running across private or public cloud providers. The Ansible automation engine executes the playbook tasks that configure the host machines with the blockchain platform requirements, followed by the deployment of persistent storage support (GlusterFS) and Hyperledger Fabric microservices. To achieve this, TIC identifies several sequential phases associated with the deployment of a sample organization in the car-sharing consortium, described in the following sections. Infrastructure configuration for blockchain organization The first step in the TIC deployment involves the identification of a reliable and scalable architecture for an organization participating in the car-sharing consortium. Its requirements may depend on various factors, such as network traffic or geographical distribution. Afterwards, TIC delegates the allocation of resources to CONF. 
Fig. 7 shows a prototype architecture of a car-sharing organization comprising one manager and four worker machines. The deployment process starts with the orchestration of the GlusterFS open-source distributed file system on these machines, which scales out in a building-block fashion to store multiple petabytes of data [18]. GlusterFS acts as cloud storage for the common credentials and certificates needed by the blockchain network, as well as any other data required by the car-sharing DApp. Blockchain service configuration A simple interface provides access to the blockchain deployments and facilitates the configuration of the hyperledger fabric microservices with the required number and type (i.e. endorser, committer, anchor) of peers and orderers for the car-sharing blockchain network. Such an interface eases the creation and deployment of a blockchain for the DApp providers without dealing with the underlying infrastructure complexities. Fig. 8 illustrates the configuration of a car-sharing organization, including: Two CA services, as follows: ORGCA, which generates the membership service providers (MSP) for the agents (i.e. peers, clients, administrators) to interact with the blockchain network; TLSCA, which generates the MSPs for the same agents to establish an internal or external transport layer security (TLS) communication; An ordering service that implements the Raft consensus algorithm; Two peer services: Peer1 serving as anchor and committer peer and Peer2 serving as the endorser peer; A command-line interface (CLI) service to install or instantiate the chaincodes (smart contracts) onto the blockchain. Access control policies Unlike public blockchains where any node in the network validates the generated transactions, a permissioned blockchain requires collective mechanisms for controlling the network administration and its operations. Organizations that participate in a permissioned blockchain network need a proper governance model that defines how they agree on operations, transactions, updates, access rights, and others. They achieve this through policies that contain the rules needed by their respective nodes for making any changes to the network. Policies allow members to enforce decisions on the choice of the organizations that can update or access the deployed blockchain network. They include a list of organizations with access to a resource, the number of organizations' agreements required to update a smart contract, as well as transactions and channels. The set of digital signatures from each organization attached to transactions, which satisfy the governance agreed by the network, ensures adherence to policies. Security model The blockchain has two certificate authority services, ORGCA and TLSCA, as described in Section 4.2.3. The network participants or users of the organizations use these certificate authorities to implement cryptographic algorithms for verification, signing and identity attestation, achieved by the MSP process running on the channel levels and ordering service. The MSP process consists of a set of protocols and cryptographic mechanisms for validating and issuing identities and certificates throughout the deployed blockchain network. This ensures that the issued identities have access within their defined scope. Furthermore, TIC's SDK adapter provides an option to encrypt data with a user key before adding it to the blockchain ledger, particularly useful in the pseudonymization of sensitive personal data.
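A minimal sketch of such client-side pseudonymization is shown below; it assumes symmetric encryption with the cryptography package's Fernet recipe, since the adapter's actual cipher is not specified here.

```python
import json
from cryptography.fernet import Fernet  # pip install cryptography

# The user generates and keeps this key; only parties it is explicitly shared
# with (e.g. SMART, after consent) can read the ledger payload.
user_key = Fernet.generate_key()
cipher = Fernet(user_key)

profile = {"name": "Alberta", "phone": "+34-600-000-000"}        # sensitive personal data
payload = cipher.encrypt(json.dumps(profile).encode("utf-8"))    # pseudonymized ledger payload

# payload (ciphertext) is what would be written to the blockchain ledger;
# sharing user_key later lets an authorized tool decrypt it for analysis.
restored = json.loads(cipher.decrypt(payload).decode("utf-8"))
assert restored == profile
```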
At the time of user registration or login, SMART requests user permissions to access their blockchain ledger data for analysis. The user grants this permission by sharing the key with SMART, enabling it to access the encrypted ledger data and use it for analysis. A first set of playbooks prepares the host machines and base services as follows: Install all prerequisite software for the fabric blockchain network, depending upon the operating system type (e.g. Debian, Centos); Pull the required Docker images of the hyperledger fabric services, deployed as plug-and-play modular components; Deploy the GlusterFS cloud storage cluster for the blockchain network; Spawn the Docker services according to the business configuration. Hyperledger fabric services. Multiple playbooks deploy the hyperledger fabric Docker services in the network, as follows: Spawn two certification authority services, as described before; Spawn the ordering service implementing the Raft [19] consensus algorithm; Deploy the peer services according to the business configuration, for example, one peer acting as anchor and committer (Peer1) and one peer acting as endorser (Peer2); Deploy a CLI service to install or instantiate the smart contracts onto the blockchain network, which define the transaction logic that controls the lifecycle of the car-sharing business object; Add or update the channel for conducting confidential and private transactions between several network members through private communication; Add or update system policies with the set of rules deciding which organizations can update or access the fabric network. 4.2.6.3. Visualization services. Two playbooks spawn the portainer and the portainer agent, and two further playbooks spawn the hyperledger explorer services. Fig. 9 represents a snapshot of various Docker hyperledger visualization services deployed by TIC on the manager machines, and of the hyperledger fabric services deployed across four worker machines, as follows: Raft-based orderer service on worker machine 1; CLI and ORGCA services on worker machine 2; CouchDB service storing the blockchain ledger on worker machine 3; Anchor and committer Peer1, endorser Peer2 and the TLSCA service on worker machine 4. Deployment verification. To verify the deployment of a blockchain network according to its business configuration, the hyperledger explorer service provides visualizations of blockchain metrics like the number of participating nodes, transactions, blocks, and chaincodes (see Fig. 10). The car-sharing DApp provider accesses the hyperledger explorer service via the configured credentials and ports. The hyperledger explorer provides an interactive visualization interface used for monitoring and verification of the deployed car-sharing blockchain network. Primarily, it shows a summary of the number of blocks and transactions created within the blockchain, and reports on the active nodes inside the network and the deployed chaincodes. 4.2.6.5. Car-sharing chaincode installation. TIC facilitates the installation and instantiation of the car-sharing chaincodes through a CLI service, verified using the hyperledger explorer. These smart contracts implement the business logic behind the car-sharing DApp. The car-sharing DApp provider can install, instantiate and update smart contracts through the portainer management service CLI. 4.2.6.6. TIC microservice management. The network management portainer provides a dynamic overview of all microservices of the deployed network (see Fig. 11). For each microservice, it reports the status, health and number of replicas.
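The same status information can be obtained programmatically; the sketch below uses the Docker SDK for Python against a swarm manager node and is independent of portainer itself.

```python
import docker  # pip install docker

client = docker.from_env()  # must run on (or point to) a swarm manager node

for service in client.services.list():
    spec = service.attrs["Spec"]
    replicas = spec.get("Mode", {}).get("Replicated", {}).get("Replicas", "global")
    running = len(service.tasks(filters={"desired-state": "running"}))
    print(f"{spec['Name']}: {running}/{replicas} tasks running")
```

Portainer presents the same data interactively, and additionally allows updating a service in place.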
Furthermore, it provides facilities to dynamically update specific microservices based upon the business requirements, particularly useful in fine-tuning the microservices parameters for better performance (e.g. upscaling, downscaling). Test node SDK. TIC supports the integration of mobile and web Apps with the blockchain network through a NodeSDK. It also provides basic helper libraries for registering a user to the blockchain, querying or invoking a chaincode for easy integration, and verifying the car-sharing prototype implementation. Similarly, it can dynamically configure and add organizations to the car-sharing consortium, as the business grows across geographic areas. CONF for car-sharing We describe in the following a concrete car-sharing scenario with improved reaction time during peak hours, invoked using the Postman 8 collaborative platform for API development. In this example, CONF detected an increased load on the metrics database, which significantly affects the QoS and the user experience, mitigated in four steps. Step 1. The DApp provider specifies a high-level description of the application in Topology and Orchestration Specification for Cloud Applications (TOSCA) without the underlying software dependencies or infrastructure. Step 2. The application provider requests a plan using the identifier provided by CONF. CONF resolves all constraints and delivers a plan that contains the infrastructure and software definition to run the application. In this example, CONF detected an increased load in the application database and provisions virtual machines as close to its source as possible. Step 3. The infrastructure provisioner executes this plan based on the available cloud providers. Step 4. After provisioning, the client requests to deploy the car-sharing DApp on a Kubernetes cluster. Fig. 12 shows the Kubernetes dashboard together with the deployed metrics database and the application state (e.g., state of deployments, services, and pods). SMART for car-sharing SMART provides support for business predictions and recommendations to car-sharing DApp providers. To achieve this, it extracts information from the raw blockchain transactions and transforms it into a more expressive structured representation. Essentially, SMART exploits the experiential pseudonymous of car-sharing DApp user activities embedded in the TIC blockchain as immutable traces, such as the data models shown in Figs. 4 and 5. SMART architectural workflow The SMART architectural workflow follows three steps: Semantic linking [20] and contextualization [21] of blockchain transactions; Detecting communities based on contextual similarity and assign roles to DApp user groups; Temporal decomposition of contextual communities into stages. Fig. 13 shows the SMART workflow for semantic linking, contextualization, community role detection, and temporal stage abstraction of DApp transactions. Initially, SMART collects semi-structured transaction data from the blockchain, splits them into unique contexts (e.g. location, timestamp), and semantically links them through diverse contexts. Afterwards, it identifies unique DApp user roles representing unique behavioral patterns in a specific context. Furthermore, SMART clusters transactions represented as nodes for each individual context, where each cluster represents transactions with similar properties (e.g. locations, times, prices, values). Within a geolocation context, for example, an explanatory cluster could span over a unique district D of a city C. 
Hence, the cluster of users performing transactions receives the label C-D. We chose to label clusters instead of DApp users, as directly labeling users based on their transactions can be intrusive and compromise their anonymity. We further temporally partition the labeled cluster groups into stages, where a stage only contains transactions within specific time intervals. The temporally partitioned stages allow comparison of subsequent stages and discovery of the clusters' evolution (e.g. growth, shrinkage). This process allows SMART to not only understand the changing behavioral patterns of social media users in a single context, but also across contexts. Semantic linking and contextualization SMART explores multilayer contextualized semantic linking [22] and enrichment of raw blockchain transactions. For this purpose, it defines a set of layers showing different contexts of the DApp transactions. Additionally, SMART jointly considers the context of all network entities and their social similarity strength. To define a semantic link, SMART considers different semantic labels for edges across contextual layers, where similar semantic labels stay within a single layer. SMART provides a fully interconnected network, where all layers contain all nodes, following a diagonal coupling model such that inter-layer edges only exist between nodes and their counterparts. This model also adopts a categorical coupling model, where inter-layer edges are present between any pair of layers, and links between pairs in each layer describe similarity strengths. Fig. 14 shows the activity traces of the car-sharing DApp represented as a multilayer networked graph with six identified layers: L1-Reputation represents the credit points earned by each user for using the car-sharing application; L2-Users represents the pseudonymous, unique user identifiers; L3-StartingPoints is the source location of each user travel; L4-Time shows the timestamp at each travel source location; L5-Destinations and L6-TravelPrices complete the six layers with the travel destination and price contexts. Each layer in Fig. 14 represents a context embedded with multiple agents, where each agent is a car-sharing DApp user with a behavioral pattern within each layered context. For example, a behavioral pattern within the L4-Time contextualized layer varies the timestamp of different users at the source location. SMART identifies overlapping agents in different layers and semantically links them to understand the role of each user in different contexts, as shown in Fig. 14. SMART performs semantic linking at regular time intervals to obtain multiple multilayered graphs, stored in its knowledge base. Car-sharing community and role detection The multi-layer network contains raw, but contextualized information. Hence, the next step is to find similar agents (i.e. car-sharing DApp users) based on the characteristics of each layer. To achieve this, we applied a clustering technique that tags similar agents with the same arbitrary but fixed label. Initially, SMART fetches the multilayered contextualized graphs across temporal stages as input for community and role detection. Further, SMART utilizes the OPTICS [23] augmented cluster-ordering algorithm to find similarities among different agents in each layered context through a distance function and a minimum number of neighbors required to form a unique community. Fig. 15 shows the clustering performed for the L3-StartingPoints layer, where the same color blocks represent a community. In this case, the distance-based similarity is a function of the longitude and latitude of the source location.
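A minimal sketch of this per-layer clustering step is shown below, using scikit-learn's OPTICS implementation on synthetic starting-point coordinates. The min_samples value and the plain Euclidean distance on latitude/longitude are simplifying assumptions (a haversine distance would be more faithful for geographic data).

```python
import numpy as np
from sklearn.cluster import OPTICS  # pip install scikit-learn

rng = np.random.default_rng(42)

# Synthetic stand-in for the L3-StartingPoints layer: two pick-up hotspots.
hotspot_a = rng.normal(loc=[40.4168, -3.7038], scale=0.01, size=(200, 2))
hotspot_b = rng.normal(loc=[41.3874, 2.1686], scale=0.01, size=(200, 2))
coords = np.vstack([hotspot_a, hotspot_b])

clustering = OPTICS(min_samples=10).fit(coords)
labels = clustering.labels_  # -1 marks noise; other labels are communities

for label in sorted(set(labels)):
    print(f"community {label}: {np.sum(labels == label)} agents")
```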
Each contextualized layer repeats the clustering algorithm with distance function models. After identifying the clusters for each layer, we implicitly know the roles of the agents within a cluster. For instance, a cluster C representing the approximate location of a city A indicates that the agents in C have their location in A. If one agent is in C multiple times, we assign a label connecting it to the city C. SMART assigns roles only to clusters, as direct user labeling based on transactions can be intrusive and compromise anonymity. Car-sharing temporal stage decomposition After obtaining the clusters in each layer, the next step analyzes the evolution of the networked clusters in terms of their temporal, structural, and size changes. For example, a car-sharing DApp provider is interested in the changes in the number of users who connect from a city A over an interval. To formulate such an evolution, SMART defines stages that split the time interval and represent snapshots of the networked cluster. In the current implementation, SMART slices the data based on weeks for an average of 52 stages per year. We apply these time intervals to the ob- tained clusters and analyze if single clusters grew or shrank from one stage to another. We plan in the future to dynamically create the stages by detecting and applying the natural peak use over time. SMART car-sharing evaluation We evaluate the SMART multilayered semantic linking, contextualization, community and role detection, and temporal stage decomposition for the car-sharing data model described in Section 4.1. We use an Intel(R) Xeon(R) Gold 5218 server operating at 2.30 GHz using Ubuntu 18.04 (x86_64) operating system with an attached 9.6 TB solidstate drive. We used a MongoDB 9 database engine for managing the contextualized layer and clustered data due to its NoSQL and fast querying-based features. Semantic linking and contextualization. SMART's multilayer carsharing network has 100 000 trips corresponding to the data models described in Fig. 5. Fig. 16a shows that the multilayer network context creation and semantic linking time for this data set allocated across six network layers increases linearly with the number of journeys taken. The increase in the number of journeys also increases the analysis data volume required for creating the multilayer network, and results in a higher execution time. Community and role detection. We used the L4-Time, L5-Destinations, and L6-TravelPrices car-sharing data set layers to evaluate the SMART community and role detection. We performed clustering of five input sizes with 1000, 5000, 10 000, 20 000, and 50 000 agents. Fig. 16b shows the execution time of the clustering-based role detection in SMART, where all three layers show an increase in runtime with a quadratic complexity. The clustering operation in the L5-Destinations layer across two dimensions (i.e. longitude and latitude) requires a higher execution time for large input sizes compared to the L4-Time and L6-TravelPrices layers clustered across a single dimension. Temporal stage decomposition. We performed the temporal stage decomposition using clusters with computed within the L4-Time, L5-Destinations, and L6-TravelPrices layers with five input sizes of 1000, 5000, 10 000, 20 000, and 50 000 agents (i.e. car-sharing DApp users). Fig. 16c shows that the stage decomposition execution time increases with the number of agents, however, it is similar for all three layers requiring less than 0.5 s for 50 000 nodes. 
The reason is that stage extraction for clustered agents takes place across a single timestamp dimension (i.e. one week, described in Section 4.4.4), while keeping the number of agents constant in each layer. TAC for car-sharing The TAC interactive interface assists the car-sharing DApp consumers, prosumers, and businesses in injecting intelligent insights in data aggregation and cognition. Data aggregations are collections of data stored in buckets and populated by the geospatial and temporal microservices. Data aggregations Data aggregations generate analytical information over stored documents used for near real-time data analytics. There are two types of aggregations. Bucket aggregations contain buckets created to store and group various documents based on the value of a specific field. The bucket aggregations usually combine with other types of aggregations, creating sub-aggregations. Metric aggregations represent computations performed over several documents after creating bucket aggregations. Subsequently, the metric aggregations calculate and return the value of each bucket. Geospatial microservice The geospatial microservice handles the gathering, display, and manipulation of geolocation data [24] to create categorized buckets based on geo_point fields. The geospatial map visualizes the starting positions of journeys that made the most revenue. This visualization requires metric aggregations on the data field containing geographic information (i.e. latitude and longitude) about the travel starting positions, and counts the corresponding total spending (see Fig. 17). Temporal microservice The temporal microservice handles the changing data over time. The temporal analysis coupled with the visualization microservice allows car-sharing users to follow their savings, check if a car is available for rent, and verify the status of a travel (i.e. booked, started, finished, checked, canceled). The car provider can identify the rating of the user (which reveals their behavior during the trip), the date when the trip starts or ends, or penalize the user if the trip has not started or finished in a range of 5 min. Hence, the company can follow the distance traveled by a concrete car, the rating of the travel given by the passengers, the list of users who already started the trip with the dates and the coordinates, and the date of publications made by the users. Fig. 18 displays the time series trend of the aggregation data, which analyzes the change in maximum, minimum, average, and median rental price over time. Visualization service The visualization service is responsible for aggregating and exploiting the car-sharing content supply chain across providers, communities, groups, and users. It provides information that supports user engagement in collaborative economies with monetary inclusion, it increases the provider's awareness of users' activity on the platform, and it helps them track the rating and functioning of their application. TAC provides visual analytics of the car-sharing qualitative data obtained from SMART by applying filtering rules to select the most relevant parameters of interest specific to the car-sharing use case. For this purpose, it uses three types of visualizations implemented using three open-source software tools known as the elastic stack (Elasticsearch, Kibana, and Logstash). Fig. 19a shows a metric aggregation example that requests the average, minimum, maximum, and total price per kilometer for the car-sharing data set.
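A minimal sketch of such a metric aggregation is shown below, using the Elasticsearch Python client with 8.x-style keyword arguments. The index name, endpoint and field name follow the aggregated result reported next and are otherwise placeholders.

```python
from elasticsearch import Elasticsearch  # pip install elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

# Stats metric aggregation over priceForKm: count, min, max, avg and sum,
# matching the aggregated result reported for Fig. 19b.
response = es.search(
    index="travels",
    size=0,  # no documents, aggregations only
    aggs={"price_for_km": {"stats": {"field": "priceForKm"}}},
)
print(response["aggregations"]["price_for_km"])
# e.g. {'count': 1000, 'min': 1.01, 'max': 10.0, 'avg': 5.49608, 'sum': 5496.08}
```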
Fig. 19b shows the following aggregated result: Number of documents inside the index ("count" field of value 1000); Minimum value for priceForKm ("min" field of value 1.01); Maximum value for priceForKm ("max" field of value 10.0); Average value for priceForKm ("avg" field of value 5.49608); Total price paid for the travel ("sum" field of value 5496.08). TAC offers several visualization types, including area chart, data table, line chart, markdown widget, metric, pie chart, tile map, and vertical bar chart. Fig. 20 visualizes the average, minimum, maximum, and median price per kilometer for the car-sharing metric aggregation given in Fig. 19. Fig. 21 shows the aggregated number of seats across diverse car brands visualized as a heat map. Similarly, the bar chart in Fig. 22 illustrates the top five car brands and the corresponding car types that achieved the most revenue. Related work This section presents related work on decentralized social media and car-sharing platforms. Decentralized social media Diaspora [25] is a non-profit, user-owned, distributed social network that addresses privacy concerns related to centralized networks. It consists of a network of nodes called pods, hosted by different individuals and institutions. Each node operates a copy of the Diaspora software running as a personal web server with social networking capabilities. Users of the network can host a pod on their own server, or create an account on any pod to interact with other users. The Diaspora users retain ownership of their data and do not assign ownership rights. Steem is a social blockchain [26,27] that grows communities and generates revenue streams by rewarding users for sharing content. It supports community building and social interaction with cryptocurrency rewards. Steem provides a backbone to support social media and online communities by returning its value to the people who provide valuable contributions (e.g. content generation and propagation) and rewarding them with cryptocurrency. Through this process, it creates a currency able to reach a broad market, including people who have yet to participate in any cryptocurrency-based sharing economy. UHIVE is a blockchain-based, privacy-aware social media network with token rewards for content producers and consumers, which respects privacy by not logging user activities and by permanently deleting data. The network distributes revenues and rewards in exchange for content, engagement with posts, and time spent on the app. It also provides a content discovery UX, powered by interest-based user selection. Finally, it supports public, personal, and interest-based user profiles. Minds is a decentralized social networking platform that rewards users with tokens for contributions to the community. Minds provides a free and open-source distributed crypto-social networking service that uses blockchain to reward the community with ERC-20 tokens. Minds users can use their tokens to promote content or to crowdfund other users through monthly subscriptions to exclusive content and services. Matic is a decentralized network that scales through sidechains for off-chain computation, and ensures security using the Plasma framework and a proof-of-stake validator. It provides scalability and superior user experience to DApps by using sidechains to guarantee fast, low-cost and secure transactions. Matic uses Ethereum, but intends to offer support for additional base chains.
Additionally, it provides a developer abstraction from the main chain to a Matic chain, native mobile apps, and wallet support. Moreover, Matic uses public, permissionless sidechains capable of supporting multiple protocols. Sapien 14 is a social news platform that offers users control over their data. Sapien utilizes a unique engine for rewarding content creators with SPN tokens by accepting micropayments or subscriptions, and allowing users to get SPN from advertisers. Additionally, it constructs a global reputation system that uses smart contracts to evaluate individual contributions. Decentralized car-sharing The decentralization of social media platforms contributed to the rise of many business models, in particular for blockchain-based applications [7,14]. One such prominent model is the P2P mobility-as-a-service marketplace. We discuss in this section several existing car-sharing market DApps. HireGo 15 is a car-sharing DApp based on the Ethereum blockchain. HireGO facilitates car-sharing and rentals between owners and passengers through smart contracts, and enables rewards by distributing virtual HGO tokens based on the non-fungible token standard ERC-721 [33]. Helbiz 16 integrates the mobile transportation systems and provides ERC-20 [34] tokens called HBZ. Contrary to HireGo, Helbiz also rewards end-users for providing mobility data to insurance companies. DAV [35] DApp provides mobility services for a set of interconnected autonomous vehicles (e.g. cars, trucks, drones) in exchange of ERC-20 DAV tokens, similar to HireGo and Helbiz. WONO 17 DApp provides a decentralized marketplace for car and house rentals. WONO combines a public Ethereum with a private blockchain network that uses a proof-of-stake algorithm for consensus and an IPFS-based decentralized file system for data storage (e.g. user profiles, videos) [36]. WONO uses the public key infrastructure to store and encrypt personal data on the client device. Conclusions We presented the approach taken by the ARTICONF project funded by the Horizon 2020 program of the European Union that researches and develops a novel set of trustworthy, resilient, and globally sustainable decentralized social media services. To achieve this goal, ARTICONF proposes an open decentralized architecture consisting of four tools (i.e. TIC, CONF, SMART, and TAC), addressing in collaboration the following five main objectives: Transparent and decentralized infrastructure creation and control (TIC) through a novel permissioned blockchain network, supporting pseudonymous identities and offering users a secure, permanent and unbreakable link to their personal data; Improved and trusted participation (TIC and SMART) by eliminating malicious actors in participatory exchanges, with improved collaboration, trust, and operational costs; Democratic and tokenized decision-making (SMART) through collective decentralized reasoning that improves the content quality by finding interest groups and incentivizing users for their participation; Elastic resource provisioning (CONF) for customizing, deploying and controlling distributed P2P and cloud virtual infrastructures required by time-critical social media applications; Cognitive guided analytics for improved collaborative economy (TAC) that injects intelligent insights into operational and missioncritical social media applications, with predictive models for consumers, prosumers, and business markets. 
We presented the ARTICONF architecture applied to a car-sharing DApp use case, as a new collaborative model and alternative software-as-a-service solution to private car ownership. We described a simulated use case scenario accompanied by real snapshots illustrating how the different tools of the ARTICONF platform allow car-sharing customers to engage in trustful and secure interactions for renting a vehicle at a variable fee, charged depending on the distance traveled or time used, while keeping complete control over their personal data. Future work aims to apply AI techniques to optimize the fleet allocation based on the demand location, predicted by SMART by analyzing community movements inside a city using geolocation and previous travel information. The challenge is to match and reward travel to destinations close to high-demand locations, which reduces the operational costs of relocating cars. The ARTICONF project plans to validate its results on three further industrial applications targeting crowd journalism with news verification, video opinion discussion, and a smart energy sharing marketplace. The project started in January 2019 and expects to achieve its first prototype release in 2021.
Importance of IGF-I levels in IVF: potential relevance for growth hormone (GH) supplementation Purpose Growth hormone (GH) supplementation in association with in vitro fertilization (IVF) is worldwide again increasing, even though study outcomes have been discrepant. Since GH acts via insulin-like growth factor-1 (IGF-1), its utilization in IVF would only seem to make sense with low IGF-1. We, therefore, determined whether IGF-I levels affect IVF outcomes. Methods Retrospectively, 302 consecutive first fresh, non-donor IVF cycles were studied, excluding patients on GH supplementation. Patients were divided into 3 subgroups: IGF-1 in lower 25th percentile (group A, < 132 ng/mL, n = 64); 25th–75th percentile (B, 133–202 ng/mL, n = 164), and upper 25th percentile (C, > 202 ng/mL, n = 74). IGF-1 was tested immunochemiluminometric with normal range at 78–270 ng/mL. Because of the study patients’ adverse selection and low pregnancy chances, the main outcome measure for the study was cycle cancellation. Secondary outcomes were oocyte numbers, embryos transferred, pregnancies, and live births. Results Group A was significantly older than B and C (P = 0.019). IGF-1 decreased with increasing age per year by 2.2 ± 0.65 ng/mL (P = 0.0007). FSH was best in group B and worst in A (trend, P = 0.085); AMH was best in B and worst in A (N.S.). Cycle cancellations were lowest in C (11.6%) and highest in A (25.0%; P = 0.042). This significance further improved with age adjustment (P = 0.021). Oocytes, embryo numbers, pregnancies, and live birth rates did not differ, though oocyte numbers trended highest in B. Conclusions Here presented results support the hypothesis that IGF-1 levels affect IVF outcomes. GH treatments, therefore, may be effective only with low IGF-1. Introduction As add-on to ovulation induction for intrauterine inseminations [1] and in vitro fertilization (IVF) stimulation protocols [2], growth hormone (GH) supplementation was actively utilized for a little over a decade starting in the late 1980s. After a relative hiatus of approximately two decades, GH supplementation has in the last 15 years again become more fashionable [3,4], even though effectiveness of GH supplementation in improving IVF outcomes has remained controversial [5,6]. GH is a peptide hormone secreted by anteriorly positioned cells in the pituitary gland (somatotrophs) and plays multiple important roles in the body which go far beyond just support of linear growth, as its name would suggest. Released in pulsatile fashion by GH-releasing hormone with peaks during sleep, it is inhibited by somatostatin, produced in the hypothalamus. Its levels are the highest during puberty and are affected by environmental factors, like sleep patterns, diet, exercise habits, and exposure to stress. The hormone's principal organ target is the liver, where it induces synthesis of insulin-like growth factor (IGF-1) [7]. GH's principal (though not only) activity, therefore, is mediated by IGF-1. How GH and IGF-1 affect reproductive tissues has recently been reviewed [8]. Though thus a good number of studies have investigated GH-supplementation in conjunction with IVF, peripheral IGF-1 values in infertile women have been only minimally explored and, indeed, with contradictory findings by the same institution [9,10]. . Some studies have reported on IGF-1 in follicular fluids and observed correlations to IVF outcomes [11][12][13]. The GH/IGF-1 signaling pathway (at times also called the somatotropic axis) relates strongly to aging [12,13]. 
In centenarians, functional mutations in the IGF-1-receptor (IGF-1R), resulting in diminished IGF-1 signaling, are enriched [14,15]. In women, low IGF-1 was demonstrated to offer a general survival advantage [16]. As of this point, effects of IGF-1 on ovarian aging are not well defined. Animal data, however, have convincingly demonstrated that GH can stimulate IGF-1 secretion not only from the liver but also from peripheral organs, including ovaries. To complicate matters further, such local IGF-1 secretion can also be stimulated by steroid hormones and/or gonadotropins. Moreover, GH can also be produced locally in the ovary, in which case the hormone functions in a paracrine, no-pulsatory, and noncircadian mode without involvement of the GH receptor (GHR) [17]. A mouse model, knockout of GHR, however, interestingly did not prevent fertility but reduced litter size [18], thereby delaying exhaustion of the follicle pool [13]. Diminished GH activity in the ovary may, thus, help in maintaining the resting follicle pool (i.e., reduce recruitment), as it naturally declines with advancing female age (i.e., declining functional ovarian reserve). This is also supported by histological examinations, demonstrating a decline in the growing follicle pool. That IGF-1 is, likely, involved in the signaling cascades for these observations is demonstrated by the fact that IGF-1 administration reverses them [19]. Moreover, knockout of the IGF-1 gene in the mouse does results in infertility (and dwarfism), a phenotype that cannot be rescued with gonadotropin stimulation and on histology demonstrates a complete arrest in the development of the growing follicle pool [17]. IGF-1, thus, appears essential for follicle maturation. Hsu and Hammond in 1987 were the first to demonstrate that GH increased ovarian IGF-1 production in granulosa cells, thereby enhancing FSH action [20]. We today know that GH and androgens share in this function at small growing follicle stages [21]. With increasing clinical utilization of GH supplementation in IVF, a better understanding of IGF-1 effects on ovaries appears, however, urgently needed. For example, GH supplementation would appear senseless in presence of normal or even high IGF-1 levels, as any hormone supplementation only appears indicated if concentrations in the to be treated microenvironment are insufficient. It indeed would not surprise if above noted persisting controversy whether GH supplementation improves IVF outcomes may be due to unselected indiscriminate utilization of such supplementation in infertile women. Assuming normal endocrine physiology, GH supplementation should only be effective in women with abnormally low IGF-1 levels. To elucidate the potential importance of peripheral IGF-1 levels for IVF outcomes, this study, therefore, investigated the importance of untreated initial peripheral IGF-1 levels on IVF cycle outcomes. Results support the hypothesis that peripheral IGF-1 levels relate to IVF cycle outcomes and, therefore, suggest that GH supplementation should only be applied selectively. Study population We report on 978 consecutive patients undergoing 815 IVF cycles at our center between 2018 and 2020 who as part of a diagnostic work-up had peripheral IGF-1 level determinations at time of initial consultation. Bloods were routinely obtained approximately 6-8 weeks before IVF cycle start. Patients on GH supplementation and/or in repeat IVF cycles at our center were excluded from this study. 
Ultimately, 302 fresh first non-donor cycles qualified for the study. Based on IGF-1 levels, these women were then divided into 3 subgroups representing the lower 25th percentile (group A, < 132 ng/mL, n = 64), the 25th-75th percentile (group B, 132-202 ng/mL, n = 164), and the upper 25th percentile (group C, > 202 ng/mL, n = 74), with A considered patients with low, B with normal and C with high IGF-1 levels. Main outcome measures Because our center, based on patient age, low ovarian reserve, prior IVF cycles at other centers, and other adverse patient parameters, likely, serves the most adversely selected patient population among IVF centers in the USA (and possibly worldwide), the primary chosen endpoint for the study was cycle cancellations, likely the most sensitive endpoint among patients with high cycle cancellation rates. Secondary study end points were number of oocytes retrieved, embryos transferred, pregnancies, and live births. Because of low expected pregnancy rates, the study was, however, considered underpowered to consider them as primary endpoints. Primary and secondary endpoints were also investigated adjusted for patient age at time of presentation. The diagnosis of a clinical pregnancy mandated visualization of pregnancy on vaginal ultrasound examination. IVF cycle protocol As already noted, our center serves a very homogenous, poor-prognosis patient population, characterized by advanced female age, large numbers of prior cycle failures, low functional ovarian reserve, and, therefore, ovarian resistance to stimulation. Patients, consequently, receive individualized ovarian stimulation protocols, which contain the following common denominators: (i) Every woman above age 40 and women below age 40 with LFOR for age and low peripheral androgen levels and/or elevated sex hormone binding globulin (SHBG) receives as previously reported, at least 6-8 weeks of pre-supplementation with dehydroepiandrosterone (DHEA) and CoQ10 prior to IVF cycle start [22]. DHEA supplementation is initiated only after baseline bloods, including IGF-1, are drawn. Cycles are initiated only once androgen levels and SHBG are in normal range. (ii) All cycles are initiated on days 2-3 of menses after ca. 10 days of luteal estrogen supplementation for priming purposes. (iii) Except in younger women with still adequate ovarian reserve, who, per Surrey et al. [23] receive a micro-dose agonist protocol, most patients receive ovarian stimulation without either agonist or antagonist since they receive HIER (highly individualized egg retrieval), with human chorionic gonadotropin (hCG) trigger of 10,000 IU, depending on female age and prior cycle history, at 12-16-mm lead follicle size [24,25]. Because of the early egg retrieval, agonists/antagonists to prevent spontaneous ovulation are not required in such patients. (iv) All patients receive gonadotropin stimulation of 450-600 IU per day, usually at 3:1 ratio of FSH to human menopausal gonadotropin (hMG) products (manufacturers vary, depending on patient preference and/ or insurance coverage). If patients have a history of very poor prior response to such stimulation, they in parallel also receive Clomiphene citrate 100 mg for 5 days, starting on day 2 of menses. (v) Considering the importance of every embryo in this patient population, the embryology laboratory performs, as also previously reported, rescue in vitro maturation of every immature oocyte [26]. 
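For illustration, the IGF-1 grouping described above can be expressed as a short sketch on synthetic data; the column names and random values are assumptions, while the cut-offs mirror the stated boundaries of 132 and 202 ng/mL.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.normal(42, 5, 302).round(1),
    "igf1": rng.normal(170, 55, 302).round(0),  # synthetic IGF-1 values, ng/mL
})

# Group boundaries as defined in the study: <132 (A), 132-202 (B), >202 ng/mL (C).
df["igf_group"] = pd.cut(
    df["igf1"],
    bins=[-np.inf, 132, 202, np.inf],
    labels=["A", "B", "C"],
)
print(df["igf_group"].value_counts())
```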
IRB approval Since this study only involved data extraction from our center's anonymized electronic medical research database, it only required expedited IRB approvals. Every included patient provided written permission by consent to utilize their medical records for research purposes, as long as their anonymity was maintained, and the medical record remained confidential. Statistical analyses Continuous variables were presented with mean ± standard deviation and compared between IGF-1 groups by an ANOVA test. Categorical variables were compared between IGF-1 groups with Fisher's exact test. Age was compared to continuous IGF-1 levels by linear regression. Logistic regression and negative binomial regression models were used to adjust for patients' age. A P-value < 0.05 was considered statistically significant. Analyses were performed by the center's medical statistician (S.K.D.) using SAS version 9.4 (SAS Institute, Cary, NC). Results The distribution of IGF-1 levels in the whole study population was Gaussian (Fig. 1a). Patients in the lowest IGF-1 quartile (group A) were significantly older (43.0 ± 4.8 years) than those in mid-range (group B, 41.3 ± 4.9 years) and highest quartile group C (40.7 ± 5.6 years; P = 0.019). This is of importance because, as one would expect, IGF-1 levels were age dependent: A linear regression revealed that IGF-1 levels decreased with increasing age by 2.2 ± 0.65 ng/mL per year (P = 0.0007; Fig. 1b). Table 1 demonstrates further details: Though not statistically different, trends reflecting ovarian reserve parameters were the best in group B: FSH was 17.3 ± 17.8 vs. in group A, 24.8 ± 35.3 and in group C, 18.1 ± 20.6 mIU/mL (P = 0.085); AMH was 1.4 ± 3.3 vs. in group A, 0.7 ± 1.2 and in group C, 1.0 ± 1.6 ng/mL (P = 0.200). Cycle cancellations were statistically the lowest in C (11.6%), the highest in A (25.0%), and in mid-range in B (13.5%; P = 0.042). Oocyte numbers, transferred embryos, pregnancy, and live birth rates did not differ significantly, though oocyte numbers trended the highest in group B (5.2 ± 5.4) vs. 3.6 ± 5.4 in group A and 4.5 ± 5.0 in group C. Adjusting statistical assessments for age, the difference in cancelled cycles became even more significant (P = 0.021), while all other outcomes, likely because of too small patient numbers, remained non-significant. Discussion It is important to initiate the discussion of here presented results by pointing out one more time the highly unfavorable selection of here presented patient population (Table 1). Not only were patients of advanced age, from a mean of 40.7 years in group C and 41.3 years in group B to a mean of 43.0 years in group A (P = 0.019), but they also demonstrated highly unfavorable functional ovarian reserve parameters, with FSH in this case demonstrating the best abnormal median in group B at 17.3 mIU/mL, group C with FSH 18.1 mIU/mL holding the middle, and group A with 24.8 mIU/mL being the worst, though differences did not reach significance (P = 0.085). They, however, correlated with abnormally low AMH levels, with group B again demonstrating the best mean level of 1.4 ng/mL, followed by group C at 1.0 ng/mL and group A again demonstrating the by far poorest mean value at 0.7 ng/mL, though these differences were statistically also not significant. Despite quite a large number of first IVF cycles (at our center) in this study (n = 302), because of the unfavorable prognosis of here investigated patients, pregnancy and live birth rates were as expected relatively low (Table 1).
This can be assumed to be a reason why oocyte numbers retrieved, numbers of transferrable embryos, and pregnancy and live birth rates did not reach statistical significance between study groups. Cycle cancellation rates, clearly the most sensitive outcome parameter in poor prognosis patients, however, did demonstrate statistically significant differences between study groups based on IGF-1 level, and these differences even strengthened with age adjustment. Further studies, involving even larger patient numbers as well as better prognosis patients, will, however, be helpful in reaching more definite answers as to why, even in most unfavorable IVF patients, cycle cancellations statistically relate to IGF-1 levels. Since cycle cancellation rates in this study clearly inversely correlated with IGF-1 levels, this study for the first time offers a potential selection tool for women in infertility treatments who may benefit from GH supplementation in association with IVF. All evidence points toward women in group B (normal IGF-1 levels) demonstrating the best outcomes. This finding alone supports the study's initial hypothesis that GH supplementation may improve IVF outcomes only in patients with low IGF-1 levels (group A). These findings potentially also explain the very conflicting results in the literature regarding GH utilization in association with IVF, as unselected utilization will, of course, dilute the effectiveness of GH treatment: just as aspirin will relieve headache only in patients with headache and will be ineffective in a general population without a preponderance of headaches, so will GH only be effective in women with low IGF-1 levels, through which GH exerts its physiological effects on ovaries. Our results to a degree contradict studies from a single laboratory, claiming in two studies poorer IVF cycle outcomes with increasing IGF-1 levels [10,27]. The same group in an earlier study, however, as we do here, reported the highest cycle cancellations with the lowest IGF-1 and the lowest cancellations with the highest IGF-1 [9]. Their most recent study involved so-called poor responders, but ages were clearly younger and FSH and AMH levels more favorable than in our patient population [27]. In addition, these authors defined high IGF-1 levels as anything over 72.0 ng/mL, while in our study even the lowest 25th percentile extended up to 132 ng/mL. These two studies, therefore, are not comparable, and an insightful accompanying editorial noted that drawing conclusions from this group's recent study was difficult for several additional reasons [28]. For these reasons and because of physiological logic, the here observed statistical correlation between low IGF-1 and increased IVF cycle cancellation risk, therefore, is credible. At an absolute minimum, GH supplementation, thus, appears indicated in women with low peripheral IGF-1 levels, in this study defined as < 132 ng/mL (lower 25th percentile). As noted earlier in the "Introduction" section of this manuscript, positive effects of GH supplementation should not surprise in the absence of GH and especially of adequate IGF-1 levels [13, 18-20]. Though such supplementation has remained controversial [5,6], our improving understanding of GH/IGF-1 effects on granulosa cells and the resulting synergism with FSH effects on follicle growth support such supplementation, but only if it occurs in women with low IGF-1 levels.
We, therefore, propose that future studies of GH supplementation in IVF cycles should be preceded by IGF-1 evaluations and only women with abnormally low levels should be considered for such supplementations. Here presented findings are, however, also interesting for their apparent contradictions: On the one hand, there appears strong evidence for a beneficial effect of IGF-1 on IVF cycle completion; yet, while the positive effect on cycle completion appears linear with increasing IGF-values, functional ovarian reserve, as represented by FSH and AMH levels, on the other hand, appears best at mid-levels of IGF-1 (group B). Cycle cancellations as well as FOR are clearly the worst in group A, also the oldest patients in this study and, therefore, are not a surprise. Reaffirming the likelihood of a causal association with IGF-1, age, however, does not appear to explain cycle cancellations since significance was maintained (and actually improved) after age adjustments (P = 0.021). Cycle cancellations automatically denote IVF cycle failure. Though a statistical association does not establish causation, here demonstrated statistical association between IGF-1 levels and IVF outcomes strongly supports a causal relationship since this association even strengthened after age adjustments. How, specifically, IGF-1 lowers cycle cancellation risks, remains to be established. Cycle completion mandates at least one oocyte and one transferrable embryo. One, therefore, may conclude from here presented findings that better IGF-1 levels support the likelihood that at least one embryo becomes available for transfer. IGF-1, may achieve this by, as previously noted, enhancing recruitment [29] and acting synergistically with androgens and FSH in follicle maturation during small growing follicle stages [21]. Improvements in egg and embryo numbers after GH supplementation have, indeed, also been reported in studies that have failed to demonstrate improvements in pregnancy and live birth rates [5,6] and, therefore, based on existing literature appear as of this point factual. Whether there are other ways by which the GH-IGF-1 axis may beneficially influence IVF outcomes remains as of this point unsettled. An aged mouse model, recently reported by Chinese investigators, offers complementary information to here presented data: In that study, the authors confirmed that GH increased the number of antral follicles and of retrieved oocytes most at a medium dosage, second-best at high dosage and least at low dosage. This effect was achieved in those animals without obvious changes in AMH levels. Because improvements also correlated with increasing ATP levels, frequency of homogenous mitochondrial distribution, and improved mitochondrial membrane potential (though not with mtDNA copy numbers), the authors suggested that GH improved mitochondrial function in oocytes [30]. GH, in addition, also appears effective in improving in vitro maturation of human oocytes [31,32]. Finally, recent studies also strongly hint at effects of GH on endometrial receptivity [32] which, thus, potentially appears to offer an independent contribution to improved IVF outcomes from ovarian HGH/ IGF-1-effects. After female age, egg and embryo numbers in a given IVF cycle represent the second most-important predictor of pregnancy and live birth chances in IVF [33]. 
They in that same study also related in a rather peculiar way to AMH levels, which may also have some relevance to the here reported results: as pregnancy and live birth rates increased with larger egg and embryo yields, they did so also in parallel to increasing AMH levels. That increase, however, persisted only up to a certain AMH level, at which point, with further increasing AMH, not only did pregnancy rates start declining but miscarriage rates skyrocketed. Beyond certain AMH threshold levels, its initially positive effects on IVF outcomes thus turned radically negative. IGF-1 may demonstrate a similar effect reversal with increasing concentrations in the peripheral circulation, as Irani et al. in frozen-thawed IVF cycles recently reported higher miscarriage rates associated with higher peripheral IGF-1 levels [34]. This observation further supports the above noted suspicion that this group of investigators dealt with a very different patient population with quite different IGF-1 cut-offs in comparison to this study. The here presented IGF-1 data, suggesting best FOR at mid-range IGF-1 (in our study at roughly 132-202 ng/mL), are supported by the above noted mouse study [30], suggesting similar IGF-1 dynamics, with a "best" level at mid-range. Endocrinology is defined by "best" endocrine ranges for practically all hormones. Another good example in the control of ovarian function is androgen levels, with too-low and too-high levels producing subpar IVF outcomes [21]. Limitations, summary, and conclusions The highly unfavorable patient population our center serves obviously limits the applicability of the here reached conclusions (Table 1). Considering the advanced age and low functional ovarian reserve of all three here reported patient groups, IVF outcomes were characterized by relatively small oocyte yields, embryo numbers, and few pregnancies. Consequently, it is not surprising that, despite a reasonably large patient population, no significant differences were observed in secondary IVF cycle outcome parameters. Though difficult to assess considering the various outcome parameters, our statistician concluded that study groups would at least have to double in size to also demonstrate differences in clinical IVF cycle outcome parameters other than cycle cancellations. Though the advanced ages of the study population must be carefully considered before generalizing the here observed findings to younger age groups, the fact that age adjustment actually improved the significance of the here reported findings in a way validates them. This study clearly supports further exploration of GH supplementation, especially in women with low IGF-1 levels, usually mostly older patients. While ovaries in younger women may reveal different hormonal dynamics, we, therefore, would not be surprised if younger women with low IGF-1 levels would also be positively affected by supplementation with GH. Two additional issues deserve mention: as IGF-1 effects on ovaries are most profound at small growing follicle stages, follicles exposed to adequate IGF-1 levels still require at least 6-8 weeks to reach the gonadotropin dependence that renders them available to gonadotropin stimulation in IVF cycles. GH supplementation must, therefore, be started at least 6-8 weeks before IVF cycle start. A large majority of studies in the medical literature, however, supplemented patients with GH only during stimulation or, at best, starting about 2 weeks before stimulation start.
Such supplementation, like androgen supplementation, which supports follicle growth with identical timing [21], will not result in the desired effects on granulosa cells of growing follicles (and, therefore, oocytes), though they may, at the right concentrations, exert beneficial endometrial effects [35,36]. Second, the literature also varies greatly in the daily dosages of GH that were administered. Here, too, a consensus must be reached if study outcomes are to be compared. As a final message, this manuscript also suggests that determination of IGF-1 values, generally not considered a routine test in infertility practice, may be indicated in women with low functional ovarian reserve. Competing interests N.G. and D.H.B. are listed as co-owners of several already awarded and still pending US patents, some claiming benefits from androgen supplementation in women with low functional ovarian reserve, a topic peripherally addressed in this manuscript. Others relate to diagnostic and potential therapeutic benefits of AMH, also marginally addressed in this manuscript. N.G. is a shareholder in Fertility Nutraceuticals, LLC, which produces a DHEA product, and is owner of The CHR, where much of the research reported in this manuscript was performed. N.G. and D.H.B. also receive patent royalties from Fertility Nutraceuticals, LLC. N.G., P.P., and D.H.B. also received research support, travel funding, and lecture fees from various pharma and medical device companies, none, however, over the last 3 years and none in any way related to this manuscript. Other authors have no conflicts to report. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Innovation In Islamic Education The approach to religious education has been shaped by historical, cultural, social, and political factors; however, neither the inherited secular education system nor the traditional religious education system, as a standalone, can bring development to the society. Hence, religious education must be combined with secular education, and innovative approaches in education must evolve continuously to uphold the pedagogy, learning approaches and facilities, even in the current context, so as to sustain, endure, remain relevant and be culturally resilient with contemporary scientific and advanced technology. The process of education evolved in many countries, changing radically from the traditional madrasah to localized vernacular schools, and then on to European education. The educational curriculum should kindle the learning process through systematic observation, quantitative reasoning and scientific expression. The learning tools should engage students with creation through observation, pictorial and descriptive records, exploration, articulation, and communication with fellow students. This paper analyses a combination of both structural and cultural orientations on identified needs leading to facilitation of the learning environment, implementation of knowledge in practice, and finally evaluation for fulfillment or gap identification, for furthering the learning or development inputs towards constructive utilization and growth. Introduction As the Muslim education structure declined, the Islamic community adopted a dichotomy between secular education and religious education. Islamic education is, intelligently, a boundless religious education referring to the whole of teaching and learning activities in the family, the school and the mosque, accommodating and readjusting to the thinking of modern Islam and the materialistic humanism happening simultaneously, so that they can interact more (HT, 2013). Madrasa education, in addition to its political effects, productively prepares students for interacting in a market economy, however falling short of the unrealized gains that are inevitable. Madrasas are less effective in English language teaching. Parents tend to choose private schools because the quality of education is likely to be higher. Reconsidering the position of visual arts and other forms of artistic expression is one of the identified needs to upgrade scientific development. Education in arts, geometry and humanities can bring newer dimensions to Islamic art, architecture and aesthetics, delivered through the evolution of a cohesive and harmonized approach to the curriculum. Islam is mistakenly portrayed as controlling and shying away from the forbearance, sensuality and extravagance of the arts as they reflect the outer forms of nature and the material world, consequently focusing on the abstract, inner reality of things. As Islam reached all corners of society, the schools also taught the Quran.
Comparable to the religious promulgation these educational institutions mingled with local establishments and appropriated to the characteristic cultural features of the vicinities in which they were located.Teachers lack the pedagogical skills to teach effectively and mostly depended on the punishments, where the students can learn through fear, which is quite contrary to the learning theories taught to them.Teachers do not understand the role of motivation in the learning process (Sheikh, 2011, p. 215).The comprehensive details by students on the different types of schools, depends upon the background of students, who followed Quran inputs at primary level education and preliminary classes in their neighborhood; compared with those who understand better the learning activities of the school which as well as students had a contextual input of modern education which helped them to easily acclimatize with the available condition. Puzzle to Education The conflicting Islamic and western curriculum were affected historical, cultural, social, and political reasons; however, secular or religious education can never be isolated to bring development to the society.An inclusive and cohesive notion of Islamic education grounded on the heavenly harmony perfectly comprehends the teaching, application of the mind and the conduct of understanding, to cherish to analyze thread bare, honestly and admiring pluralistic views on wide-ranging realities with ethical restraints and wisdom.Education on the nonviolent foundations of a religion can be a strong influence for construction of flexibility against ferocity.Youth movements based on education and exchange of ideas accomplishes to promote appreciation of one another and develops evolution of leadership traits to take care of altruistic and social welfare activities (Silvestr & Mayall, 2015, p. 96).Socially conservative parents are liberal under the impression that it is safe to send their daughters to school.They do not impose the ban on girls wearing a headscarf in school, quite acceptable to construction of dormitories for girls, and involvement in cocurricular events well-suited with the conservative values that are well -intentioned policies of socially liberal arrangements and intentions (Kuran, 2018, p. 95).The religion holds a vital position in moral construction and foundation for social asset.The assignment of allocated funds for economic and scientific change, in education must focus on the issues of community.To be meaningful social development, educational institutions should Expansion of innovative expertise in self-governing of the community (M Agbiji & Swart, 2015, pp. 1-20).The symbol of veil was the only way to distinguish the West from Islam, with the eligibility for free education women had been given access to university levels, permitting them to vote for governance; provision of certain family protection law with the right to divorce and child custody, also increased the marriageable age for girls1 . Education is required to shape the best of people within them.Knowledge improves astuteness and an individual becomes more talented they become more proficient to originate the doles from trainings and observations ( Mohd Nor & Bahroni, 2011, pp. 
15-24).Concluding as a blending of recital and replication activities, lead through memorization of the Quran, by children who in addition had learned to inscribe and deliver in the Arabic language.Backing for such schools originated from the society as they provided the school space, housing, food for the teacher, and facilitates choices on the employment of tutors and also provided all kinds of resources required for the proper functioning of the school.The teacher teaches on a one-on-one basis through coaching actions with the individual students or via small groups of students working together and who are almost like peers at the same level in their studies (Ibrahim, 2012, p. 96). The demand for religious education within a community always have a higher proportion of religious education based on important features that allow the community to benefit from the presence of religious institutions such as the communities' religious beliefs, teachings; and involvement (Permani, 2009, p. 262) 2 .Islamic teaching was conventionally accessible to the scholars who had been familiar by mastery over different branches of knowledge available at those periods.The length in time required for proficiency in all the subjects was very long, in addition to it there was a serious shortage of resources needed to pursue enlightened course of studies in numerous other subjects.The acute shortage of resources needed to pursue advanced studies led to mobility of scholars from place to place for studying various subjects from different scholars residing at different places.Conventional Islamic teaching happened in a oneroomed-school where the teacher and several assistants who were earlier graduates of the same school or senior students, all male, taught children to learn by rote the Quran. Education conserves the rudimentary structure of the society by protecting completely all that is valuable in elementary principles and societies, by diffusing them to the future generations and also by reintroducing the culture as afresh as whenever disintegration, inaction or loss of standards occur (Mohd Nor & Bahroni , 2011). Modern school teachers possibly will get instructions from the traditional schools that identifies the individual difference in the students and their different learning abilities.To a great extent, each student's difficulties, interests and abilities have to be understood by the classroom teachers on the basis of the school curriculum, so as to inspire the learner, based on student learning profile, and fulfill their potentialities from among several elements identified.The basics that are active in the traditional school system can be utilized in the modern school to facilitate the learning of talented students by embracing enhancement, acceleration and ability consortiums.The work assignment in Islamic Integrated School is high compared with the public schools, while the public schools teach only the relevant subjects, the Islamic Integrated School teaches more than double, leaving the learners with no leisure and teachers with no time to evaluate or give corrective teaching in areas were students are weak. 
New Issues Due to Education Innovation in education is continuous relook on pedagogy, learning approaches, facilities relevant and culturally resilient with the contemporary scientific and advanced technology.It is understandable that ethical and divine growth in education shapes the learner to understand home or familial values on a holistic dimension, towards countless widening opportunities.The role of family extends yonder than parental cooperation and provides prospects to engage in creative arts, cultural and sporting activities, nature activities and community service (HT , 2013).There is a lot of attitudinal change in women towards female employment, desired fertility, and higher education for girls, therefore upholding an ethically aggressive stance towards woman educational development and being overt on Islam by increasing the religious content of education might harm didactic impact in women.Education is a tool which permits development of humility, moral values, ambitions and the self-confidence to achieve concentrating on academics and revitalizes the efforts to realize the dream for culmination of discrimination at all levels.Achievement is based on innovation-oriented ethos to gain through newer adaptations of knowhow and following lucrative approaches with the upgradation of knowledge and skills as a continuous attempt ( Kumar & Mistri, 2015). Education is the source of knowledge, to take over existential challenges by not being unequal or unstainable and be transformational to achieve development of human potential by nurturing a participative approach in cognitive, economic, social, cultural, artistic and personal magnitudes with self-governing social responsibility (Hashim, 2012, hal. 132).Education has made to realize that the Muslim personal Law can be positive only, when it is within the confines of the rulings and teachings of Islamic Law (NMMU, 2010).Traditional Islamic schools had recognized their students as per their achievement.High achiever and gifted students were permitted to complete their courses in shorter time period than those who had fixed stipulation of time as usual.By numerous kind gestures, the slow beginners were given opportunities to complete their course works at their convenient bound.Contents which takes shorter periods of months for bright students would take years to cover for those who are not so bright others. However, every student is buttressed and assisted in a method that is common and to guide for their upliftment in education.Education is a connect over which society diffuses and reintroduces its culture and values to the future generation.The educational philosophies and information yielded from the cultural morals of the society get transferred or acquired with excellence by the use of practices like coaching, training, reading, exercise, direction and discipline.the origins of education were organized through religion in the west, but with the passing periods the insight of the connectivity between religion and education turned reduced. 
The religious establishments take locations as observed by numerous reformist activities so as to interrupt in the development of a fast progressing society because of their connections to the four classical schools of Islamic laws and divinity and more supernaturally oriented devout practices.Islamic Integrated Schools face challenges emanating from a wide curriculum, subjects taught in different languages, teachers use differing methodologies and lack of time to play.Despite all these problems, students are forced to take things in their stride and are quite positive about the benefits of the two systems.The language of instruction is the basic for an educational system.The choice of the language for instruction affects the quality of education, however the decision on the choice of language of instruction in education is left to the policy makers in education.The lack of training in subject knowledge among Islamic teachers is also greatly impactful on education. Even those who have passed out from major Madrasas and have satisfactory subject content, they are inundated by the lack of procedure and pedagogical skills (Anzar, 2003, p. 23). Proliferation of Education The process of evolution from traditional madrasah to localized vernacular schools, and then on to European education brought the societies to become heterogeneous, culturally diverse, and facilitated mobility in the pursuit of knowledge to improve the Islamic community towards nourishment and growth, through participation and economic development.Islamic philosophy constrains reforms, due to compassions instigating and conserving disparities of control.Female educational practices are associated with Islam, where isolation and over protection of girls in the curiosity of domestic honor weakens human capital by blocking the education to them and affects gender equality. Education is a way for accomplishing life goals, where the finish point and goals of society will determine the completion and rewards of education.Education involves whole of human; where the wholesomeness happens only when all the features of the life leads to material, moral, social, and spiritual development correspondingly spread all over.The teachers cannot joyfully deliver or interpret the syllabus correctly.They teach with textbooks with lessons that are at times above the understanding abilities of the students.Syllabus is significant in the process of education where the organization of the necessary knowledge, skills and attitudes gets imparted through the educational system.The Islamic Integrated schools use a dual curriculum.In addition to the problems associated with a western curriculum, which does not heed any consideration to the requirements, welfares and atmosphere of the learners; the dual curriculum combines the missed part and delivers all the subjects in Islamic education as a common syllabus.Education has steadily ignored the implication of conventional Islamic Education towards the development of contemporary educational structure (Berglund, 2015, p. 52).Education bears this sagacity of quality in the students towards quality that has an impartial rank elsewhere without any individual norms and proclamations, but entails distinct reasoning if people are to grow as a full-grown person. 
Muslim parents want a strong Islamic studies program coupled with a sturdy academic education that could help their children become a well-groomed individuals and productive moral contributor in the community.Islamic principles make obligations to work and endorse their spiritual and religious requirements parallel with developing their lives over knowledge and empathetic views.Knowledge was found to be resultant in all those concerned directly from the Qur'an and Hadith as it was during then or knowledge in general Islamic principles to do good, avoid harms, collaborate through others in upright activities and so forth were quite common as guided by the religion and does not stress the requirement to learn separately.Parents send their children to Islamic Integrated schools incline to be certain of that the religion should not be separate from the daily experiences and practices. Novelty in Learning Observation, quantitative reasoning and scientific expressions can be polished using learning tools to engage students in the new developments through drawing and writing skills, reading, expression, and communication.The empirical evidence for reasoning skills in addition to the cognitive and academic skills was foundation for development.The procedure of examining the proof, probing norms and constructing inferences was traditionally active and permeated within the Islamic enforceable limits.This indicated that there already existed a growing recognition for critical thinking, within the Islamic framework.The program for the transformation of Muslim instruction dispirited the learning by memorization and announced modern methods for learning.As a precedence for decisive growth, religious validity and secular admiration reformed the educational sector with religious training, taking care of curriculum development in humanities, history of religions and civilizations (Svante E. Cornell, 2006, p. 75).Individual changes can stress relevance and carry alteration not only in the purposes of learning the erudite of subject matter, approaches of learning, and similarly in the ethics of education as well.Education transacts through spiritual nurturing and edifying the character.Parents want a school that they can be trusted to strengthen, and not re-do, when it comes to ethics and ideologies. The western orientation of complete college structure which contributed to rise of the contemporary universities had a proximity of relations to the Islamic madrasah system which endures through successfully up to this day in the Islamic world.The traditional education system consumed and became an essential part in the strategy for the knowledge impartation that which triggers to afford education that to go along with the student's understanding.Stressing religious values in the school settings are positive factors necessary to shape students'religious ethics and personalities groomed for the future.Islamic education was envisioned to aid as the principal medium for providing religious instructions to all the faithful on the vital guidelines prophesized by Islam. 
Besides then the core of Islamic education is in ethical and character learning among Muslim scholars also taught the common man about the nature of association with God and the responsibility towards the divine, on the allowable and disgraceful actions and deed which attract the laws governing social affiliation among fellow Muslims.The drive to education is to polish student's ethics, educate their sprites, proliferate virtue, explain decorum and prepare them for a life full of genuineness and transparency. Students as well functioned in learning circles or groups who and operated autonomously.Learning was self-paced and had no formal tests.They require only the demonstration of mastery in recitation and appropriate writing of the verses.Physical reprimands were generally used both to control and rectify the behavior wrong done or to discipline a student for not reading and learn by heart as well.Students turnover was very frequent and the movement was also rapid contingent to the domestic need for help, or errand work at the home, no disgrace were carried through for getting dropped off education at school.Several students stay late to finish homework, almost missing to see the family or hanging out with friends, but instead are hopefully expecting to proceed to state that these long hours were just investments on their part of life which would pay a price as success in the future.Though the students may not be excited on their workload, they appear at least accepting to it. Religion towards Innovation The Qur'an contains reference to innovation as an apparently favorable light for an idea or practice that which is consistent with recognized pattern and principle (Abd-Allah, 2014, p. 14).The medressa's disciplined the Islamic Jurisprudence and guided them, that be remained as established along with the expansion of Islam and by the complex interpretation of Islamic texts which had trained them to be clad with legal and secretarial determinations as required by the Quranic school.Quranic schools teach children how to study in a organized situation, with admiration to the teacher, practice language and recite in harmony, encode and decode an alphabet, be a ethical person with decent behavior; and rudimentary arithmetic.Teachers sometimes feel frustrations as the children seem to prefer the secular school teachers to them.Evaluation of learning is formative or summative that the teacher or the institution regulates the amount of learning that has taken place.Islamic education teachers resort to using methods of memorization without explaining the content they teach.Often, they use corporal punishment, thus dissatisfying the students from their lessons.The Qur'an upholds Islamic education as validation of truth and instructs those with the insight, on a mix of ethical and principled proportions to foster an open investigative process.The exclusively dependable narration is protected through the factual abilities learning strategies -memorization, imitation, dictation and recall.To chase a review, contemplate or founding a theory on or around the theme or source of an entity is respectable because the faculty of cognition is the maximum treasured as custodian of man, but it necessitates to start constructing upon from some the real details and not from sheer belief.Prospects and integration of spiritual growth as part of their educational goals were due to the support of the parents of Islamic Integrated school students who anticipated that the school will deliver such sustenance by 
strengthening the behaviors, values, and morals that are to be ingrained in their children, which normally could have been through at home.They look for a background that would be alike to the home environment with the rules, principles, and standards taught in the school would overlay with the parents expectations and taught at home.Rational thought and spiritual knowledge may be hard to unite but the idea of human reality as a social construction indicates that knowledge and reality differ according to social context be it Islamic or Western. Conclusion Learning provided to improve process and outcome through learning Environment, recognized learning and development needs, application of learned knowledge in practice, linkage of rewards to learning and development process (Shahram Gilaninia, Rasht., Iran Mir Abdolhasan Askari Rankouh., Milad Abbas Poor Gildeh., 2013).Education in the Muslim world had positioned Islamic education in the broader aspects of Islam, discovered that the learning consequences claim a wide spread and the related reorientation in learning process by pushing the Islamic education to a threshold to pick ingenuity and collaboration to attain a future-state ( Niyozov & Memon, 2011, pp. 5-30).Islamic Integrated schools are workshops for continuous service spots of shaping the good moral in children.Academically and ethically these are common among many parents who feel it is crucial to take effort to the schools that disburse to fix the Islamic values in students.The Islamic education programs has issues which is wide and difficult in various surroundings and cultural differences, use of English language for instruction, lack of graded and relevant teaching/learning resources to operationalize it.The Islamic studies curriculum benefits to retain Islam as a practice of life complete diversity of resources, including that of the memorization of the Quran.Parents reward their children for learn by rote parts of the Quran but also hire private tutors to assist them in their activities.Liberal interpretation of Islam will lead women to participate in social, economic and political life, and sustainable livelihoods and peace.The religious education outlawed with emancipation of women on an aggressive front, encouraging educational exchanges at all levels to strengthen only the western values.
Scar masses from granulation tissue. Question A 50-year-old male known with a diagnosis of ulcerative colitis since October 2012 (Montreal Class E3) was seen for further follow-up in the gastroenterology outpatient clinic. He was on Mesalazine 2.4 g per day. The last flare-up he had was 2.5 years ago, in April 2013, when he was treated successfully with a course of oral steroids. His colitis has been in a stable state with no further flare-ups ever since. During his follow-up in the Gastroenterology clinic in Sept 2015, he mentioned that he was opening his bowels 4-5 times per day with a consistency between watery and loose stool that was "normal" for him. He denied having significant bleeding or cramps. He also denied having a family history of colorectal cancer (CRC) or polyps. Physical examination was unremarkable. Blood tests revealed a normal CRP of 7; FBC, LFTs and U&E were also normal. Further colonoscopy surveillance was organised for assessment. What is the diagnosis? What is the prognosis and appropriate management? Figure 1 Colonoscopy; distal ascending colon. The colonoscopist reported that the ascending abnormalities caused stenosis but were just open enough to pass the scope through to the caecum. Answer Giant Inflammatory Polyps (GIPs) in IBD. Discussion Giant inflammatory polyps are uncommon, with a reported prevalence of 4.6% and two thirds of cases being associated with Crohn's disease (1,2). Inflammatory polyps are not exclusive to inflammatory bowel disease and can occur in infectious and ischemic colitis, at the borders of ulcers, as well as at mucosal anastomoses (3). It is not exactly clear why these polyps form, with some studies showing enhanced de novo synthesis of all types of collagen in patients with ulcerative colitis, as well as increased expression of collagenases (4,5). Giant inflammatory polyps are defined as inflammatory polyps larger than 1.5 cm (1). Most are asymptomatic, although there can be symptoms of the underlying IBD. GIPs are known to cause obstruction, protein-losing enteropathy, anaemia and bleeding (6-8). GIPs are generally deemed benign, but there have been case reports of dysplasia and malignancy within these polyps (9,10). Despite their benign nature, the presence of inflammatory polyps has been demonstrated to be associated with an increased risk of malignancy (11,12).
Based on the increased risk of malignant transformation, BSG guidelines for colorectal cancer screening recommend escalation of the risk category to "intermediate" and performing surveillance colonoscopy every 3 years rather than every 5 years (13). The pathologist reported that inflammatory polyps might show features of crypt distortion, cryptitis, crypt abscesses, loss of muscularis mucosae, submucosal fibrosis, and Paneth cell hyperplasia. In almost all cases, these lesions are not amenable to medical management, but some case reports have demonstrated regression after medical management (2,18,19). The BSG guideline states that prophylactic colectomy should be discussed with the patient, especially if the colonoscopist feels that the value of surveillance is compromised (13). Our patient was discussed at the lower GI MDT, and the MDT recommended referral to the Oxford team for a second opinion.
XAS/DRIFTS/MS spectroscopy for time-resolved operando investigations at high temperature A new reactor cell and experimental setup designed to perform time-resolved experiments on heterogeneous catalysts under working conditions, simultaneously combining XAS, DRIFTS and MS spectroscopies, are reported. The combination of complementary techniques in the characterization of catalysts under working conditions is a very powerful tool for an accurate and in-depth comprehension of the system investigated. In particular, X-ray absorption spectroscopy (XAS) coupled with diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) and mass spectrometry (MS) is a powerful combination, since XAS characterizes the main elements of the catalytic system (selecting the absorption edge) and DRIFTS monitors surface adsorbates, while MS enables product identification and quantification. In the present manuscript, a new reactor cell and an experimental setup optimized to perform time-resolved experiments on heterogeneous catalysts under working conditions are reported. A key feature of this setup is the possibility to work at high temperature and pressure, with a small cell dead volume. To demonstrate these capabilities, performance tests with and without X-rays were performed. The effective temperature at the sample surface, the speed of purging the gas volume inside the cell and the catalytic activity have been evaluated to demonstrate the reliability and usefulness of the cell. The capability of the setup to combine XAS, DRIFTS and MS spectroscopies is demonstrated in a time-resolved experiment following the reduction of NO by Rh nanoparticles supported on alumina. Introduction The correlation of catalyst performance with its local and electronic configuration is a major challenge for the scientific community. In particular, the comprehension of interactions between reactants and catalysts, the formation of reaction intermediates and the recognition of active sites are of paramount importance in catalyst design, improving efficiency, selectivity and lifetime. The study of catalysts under working conditions is essential for a complete comprehension of their structure-function relationship. Moreover, the combination of complementary characterization techniques is very powerful since it allows investigation from different perspectives in real time and correlation with the catalyst behaviour subject to the same conditions (e.g. temperature, gases and pressure). X-ray absorption spectroscopy (XAS) is widely used for in situ and operando experiments since it can provide information on the local and electronic structures of the absorbing elements (Bordiga et al., 2013). Since the first experiment coupling XAS and X-ray powder diffraction (XRD) (Couves et al., 1991), several cells and experimental setups have been developed for in situ or operando investigations, combining XAS with a complementary technique (Clausen & Topsøe, 1991; Beale et al., 2005; Frenkel et al., 2011; Tinnemans et al., 2006). Among the different couplings available for the study of solid-gas heterogeneous catalysts, XAS, diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) and mass spectrometry (MS) are a powerful combination because characterization of the metallic centre, identification of surface adsorbates and quantification of reaction products are performed at the same time (Newton & Beek, 2010).
Experimental setups devoted to this combination have already been developed, and hereafter a general overview is reported. Pioneering work devoted to the study of molecules was carried out by Young (Young & Spicer, 1990; Young, 1996) and, concerning catalysis, by Newton and co-workers on the ID24 beamline at the European Synchrotron Radiation Facility (ESRF, Grenoble, France). The first experiments were performed using a custom-built DRIFTS cell (Newton et al., 2004), modifying a design proposed by McDougall (Cavers et al., 1999). The flat-top design minimized the cell dead volume, resulting in a fast gas-switching response. However, the final design limited the cell maximum temperature (400°C) and presented a bypass of the gas feed in the catalytic bed. As a further step, a commercial DRIFTS cell provided by Spectra-Tech was modified (Newton, 2009). Two carbon-glass windows for X-rays were added to the original dome, resulting in a higher dead volume, and the catalytic-bed bypass was not completely solved, as underlined by Meunier et al. (2007, 2008). A similar design was adopted by another setup which combined XAS and DRIFTS and was developed at Brookhaven National Laboratory and mounted on the X18 beamline. It was assembled from a Harrick cell using a DaVinci arm attached to a modified Praying Mantis DRIFTS accessory. The cell was mounted in the internal sample chamber of the infrared (IR) spectrometer and it could work up to 500°C at ambient pressure. The Harrick cell equipped with Praying Mantis optics was also modified at Argonne National Laboratory, USA. The setup was initially designed for X-ray pair distribution function measurements, after which it was also used for XAS measurements in transmission and fluorescence geometries (Yao et al., 2014). A relatively simple cell was used by Bando et al. (2009, 2012), placing the sample in pellet form in the centre of a cross-like cell and measuring both X-rays and IR in transmission configuration. The sample was heated to 530°C and the cell could sustain up to 3 bar of pressure, but with a large dead volume and a bed-bypass problem. Later, a cell able to combine XAS and DRIFTS with a different design was proposed by Chiarello et al. (2014). The novelty of this cell was that both X-rays and IR radiation passed through the same window in direct contact with the sample. In this way, a plug-flow reactor with reduced dead volume was achieved. Fast exchange of gases became possible, fulfilling the requirements for transient experiments. The main drawbacks of this design were the necessity to drill a small hole of 0.5 mm diameter in the CaF2 IR window for XAS spectra acquisition and to seal it using a carbon-based glue with high thermal stability. The cell was tested up to 500°C at ambient pressure. Moreover, the design of the cell allows the user to perform XAS measurements alone or combined with XRD and IR at the same time (Marchionni et al., 2017). An innovative approach was recently proposed by Urakawa and co-workers, which combined XAS or XRD with IR spectroscopy on a pellet-shaped sample (Hinokuma et al., 2018). The flexibility of a modular IR interferometer (Arcoptix SA, OEM model) allowed measurements in both transmission and diffuse-reflection modes by changing the relative position of the IR source and the IR radiation detector.
Notwithstanding the difficulties during sample preparation to fulfil the different sample requirements of the IR and XAS or XRD techniques, investigation of samples in pellet form in transmission configuration often results in higher data quality. The general concept followed in designing the catalytic reactor presented in this manuscript was to develop a flexible cell able to perform operando measurements combining XAS and IR spectroscopies for solid-gas reaction catalysts. According to this idea, the design proposed by McDougall (Cavers et al., 1999) was the most suitable to fulfil our requirements. Indeed, this configuration allows for the optimization of radiation windows for both IR and X-rays and hosts the sample in free powder form. Both points are essential to cover a wide range of experimental conditions (e.g. X-ray energy, metal loading, different supports) and perform operando experiments. Moreover, the experimental setup was optimized to minimize the cell dead volume, avoid any sample bypass for gases and work at high temperatures under high pressure. In the present manuscript, we describe the cell and the experimental setup developed to combine XAS, DRIFTS and MS. First, the cell was extensively investigated in the laboratory and the following tests were performed: sample surface temperature in comparison with cell body temperature, evaluation of the gas-exchange time inside the reaction chamber and evaluation of reactor capabilities during CO hydrogenation over a Sn-Co/Al2O3 sample. The setup was then mounted on beamline ID24 at the ESRF (Pascarelli et al., 2016) to demonstrate the cell capability, combining XAS, DRIFTS and MS spectroscopies during a time-resolved experiment following the reduction of NO by Rh nanoparticles supported on Al2O3. Operando XAS/DRIFTS/MS cell The cell developed was designed to allow the simultaneous combination of XAS, DRIFTS and MS spectroscopies in order to investigate the sample under working conditions. This aim implies the application of temperature, pressure and a reactive atmosphere to the catalyst. In addition, to optimize the solid-gas interaction, catalysts are measured in powder form without making a pellet, allowing reactive gases to flow through the catalytic bed. The concept design is schematized in Fig. 1(a), and Fig. 1(b) displays a sketch of the cell. XAS measurements are performed in transmission configuration, while IR spectra are collected in diffuse reflection mode by the reflectance sphere (DRIFTS). The cell is composed of two parts: the main body and the dome. The body hosts the heater and thermocouple, the sample holder and the gas system, while the dome hosts the IR and X-ray windows. The sealing between dome and body is guaranteed by a metal CF16 O-ring clamped by six screws. Cu metal is usually used, but an Au coating can be applied on its surface if necessary to avoid any chemical reaction. This configuration is very convenient because it makes the setup flexible for future developments; minor changes in the body and in the dome are sufficient to implement new experiments and cell capabilities (Castillejos-López et al., 2017). The sample in free powder form is hosted in a crucible that can be easily changed, optimizing the sample thickness for XAS measurements performed in transmission mode according to metal loading and absorption edge. The X-ray path can be tuned from 1 mm to 5 mm: the powder is placed between two carbon-glass windows that are transparent to X-rays, see Fig. 1(c).
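To make the last point concrete (matching the X-ray path length to the metal loading and absorption edge), the short sketch below estimates the powder thickness that gives a chosen total absorption. It is only an illustration: the target value and the attenuation coefficients are hypothetical placeholders to be replaced with tabulated values for the actual catalyst, not numbers taken from this work.

```python
# Illustrative sketch: estimate the powder thickness for a chosen total absorption.
# The target mu*x and the attenuation data below are hypothetical placeholders.

def required_thickness_cm(mass_fractions, mu_rho_cm2_g, packing_density_g_cm3,
                          target_mux=2.5):
    """Thickness t such that mu*x = target, with mu = density * sum_i w_i*(mu/rho)_i."""
    mu_rho_mix = sum(w * mu for w, mu in zip(mass_fractions, mu_rho_cm2_g))
    mu = packing_density_g_cm3 * mu_rho_mix          # linear attenuation, 1/cm
    return target_mux / mu

# Example: a dilute metal on an oxide support just above the metal's absorption edge
# (mass fractions, mu/rho values and packing density are made up for illustration).
t = required_thickness_cm(mass_fractions=[0.05, 0.95],
                          mu_rho_cm2_g=[250.0, 30.0],
                          packing_density_g_cm3=1.0)
print(f"suggested X-ray path length: {t * 10:.1f} mm")
```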
A metal grid below the powder allows the passage of gas through the catalytic bed. Particular attention was paid to the design and machining of the sample holder in order to avoid any bypass of gas without interaction with the sample. As illustrated in Figs. 1(a) and 1(b), a round window of 25 mm diameter and two square 5 mm × 5 mm windows are mounted for DRIFTS and XAS measurements, respectively. Both can be easily changed, tuning material and thickness according to experimental requirements. In the typical configuration, 2 mm-thick CaF2 and 200 µm-thick carbon-glass windows for IR and X-rays are used, respectively. Since minimization of the dead volume is a mandatory requirement when performing fast kinetic studies, the design was optimized accordingly: the three windows are directly glued to the dome and the distance between the sample surface and the IR window is 1 mm. This design yields a reaction chamber dead volume of 0.5 cm³. This also guarantees a large solid angle for the diffuse reflectance sphere in order to collect spectra with a good signal-to-noise ratio during kinetic experiments. The silicone glue utilized (LOCTITE SI 5399) can sustain temperatures up to 350°C and, in addition to temperature and chemical stability, a fundamental feature of this glue is its elasticity. In fact, when heating the cell to high temperatures, the different thermal expansion of the glass window and the metal flange can result in damage or breaking of the glass window; the role of the glue is to minimize the glass-metal mechanical strain to preserve the integrity of the window. Operation temperatures range from room temperature (RT) to 600°C. The reactive feed can be pre-heated before interaction with the sample in order to minimize any thermal gradient along the catalytic bed. A K-type thermocouple is placed between the sample holder and the heater, though outside the reaction chamber, in order to avoid any interaction with the reactive atmosphere. The cell is made of Inconel alloy for its resistance to reducing and oxidizing atmospheres even at high temperatures. Both inlet and outlet pipes can be heated to 150°C to avoid liquid condensation. It is important to note that the cell configuration implies the use of four carbon-glass windows (two in the sample holder hosting the powder and two in the dome), and their X-ray absorption has to be considered in the design of the experiment, particularly at low energy. The minimum window thickness successfully tested is 60 µm each, resulting in a total of 240 µm of carbon glass. Even using a low-density material (ρ = 1.5 g cm⁻³ for carbon glass), its contribution at low energy can be significant; for example, it results in a total absorption of μx = 0.66 at the Ti K-edge (4966 eV). The cell can also be equipped with a second dome in order to work at up to 5 bar of pressure. In this case, the IR window is not glued but clamped by a metal flange, while the two carbon-glass windows are glued in the inner part of the dome, see Fig. 1(d). This configuration implies an increase in the distance between the sample surface and the IR window up to 3 mm, and thus the dead volume reaches 1 cm³.
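As a quick consistency check of the window absorption quoted above, the figure follows directly from the total window thickness and density; the mass attenuation coefficient in the last step is back-calculated here only for illustration and is not a value given in the text:

\[
\mu x = \left(\frac{\mu}{\rho}\right)\rho\, t, \qquad
\frac{\mu}{\rho} = \frac{\mu x}{\rho\, t} = \frac{0.66}{1.5\ \mathrm{g\,cm^{-3}} \times 0.024\ \mathrm{cm}} \approx 18\ \mathrm{cm^{2}\,g^{-1}},
\]

which is of the expected order of magnitude for a light, carbon-based material at this energy.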
The combined XAS-DRIFTS system is composed of a commercial Fourier transform infrared (FTIR) instrument (Varian 680), a diffuse reflectance sphere provided by OMT Solutions, the cell described in the previous section and a setup to handle gases according to experimental requirements, e.g. mass-flow controllers, a saturator and fast switching valves. The whole setup is mounted on a 1.2 m × 1.5 m table motorized along the three axes, allowing placement of the sample in the correct position with respect to the X-rays while keeping the IR optics and the spectrometer fixed. This solution guarantees the correct alignment of the IR optics with respect to the sample surface during the experiment. A set of Au-coated mirrors together with the diffuse reflectance sphere focuses the IR radiation onto the surface of the sample. The backscattered light, reflected by the same mirrors, enters an external MCT (mercury cadmium telluride) detector after passing through a beam splitter. The sample surface must be placed at the focal point of the reflectance sphere, and the alignment is achieved by a vertical movement of the cell. In addition, three fine motions can adjust the angle of the spherical mirror. The whole setup is mounted inside a Plexiglas box under pure N2 flux to decrease the H2O and CO2 vibrational modes in the IR spectra.

One critical issue when combining IR and XAS spectroscopy is the different volume and portion of the catalytic bed investigated. On one hand, IR spectroscopy probes only the top part of the catalytic bed, since the typical penetration depth of IR light in solid matter is of the order of a few tens of micrometres (Mondelli et al., 2006). On the other hand, XAS measured in transmission mode relies on the absence of incident-beam (I0) leaks in the I1 beam, which forces us to place the X-ray beam just below the surface. In fact, the sample-air interface at the top of the catalytic bed is never well defined, since the catalyst is hosted in the sample holder in free form and the sample grains move as a result of temperature and gas flow. The microbeam size available on ID24 and the geometry of the sample holder minimize this issue, although it was not possible to avoid it completely.

This experimental setup is optimized to perform kinetic studies on catalysts under working conditions. The general approach for this kind of study is to follow, with a suitable time resolution, the modifications occurring in the local and electronic structure of the catalyst, the surface adsorbates on active sites and the product formation as the conditions change, e.g. from inert to reactive atmosphere or from dark to UV-Vis light. In the present manuscript, the XAS experiments performed to validate the setup were carried out at the energy-dispersive XAS beamline (ID24) at the ESRF (Pascarelli et al., 2016), because its sub-millisecond time resolution makes this beamline particularly suitable for exploring the cell performance. This capability requires precise synchronization between the X-ray detector, the IR spectrometer, the mass spectrometer and the device used to change the catalytic conditions of the sample. Two general requirements need to be fulfilled: the capability to follow the evolution of the sample right after a condition modification, and the possibility to correlate, at any time, the spectra of the three techniques with the experimental conditions. For this aim, an OPIUM timing and synchronization card drives all devices.
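The frame-counted triggering logic detailed in the next paragraph can be illustrated with a minimal, purely schematic sketch; the class and method names below are hypothetical placeholders and do not correspond to the actual OPIUM card or beamline control software API.

```python
# Minimal sketch of frame-counted synchronization (hypothetical device objects).
# The X-ray detector acts as the master counter: every acquired frame increments
# the frame index, and state changes (gas valve, UV-Vis shutter, ...) are only
# executed at predefined frame numbers, never in the middle of a spectrum.

class FrameCountedSequencer:
    def __init__(self, xray_detector, ir_spectrometer, devices):
        self.xray = xray_detector      # master counter, started first
        self.ir = ir_spectrometer      # started on the same trigger
        self.devices = devices         # e.g. {"gas_valve": valve, "uv_shutter": shutter}
        self.schedule = []             # list of (frame_number, device_name, action)

    def at_frame(self, frame_number, device_name, action):
        """Register an action (e.g. 'switch', 'open', 'close') at a given frame."""
        self.schedule.append((frame_number, device_name, action))

    def run(self, total_frames):
        self.xray.start()
        self.ir.start()
        pending = sorted(self.schedule)
        for frame in range(total_frames):
            self.xray.wait_for_frame(frame)            # blocks until the frame is acquired
            while pending and pending[0][0] == frame:
                _, name, action = pending.pop(0)
                getattr(self.devices[name], action)()  # e.g. valve.switch()
                # Logging the frame index allows XAS, DRIFTS and MS traces to be
                # aligned afterwards on a common time axis.
                print(f"frame {frame}: {name}.{action}()")
```

In such a scheme, the gas switch of the NO-CO experiment would be registered at a frame number coinciding with a spectrum boundary, so that no single spectrum mixes the two atmospheres.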
The frames collected by the X-ray detector are used as a counter in the acquisition macro. At the beginning of each experiment the OPIUM card starts the acquisition of the X-ray detector and of the IR spectrometer. It can also trigger a change in the state of other equipment; for example, it can open/close a shutter for UV-Vis light in photochemistry experiments or open/close a gas switching valve for solid-gas reaction studies. Any further change can happen only after a defined number of frames has been acquired by the X-ray detector. This approach guarantees precise control of the experimental conditions, avoiding changes in the middle of the acquisition of one spectrum, and at the same time allows the information extracted by XAS, IR and MS to be correlated with the experimental conditions applied to the catalyst. Moreover, it is very flexible, since different synchronization schemes can be implemented and several devices can be driven. The standard configuration is able to control, in addition to the X-ray detector and the IR spectrometer, up to three switching valves and another device at the same time and independently from each other. The mass spectrometer is not synchronized because it works in continuous mode; however, the OPIUM signal can be recorded by the MS software in order to monitor changes in the gas phase and relate them to the electronic/structural/surface changes.

During XAS measurements performed in the energy-dispersive configuration, the incident intensity I0 is measured either before or after the transmitted intensity I1 using the same detector. In catalysis, I0 is most of the time collected through the catalyst support. In this way, the I0 normalization more efficiently eliminates effects caused by X-ray-sample interactions other than photo-absorption (such as small-angle scattering from the support). Considering this, a second sample holder, visible on the left of Fig. 2(c), was mounted to host the pure support of the catalysts.

Samples for laboratory and beamline measurements

Two different experiments were carried out to evaluate the performance of the cell: CO hydrogenation over a Sn-Co/Al2O3 catalyst and the NO-CO reaction over a Rh/Al2O3 catalyst. The CO hydrogenation was performed without X-rays to evaluate the catalytic performance and hence compare the results with other reactors. Details about the synthesis, characterization and catalytic evaluation of the Sn-Co/Al2O3 are reported elsewhere (Paredes-Nunez et al., 2018). In brief, the cobalt loading was 14.4 wt% and that of Sn was 0.52 wt%, yielding a Sn/Co molar ratio of 1:60 with a metal dispersion of ca 9.2%. The Sn-Co/Al2O3 sample was reduced in situ in a stream of H2 before being exposed to a flow of syngas (H2:CO = 2) at 220 °C. The cell effluent was analysed by a 2 m path-length transmission IR gas cell (Paredes-Nunez et al., 2015) that enabled determination of the concentrations of methane, propene and methanol through calibration curves. The rates of formation (expressed in mol g⁻¹ s⁻¹, per gram of catalyst) were calculated and compared with those reported elsewhere (Paredes-Nunez et al., 2018), obtained on a modified high-temperature low-pressure Spectra-Tech DRIFTS cell (Meunier et al., 2008). The NO-CO reaction over a Rh/Al2O3 catalyst was performed in order to validate the whole setup and the combination of spectroscopies (XAS + DRIFTS + MS). The sample was composed of 5 wt% Rh nanoparticles supported on Al2O3 (Sigma-Aldrich, 212857).
The Rh/Al2O3 catalyst was reduced in situ (5% H2/He, 250 °C, 10 °C min⁻¹, 30 min), the temperature was increased to 275 °C and the reduction of NO by CO was performed. Two streams were alternated using a switching valve: first 5% NO/He and then 5% CO/He. Each stream was kept for 60 s. Coupled XAS, DRIFTS and MS measurements were performed. Spectra in transmission mode at the Rh K-edge (23220 eV) were collected using a Si(111) polychromator in a Laue configuration and a Hamamatsu detector (Kantor et al., 2014). A Varian 680 FTIR instrument collected spectra in DRIFTS mode. Both measurements were performed with a time resolution of 50 ms per spectrum. The first spectrum under CO was taken as the infrared background in order to evaluate the evolution of the adsorbates on the particle surface. The gas outlet was measured by a Hiden Analytical HPS-20 QIC MS (intensity measured for ten masses corresponding to different gases) with a time resolution of 300 ms. The XAS data reduction, both spectrum normalization and extraction of the EXAFS signal in the k range 3-11 Å⁻¹, was performed in batch mode with the XAS plug-in of PyMca described elsewhere (Cotte et al., 2016).

Results and discussion

This section is divided into two parts. In the first, the cell performance as a catalytic reactor is explored, investigating sample temperature, time for gas exchange and catalytic activity. The second part is focused on the cell and setup capabilities combining XAS, DRIFTS and MS.

3.1. Laboratory performance investigations: temperature, dead volume and catalytic tests

3.1.1. Pyrometry test. Temperature control of the cell was evaluated using an optical pyrometer. The temperature given by the thermocouple internally fixed to the cell body was compared with that determined from the thermal radiation emitted by the surface of the sample. The results of the tests on the present cell are compared with those on the Harrick and Spectra-Tech (IRCELYON) model (Li et al., 2013) cells in Fig. 3 (internally versus externally measured temperature for the indicated cells). As the thermocouple is placed below the sample holder, the temperature measured by the thermocouple is actually a poor estimate of the sample surface temperature. Yet, the deviation is constant at a given temperature (i.e. independent of gas composition and flow rate) and, once known, the set point can be adjusted to make the cell reach the appropriate temperature during experiments.

3.1.2. Dead volume test. An indication of the dead volume of the cell is provided by the time required for a known flow of gases to replace the previous atmosphere. A way of evaluating this time consists of calculating the time between the gas-phase switch and the arrival at the detector of the new gas phase. Several tests were performed and one example is shown in Fig. 4: at time = 0 s, the neutral atmosphere flowing through the cell is exchanged for a NO-containing one. Only 3 s are required, using 75 ml min⁻¹, for the signal corresponding to NO (m/z = 30) to reach stability (within the error).

3.1.3. Catalytic test: CO hydrogenation. The catalytic performance of the cell was assessed and compared with that of a modified Spectra-Tech model (Meunier et al., 2008) from the IRCELYON laboratory for CO hydrogenation (i.e. Fischer-Tropsch synthesis) at atmospheric pressure (Paredes-Nunez et al., 2018).
It must be stressed that the modified Spectra-Tech cell was shown to yield reaction rates identical to those measured in a traditional plug-flow reactor (Meunier, 2010), at least for temperatures below 300 °C. When flowing the syngas mixture at 220 °C, the main products were methane, propene and methanol. Fig. 5 reports the reaction rates obtained for these main products over the first 6 h on stream. The rates measured were essentially identical for both cells, apart from the initial period on stream, which may have been caused by differences in line and cell dead-volumes, by thermal stabilization (owing to differences in the cell heat capacity) or by transient contamination by air leading to a temporary deactivation of the metallic cobalt catalyst. These data show that the catalytic data obtained at steady state using the present ESRF DRIFTS cell are fully consistent with those typically obtained on calibrated cells for a reaction that is very sensitive to O2 and moisture impurities.

Combined XAS/DRIFTS/MS test on the ID24 beamline at the ESRF: CO oxidation by NO

CO oxidation by NO on 5 wt% Rh nanoparticles was performed in order to test our experimental setup. At 275 °C, the gas feed was changed from NO to CO and XAS, DRIFTS and MS data were recorded. Fig. 6(a) shows two selected EXAFS spectra in k space under different atmospheres (blue curve under NO, red under CO), demonstrating the good data quality up to 11 Å⁻¹ collected in 50 ms (average of the accumulation of 50 frames of 1 ms). The corresponding k²-weighted, not phase-corrected Fourier transform moduli (|FT|) are reported in Fig. 6(b). Both spectra show two different contributions, corresponding to Rh-O and Rh-Rh paths at 1.6 Å and 2.5 Å, respectively. The evolution of the DRIFTS data in the 1950-2100 cm⁻¹ region is reported in Fig. 6(c). Initially no bands were observed, but within 28 s an increasing contribution centred at 2025 cm⁻¹ and ascribed to a linearly adsorbed CO band (Yang & Garland, 1957) appeared. The evolution of the observed feature was followed using the region-of-interest (ROI) option implemented in PyMca and described elsewhere (Cotte et al., 2016). The time evolution of the |FT| peak area of Rh-Rh, the CO band area and the m/z = 44 signal, obtained from the EXAFS, DRIFTS and MS characterization techniques, respectively, are shown in Fig. 6(d). Switching the gas phase from NO to CO reduced the rhodium particles, as indicated by the shift of the K-edge towards lower energies (not shown) and the decrease in the contribution from the first O shell in the EXAFS spectra. In addition, the growing intensity of the second shell (corresponding to Rh neighbours) points to the agglomeration of the particles. Only after the stabilization of the EXAFS spectra does the infrared band ascribed to CO adsorbed on Rh⁰ increase. From the MS results, an increase in the m/z = 44 signal, attributed to CO2 or N2O, was observed very faintly above 25 s and more significantly after 36 s. This combined information indicates that the CO molecules replace the NO molecules adsorbed on the Rh particles, initially reducing surface Rh atoms (0-22 s). Then, the CO induces agglomeration of the Rh particles, which leads to a general reduction of the Rh atoms (20-30 s) and the appearance of Rh⁰-CO species (27-60 s). Oxidation of CO leads to some CO2 production at first (25-35 s), but most of the generation happens once the Rh particles are totally reduced (35-55 s).
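The time traces in Fig. 6(d) are essentially integrals of a fixed region of interest in each series of spectra. A minimal sketch of how such ROI traces could be extracted and put on a common footing is given below; this is generic NumPy code standing in for the PyMca ROI tool actually used, and the ROI limits shown are indicative only.

```python
import numpy as np

def roi_trace(spectra, axis_values, lo, hi):
    """
    Integrate each spectrum of a time series over a fixed region of interest.
    spectra:     2D array of shape (n_times, n_points)
    axis_values: 1D array (e.g. R in Angstrom for |FT| data, wavenumber in cm-1 for DRIFTS)
    lo, hi:      ROI limits on that axis
    Returns one integrated value per time step (trapezoidal integration).
    """
    spectra = np.asarray(spectra, dtype=float)
    axis_values = np.asarray(axis_values, dtype=float)
    mask = (axis_values >= lo) & (axis_values <= hi)
    return np.trapz(spectra[:, mask], axis_values[mask], axis=1)

# Indicative ROIs for the traces discussed above:
#   rh_rh_area = roi_trace(ft_moduli, r_grid, 2.0, 3.0)              # Rh-Rh |FT| peak
#   co_band    = roi_trace(drifts_spectra, wavenumbers, 1990, 2060)  # Rh0-CO band near 2025 cm-1
# The MS m/z = 44 trace already consists of one value per time step and only
# needs interpolation onto the same time axis for a direct comparison.
```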
More information about this experiment and a deeper analysis of the results can be found elsewhere (Monte et al., 2018).

Conclusions

A catalytic reactor for the XAS/DRIFTS/MS combination was developed and successfully tested with and without X-rays present. The experimental setup was developed to perform time-resolved experiments on heterogeneous catalysts under working conditions. The cell design was optimized to obtain a low dead volume for the reaction chamber (around 0.5 cm³), allowing at the same time measurements up to 600 °C. Measurements under pressure are possible up to 5 bar with an appropriate dome, but at the expense of a larger dead volume (1 cm³). The design of the cell allows future developments, including different dome geometries with minor modifications of the main cell body; in this way, the heating part and the gas pipe system, critical components for catalytic application, remain unchanged. The combination and correlation of information from the XAS, DRIFTS and MS spectroscopies are guaranteed by the synchronization of the X-ray detector, IR spectrometer, mass spectrometer, switching valves and other devices such as the UV-Vis shutter. Catalytic tests performed with and without X-rays confirmed the reliability and accuracy of the kinetic data obtained in the cell. Similar geometric cell configurations were previously reported, yet our design enables a low dead volume, accurate catalytic performance, high temperature and use at a few bars of pressure. The time-resolved capability of the setup was demonstrated by following the evolution of the Rh-Rh EXAFS contribution, the infrared band associated with Rh⁰-CO (2025 cm⁻¹) and the m/z = 44 signal corresponding to the generated CO2.
INTERACTIONS OF α-CHYMOTRYPSINOGEN A WITH SOME ALKYLUREAS

The interactions of α-chymotrypsinogen A with urea, methyl-, N,N'-dimethyl-, ethyl-, N,N'-diethyl-, and propylurea were studied by means of calorimetry and circular dichroism. It has been found that the enthalpies of interaction of the alkylureas, with the exception of methylurea, with α-chymotrypsinogen A differ distinctly from those of urea. Thus the transfer of the protein from water to aqueous urea and methylurea solutions is accompanied by a release of heat, i.e., the overall reaction is exothermic, whereas the transfer of the same protein to solutions of the other alkylureas is characterized by a consumption of heat, i.e., the overall reaction is endothermic. From the far-UV CD spectra it can also be concluded that the alkylureas are clearly less efficient denaturants than urea. The difference in behavior reflects the presence of the hydrophobic moiety in the urea molecule.

INTRODUCTION

α-Chymotrypsinogen A is a pancreatic protein composed of 245 amino acid residues arranged in a single polypeptide chain. It is cross-linked by five disulfide bridges, one of which includes the N-terminal residue. It is one of the most thoroughly studied of all proteins. The purpose of this investigation was to determine the denaturing action on α-chymotrypsinogen A of various alkylureas, i.e., ureas having one or more hydrogen atoms replaced by alkyl groups. As is well known, urea is a strong denaturant, and therefore it is interesting to know how alkyl substitution affects its denaturing activity. Calorimetry and circular dichroism were chosen for investigating the activity of the alkylureas. The first gives the enthalpy of denaturation; the second allows us to ascertain the conformational changes brought about by the alkylureas. The two data sets are complementary and provide insight into the nature of the interaction. Previous studies of human serum albumin¹ have shown that the alkylureas are clearly less efficient denaturants than urea. Similar studies with several other proteins, e.g., some heme proteins²,³ and also α-chymotrypsinogen²,⁴, using various methods, e.g., spectrophotometry and optical rotation, have also been reported. On the basis of the results obtained with α-chymotrypsinogen, the conclusion was reached that the denaturing action of alkylureas was a function of the substituted aliphatic group and predominantly hydrophobic in character, and that the mechanism of denaturation by alkylureas differed appreciably from that by unsubstituted urea⁴. By calorimetric studies it should be feasible to check this claim. Moreover, in our studies we have systematically covered the whole range of solubilities of the individual alkylureas and thus obtained enough data for a proper evaluation of their denaturing action.

EXPERIMENTAL

Six-times-crystallized bovine pancreas α-chymotrypsinogen A, free of salt, was obtained from Sigma Chemical Co., St. Louis, Mo., U.S.A. The various ureas used were supplied by Fluka, Buch, Switzerland. For calorimetric measurements they were washed with reagent grade acetone; for CD measurements they were recrystallized from hot reagent grade benzene. Solutions of α-chymotrypsinogen were prepared in distilled water. Protein concentration was determined before each measurement. In the calorimetric experiments, the two compartments of the reaction cell were filled with 2.00 ml of protein solution and 4.00 ml of alkylurea solution, respectively. Φ is the protein displacement volume, i.e., the product of the protein mass and its partial specific volume, 0.734 ml/g.
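The displacement volume defined above enters the bookkeeping of the liquid volumes in the calorimetric cells; a small illustrative calculation (the protein mass below is chosen arbitrarily, not taken from the paper) is:

```python
# Displacement volume of the dissolved protein: Phi = m * v_bar,
# with the partial specific volume v_bar = 0.734 ml/g quoted above.
def displacement_volume_ml(protein_mass_g, partial_specific_volume_ml_per_g=0.734):
    return protein_mass_g * partial_specific_volume_ml_per_g

# Illustrative example: 30 mg of alpha-chymotrypsinogen A in the 2.00 ml compartment.
mass_g = 0.030
phi = displacement_volume_ml(mass_g)
print(f"Phi = {phi:.4f} ml")   # ~0.022 ml, i.e. about 1% of the 2.00 ml protein solution
```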
The two compartments in the reference cell were filled with 2.00 ml of water and 4.00 ml of the same alkylurea solution. In some cases the protein was dissolved in a more concentrated urea solution, and the enthalpy of transfer to a less concentrated urea solution was measured. The results obtained in this way were identical with those found in the usual experiments. For each transfer at least two experiments were performed. Since the two cells were not thermally balanced, their thermal response differing by about 3.5%, a separate blank experiment had to be performed for each transfer. In this experiment the compartments in both cells were filled with 2.00 ml of water and 4.00 ml of denaturant solution. The apparent heat effect measured in this way was accounted for in the real experiment. The experimental errors involved are relatively large, reflecting especially the lack of thermal balance as well as the large enthalpies of mixing. CD spectra were recorded at 25 °C on a Roussel-Jouan Dichrographe Mark III. In the experiments, silica cells of 0.01, 0.05, and 0.1 cm pathlength were used. The mean residue ellipticity [θ]mrw was calculated using the following relation:

[θ]mrw = M₀ θ / (100 c l)

where M₀ is the mean residue molecular weight, 105; θ is the ellipticity; c is the concentration in g/cm³; and l is the pathlength in dm.

RESULTS AND DISCUSSION

The values of the enthalpies of transfer of α-chymotrypsinogen from water to aqueous solutions of the alkylureas are presented in Table I. For comparison, the values for urea solutions⁶ are also included. In Figure 1 the corresponding plots are given. From Table I it can be seen that one contribution to the measured heat effect is the »enthalpy of denaturation«. The second contribution stems from solvation changes, and it depends on the nature of the denaturant. In the case of urea it is negative throughout, and from the data in Table I it may be inferred that the changes of solvation make the major contribution to the enthalpy of transfer. The same applies to methylurea solutions, but the enthalpy values are considerably smaller, i.e., less negative, than in urea solutions. Substitution of a hydrogen atom with a methyl group apparently produces major changes in the interaction between the protein and the denaturant. Therefore it is not surprising that the enthalpies of transfer to ethylurea solutions are already positive, and that they increase with increasing denaturant concentration. The same behavior is observed with N,N'-dimethylurea. The enthalpies for propyl- and N,N'-diethylurea are given only for 1.0 mol/dm³ solutions owing to their low solubility in water, and they are positive as well. Butylurea is also sparingly soluble in water, but even in dilute solutions the protein precipitates. On the basis of the results obtained, it may be concluded that the introduction of hydrophobic groups into the urea molecule produces drastic changes in the enthalpies of transfer. The contribution to the enthalpy is positive, but on the basis of the enthalpy values alone more cannot be said, since nothing is known about the conformational changes involved. Thus the analysis of the CD spectra is necessary for a more detailed interpretation of the calorimetric data. The recorded CD spectra are given in Figures 2-6. Discussion of the spectra will be based on a comparison of the spectrum of the native protein with those of the protein in urea and alkylurea solutions. The [θ] values for urea solutions are the least negative through the whole concentration range studied (Figure 2).
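As a worked example of the relation given in the Experimental section (the numbers below are chosen purely for illustration and are not taken from the measurements):

```python
def mean_residue_ellipticity(theta_deg, conc_g_per_cm3, path_dm, m0=105.0):
    """[theta]_mrw = M0 * theta / (100 * c * l); with theta in degrees, c in g/cm^3
    and l in dm, the result is in the usual deg cm^2 dmol^-1 units."""
    return m0 * theta_deg / (100.0 * conc_g_per_cm3 * path_dm)

# A 0.5 mg/ml protein solution (5e-4 g/cm^3) in a 0.01 cm (0.001 dm) cell giving a
# measured ellipticity of -0.005 degrees:
print(mean_residue_ellipticity(-0.005, 5e-4, 0.001))   # about -1.05e4 deg cm^2 dmol^-1
```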
Since it is known that α-chymotrypsinogen is gradually denatured by urea, and in 8 mol/dm³ solution it is largely unfolded⁸, with the constraints imposed by the five disulfide bonds, methyl-, ethyl- and N,N'-dimethylurea are clearly less efficient denaturants than urea. In methylurea solutions the values follow the same pattern as in urea solutions (Figure 3), but they are clearly more negative, which may be interpreted as diminished unfolding at the same denaturant concentration¹. However, in ethylurea solutions (Figure 4) the [θ] values first decrease with increasing concentration and fall below those for the native protein. They reach a minimum at about 6 mol/dm³, whereupon they increase. This indicates that the fraction of ordered structure first increases with increasing ethylurea concentration and then, at concentrations above 6 mol/dm³, it starts decreasing¹. On the basis of the CD spectra, which, owing to solvent absorption, are available only down to around 220 nm, no more than this qualitative statement can be made. Similar behavior is observed with N,N'-dimethylurea (Figure 5). Thus, owing to the presence of the hydrophobic moiety, the denaturing action of the three alkylureas is different from that of urea. Depending on the size of the moiety and the concentration, the fraction of ordered structure is diminished, remains the same, or is increased. This conclusion is in essential agreement with previous findings with α-chymotrypsinogen⁴, where the action of alkylureas has been likened to that of the corresponding alcohols. It should be noted that similar behavior has been observed with human serum albumin in solutions of the same alkylureas¹. In 1 mol/dm³ propylurea and in 1 mol/dm³ N,N'-diethylurea (Figure 6) the changes of the [θ] values are small, so that no conclusions regarding their action are feasible. Returning now to the calorimetric data, it is possible to make additional comments on the enthalpy of transfer values found. The fact that the enthalpies of transfer to methylurea solutions are considerably less negative than those to urea solutions, although the extent of unfolding in the former is not much less, indicates that the difference is due to the positive contribution of the hydrophobic moiety to the enthalpy. In solutions of ethylurea and N,N'-dimethylurea the observed conformational changes of α-chymotrypsinogen involve an increase and a decrease in ordered structure, respectively, and it is not possible even to estimate their contribution to the enthalpies of transfer. However, considering the fact that the enthalpies increase with increasing denaturant concentration, it may be surmised that the hydrophobic interaction is dominant. The combined application of calorimetry and CD spectroscopy has given useful information regarding the interaction of α-chymotrypsinogen with alkylureas and their denaturing activity. The data are in complete agreement with those obtained in the previous study of human serum albumin¹ and give an impetus to studies of other proteins.
Atlantic sturgeon Acipenser oxyrinchus and alien sturgeon species in Polish waters: can biometric analysis assist species discrimination and restoration?

Biometric character analyses were conducted on Atlantic sturgeon Acipenser oxyrinchus, a species included in a re-establishment programme in the Baltic Sea basin. The study sought to identify measurable and countable characters most useful for distinguishing A. oxyrinchus from three alien species found in open waters of Poland: Acipenser baerii, Acipenser gueldenstaedtii, and Acipenser stellatus. Measurable characters that contributed most to discrimination included preorbital distance, eye diameter, ventral fin base to anal fin base, and postorbital distance. Among countable characters, the number of lateral scutes made the greatest contribution. The data from the present study may be used to supplement existing systematic keys and lay the foundations for creating a catalogue or atlas of popular species and interspecific hybrids of sturgeon, including their complete systematic determination.

Keywords: discriminant analysis, biometric characters, species identification, bycatch, species restoration.

Summary. This paper presents the results of an analysis of the biometric characters of Atlantic sturgeon Acipenser oxyrinchus oxyrinchus, a species covered by a restoration programme in the Baltic Sea basin. Using agglomerative techniques, an attempt was made to establish which of the measurable and countable characters are most useful for distinguishing Acipenser oxyrinchus oxyrinchus from three alien species caught in the open waters of Poland, i.e., Siberian sturgeon (Acipenser baerii), Russian sturgeon (Acipenser gueldenstaedtii) and starry sturgeon (Acipenser stellatus). The analyses showed that the greatest contribution to discrimination based on measurable characters came from the rostrum length (R), the distance between the ventral and anal fins (VA) and the postorbital distance (OP), while among the countable characters it was the number of lateral scutes (SL). The results obtained in this work supplement the currently existing systematic keys.

Material and methods

Fifty dead juvenile specimens of A. oxyrinchus were collected from seine fishers at three locations (N 53°43′59″, E 014°28′11″; N 53°44′22″, E 014°27′51″; N 53°43′10″, E 014°26′42″) in the Szczecin Lagoon near the town of Stepnica, from 21 May to 9 June 2008. The fish, some of which were tagged, originated from German stocking (Gessner et al., 2008) undertaken as part of a species restoration plan (Figure 1). Specimens were frozen using the method proposed by Keszka and Krzykawski (2008). Measurements were taken with calipers of 0.01 mm accuracy according to the figure published by Kempter et al. (2013). Masses of individuals were measured with an electronic balance of 0.01 g accuracy. Twenty-one measurable and five countable biometric characters were compared with the characters of A. baerii (n = 336), A. gueldenstaedtii (n = 99), and A. stellatus (n = 69) (Keszka, 2000; Keszka, Raczyński, 2006). Information on the measurable characters of these three species was published by Keszka and Raczyński (2006), Keszka and Krzykawski (2008) and Keszka et al. (2009). All individuals of the alien species originated from warm-water aquaculture systems in Nowe Czarnowo and Gryfino (NW Poland). Biometric characters of the studied species are given in Table 1. Statistical analysis was performed using Statistica 10.1.
Measurable and countable characters were summarized by calculating minimum and maximum values, the arithmetic mean, and the standard deviation. Measurable characters expressed as per cent of total length (LT) and absolute values of the countable characters were used for stepwise discriminant analysis and canonical discriminant analysis. The results were analysed taking into consideration the classification matrix and the values of the standardized coefficients for the canonical variables, and were visualized as scatter plots of canonical scores.

Measurable Characters

Length and mass were comparable for all specimens (Table 1). Measurable characters of A. oxyrinchus presented as absolute values (mm) and as per cent of total length (LT) are given in Table 2. Among the five countable characters, the standard deviations for the number of dorsal scutes and the number of ventral scutes were the lowest relative to the mean values (Table 3); these characters showed a low variation range. In order to reduce the dimensionality and the number of characters for further comparison, principal component analysis (PCA) was conducted for the 21 measurable characters of the 4 sturgeon species. The value of χ² was high (cumulative inertia: 0.999, χ² = 6304.226, P = 0.000), which indicated that the species differed substantially with respect to the measurable characters. Figure 2 shows the distribution and direction of the vectors in the plane of the first two principal components of the 21 morphological variables. The PCA revealed that the measurable characters with the highest contribution to differentiation of the 4 sturgeon species included head characters along with the distance from ventral fin base to anal fin base (VA) and the length of the ventral (pelvic) fin (lV). Characters with the highest contribution to the first two principal components were selected for stepwise discriminant analysis. All compared measurable characters were taken into account in the discriminant function model (Wilks' lambda = 0.00014, approx. F(27, 2065) = 1512.9, P < 0.0001). The classification matrix for the stepwise discriminant analysis generated for nine measurable characters of the species was characterized by a high level of correctness (99.5828%; Table 4). The classification matrix displayed the lowest correctness for A. oxyrinchus and A. stellatus, and the highest for A. baerii and A. gueldenstaedtii. Only individual specimens of A. oxyrinchus might be incorrectly classified as A. stellatus or A. gueldenstaedtii (Table 4). On the basis of both discriminant functions, canonical discriminant analysis clearly discriminated the four species with respect to their biometric characters (Figure 3). The first discriminant function separated A. gueldenstaedtii, A. oxyrinchus, A. stellatus, and A. baerii. The second function discriminated between A. oxyrinchus and A. gueldenstaedtii and also between A. stellatus and A. gueldenstaedtii. According to the position of A. baerii on the graph, the second function situates this species between A. oxyrinchus and A. gueldenstaedtii. In the first function (root 1), the character VA made the most significant contribution to discrimination, while in the second function (root 2) the significantly contributing characters included horizontal eye diameter and preorbital distance. Postorbital distance had a significant contribution only in the first function (Table 5, Figure 4).
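The stepwise and canonical discriminant analyses above were run in Statistica; a minimal, illustrative sketch of an equivalent canonical (linear) discriminant analysis with a classification matrix in Python is given below (scikit-learn, hypothetical variable names; the stepwise selection of characters itself is not reproduced here).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix

def canonical_discriminant_analysis(X, y):
    """
    X: array of measurable characters expressed as % of total length (one row per fish)
    y: species labels, e.g. "A. oxyrinchus", "A. baerii", "A. gueldenstaedtii", "A. stellatus"
    Returns the canonical scores (roots 1 and 2), the classification matrix and
    the overall percentage of correctly classified individuals.
    """
    lda = LinearDiscriminantAnalysis(n_components=2)   # two canonical roots
    scores = lda.fit_transform(X, y)                    # coordinates for the scatter plot
    predicted = lda.predict(X)
    labels = sorted(set(y))
    matrix = confusion_matrix(y, predicted, labels=labels)
    correctness = np.trace(matrix) / matrix.sum() * 100.0
    return scores, matrix, correctness
```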
Countable Characters

The analysis of the standardized coefficient values for the canonical variables indicated the number of lateral scutes as the character with the most significant contribution to discrimination (Table 6). The correctness of the classification matrix for the countable characters was calculated as 79.92%, lower than for the measurable characters (Table 7). In spite of there being relatively few A. oxyrinchus, 98% of A. oxyrinchus individuals were correctly discriminated. The highest possibility of an error based on countable characters was observed for A. gueldenstaedtii, which showed similar meristic characters to A. baerii; A. oxyrinchus was distinct from the other species on the basis of its significantly lower number of lateral scutes (Figure 5).

Discussion

The data from the present study may be used to supplement existing systematic keys and lay the foundations for creating a catalogue or atlas of popular species and interspecific hybrids of sturgeon, including their complete systematic determination (Krylova, 1997). Correct identification is especially important in the context of species restoration. Stocking of juvenile European sea sturgeon and A. oxyrinchus as part of their re-establishment programme in Europe (Elvira, Gessner, 1996; Gessner et al., 2010), along with the co-occurrence of alien species, requires an informational campaign for users of the waters, focusing on distinguishing between native and non-native species. A reference book containing a simple and user-friendly key with sound taxonomic data is necessary. The data available for comparison with the results of the present biometric analyses come from studies conducted on only a small number of living specimens (Artiukhin, Vecsei, 1999; Debus, 1999). When 27 museum specimens and a single live sturgeon were examined, it was noted that the variability of some characters of the sturgeon head, caused mainly by scraping the substratum while feeding, precluded considering distances connected with snout structure as distinguishing characteristics. Comparison of dorsal and lateral scutes suggested that A. oxyrinchus differs from A. sturio from the Gironde (Atlantic Ocean) and the Rioni (Black Sea) rivers (Ninua, 1976). The analyses discussed in the present paper compared morphometric data of A. oxyrinchus, A. stellatus, A. gueldenstaedtii, and A. baerii. Among the measurable characters expressed as per cent of LT, preorbital distance, the distance from ventral fin base to anal fin base (VA), horizontal eye diameter (O), and postorbital distance (OP) had the highest contribution to discrimination (Figure 4). Mainly on the basis of VA, the first function clearly discriminated between A. baerii and A. oxyrinchus, A. gueldenstaedtii, and A. stellatus (Figure 3). A longer VA distance distinguished A. baerii from the other three sturgeon species (Figure 4). In the second function, both preorbital distance, with a positive canonical coefficient, and horizontal eye diameter contributed significantly to discrimination (Table 5). On the basis of this analysis it can be concluded that the shorter the rostrum of an A. oxyrinchus individual, the higher the possibility of mistaking it for A. gueldenstaedtii. Distinguishing A. oxyrinchus from A. baerii may present difficulties due to considerable changes of the rostrum length during the lifetime of the latter, as well as the existence of short-rostrum and long-rostrum forms of A. baerii (Keszka et al., 2009).
Thus, the risk of an erroneous classification is high in the case of juvenile A. baerii individuals, which have a longer rostrum than adults of the species. The risk of mistaking A. oxyrinchus for another species on the basis of measurable characters is highest with respect to A. stellatus. The second classification function, with rostrum length making the major contribution to discrimination, clearly distinguished between the A. gueldenstaedtii and A. baerii grouping and the A. stellatus and A. oxyrinchus grouping, while the first classification function separated the group of A. baerii from the groups of the remaining three sturgeon species. Based on the five analysed countable characters, the exotic species are clearly distinguishable from A. oxyrinchus. The number of lateral scutes had the highest contribution to discrimination (Table 6). In the sample, the number of lateral scutes ranged from 21 to 30, with a mean of 25.22. This value was lower than that observed for A. oxyrinchus in the St Lawrence River in Canada (28.67) (Artiukhin, Vecsei, 1999) and considerably lower than found for the exotic species occurring in Polish waters (Keszka, Heese, 2003; Keszka, Krzykawski, 2008; Keszka et al., 2009). The low variability of the countable characters in the A. oxyrinchus specimens in the present study might be due to the limited number of broodstock used to produce the stocking material, as well as to an effect of artificial rearing conditions on juvenile fish, as has been noted in the case of other cultured sturgeon species (Ruban, Sokolov, 1986). Exotic sturgeon introduction into European waters may occur through fish released by aquarium owners and hobbyists who want to dispose of large individuals, sturgeons intentionally released by fishing clubs and associations, and accidental escapes from ponds and farms. Despite clear regulations applying to alien species in aquaculture under Commission Regulation (EC) No 535/2008 of 13 June 2008, laying down rules for the implementation of Council Regulation (EC) No 708/2007 concerning the use of alien and locally absent species in aquaculture, a fourth source of exotic sturgeon introduction involves artificial breeding and release (Britton, Davis, 2006). The occurrence of non-native sturgeon species in German and Polish coastal waters and river estuaries has been growing since the beginning of the 1990s (Spratte, Rosenthal, 1996; Keszka, Stepanowska, 1997; Arndt et al., 2000, 2002; Keszka, Heese, 2003; Keszka et al., 2011). The presence of exotic sturgeon species was also observed in the Gironde River in France in 1999, after a period of storms that caused water levels to rise and resulted in the escape of several thousand A. baerii from fish farms. This presented a serious problem for inland fishers and anglers in distinguishing between two sturgeon species, of which one (A. sturio) was strictly protected and the second (A. baerii) needed to be eliminated from open waters (Gessner et al., 2010). Alien species release, taking place in spite of the prohibitions connected with environmental protection, should be considered a potential threat, since alien species may have a negative effect on native species and their populations, especially in strongly altered ecosystems (Leppäkoski et al., 2002). Apart from the above implications of alien species introduction, genetic implications cannot be dismissed, as sturgeons display a potential for hybridization (Kozhin, 1964).
Current information suggests that the hybrids are fertile, which means that mixed genetic material is passed on to the next generation, resulting in a dispersal of the original genetic information and reducing the ability to adapt to the habitat. The existence of the first hybrids created as a result of the release of exotic A. baerii has been confirmed by genetic analyses conducted on specimens collected from the Danube River (Ludwig et al., 2009). Avoidance of losses connected with fisheries is an important principle behind protecting the remaining native populations in situ (Gessner et al., 2010). Key factors in effective restoration of a population include decreasing fishing-associated mortality and obtaining a high level of acceptance and support from the fishery sector. Measures that need to be taken include instituting and monitoring programs to limit accidental catch, based on the ability of fishers to identify species. The example of A. sturio in France and preliminary results obtained from Germany clearly show that this is possible (Gessner et al., 2010).
Site-Specific Radiolabeling of a Human PD-L1 Nanobody via Maleimide–Cysteine Chemistry

Immune checkpoint inhibitors targeting programmed cell death protein 1 (PD-1) and its ligand PD-L1 have proven to be efficient cancer therapies in a subset of patients. Of all patients with various cancer types, only 20% have a positive response. Being able to distinguish patients that do express PD-1/PD-L1 from patients that do not would allow patients to benefit from a more personalized and efficient treatment of their tumor lesion(s). Expression of PD-1 and PD-L1 is typically assessed via immunohistochemical detection in a tumor biopsy. However, this method does not take into account the expression heterogeneity within the lesion, nor possible metastases. To visualize whole-body PD-L1 expression by PET imaging, we developed a nanobody-based radio-immunotracer targeting PD-L1, site-specifically labeled with gallium-68. The cysteine-tagged nanobody was site-specifically conjugated with a maleimide (mal)-NOTA chelator, and radiolabeling was tested at different nanobody concentrations and temperatures. The affinity and specificity of the tracer, referred to as [68Ga]Ga-NOTA-mal-hPD-L1 Nb, were assayed by surface plasmon resonance and on PD-L1POS or PD-L1NEG 624-MEL cells. Xenografted athymic nude mice bearing 624-MEL PD-L1POS or PD-L1NEG tumors were injected with the tracer, and ex vivo biodistribution was performed 1 h 20 min post-injection. Ideal 68Ga-labeling conditions were found at 50 °C for 15 min. [68Ga]Ga-NOTA-mal-hPD-L1 Nb was obtained in 80 ± 5% DC-RCY with an RCP > 99%, and was stable in injection buffer and human serum up to 3 h (>99% RCP). The in vitro characterization showed that the NOTA-functionalized Nb retained its affinity and specificity. Ex vivo biodistribution revealed a tracer uptake of 1.86 ± 0.67% IA/g in the positive tumors compared with 0.42 ± 0.04% IA/g in the negative tumors. Low background uptake was measured in the other organs and tissues, except for the kidneys and bladder, due to the expected excretion route of Nbs. The data obtained show that the site-specifically 68Ga-labeled NOTA-mal-hPD-L1 Nb is a promising PET radio-immunotracer due to its ease of production, stability and specificity for PD-L1.

Introduction

Immune responses are managed through activatory and inhibitory signals at different checkpoints. Programmed cell death protein 1 (PD-1) and its ligands, programmed death receptor ligands 1 and 2 (PD-L1, PD-L2), are part of the inhibitory checkpoints modulating the activity of T lymphocytes [1,2]. Cytotoxic T cells have the ability to recognize and eliminate cancer cells. PD-L1 expression on healthy cells is a mechanism to prevent autoimmunity [3,4], and some cancer cells evade the anti-tumor immune response by upregulating PD-L1.

Nanobody Functionalization, Characterization and Affinity Analysis

From the periplasmic extracts of transformed, shake-flask-cultured E. coli bacteria, we recovered 20 mg of Cys-tagged Nb product, consisting of a mixture of dimer and monomer (32328 Da and 15590 Da, respectively; final yield = 5.8 mg/L culture). After 90 min of incubation with the mild reducing agent 2-mercaptoethylamine (2-MEA), more than 95% of the Nb was converted to monomeric Nb, as observed by size-exclusion chromatography (SEC). The reduction step was followed by site-specific coupling of mal-NOTA to the free thiol of the C-terminal cysteine. SEC purification of the conjugated Nb resulted in an average recovery yield of 62 ± 6% (n = 4) for the conjugation procedure.
Quality control (QC) by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and SEC revealed a purity >95%, as depicted in Figures S1 and S2 in the Supplemental Materials. Electrospray-ionization quadrupole time-of-flight mass spectrometry (ESI-Q-ToF) of the site-specifically modified NOTA-mal-Nb showed a major peak of NOTA-mal-Nb (15588 Da) and deamidated NOTA-mal-Nb (15571 Da), as shown in Figure S3. Starting dimeric or monomeric Nb was not observed. The melting point of NOTA-mal-Nb was determined via the protein melting program of a real-time PCR instrument and measured at 75 °C. The affinity kinetics of the conjugated and unconjugated Nbs were measured by surface plasmon resonance (SPR) on immobilized hPD-L1 recombinant protein. Modified and unmodified Nbs exhibited an equilibrium dissociation constant (KD) in the same range, namely 4.38 nM (n = 2) and 2.1 nM, respectively, suggesting that the procedure did not impact the affinity of the Nbs.

Radiolabeling and In Vitro Stability Studies

In order to optimize the labeling of NOTA-mal-hPD-L1 Nbs with 68Ga, different parameters, such as the Nb concentration, temperature and incubation time, were evaluated. Increasing the temperature while keeping a constant Nb concentration of 3.6 µM had a remarkable impact on the radiochemical purity (RCP), as depicted in Figure 2A. Likewise, the concentration of the Nb in the reaction mixture, ranging from 2.9 to 4.2 µM, was tested at 50 °C. Increasing the concentration above 3.6 µM did not lead to an improved RCP, as depicted in Figure 2B. After optimization, an average RCP > 90% (before purification) could be obtained using a Nb concentration of 3.6 µM and 15 min incubation at 50 °C.
After purification, the RCP was higher than 99%, with a decay-corrected radiochemical yield (DC-RCY) of 80 ± 5% (n = 2). The stability of the radiolabeled Nb was tested in injection buffer and in human serum at 37 °C. An RCP of >95% was retained up to 180 min post-labeling in injection buffer and >85% in human serum, as shown by radio-SEC in Figure S4. In addition, the stability of the NOTA-mal-(hPD-L1) Nb precursor was also followed. After two months of storage at −20 °C in 0.1 M NH4OAc pH 7, SEC analysis of the NOTA-mal-hPD-L1 Nb showed >98% purity (Figure S5A). Radiolabeling and stability results also remained constant.
After 15 min incubation at 50 °C, an RCP of >95% was measured by radio-instant thin layer chromatography (radio-iTLC), and the RCP remained >99% after purification in injection buffer over 180 min (Figure S5B).

Cell Binding Study

To assess the specificity of the site-specific NOTA-coupled Nb for hPD-L1 expressed on cells, it was labeled with 68Ga and added to either hPD-L1POS or hPD-L1NEG 624-MEL cells at 3 or 15 nM concentrations. After incubation, the unbound fraction was washed away and the cell-associated activity was measured. The percentage of cell-associated activity of the radiolabeled Nb showed specific binding to hPD-L1POS cells, which was confirmed by the absence of cell-associated activity in the control conditions (hPD-L1NEG cells and excess of unlabeled Nb). For the 3 nM concentration, a significantly higher amount of bound activity on hPD-L1POS cells than on hPD-L1NEG cells was measured (p = 0.0009), as well as compared with blocked cells (p = 0.0009), as represented in Figure 3 (3.94 ± 0.73% vs. 0.25 ± 0.03% vs. 0.21 ± 0.06%, respectively, n = 3). When increasing the concentration of Nb on the cells from 3 nM to 15 nM, the percentage of cell-associated activity was lower (3.94 ± 0.73% vs. 1.07 ± 0.35%, respectively, p = 0.0154), which could indicate that the fraction of unlabeled Nb starts competing with the fraction of 68Ga-labeled Nb. As a result, the differences between the percentages of cell-associated activity at 15 nM on hPD-L1POS and hPD-L1NEG cells, as well as in blocking conditions (1.07 ± 0.35% vs. 0.19 ± 0.01% vs. 0.19 ± 0.05%, respectively, n = 2), were not significant (p = 0.0709 and p = 0.0720, respectively).

Biodistribution and In Vivo Tumor Targeting

The final step consisted of investigating whether the [68Ga]Ga-NOTA-mal-hPD-L1 Nb tracer was able to target hPD-L1-positive tumors in vivo. Organ and tissue biodistribution results at 80 min post-injection (p.i.) are represented in Figure 4 and Table S1. The 68Ga-labeled Nb showed a higher uptake of 1.86 ± 0.67% IA/g (n = 12) in the positive tumors compared with 0.42 ± 0.04% IA/g (n = 6) in the negative tumors, which is significantly different (p = 0.0002). Kidney uptake was 27.9 ± 5.1% IA/g (n = 12) in the group bearing hPD-L1POS tumors. All other organs were at background level, as depicted in Figure 4. An average tumor-to-blood ratio of 5.17 ± 1.82 (n = 12) and an average tumor-to-muscle ratio of 27.70 ± 12.59 (n = 12) were measured for the group bearing hPD-L1POS tumors. Flow cytometry (FC) analysis confirmed the hPD-L1 expression of the positive tumor cells compared with the negative tumors (Figure S6).
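Uptake values of this kind are typically obtained by decay-correcting each organ count to the time of injection and normalizing to the injected activity and the organ mass. A minimal illustrative sketch is given below; it is not the authors' actual workflow, the numbers are invented, and the gallium-68 half-life is taken as 67.7 min.

```python
import math

GA68_HALF_LIFE_MIN = 67.7   # physical half-life of gallium-68

def percent_ia_per_gram(organ_counts, organ_mass_g, injected_counts,
                        minutes_since_injection, half_life_min=GA68_HALF_LIFE_MIN):
    """
    %IA/g = (decay-corrected organ counts / injected counts) / organ mass x 100.
    Organ and injected activities must be measured in the same counting geometry,
    e.g. calibrated gamma-counter counts of the organ and of an injection standard.
    """
    decay_factor = math.exp(math.log(2.0) * minutes_since_injection / half_life_min)
    corrected_counts = organ_counts * decay_factor   # back-corrected to injection time
    return corrected_counts / injected_counts / organ_mass_g * 100.0

# Invented example: a 0.2 g tumour counted 80 min after injection.
print(percent_ia_per_gram(organ_counts=3.0e4, organ_mass_g=0.2,
                          injected_counts=1.0e7, minutes_since_injection=80))
```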
Discussion

Immune checkpoint inhibitors blocking the mechanisms exploited by cancer cells to evade the immune system have proven to be a successful approach in treating cancer. Differences in response rates to PD-L1 inhibitor treatments are observed amongst patients, making the development of predictive markers helpful for identifying patients who are most likely to benefit from such treatments. Developing a PET tracer would allow us to estimate the PD-L1 expression in the tumor lesions in a non-invasive and reproducible way. We previously developed a hPD-L1 Nb that was functionalized with the NOTA chelator via conjugation to the accessible lysines and efficiently radiolabeled with 68Ga for PET imaging. The Nb was also site-specifically functionalized and radiolabeled using the sortase-A enzymatic approach [12,13,18]. In the current study, we aimed to site-specifically radiolabel the hPD-L1 Nb via an alternative chemical strategy that does not require the use of an enzyme. Cysteine-maleimide couplings are a popular and straightforward alternative for functionalizing proteins and have already made their way to the clinic. One promising example is the antibody-drug conjugate (ADC) against S. aureus used in phase I clinical trials, for which the drug was conjugated to the monoclonal antibody via a maleimide-thiol coupling, yielding a homogeneous ADC product with improved therapeutic potential [19,20]. Compared with click chemistry, which is a more recent technique for site-specific coupling between proteins and moieties, mal-Cys couplings show some advantages, such as the ease of introducing a free cysteine into the protein structure compared with the introduction of a click-reactive group. In particular, this technique can be applied to functionalize Nbs bearing a free cysteine in a His6-linker-Cys-tag at their C-terminus [14,16]. One known main disadvantage in the production of Cys-tagged Nbs is an average 50% loss in production yield compared with productions of Nbs containing only a His6-tag [14]. In the case of the hPD-L1 Nb, the production yield remained comparable to that of the sortag-His6-tag Nb and was about 45% lower than that of the His6-tag Nb (5.8 mg/L (n = 1) vs. 4.5 ± 0.9 mg/L (n = 2) vs. 11.2 ± 9.9 mg/L (n = 2), respectively).
11.2 ± 9.9 mg/L (n = 2), respectively). Maleimide-NOTA was site-specifically coupled to the hPD-L1 Nb bearing the Cys-tag, in recovery yields similar to those of a random coupling on the lysines. However, a cysteine-maleimide coupling requires an extra mild reduction step to ensure that the thiol function of the cysteine is free and that no dimers or glutathione-capped monomers are present. The reaction time remains similar to that of the random coupling and is shorter than that of the sortase-A mediated coupling (3 h 30 min vs. 2 h 30 min vs. 16 h, respectively). 68Ga-labeling of the NOTA-mal-(hPD-L1) Nb required heating at 50 °C to reach a DC-RCY similar to that of the NCS-NOTA-coupled Nb (80% at 50 °C for 15 min vs. 86% at RT for 10 min, respectively), due to the difference in structure between the mal-NOTA and the NCS-NOTA used for the random coupling [13]. The latter possesses an isothiocyanate (NCS) function attached to its backbone structure, which does not interfere with the chelation capacity of the three free carboxylic arms, while the maleimide function of mal-NOTA is attached to one of the arms bearing the carboxylic function. The functionalization on the arm is most likely the reason that an elevated temperature is required to increase the radiolabeling reaction kinetics, as already reported with 111In and 64Cu [21,22]. This elevated temperature is not an issue for the stability of the NOTA-mal-Nb, for which a melting temperature of 75 °C was measured. In addition, the mal-NOTA-functionalized Nb did not show affinity loss, as measured by SPR, and mass spectrometry (MS) analysis confirmed that no free cysteine-containing starting Nb remained after purification. [68Ga]Ga-NOTA-mal-(hPD-L1) Nbs proved to be stable in injection buffer and in human serum up to 3 h of incubation. In vitro cell binding studies confirmed the functionality and specificity of [68Ga]Ga-NOTA-mal-(hPD-L1) Nbs. Stability during storage of the NOTA-mal-(hPD-L1) Nb at −20 °C in 0.1 M NH4OAc pH 7 was tested over two months. 68Ga-labeling remained equally efficient over time. Stability of the labeled compound in injection buffer, produced with the two-month-old NOTA-mal-(hPD-L1) Nb, showed that this radiolabeled Nb also remained stable. These results are promising for clinical practice, since the NOTA-conjugated Nb may be stored for long periods before being radiolabeled and used, although later time points remain to be tested. In vivo biodistribution showed that kidney retention was higher than for the randomly coupled, 68Ga-labeled Nb analogue, as well as for the sortase-A mediated, site-specifically NOTA-coupled hPD-L1 Nb (27.9 ± 5.1% IA/g vs. 13.8 ± 2.7% IA/g vs. 8.2 ± 1.9% IA/g). The Cys-tagged NOTA-mal-(hPD-L1) Nb contains a hexahistidine tag and a rigid linker of 14 amino acids (AAs), necessary to minimize disturbance during the bacterial expression used to produce the Nb [14,23]. This extra tag results in an increase in overall charge, leading to higher kidney retention compared with the random NOTA-Nb, which contains only a His6-tag, or the sortase-A mediated NOTA-coupled Nb, for which the sortag-His6-tag is cleaved during coupling [13,24]. The cysteine-maleimide linkage is known to be unstable in vivo due to the competition of thiol-containing proteins and irreversible hydrolysis [25]. These effects are often observed a day to several weeks after injection [26]. Therefore, this issue is not of concern when using a 68Ga-labeled NOTA-mal-Nb, since imaging is possible as early as 1 h p.i. In addition, the biodistribution profile at 80 min p.i.
did not show any unexpected organ uptake or blood retention, confirming that no radio-metabolite reactive with thiol-containing proteins was formed during this timeframe. The tumor uptake of [68Ga]Ga-NOTA-mal-(hPD-L1) Nbs was significantly higher in the mice with hPD-L1 POS tumors than with hPD-L1 NEG tumors (1.86 ± 0.67% IA/g vs. 0.42 ± 0.04% IA/g, respectively). The hPD-L1 expression in the dissected tumors was confirmed by FC. In the same tumor model, the tumor uptake of the currently reported [68Ga]Ga-NOTA-mal-(hPD-L1) Nb was similar to that of the previously reported randomly and sortase-A mediated NOTA-coupled 68Ga-labeled Nb analogues (1.86 ± 0.67% IA/g, 1.89 ± 0.40% IA/g and 1.77 ± 0.28% IA/g, respectively, non-significant (NS)) [13]. Tumor-to-blood (T/B) and tumor-to-muscle (T/M) ratios were also similar to those of the two previously reported analogues (T/B: 5.17 ± 1.82% vs. 5.37 ± 1.49% vs. 6.28 ± 2.95%, respectively, NS; T/M: 27.70 ± 12.59% vs. 28.00 ± 10.62% vs. 34.53 ± 13.24%, respectively, NS) [13]. These results support that this site-specifically 68Ga-labeled hPD-L1 Nb analogue is as efficient as the randomly and sortase-A mediated NOTA-Nb analogues. The cysteine-maleimide coupling reaction is as straightforward as the random coupling, with the advantage of yielding a homogeneous, site-specifically coupled NOTA-Nb. In vivo, the [68Ga]Ga-NOTA-mal-(hPD-L1) Nb demonstrated a behavior similar to the two other analogues. One known main disadvantage is the loss in production efficiency compared with productions of Nbs containing only a His6-tag, as used in random labeling, although the efficiency is comparable with that of the sortag-His6-tag-bearing Nb. Before clinical translation, the full His6-linker-Cys-tag should be optimized. This should be done, on the one hand, to reduce kidney retention (by removing the His6-tag or modifying the AAs in the linker) and, on the other hand, to improve the production yield by optimizing the linker (length and AAs). Finally, the process may be optimized for GMP production in yeast or E. coli.

Purification of the hPD-L1 Nb from Periplasmic Extract
The periplasmic extracts (PE) containing the (hPD-L1)-cysteine-tagged nanobody (Cys-tagged-hPD-L1 Nb) were produced and provided in collaboration with Cellular and Molecular Immunology (CMIM), Vrije Universiteit Brussel, Belgium. For IMAC, 250 µL of nickel NTA-resin beads (Thermo Fisher Scientific, Merelbeke, Belgium) per 50 mL of periplasmic extract was added, and the mixture was shaken for 1 h. The mixture was centrifuged at 1400 rpm for 5 min and the supernatant was replaced by PBS. The PE was shaken again for 1 h, centrifuged, and the supernatant was discarded.

Chromatographic Analysis
SEC columns were purchased from GE Healthcare (Diegem, Belgium). SEC purification of the Nb isolated from the PE was performed on a HiLoad 16/600 Superdex 30 pg column using metal-free phosphate-buffered saline (1× PBS: 2.68 mM KCl, 137 mM NaCl, 1.47 mM KH2PO4, 8.1 mM Na2HPO4) at a flow rate of 1 mL/min. The SEC purification and QC analyses of the site-specifically functionalized Nb were performed on a Superdex 75 Increase 10/300 GL column using 0.1 M NH4OAc pH 7, at a flow rate of 0.8 mL/min. For QC, RCP was also assayed with binderless glass microfiber paper impregnated with silica gel (iTLC-SG) (Agilent Technologies, Diegem, Belgium) using 0.1 M sodium citrate buffer pH 4.5-5 as eluent. Serum samples were analyzed by SEC on a Superdex 5/150 GL using 2× PBS at a flow rate of 0.3 mL/min.
Production and Purification of the Cysteine-Tagged hPD-L1 Nb
The (hPD-L1)-Cys-tag Nb was produced in E. coli, as previously described for other Cys-tagged Nbs [14,27,28]. QC of the end product was performed through SEC, SDS-PAGE and ESI-Q-ToF-MS. SPR was performed to assess the affinity, and the thermostability of the Nb was determined following procedures described in the Supplementary Materials.

Stability Studies
The stability of the 68Ga-labeled Nb (15-50 MBq, after filtration) was tested over 4 h at RT and in human serum at 37 °C. RCP was assayed by iTLC and SEC. The samples were further diluted in 0.1 M sodium citrate buffer, 0.1% Tween. The samples containing serum were filtered through a 0.22 µm filter before analysis.

Animal Models and Cell Lines
Dr. S.L. Topalian (National Cancer Institute, Baltimore, MD 21231, USA) provided the melanoma cell line HLA-A*0201+ 624-MEL. The 624-MEL cells were stably transduced to express hPD-L1, as previously described [12], and cultured in RPMI1640 medium supplemented with 10% Fetal Clone I serum (Thermo Scientific, Merelbeke, Belgium), 2 mM L-glutamine, 100 U/mL penicillin, 100 µg/mL streptomycin, 1 mM sodium pyruvate and nonessential amino acids. Female, five- to six-week-old athymic nude Crl:NU(NCr)-Foxn1nu mice were purchased from Charles River (Saint-Germain-sur-l'Arbresle, France). All experiments were performed in accordance with the European guidelines for animal experimentation under the license LA1230272. Experiments were approved by the Ethical Committee for the use of laboratory animals of the Vrije Universiteit Brussel (17-272-6). Intravenous injections were performed in the tail vein. Animals were anesthetized with 2.5% isoflurane in oxygen (Abbott Laboratories, North Chicago, IL 60064, USA) for injections, samplings, imaging and euthanasia.

Cell Binding Study
The binding capacity of the radiolabeled Nb was tested on hPD-L1 POS 624-MEL cells. Then, 5 × 10⁴ cells in 1 mL of medium per well were allowed to attach in a 24-well plate at 37 °C two days prior to the experiment. The plate was cooled to 4 °C 1 h prior to the experiment. The supernatant was removed, and the cells were incubated for 1 h at 4 °C with 500 µL of a 3 nM or a 15 nM radiolabeled Nb solution in unsupplemented medium (n = 3 wells per condition). Unbound fractions were collected, and wells were washed two times with ice-cold PBS. Lysis of the cells was performed twice with 0.75 mL of 1 M NaOH at RT for 5 min. All fractions were collected and transferred to counting tubes to be measured in the γ-counter (Cobra Inspector 5003, Canberra-Packard, Schwadorf, Austria). Specificity was assayed on hPD-L1 NEG 624-MEL cells, and on hPD-L1 POS cells in the presence of a 100-fold molar excess of unlabeled competitor (unmodified Nb), following the same procedures. The percentage of bound activity was calculated as the measured cell-associated activity in the bound fractions divided by the activity of the added solution × 100.

Biodistribution and Tumor Targeting Studies
Female athymic nude mice (n = 6/group; the experiment was repeated for the hPD-L1 POS group) were injected subcutaneously in the right leg with 4.2 × 10⁶ hPD-L1 POS 624-MEL or hPD-L1 NEG 624-MEL cells. Tumor volume was measured twice weekly using an electronic calliper. The tumor volume was calculated using the following formula: (length × width²)/2.
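The two explicit formulas in this section (the calliper-based tumor volume and the percentage of bound activity in the cell binding assay) are simple arithmetic; the short Python sketch below only illustrates them, and the example values are hypothetical.

# Minimal sketch of the two formulas stated in the text; example inputs are hypothetical.

def tumor_volume_mm3(length_mm: float, width_mm: float) -> float:
    """Tumor volume from calliper measurements: (length x width^2) / 2."""
    return (length_mm * width_mm ** 2) / 2.0

def percent_bound_activity(cell_associated_counts: float, added_counts: float) -> float:
    """Cell-associated activity divided by the activity of the added solution, x 100."""
    return cell_associated_counts / added_counts * 100.0

if __name__ == "__main__":
    print(f"{tumor_volume_mm3(8.0, 6.0):.1f} mm^3")          # e.g. an 8 mm x 6 mm tumor -> 144 mm^3
    print(f"{percent_bound_activity(3_900, 100_000):.2f} %")  # e.g. ~3.9 % of added activity bound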
Over about 30 days, tumors were allowed to reach a size of 296 ± 224 mm³ for hPD-L1 NEG tumors and a size of 129 ± 79 mm³ for hPD-L1 POS tumors. Xenografted mice bearing hPD-L1 POS or hPD-L1 NEG tumors were injected intravenously with 6 µg of [68Ga]Ga-NOTA-mal-Nb (15.2 ± 1.5 MBq, 39.6 GBq/µmol or 15.4 ± 0.6 MBq, 39.9 GBq/µmol, respectively). Apparent molar specific activities are reported for the time of injection. The biodistribution was evaluated at 80 min p.i. After euthanasia, the main organs and tissues were isolated, weighed and counted against a standard of known activity using a γ-counter. The amount of radioactivity in organs and tissues was expressed as a percentage of the injected activity per gram (% IA/g), corrected for decay. A single-cell suspension from the tumors was prepared and FC analysis was performed to characterize hPD-L1 expression (procedure in the Supplementary Materials).

Statistical Analyses
Results are expressed as mean ± SD. A non-parametric Mann-Whitney U test was carried out to compare data sets. Sample sizes and numbers of repetitions of experiments are indicated in the figure legends or in the Materials and Methods section. The number of asterisks in the figures indicates the statistical significance as follows: *, p < 0.05; **, p < 0.01; ***, p < 0.001; ****, p < 0.0001; non-significant (NS).

Conclusions
In this study, we have confirmed that our lead Nb targeting human PD-L1 could be efficiently coupled to mal-NOTA via the cysteine-maleimide strategy, yielding a homogeneous product for site-specific incorporation of the 68Ga radionuclide. The 68Ga-labeling of the NOTA-mal-conjugated Nb was efficient at 50 °C. The 68Ga-labeled Nb was stable and specific in vitro, and could specifically target hPD-L1 preclinically in vivo. Taken together, the 68Ga-labeled Nb obtained via the mal-Cys chemistry is a promising PET imaging agent for future clinical assessment of PD-L1 expression.

Supplementary Materials: Figure S1: Size exclusion chromatography analysis (UV profile) of the purified NOTA-mal-(hPD-L1) Nb, Figure S2: SDS-PAGE result, Figure S3: Mass determination analysis of the NOTA-mal-Nb, Figure S4: In vitro stability study of [68Ga]Ga-NOTA-mal-(hPD-L1) Nb, Figure S5: Stability of NOTA-mal-(hPD-L1) Nb after two months storage at −30 °C in 0.1 M NH4OAc, Figure S6: Ex vivo assessment of hPD-L1 expression by flow cytometry, Table S1.

Data Availability Statement: The data presented in this study are available on request from the corresponding author.
Self-Regulated Strategy Instruction for Developing Speaking Proficiency and Reducing Speaking Anxiety of Egyptian University Students The aim of this study was to investigate the effect of teaching some self-regulated strategies to Egyptian university students on improving their speaking proficiency and reducing their speaking anxiety. The design of the study was a one group pre-posttest quasi experimental design. Forty 3year EFL university students were selected to form the experimental group of the study. This experimental group was tested using the pre-post speaking test and speaking anxiety scale before and after being exposed to the self-regulated strategy treatment. The experiment lasted for three months during the first term of 2015-2016 academic year. Pairedsamples t-test revealed significant differences between the pre-test and posttest of speaking proficiency as well as speaking anxiety in favor of the posttests. Additionally, a negative correlation was shown between speaking proficiency and speaking anxiety. It was concluded that teaching self-regulated strategies to university students was effective in developing their speaking proficiency and reducing their anxiety considerably. Background of the Study In foreign language settings, speaking is considered one of the most important skills among the four language skills.Shabani (2013) as well as Bailey and Savage (1994) agree that speaking has always been the most demanding skill compared to the other skills such as listening, reading and writing.It is an important skill to everyday life.According to Tanveer (2007), a critical challenge of most FL learners in language classes is speaking a foreign language.What makes speaking more challenging than other skills is that speaker needs to have a quick access to all the relevant knowledge required to produce the appropriate language in short time, whereas in other skills the learner may have enough time to match the input with the existing knowledge (Shabani, 2013).Additionally, in the past the development of learners' speaking abilities is often neglected.It was thought that students would learn speaking while learning to write, read and listen.However, this assumption did not seem to produce the desired results of learning to speak a foreign language. Recently, speaking anxiety is widely recognized as one of the most frequently observed problems in speaking classes (Humphries, 2011;MacIntyre, 1999;MacIntyre & Gardner, 1994;E. Horwitz, M. Horwitz, & Cope, 1986).MacIntyre and Gardner (1994) reveal that students with speaking anxiety have difficulty in expressing their own views and underestimate their abilities.Therefore, EFL students always cite speaking as their "most anxiety-producing experience" (Young 1990, p. 539), and "difficulty in speaking in class" as their most common worry (Horwitz et al., 1986, p. 126).Even university students are found to have problems in FL speaking (Abdullah and Abdul Rahman, 2010).In most universities, the oral part of speaking classes is taught mostly in reading and reciting activities.Therefore, investigating speaking anxiety and trying to reduce it is an important area of research. 
In the Egyptian universities, the problem seems more serious.The researcher as a lecturer of TEFL notices that EFL university students in Suez Faculty of Education, after years of studying English either in primary, preparatory or secondary as well as university settings, are neither fluent nor confident English speakers.Surveying a number of studies that investigated speaking skills at the university level in the Egyptian context (e.g., Salem, 2014;Diyyab, Abdel-Haq, & Aly, 2013;Hussein, 2001;Khater, 1997), the researcher revealed that EFL university students encounter different problems in their speaking skills.To ensure her observations, the researcher conducted a pilot study on thirty 3 rd year English majors at Suez Faculty of Education (out of the sample of the study).The results of the pilot study revealed that the majority of students (78%) encountered difficulties in speaking and had a high degree of speaking anxiety. Statement of the Problem The problem of the present study was stated as follows: There was a low level of speaking proficiency among EFL university students at Suez Faculty of Education and they experienced high levels of foreign language anxiety while speaking.In an attempt to find a solution for this problem, the present study would train them to use some self-regulated strategies to develop their speaking proficiency and lower their anxiety. Hypotheses of the Study The present study included three hypotheses as follows: a) There would be a statistically significant difference in the mean scores of the experimental group exposed to the self-regulated strategy training, on the pre-post test of speaking proficiency.b) There would be a statistically significant difference in the mean scores of the experimental group exposed to the self-regulated strategy training, on the pre-post test of speaking anxiety.c) There would be a statistically significant correlation between the mean gain scores of the experimental group on the posttest of speaking proficiency and the posttest of speaking anxiety. Significance of the Study The significance of the present study lies in the following points: a) It would add to the growing body of research on the effect of self regulated strategy instruction on developing various language skills. b) The findings of this study can be helpful for both EFL teachers and learners in terms of the application of self-regulated strategies in classrooms in order to reduce anxiety in speaking, since foreign language anxiety has negative impact, not only on different aspects of language performance, but also on students' attitudes and perceptions of language learning in general. c) The findings of this study will hopefully help language teachers in making the classroom environment less stressful.d) Teaching self-regulation to pre-service teachers will enable them to transfer their knowledge of those strategies to their students. Self-Regulated Strategies Self regulated strategies were operationally defined as a number of strategies the speakers use to control their speaking and reduce their speaking anxiety.They are specific strategies speakers apply to better perform orally in speaking classes such as: elaboration strategies, rehearsal strategies, planning strategies, monitoring strategies, evaluation strategies, reflection strategies, effort management strategies, help seeking strategies, goal orientation, and self-talk strategies. 
Speaking Proficiency In the present study, speaking proficiency was operationally defined as the progress participants achieved in their speaking fluency and accuracy as manifested by the participants' scores on the pre-post test of speaking proficiency. Speaking Anxiety In the present study speaking anxiety is operationally defined as participants' feelings of fear and apprehension of using the language orally as manifested in participants' scores on speaking anxiety scale developed by the researcher.Such feelings recur every time the participants attempt to use the foreign language in speaking. Review of Related Literature There is a consensus that self-regulation is neither a specific personality trait that students either do or do not possess, nor is it a mental ability or particular academic performance skill.Instead, it is a selective use of strategies by which learners transform their mental processes into academic skills adapted to individual learning tasks (Zimmerman, 2002).This process of self-regulation motivates students to plan, monitor, and assess their learning independently (Zumbrunn, Tadlock, & Roberts, 2011).Therefore, the regulation of learning is considered one of the fundamental pillars of pedagogy, and one whose importance has increasingly been appreciated during the current century (Priego, Munoz, & Ciesielkiewicz, 2015).Also, Costa Ferreira, Veiga Simão and Da Silva (2015) assure that regulation of learning is a fundamental requirement for the successful attainment of skills in academic contexts and moreover, in life-long learning. Several researchers and practitioners state that students should learn to regulate their own learning for many reasons.For example, self-regulation has a positive influence on the learning outcomes (Pintrich, 2000) as it helps students to apply better learning habits and improve their study skills (Wolters, 2011), use learning strategies to enhance academic outcomes (Harris, Friedlander, Sadler, Frizzelle, & Graham, 2005), monitor their performance (Harris et al., 2005), and evaluate their academic progress (De Bruin, Thiede & Camp, 2011).Consequently, self-regulation turns learners into independent ones. Considering speaking skill, teaching self-regulation strategies and practicing them in class can create opportunities that help students manage and monitor their speaking (Priego et al., 2015).Mahjoob (2015) argues that students should be trained to use specific strategies to be able to self-regulate their speaking.To the researcher's view, training in self-regulation will increase students' understanding of their own capabilities and make learning to speak more enjoyable and fruitful.Therefore, it can be said that if a learner is a self-regulated one, he may use specific strategies and also a certain number of them, while speaking to control his speech and reduce his anxiety.For example, positive self-talk strategies, making meaning and joy out of the speaking task itself, managing stressors are sometimes used by learners to control negative affect and anxiety while speaking (e.g., "don't worry about criticism," "don't think about peers' reaction," "move on to defend your view", "you are doing well"). 
1.6.1 Theoretical Foundations of Self-Regulated Learning Bandura's social cognitive theory presents the bases for self-regulated learning (Bembenutty, White, & Velez, 2015).This theory puts a model in which personal, environmental and behavioral factors play a central role in the understanding of human behavior.Accordingly, students are viewed not merely as reactive organisms acting on instinct and impulse, but as self-organizing, self-reflecting beings affected by the social conditions and cognitive processes they experience.Thus, this theory formed the basis for Zimmerman's enduring definition of self regulated learning (Salter, 2012). The social cognitive theory is founded upon four core properties of human agency: Intentionality, forethought, self-reactiveness, and self-reflectiveness.Consistent with Bandura's four core properties of human agency, self-regulated learners are those who independently activate cognition, affect, and behavior in order to pursue goals and reflect on outcomes (Bembenutty et al., 2015).In addition, they exercise control over their learning experiences with their competencies, self-beliefs, and outcome expectancies.Zimmerman (2000) depended on those four properties in deciding the three phases of the cyclical model of self-regulation that will be explained afterwards.Consequently, Zimmerman (2000) has successfully applied the concept of self-regulation to academic contexts.1.6.2Phases of Self Regulation Pintrich and Zusho (2002) as well as Zimmerman (2000) introduce three main phases of self-regulated learning cyclical model.Each phase includes sub-processes or strategies that the learners use during learning.The three phases of self-regulation are: a) The forethought phase (Planning phase): In this phase learners are proactive agents.They set goals, identify strategies to pursue those goals, and assess their self-efficacy beliefs and intrinsic interest on those tasks as well as their goal orientation. b) The performance monitoring phase: students use strategies to move forward on the learning task and monitor the effectiveness of using those strategies as well as their motivation towards continuing progress that leads to achieving the aims of the task.For example, learners engage in self-monitoring strategies and self-control of those goals, strategies, and motivation while seeking help from knowledgeable individuals and delay gratification when it is necessary for the sake of completing goals. c) The reflection on performance phase: Learners engage in self-evaluation of tasks completed, examine their level of self-satisfaction and adapt to their circumstances by determining whether tasks need to be repeated or whether the learner will move on to a new task if the previous one is considered at a satisfactory level.These self-reflections then influence students future planning and goals, initiating the cycle to begin again. 
Self-Regulated Strategies To promote self-regulated learning (SRL) in classrooms, teachers must teach students the self-regulated strategies that facilitate the learning process.Depending on a combination of commonly used taxonomies and classifications, Many researchers and theoreticians (e.g., Dignath, Büttner, & Langfeldt 2008;Mayer, 2008;Pressley, 2002;Boekaerts, 1997;Weinstein & Mayer, 1986) have introduced four main categories of selfregulated strategies: a) Cognitive strategies: they are categorized into repetition strategies, elaboration strategies, organizational strategies, and problem-solving strategies.Firstly, rehearsal or repetition strategies help the learners to store information in the memory by repeating the material.Elaboration strategies help create connection between new material and what is already known.As for organizational strategies, they help the learner to consolidate information in order to be processed and stored more efficiently.Finally, problem solving strategies help the learner to break a problem into smaller bits for easier solution to visualize the material to facilitate learning.b) Metacognitive strategies: They help the learners control, monitor, and regulate cognitive activities (Papaleontiou-Louca, 2003).Metacognitive strategies include: Planning a learning task, monitoring comprehension, and evaluating the progress towards the completion of a learning task.That kind of strategies are used in the three phases of self-regulated learning previously described by Zimmerman (2002).c) Management strategies: focus on the environment surrounding the learning process and how to create the optimal learning conditions.Those strategies may focus on the learner him/herself (e.g., effort management strategies that help learners persist in case of difficulties), on others (e.g., help-seeking strategies), or on the physical environment (e.g., using dictionaries and/or going to the library).d) Motivational strategies: That kind of strategies aims to enhance specific types of impetus.Examples of such strategies are the formulation of a learning objective, valuing the task, and the development of a positive feeling.As for the formulation of a learning objective, it enhances the goal orientation: the reason why one undertakes a task, which is either performance or mastery-orientation (Harackiewicz, Barron, Pintrich, Elliot & Thrash, 2002).Valuing the task enhances the task value beliefs: the degree to which the task is considered as relevant, important and worthwhile (Wigfield & Eccles, 2002).The development of a positive attitude towards task enhances the student's self-efficacy: That is the student's belief in his or her ability to successfully complete the task (Pintrich, 2003). 
Self-Regulated Strategies and the Process of Speaking From a psycholinguistic point of view, the process of speaking is analyzed through an information processing model of speech production, which was developed by Levelt (1989).According to that model speaking is seen as a productive and interactive skill in which the speaker is actively involved in communication (Carter & Nunan, 2002).Based on Level's model of speech production, Carter and Nunan (2002) introduce four main stages for speaking: Conceptualization, formulation, articulation, and self-monitoring.Conceptualization refers to a sort of pre-speaking stage in which the speaker plans what to say.This involves connecting background knowledge to the topic and the conditions in which the speech is made.This step is closely related to forethought phase of self-regulation where learners use planning, elaboration, and repetition strategies.In the formulation step of speaking, appropriate linguistic forms (words and phrases) are found and they are matched with the correct grammatical markers (affixes, articles, auxiliaries).In the stage of articulation, the speaker articulates every word by means of articulatory organs.Those two steps are closely connected to the second phase of self-regulation namely: Performance phase where learners engage in self-monitoring and self control strategies.In the last stage, the speaker checks the speech and correct mistakes by self-evaluation which is similar to the self-reflection phase of self-regulated learning.Consequently, there is a close relationship between the phases of self-regulated strategies and the stages of the speaking process. Hence, the process of speaking should be completed in a very short time, the previously mentioned stages require automaticity on the part of the speaker and each stage must be accomplished in a limited time (Carter & Nunan, 2002).Therefore, using self-regulated strategies may help along the different stages of speech production and consequently improve the speaking proficiency.Aregu (2013) found that self-regulated learning has had a significant effect on students' speaking performance.The results of the study show that students in the experimental group achieved significant improvement in their speaking efficacy.And such improvement seems to have resulted from the self-regulated learning intervention.Consequently, it seems that the knowledge and use of self-regulated learning strategies helped the experimental group students succeed in spoken communication and develop their speaking.On the contrary, students in the control group made no significant change in speaking.On the contratry, Mahjoob (2015) found a weak relationship between self-regulation and Iranian EFL speaking proficiency.Sixty advanced female and male students studying in the adult section of the ILI, Shiraz, Iran participated in the study.Regarding the result of t-test for speaking, high achievers are found as self-regulated as the low achievers in speaking a foreign language.There are just minor differences between the students from the two groups in the order they used the self-regulation strategies.So, it can be said that there existed a relationship, although weak, between students' self-regulation and their speaking proficiency. 
1.6.4Self-Regulated Strategies and Speaking Anxiety Reviewing the literature concerning self-regulated strategies and foreign language speaking anxiety, it is noticed that there exists a certain relationship in common between those two variables.Marwan (2007) revealed that there are four main strategies (i.e.preparation, relaxation, positive thinking, and peer seeking) learners use in order to reduce foreign language anxiety.In 2009, Noormohamadi found that learners with high levels of foreign language anxiety mostly use metacognitive and memory strategies while learners with low levels of anxiety mostly employ metacognitive and social strategies.Moreover, Liu (2013) pointed out that metacognitive strategies are among the most frequently used strategies by the learners with low level of anxiety.Furthermore, Liu and Chen (2014) found that social strategies relate strongly with language anxiety, while cognitive and metacognitive strategies follow.Additionally, Martirossian and Hartoonian (2015) investigated the relationship between foreign language classroom anxiety (FLCA) and self-regulated strategies.The findings revealed that there is a negative relationship between these two variables.Building on the above literature review and empirical studies the researcher assumes that teaching self-regulated strategies to EFL university students may reduce their speaking anxiety. Speaking Proficiency and Speaking Anxiety Speaking anxiety is one of the most frequently observed problems in relation to the affective domains in language learning process.This obstacle is mostly seen in speaking classes (Humphries, 2011;MacIntyre, 1999), where students need to process linguistic inputs and produce their thoughts at the same time (Harmer, 2004).It is such a complex issue that researchers have been unable to agree on a concise definition (Zhanibek, 2001). Britannica defined speaking anxiety as an abnormal and overwhelming sense of apprehension and fear often marked by physiological signs (as sweating, tension, and increased pulse), by doubt concerning the reality and nature of the threat, and by self-doubt about one's capacity to cope with it.Pertaub, Slater, and Carter (2001) postulate that anxiety usually comes out when the speakers have a fear of being judged or humiliated by the other people.Although people are aware that this nervousness is irrational, they cannot help feeling the anxiety, which can result in depression, distress, and frustration (Pertaub et al., 2001).Horwitz et al. (1986) put forward that such an anxiety easily emerges in foreign language speaking process and might multiply when communicating with a native speaker of that language. Speaking anxiety can simply be defined by Wilson (2006) as the feeling of fear occurring when using the language orally.In the same vein of thought, Balemir (2009) states that foreign language anxiety as a feeling of inhibition in using the foreign language.The definitions of anxiety that have been proposed by several scholars have some common characteristics: the state of apprehension, fear, tension and feelings of uneasiness (Brown, 1994).Hence, speaking anxiety is associated with negative feelings such as uneasiness, frustration, self doubt, apprehension and worry.Speaking anxiety, in the present study, is a situation specific anxiety.It is that kind of anxiety that occurs every time the learner attempts to use the language. 
Since the ultimate aim of the speaker is to convey meaning successfully, it can be said that the demanding nature of speaking can be a source of anxiety.Several research studies have found a mutual relationship between speaking proficiency and speaking anxiety.In other words, each variable affects the other positively or negatively.In his study, Price (1991) found that speaking is a very anxiety provoking activity for the foreign language learners because they were ancious about making mistakes in their pronunciation and thus being laughed at.Koch and Terrell (1991) reached similar findings about students' speaking anxiety.Dalkılıç (2001) examined the correlational relationship between students' foreign language anxiety levels and their achievement in speaking courses.The sample included 126 Turkish freshman EFL learners and used both qualitative and quantitative data.The results of the study revealed that there was a significant relationship between students' anxiety levels and their achievement in speaking classes.Additionally, Ay (2010) found that students showed anxiety in an advanced level in productive skills.Moreover, Balemir (2009) investigated the relationship between proficiency level and degree of foreign language speaking anxiety in a Turkish EFL context.The study revealed that Turkish EFL university students experience a moderate level of speaking anxiety in speaking classes.Furthermore, Saltan (2003) investigated EFL speaking anxiety from students' and teachers' perspectives.The results of her study indicated that although students experience a certain degree of EFL speaking anxiety, its intensity is not that high. In view of above, it can be said that there exists a relationship between speaking anxiety and speaking proficiency.But the kind of this relationship (negative or positive) is not settled.Thus the present study aimed to find the correlation between speaking proficiency and speaking anxiety. Design The present study is a one group pre-posttest quasi-experimental study.The researcher used one experimental group.The experiment lasted for 3 months during the first term of 2015-2016 academic year. Participants The participants of this study were forty 3 rd year English majors from faculty of education, Suez University, Egypt.Those students were assigned to only one experimental group.All participants spent at least 14 years learning EFL. 
Instruments IELTS Speaking Test as a standard test of speaking was used in order to test the participants' speaking proficiency.The test was scored using IELTS speaking coding system.Accordingly, there are five grading criteria: a) fluency that assesses speech continuity, that is if learners are able to speak at normal rates, without having to stop or hesitate to find words or grammar, b) coherence which is the learners' ability to link sentences together in a logical sequence using appropriate cohesive devices, c) pronunciation that assesses learner's ability to produce understandable speech, d) lexicon which refers to the range and precision of the vocabulary learner use, e) accuracy refers to the accurate and appropriate use of grammatical structures.To reduce subjectivity in marking, two raters who are experienced in teaching speaking marked the oral responses of the test.The markers were, of course, oriented in advance to help them effectively apply the IELTS Speaking band descriptors (public version).Each grading criterion is assessed on a 9-point scale (9, 8, 7, etc with appropriate descriptions).Correlational analysis was run to find the relationship between the scores of the first rater and those of the second rater.The obtained value of the correlational analysis (0.83) indicated that there existed a significant correlation between the two raters' scores.The inter rater reliability was 0.89 indicating a high level of reliability. After a detailed review of literature, a speaking anxiety scale was adapted, by the researcher, from various instruments used to assess foreign language anxiety scale, student motivation, cognitive strategy use, and metacognition (e.g., Eccles, 1983;Harter, 1981;E. Horwitz, M. Horwitz, & Cope, 1986;Weinstein, Schulte, & Palmer, 1987).Factor analysis was used to guide scale construction.Twenty items were chosen because they were directly related to speaking anxiety.The Cronbach's Alpha for these items was found as .89suggesting internal reliability for the adapted scale. Since the questionnaire is a 5-graded Likert scale (strongly agree, agree, neutral, disagree, strongly disagree), the total score ranged from 20 to 100.First, total scores for each student's speaking anxiety were calculated.A total score more than 75 showed a high level of speaking anxiety; from 50 to 75 presented a moderate level of speaking anxiety, and participants whose score was less than 30 presented a low level of speaking anxiety. Materials of the Study The topics of "ELT Methodology Course" were used as the main material of the speaking class.This course is taught to the participants by the researcher in the form of speaking classes handling the topics of the course. Pretesting Before being exposed to self-regulated strategy intervention, all participants were pre-tested on speaking proficiency as well as speaking anxiety using the speaking pre-test and the speaking anxiety scale, respectively. Intervention After pre-testing the participants in speaking as well as speaking anxiety, they were exposed to self-regulated strategy intervention.Each lesson in the intervention is divided into three main stages: Preparation, Performance, reflection.Each stage is divided into two sub-stages as follows: 1) Preparation Stage (Pre-speaking stage) This stage is divided into two sub-stages: A. 
The explanation stage: During this stage, the instructor explicitly teaches and models some self-regulated strategies that help learners prepare the topic for speaking (e.g., preparation strategies, rehearsal strategies, elaboration strategies, repetition strategies, help-seeking strategies, organizational strategies). B. The application stage: The teacher assigns the speaking topic to let the students apply the self-regulated strategies they have just taught.They set goals for their topic, generate ideas, organize the ideas, elaborate the ideas, use dictionaries, ask the peers, ask the teacher….etc. This stage is divided into two sub-stages: A. The explanation stage: Here, the researcher explains some self-regulated strategies that help students organize and monitor their performance while speaking.Those strategies are such as self-monitoring, self-control, problem-solving and management strategies.She models to them how to control their speech and monitor it while speaking.She also models how to overcome problems affecting them during speaking. B. The application stage: She divides the class into small groups of five to speak freely and held discussions about the topic exploiting the strategies they have just learned.During this stage the instructor goes round the groups to guide them and provide help if needed. 3) Reflection stage (Post speaking stage).It is also divided into two sub-stages: A. Explanation stage: During this stage the teacher models to students the self-regulated strategies that help them self-evaluate and reflect on their speaking experience.Also, the instructor explains the strategies that help learners develop their positive attitudes towards speaking and reduce their speaking anxiety. B. Application stage: In small groups the students start reflect on each other speaking as well as their own speaking using reflective journals and evaluation checklists made by the researcher.The researcher goes round the class and provides help if needed. Posttesting Having taught all the instructional sessions, speaking proficiency and speaking anxiety posttests were administered to the participants. Results and Discussion The paired samples t-test was used to investigate the first hypothesis of the study which stated that "There would be a statistically significant difference in the mean scores of the experimental group exposed to the self-regulated strategy intervention, on the pre/post test of speaking proficiency."The result of the paired samples t-test is shown in the following As shown in Table 1, the difference between the mean scores of the pre-post test of speaking was significant (t=29.668,p≤0.05).Additionally, using Cohen's (1988) formula, effect size for this difference was 1.920.This effect size is large according to Feldt (as cited in Hinkle, Wiersma, & Jurs, 1994, p. 316).Therefore, it was concluded that the self-regulated strategy instruction significantly improved the speaking proficiency of participants.In light of this statistical result, the first hypothesis was accepted. 
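Table 1 reports t = 29.668 with an effect size of 1.920. As a minimal, hedged illustration of that analysis, the Python sketch below runs a paired-samples t-test and one common variant of Cohen's d (mean difference divided by the SD of the differences); the score vectors are hypothetical stand-ins, and the exact d formulation used by the study may differ.

# Illustrative sketch of a paired-samples t-test and an effect-size estimate.
# The score arrays are invented placeholders, not the study's data.
import numpy as np
from scipy.stats import ttest_rel

pre_scores  = np.array([18, 20, 17, 22, 19, 21, 16, 23])   # hypothetical pre-test speaking scores
post_scores = np.array([27, 30, 26, 33, 29, 31, 25, 34])   # hypothetical post-test speaking scores

t_stat, p_value = ttest_rel(post_scores, pre_scores)
diff = post_scores - pre_scores
cohens_d = diff.mean() / diff.std(ddof=1)  # one common paired-design variant of Cohen's d

print(f"paired t = {t_stat:.3f}, p = {p_value:.4f}, d = {cohens_d:.3f}")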
One of the possible explanations for the observed result is that the teaching of the self-regulated strategies raised the conscious awareness of the strategies taught and that awareness might have led to proficiency in using them during the speaking process.In turn, that proficiency in using strategies while speaking might have improved the proficiency of speaking performance.As stated by different scholars (e.g.Zimmerman & Martinez-Pons, 2004), self-regulated learning strategies play a great role in improving one's performance.A second possible explanation is that self-regulated strategy instruction might have raised participants' motivation and interest in speaking. Teaching students how to self-monitor and self-evaluate their speaking performance might have been a third possible explanation for that result.Self-monitoring and self-assessment might have helped participants to identify the weaknesses in their speaking.Identifying their weaknesses was an important step towards the improvement of their speaking proficiency.This finding found empirical support in the study of Aregu (2013). Paired-samples t-test was used to investigate the second hypothesis of the study which stated that "There would be a statistically significant difference in the mean scores of the experimental group exposed to the self-regulated strategy instruction, on the pre-post test of speaking anxiety."The findings of the paired-samples t-test was presented in the following table: As shown in Table 2, the paired samples t-test revealed that a statistically significant difference existed in the mean scores of the experimental group between the pre-posttest of speaking anxiety (t=15.28,p<0.05).Therefore, it was concluded that the self-regulated strategy instruction significantly affected the speaking anxiety of the participants.In other words, there is a negative relationship between strategy use and speaking anxiety, the frequent use of self-regulated strategies is related to less amount of speaking anxiety.Accordingly, the second hypothesis was accepted.A possible explanation of this finding may be attributed to using self regulated strategies.As students become able to self-regulate their speaking performance, this creates safe and supporting learning environments for students.Because self-regulated learning involves various strategies that learners use in order to manage their tasks, emotions and the like, it contributes directly or indirectly to the reduction of their speaking anxiety.For example, positive self-talk, managing stressors, making meaning and joy out of the speaking task itself, controlling negative emotions and so forth are viewed as very important elements of self-regulated strategies that reduce learner's anxiety.The findings corroborate previous studies that found there is a negative relationship between levels of language anxiety and strategy use (Shabani, 2015;Ghorban Mohammadi, Biria, Koosha, & Shahsavari, 2013;Noormohamadi, 2009;Woodrow, 2006). 
There would be a statistically significant correlation between the mean scores of the posttest of speaking proficiency and those of speaking anxiety was the third hypothesis of the study.To test this hypothesis, the Pearson Correlation Coefficient was used.It was used to measure whether any improvement in speaking leads to a reduction in speaking anxiety or not.In other words, the two variables (speaking proficiency and speaking anxiety) were correlated using Pearson's Coefficient of Correlation.The result of the correlation was shown in the following table.The correlation coefficient for the speaking proficiency posttest and speaking anxiety posttest was -0.704.This coefficient is significant at the 0.01 level.Thus, it was concluded that there is a significant correlation between improvements in speaking proficiency and reduction of speaking anxiety.In other words improvement in speaking proficiency led to reduction in speaking anxiety.This finding has empirical support in the study of Awan, Azher, Anwar, and Naz (2010) whose study show that language anxiety and achievement are negatively related to each other. Conclusion Within the delimitations of the study as well as the statistical findings, the researcher could conclude that: a) The self-regulated strategy instruction was effective on developing the speaking proficiency of EFL university students. b) The self-regulated strategy instruction was effective on reducing the speaking anxiety of EFL university students.c) Improvement of speaking proficiency led to reduction of speaking anxiety. Recommendations and Suggestions for Further Research In light of the findings of the study, the following recommendations have been formulated: a) Self-regulated strategies must be explicitly taught and intensively practiced by the students in speaking classes, b) Teachers should liberate themselves from the traditional ways of teaching and effectively apply the self-regulated strategy instruction in teaching speaking, c) University staff should be familiarized with new methods in reducing speaking anxiety.Moreover, the need for further studies in the following areas becomes apparent: a) The impact of using self regulated strategy instruction on the communication skills, b) Research is needed on the effect of self-regulation of learning on students' achievement, c) It seems worth doing further studies on the effects of self-regulated learning intervention on students' performances, attributions, apprehension, and the like in wider contexts, d) students' attitudes towards using self-regulated strategies in language classes. table : Table 1 . Paired samples t-test for the differences in the mean scores of the experimental group on the pre-post test of speaking proficiency Table 2 . Paired samples T-test for the differences in the mean scores of the experimental group between the pre/ post test of speaking anxiety Table 3 . Pearson correlation coefficient between the mean scores of speaking proficiency posttest and those of speaking anxiety
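The third hypothesis was tested with Pearson's coefficient of correlation between the two post-test score sets (the paper reports r = -0.704, significant at the 0.01 level). A minimal sketch of that computation follows; the score vectors are hypothetical stand-ins for the study's data.

# Illustrative sketch of the Pearson correlation between post-test speaking
# proficiency and post-test speaking anxiety; arrays are hypothetical.
import numpy as np
from scipy.stats import pearsonr

speaking_post = np.array([30, 27, 33, 25, 29, 35, 26, 31])  # hypothetical proficiency scores
anxiety_post  = np.array([55, 62, 48, 70, 58, 45, 66, 52])  # hypothetical anxiety scale totals

r, p_value = pearsonr(speaking_post, anxiety_post)
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")  # a negative r mirrors the reported relationship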
Comparative Study on Electronic Structure and Optical Properties of α -Fe 2 O 3 , Ag/ α -Fe 2 O 3 and S/ α -Fe 2 O 3 : The electronic structures and optical properties of pure, Ag-doped and S-doped α -Fe 2 O 3 were studied using density functional theory (DFT). The calculation results show that the structure of α -Fe 2 O 3 crystal changes after Ag and S doping, which leads to the different points of the high symmetry of Ag-doped and S-doped α -Fe 2 O 3 with that of pure α -Fe 2 O 3 in the energy band, as well as different Brillouin paths. In addition, the band gap of α -Fe 2 O 3 becomes smaller after Ag and S doping, and the optical absorption peak shifts slightly toward the short wavelength, with the increased peak strength of S/ α -Fe 2 O 3 and the decreased peak strength of Ag/ α -Fe 2 O 3 . However, the optical absorption in the visible range is enhanced after Ag and S doping compared with that of pure α -Fe 2 O 3 when the wavelength is greater than 380 nm, and the optical absorption of S-doped α -Fe 2 O 3 is stronger than that of Ag-doped α -Fe 2 O 3 . atom for pure α -Fe 2 O 3 is 4.16 µ B, which is the same as the result of Pozum [23] but lower than the experimental value (4.6 µ B) [25]. After Ag and S doping, the magnetic moments of different iron atoms are different; however, the values become small, and the magnetic moments of iron atoms for S/ α -Fe 2 O 3 are smaller than those for Ag/ α -Fe 2 O 3 , suggesting that the magnetic property of S/ α -Fe 2 O 3 is weaker than that of Ag/ α -Fe 2 O 3 . Introduction Nowadays, all human beings are facing a serious shortage of resources because of the overconsumption of limited natural resources, such as oil, coal, mineral resources and natural gas [1]. Thus, we should find some methods to resolve the resource crisis as soon as possible. As we know, solar energy is a resource that is reliable, clean and endless. However, there is a problem: how can we make full use of solar energy? Semiconductor photocatalysts can effectively improve the utilization ratio of solar energy. Since Fujishima et al. [2] found that semiconductors could decompose water into hydrogen as a photocatalyst, more and more scientists joined the research of the photocatalytic field. In past decades, people usually used TiO 2 or other common semiconductors to realize water splitting [3][4][5][6]. Researchers improved the sunlight utilization of TiO 2 photocatalysts by using various methods. However, research on TiO 2 tends to be saturated, and its utilization rate is not high enough, so it is necessary to find other effects of semiconductor photocatalysts. After a long period of research, Al-Kuhaili et al. [7] found that hematite α-Fe 2 O 3 had a 2.18 eV direct band gap, which theoretically allows the utilization of approximately 40% of the solar spectrum. However, the actual conversion efficiency of solar energy is not ideal due to poor performance, such as a very short excited-state lifetime [8][9][10], relatively poor absorptivity [11,12] and poor oxygen evolution reaction kinetics [13]. Therefore, improving the properties of α-Fe 2 O 3 becomes hard work. The principle of photocatalysis can be explained by the semiconductor energy band theory. The band structure of semiconductors is composed of a low-energy valence band filled with electrons and a high-energy conduction band filled with holes. 
When the radiation energy is higher than the band gap, the electrons in the valence band are stimulated to transfer into the conduction band, forming photogenerated electrons in the conduction band and photogenerated holes in the valence band [14]. These photogenerated electrons and holes will recombine in the semiconductor or on the surface of the semiconductor, becoming so-called electron-hole pairs. The holes in the valence band can be used for oxidants to react with H 2 O and OHto form -OH. Similarly, the electrons in the conduction band can be used for reductants to react with O 2 absorbed on the semiconductor surface to form O 2 -. The -OH and O 2 with strong oxidation can converse organics into inorganics, such as CO 2 and H 2 O. The biggest bottleneck for α-Fe 2 O 3 in the application of photocatalysis is the rapid recombination speed of photogenerated electron-hole pairs, which reduces the photocatalytic activity of α-Fe 2 O 3 . Nowadays, doping modification is commonly used to enhance the photocatalytic activity of α-Fe 2 O 3 . For example, Cong et al. [15] found that tantalum and aluminum codoped with α-Fe 2 O 3 greatly reduce the effect of anodic overpotential on water oxidation and charge transfer. In brief, tantalum and aluminum codoping is beneficial for α-Fe 2 O 3 to separate and transfer photogenerated electron-hole pairs. We know that the most typical feature of noble metal nanoparticles (such as Au, Ag, Pt and so on) is their strong absorption peaks in the visible light range [16]. Noble metal doping can also promote the transportation of carriers in photocatalysts because of the special electronic structure of noble metal nanoparticles, which includes a large number of free-conducting electrons that can respond to incident light. Ag-doping modification has considerable application potential in the future. Zhang et al. [17] proved that Ag could improve the photocatalytic performance of TiO 2 by doping modification, but there is a lack of research for Ag-doped α-Fe 2 O 3 at present, especially regarding its theoretical study. In addition, the study showed that nonmetal doping can enhance electron mobility and consequently photoelectric performance. For example, Zhang et al. [18] prepared P-doped α-Fe 2 O 3 and found that it has high photoelectrochemical performance due to electron mobility caused by P doping. Carraro et al. [19] synthesized F-doped α-Fe 2 O 3 nanomaterials by plasma-enhanced chemical vapor deposition and showed enhanced photocatalytic H 2 production. Although a lot of research work has been undertaken on the metal and nonmetal doping of α-Fe 2 O 3 , the mechanism of metal and nonmetal doping changing the optical properties of α-Fe 2 O 3 is still unclear. In this paper, the electronic structures and optical properties of Ag-and S-doped α-Fe 2 O 3 were studied by using density functional theory, including the band structure, density of states, Mulliken population, electron density and optical absorption property, with a special emphasis on the comparative study of Ag and S doping. This study will establish a certain theoretical foundation for developing a new doping modification system of hematite, with better properties and production applications in different fields. Computational Methods Calculations of bulk α-Fe 2 O 3 were carried out using first-principle methods based on density functional theory (DFT) in the CASTEP program module developed by Payne et al. [20], and plane-wave basis sets were also used in this paper [21]. 
To express the interactions between valence electrons and the ionic core, ultrasoft pseudopotential methods were used. The plane-wave cutoff energy of all calculations was set to 340 eV, which was tested as the most stable value. In order to choose the calculation parameters suitable for the α-Fe 2 O 3 system, the test data of two main parameters (correlation function and k-point) for bulk α-Fe 2 O 3 are shown in Tables 1 and 2. By analyzing the data of Tables 1 and 2 (system energies and lattice parameters), the generalized gradient approximation (GGA) developed by Perdew and Wang (PW91) [22] was selected as the exchange-correlation functional, and a Monkhorst-Pack k-point sampling density of 3 × 3 × 1 was used in all calculations. The system energy is the lowest under these settings, and the calculated lattice parameters are a = b = 5.088 Å and c = 14.138 Å, which are very close to the experimental results (a = b = 5.036 Å, c = 13.747 Å), showing that the calculation is reliable. This study used the BFGS (Broyden C.G., Fletcher R., Goldfarb D. and Shanno D.F.) algorithm to perform the geometry optimization calculations. In the BFGS algorithm, the maximum energy change was set to 2.0 × 10 −5 eV·atom −1 , the maximum force was set to 0.05 eV·Å −1 , the maximum stress was set to 0.1 GPa, the maximum displacement was set to 0.002 Å and the self-consistent field tolerance was set to 2.0 × 10 −6 eV·atom −1 . These were the convergence tolerance settings for the geometry optimization calculations, as shown in Table 3. In addition, spin-polarization calculations were conducted to consider the magnetic moments of the individual Fe atoms. Table 3. Convergence tolerances for geometry optimization calculations: maximum energy change, 2.0 × 10 −5 eV·atom −1 ; maximum force, 0.05 eV·Å −1 ; maximum stress, 0.1 GPa; maximum displacement, 0.002 Å; self-consistent field (SCF), 2.0 × 10 −6 eV·atom −1 . Computational Models The optimized doped supercells have lattice angles of α = 88.44°, β = 91.56° and γ = 120.41°. The formation energies of Ag/α-Fe 2 O 3 and S/α-Fe 2 O 3 were calculated, and their values are −6.73 and −3.82 eV, respectively, suggesting that the models of Ag/α-Fe 2 O 3 and S/α-Fe 2 O 3 are stable, and Ag/α-Fe 2 O 3 is more stable than S/α-Fe 2 O 3 . The calculated magnetic moment of every iron atom for pure α-Fe 2 O 3 is 4.16 µB, which is the same as the result of Pozum [23] but lower than the experimental value (4.6 µB) [25]. After Ag and S doping, the magnetic moments of different iron atoms are different; however, the values become small, and the magnetic moments of iron atoms for S/α-Fe 2 O 3 are smaller than those for Ag/α-Fe 2 O 3 , suggesting that the magnetic property of S/α-Fe 2 O 3 is weaker than that of Ag/α-Fe 2 O 3 . Figure 4a,b shows the energy band and PDOS of upspin (Figure 4a) and downspin (Figure 4b) for Ag/α-Fe 2 O 3 . It is observed that the high symmetry point in the energy band of Ag/α-Fe 2 O 3 is different from that of pure α-Fe 2 O 3 , which shows that Ag doping induces a change of the symmetry of α-Fe 2 O 3 and leads to a change of the Brillouin path. Generally, the unit cell type cannot be changed with a low doping concentration.
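For orientation, the convergence logic behind the tolerances in Table 3 can be sketched as follows; this is an illustrative check written here (the helper function and variable names are hypothetical, not part of the CASTEP interface), with the thresholds simply copied from the values quoted above.

```python
# Illustrative sketch of the BFGS geometry-optimization convergence test,
# using the tolerances quoted in Table 3. Hypothetical helper, not CASTEP API.

TOLERANCES = {
    "energy_per_atom": 2.0e-5,   # eV/atom, max energy change between steps
    "max_force": 0.05,           # eV/Angstrom
    "max_stress": 0.1,           # GPa
    "max_displacement": 0.002,   # Angstrom
}

def step_converged(d_energy_per_atom, max_force, max_stress, max_displacement):
    """Return True when all four geometry-optimization criteria are met."""
    return (abs(d_energy_per_atom) <= TOLERANCES["energy_per_atom"]
            and max_force <= TOLERANCES["max_force"]
            and max_stress <= TOLERANCES["max_stress"]
            and max_displacement <= TOLERANCES["max_displacement"])

# Example: a step that still violates the force criterion.
print(step_converged(1.5e-5, 0.08, 0.05, 0.001))  # -> False
```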
However, it is seen from Figure 2c that there is a large change of the crystal structure of α-Fe 2 O 3 after Ag doping, which may be because more Ag-O bonds are formed (six O-Ag bonds). Though the concentration of Ag is very low, O-Ag bonds account for a large proportion in the whole α-Fe 2 O 3 crystal, and the lengths of the O-Ag bonds are much larger than those of the corresponding bonds of pure α-Fe 2 O 3 , which may lead to the change of the α-Fe 2 O 3 crystal structure. In addition, the doping of Ag greatly changes the band structure of α-Fe 2 O 3 and gives it a specific character. For the upspin band of Ag/α-Fe 2 O 3 , it is metallic, with electronic states overlapping the Fermi level, while it possesses a semiconducting nature with a forbidden energy range for the downspin band. Materials with this property are called half-metals. Thus, Ag doping turns α-Fe 2 O 3 from a semiconductor into a half-metal. Electronic Structures of Pure, Ag-Doped and S-Doped α-Fe 2 O 3 For the upspin band and PDOS of Ag/α-Fe 2 O 3 (Figure 4a), the doping energy level of Ag appears by accompanying the Fe and O levels of α-Fe 2 O 3 ; the main contribution of Ag in the conduction band is from the Ag 5p and Ag 5s orbitals, while that in the valence band is from the Ag 4d orbital. As for the downspin band and PDOS of Ag/α-Fe 2 O 3 (Figure 4b), it is observed that the energy level between 5 and 13 eV in the conduction band extends to low energy, while that between 2 and 5 eV extends to high energy, and the energy level at about 5 eV is split and forms a small peak from Fe 3d and O 2p in the DOS curve. The energy level in the valence band is broadened, which means that the localization of the DOS of Fe 3d and O 2p is weakened after Ag doping. In addition, it is also seen that the energy level at the top of the valence band is split, forming a peak from Fe 3d and O 2p in the DOS curve, in which the DOSs of Fe 3d and O 2p are almost the same in this peak. Like the upspin band, the doping energy level of Ag in the downspin band and the corresponding PDOS appear by accompanying those of the Fe and O orbitals of α-Fe 2 O 3 (Figure 4b). The main contribution in the conduction band is also from the Ag 5p and Ag 5s orbitals, and from the Ag 4d orbital in the valence band. Figure 5a,b shows the energy band and PDOS of upspin (Figure 5a) and downspin (Figure 5b) for S/α-Fe 2 O 3 . Like Ag/α-Fe 2 O 3 , the symmetry of α-Fe 2 O 3 changes after S doping, and the high symmetry point in the energy band of Ag/α-Fe 2 O 3 and S/α-Fe 2 O 3 is the same; that is, Ag/α-Fe 2 O 3 and S/α-Fe 2 O 3 have the same Brillouin paths. In addition, the doping of S also makes it become a half-metal.
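The half-metal argument above reduces to asking whether each spin channel has states at the Fermi level. A minimal sketch of that test, applied to hypothetical spin-resolved DOS arrays (the data, threshold, and function below are illustrative only and are not taken from this study), could look like this:

```python
import numpy as np

def classify_spin_channels(energies, dos_up, dos_down, tol=1e-3):
    """Classify a material as metal / half-metal / semiconductor from
    spin-resolved DOS sampled on an energy grid (Fermi level at 0 eV)."""
    at_fermi = np.argmin(np.abs(energies))      # grid point closest to E_F
    up_metallic = dos_up[at_fermi] > tol        # states at E_F in up channel?
    down_metallic = dos_down[at_fermi] > tol    # states at E_F in down channel?
    if up_metallic and down_metallic:
        return "metal"
    if up_metallic != down_metallic:
        return "half-metal"                     # one channel gapped, one not
    return "semiconductor/insulator"

# Toy example: up channel has weight at E_F, down channel is gapped.
e = np.linspace(-2, 2, 401)
up = np.exp(-e**2)                              # finite DOS at E_F
down = np.where(np.abs(e) > 0.5, 1.0, 0.0)      # 1 eV gap around E_F
print(classify_spin_channels(e, up, down))      # -> "half-metal"
```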
For the upspin band and PDOS of S/α-Fe 2 O 3 , the energy level of the conduction band extends to high energy compared with that of pure α-Fe 2 O 3 , suggesting that the nonlocality of the corresponding DOS after S doping is enhanced. The main contribution in the range of 16 to 22 eV is from Fe 4p, while that between 5 and 16 eV is from Fe 4p and Fe 4s, with a few contributions from O 2p. The energy level in the valence band shifts to high energy and some of the energy levels pass through the Fermi level. The energy level near the Fermi level is split (energy band of Figure 5a) and forms two DOS peaks (PDOS of Figure 5a), which are from the contributions of Fe 3d and O 2p. In addition, the DOS curve in the valence band changes a lot. In the range of −7 to 0 eV, there is only one energy level group, which is mainly composed of Fe 3d and O 2p, with a few contributions from Fe 4p and Fe 4s, while for pure α-Fe 2 O 3 , the valence band is divided into two groups. The group in the range of −8 to −5 eV is mainly from Fe 3d and O 2p, while that between −5 and 0 eV is mainly from O 2p and Fe 3d, with a few contributions from Fe 4p and Fe 4s. The doping energy level of S appears by accompanying the Fe and O levels of α-Fe 2 O 3 (Figure 5a). However, the main contribution of S is from S 3p, with a few contributions of S 3s. Furthermore, the contribution of S in the range of 5 to 23 eV is very small. For the downspin band and PDOS of S/α-Fe 2 O 3 , the energy level larger than 5 eV extends toward high energy and leads to its broadening, suggesting a strong nonlocality of the corresponding DOS, which is similar to that of the upspin band and DOS of S/α-Fe 2 O 3 . The energy level between 0 and 5 eV extends to low energy and is split into two groups, which are both from Fe 3d and O 2p, with a few contributions from Fe 4p and Fe 4s. The energy level in the valence band shifts to low energy and its nonlocality is weakened. Like the upspin band, the doping energy level of S for the downspin band and the corresponding PDOSs appear by accompanying those of the Fe and O orbitals of α-Fe 2 O 3 (Figure 5b). The main contribution in the conduction band is also from S 3p, with a few contributions of S 3s, and from the S 3p orbital in the valence band. It is observed from Figure 6a,c that the Ag atom loses more electrons (Figure 6c) compared with the Fe atom replaced by Ag (Figure 6a), while oxygen atoms (O1 and O2) bonded to Ag after doping seem to obtain fewer electrons relative to those before doping. For S-doped α-Fe 2 O 3 , the S atom obtains fewer electrons (Figure 6d) than the O atom replaced by the S atom (Figure 6b), while Fe atoms bonded to the S atom lose fewer electrons than they did before S doping. These results are in good agreement with the Mulliken atomic charges (Table 4). It is seen that the charge of the Ag atom for Ag-doped α-Fe 2 O 3 is 1.730 e, which is larger than that of the Fe atom for pure α-Fe 2 O 3 (1.140 e). Figure 7 shows the absorption spectra of α-Fe 2 O 3 , Ag/α-Fe 2 O 3 and S/α-Fe 2 O 3 . There is an obvious absorption peak at about 230 nm for pure α-Fe 2 O 3 . After Ag and S doping, the absorption peaks of α-Fe 2 O 3 shift slightly to a shorter wavelength, and the peak strength of S/α-Fe 2 O 3 increases, while that of Ag/α-Fe 2 O 3 decreases. However, the optical absorption in the visible range is enhanced after Ag and S doping.
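In plane-wave DFT codes, absorption spectra such as those in Figure 7 are typically derived from the frequency-dependent complex dielectric function; a commonly used form of that relation (given here for orientation and assumed rather than quoted from this study) is

\[ \alpha(\omega) = \sqrt{2}\,\frac{\omega}{c}\left[\sqrt{\varepsilon_1^2(\omega) + \varepsilon_2^2(\omega)} - \varepsilon_1(\omega)\right]^{1/2}, \]

where ε1 and ε2 are the real and imaginary parts of the dielectric function, so a larger ε2 at visible frequencies translates directly into the stronger visible-range absorption reported for the doped systems.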
It is seen that the optical absorptions of Ag/α-Fe 2 O 3 and S/α-Fe 2 O 3 are much stronger than that of pure α-Fe 2 O 3 when the wavelength is greater than 380 nm, and that of S-doped α-Fe 2 O 3 is stronger than that of Ag-doped α-Fe 2 O 3 . The result shows that the doping of metal and nonmetal significantly improves the optical absorption in the visible range, which is beneficial for the photocatalytic reaction. As we know, light with a photon energy larger than the band gap can excite electrons into the conduction band. A photocatalyst is characterized by its ability to simultaneously adsorb two reactants, which can be reduced and oxidized by the electron and hole. However, the photogenerated electrons and holes will recombine quickly in the semiconductor, which reduces the photocatalytic efficiency. After substitutional doping, the charges of Ag and S are 1.730 and −0.230 e, which are larger than those of the corresponding Fe (1.140 e) and O (−0.760 e) for pure α-Fe 2 O 3 , suggesting that Ag and S may capture the holes in the valence band of α-Fe 2 O 3 , thus preventing the recombination of electrons and holes.
According to the absorption spectra (Figure 7), the optical absorption of the doped α-Fe 2 O 3 in the visible range increases greatly, which means that the photocatalytic activity of α-Fe 2 O 3 is enhanced. In addition, it is seen from Figure 7 that the optical absorption of α-Fe 2 O 3 in visible light is enhanced greatly after Ag and S doping. Therefore, metal and nonmetal doping can significantly improve the utilization of sunlight by α-Fe 2 O 3 when it is used as a photocatalyst. According to our previous analysis, the doping of Ag and S makes α-Fe 2 O 3 become a half-metal, which is suitable for applications in spintronic devices, such as spin-tunnel junctions, spin valves and giant magnetoresistance (GMR) devices. As a result, the doping of Ag and S gives α-Fe 2 O 3 a much wider range of applications. Conclusions In this study, the electronic structure and optical properties of pure, Ag- and S-doped α-Fe 2 O 3 were studied by using density functional calculations. The main results are as follows: (1) The doping of Ag and S results in a large change of the crystal structure of α-Fe 2 O 3 . The lengths of all Ag-O and S-Fe bonds are larger than those of the corresponding O-Fe bonds before Ag and S doping, which induces the expansion of the α-Fe 2 O 3 crystal. (2) The band gaps of α-Fe 2 O 3 decrease after Ag and S doping. For Ag/α-Fe 2 O 3 , the energy levels near the Fermi level for the upspin and downspin bands are split and form small DOS peaks, respectively. The main contribution of Ag in the conduction band is from the Ag 5p and Ag 5s orbitals and from the Ag 4d orbital in the valence band. For S/α-Fe 2 O 3 , the energy levels near the Fermi level for the upspin and downspin bands are also split; however, two DOS peaks are formed, respectively. The main contribution of S is from S 3p, with a few contributions of S 3s. (3) The absorption peaks of Ag-doped and S-doped α-Fe 2 O 3 shift slightly to a shorter wavelength, accompanied by the increased peak intensity of S/α-Fe 2 O 3 and the decreased peak intensity of Ag/α-Fe 2 O 3 . When the wavelength is greater than 380 nm, the optical absorptions of Ag- and S-doped α-Fe 2 O 3 in the visible range are stronger than that of pure α-Fe 2 O 3 , and the optical absorption of S-doped α-Fe 2 O 3 is stronger than that of Ag-doped α-Fe 2 O 3 .
v3-fos-license
2021-09-27T21:22:50.788Z
2021-07-01T00:00:00.000
237763389
{ "extfieldsofstudy": [ "Economics" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://www.mdpi.com/2071-1050/13/15/8278/pdf", "pdf_hash": "8aea340bf90ac80c98e3f424898d5c520e3483c7", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43944", "s2fieldsofstudy": [ "Economics", "Agricultural And Food Sciences" ], "sha1": "22109b696c5ffc9d6655ba3a4f858e86fc7c8332", "year": 2021 }
pes2o/s2orc
A Win–Win Scenario for Agricultural Green Development and Farmers’ Agricultural Income: An Empirical Analysis Based on the EKC Hypothesis Due to severe resource and environmental constraints, agricultural green development is a vital step for the low-carbon development of China. How to achieve the goal of a win–win scenario that simultaneously improves agricultural green total factor productivity (GTFP) and farmers’ agricultural income was the main focus of this study. Based on the panel dataset for 31 provinces in China from 2000 to 2018, this study calculated the agricultural GTFP using the global Malmquist–Luenberger (GML) index to measure the green development of agriculture. Furthermore, this study investigated the relationship between the agricultural GTFP and agricultural income in an environmental Kuznets curve (EKC) framework, together with the key factors affecting agricultural GTFP. The main results show that, first, driven by technical progress, the agricultural GTFP gradually increased across the country, while there existed a certain degree of heterogeneity in the growth of different regions. Second, the relationships between the agricultural GTFP and agricultural income exhibited a significant U-shape for the whole country and the four regions, indicating that a win–win scenario can be achieved between green development and income level. Third, industrialization and urbanization negatively affected agricultural GTFP, capital deepening played a positive role, and due to the mediated effect of capital deepening, the outflow of the agricultural labor force did not cause substantial harm to agricultural GTFP. The findings of our study provide useful policy implications for the promotion and development of agriculture in China. Introduction Multiple achievements were obtained by China's agricultural reform, making significant contributions to the development of the world's agriculture. However, the early stage of development of China's agricultural sector mainly relied on the substantial consumption of fossil fuels, which brought a large amount of greenhouse gas (GHG) emissions and environmental degradation. The Food and Agriculture Organization (FAO) highlighted that the agricultural sector accounts for 30% of GHG emissions due to human activities [1]. Furthermore, China's agricultural sector accounts for approximately 11% of global agricultural GHG emissions [2]. As the population grows, so do their needs. In 2018, the population of China was 1.4054 billion, and the urbanization rate was 61.50%, but the arable land per capita in China was almost half of the world's average [3]. These situations imply that the conflict over land use as the population rises will intensify with the acceleration of urbanization. At the same time, China's agricultural development relies more heavily on the use of electricity and diesel oil, as well as chemical fertilizers and pesticides to secure food supply. However, this growing demand for energy poses a serious risk to sustainable development and the climate due to the energy-consumption-related GHG (ECR-GHG) emissions [4]. Additionally, climate change will increase the frequency of natural disasters and exacerbate the possibility of a negative impact of natural disasters on agricultural production [5]. In responding to the challenge of environmental degradation, the net-zero emissions target became a long-term global strategy for sustainable development. 
China committed to adopting effective policies to realize the peak of carbon emissions by 2030 and carbon neutrality by 2060 [6]. Given that agriculture is a major source of ECR-GHG emissions, stimulating the green development of agriculture will significantly contribute to the carbon neutralization of China. Scholars generally believe that the green total factor productivity (GTFP) is one of the most representative indicators to measure industrial sector green development performance [7]. Therefore, this study took the agricultural GTFP as a proxy variable to measure the green development of the agricultural sector. Chinese farmers are dependent on agricultural income from farmlands, which has been a concern of the Chinese government [8]. The Chinese government has taken measures to make farming profitable, such as abolishing agricultural tax and providing financial support. In addition, the Chinese government proposed a reform of the household contract responsibility system and the 2010 to 2020 strategy of farmers' income multiplication to improve agricultural income. The 2020 Central No. 1 Document devoted a large amount of attention to agricultural income, as well as agricultural production and supply. For a long period, China's policy emphasis was on ensuring sufficient agricultural output and raising agricultural income, but not all developments were equally positive. Agricultural income in China still increases erratically, and the total amount is much lower than that of non-agricultural income [9]. These issues affect farmers' willingness to participate in agricultural production [10]. For this reason, labor endowments cause farmers to transfer their family labor to non-agricultural sectors [11]. Furthermore, limited by their low agricultural income, farmers are less concerned with environmental conservation. They consider the benefits of abusing farmland to be greater than its costs, particularly as they are not individually responsible for paying the costs for the environmental damage [12]. However, the increase in efforts to boost the agricultural income per capita at the expense of the environment corresponds to an increase in the negative impacts of these efforts on the growth of agricultural income. Thus, can farmers make agricultural income gains without damaging green development performance? This issue warrants careful investigation. In light of the abovementioned discussions, low income has been established as one of the major causes of environmental degradation as environmental degradation increases with income until a certain income threshold is reached, after which continued increases in income will reduce environmental pressure. This concept is known as the environmental Kuznets curve (EKC) hypothesis expounded by Grossman and Krueger (1992), which postulates an inverted U-shaped relationship between income per capita and environmental degradation [13]. Since then, a large number of studies have tested the validity of the EKC hypothesis using a panel dataset, including the performance of EKC in different pollutants, such as CO 2 , NO X , CH 4 , and PM 2.5 [14][15][16][17], regions with different income levels [18], and different stages [19]. However, the major criticism of the EKC is that it is sensitive to the measurement of environmental pollutions, and its empirical results vary depending on the type of the pollutants [20]. 
Additionally, since pollutants are only outputs of the production process, treating pollutants as an indicator of environmental performance and analyzing the relationship between income and pollutants in such a simplified form cannot reflect the process of converting factor inputs into desirable outputs and undesirable outputs [21]. Even at higher income levels, the modification of the production process may improve environmental quality and efficiency [22]. Furthermore, the common shortcoming of the above-mentioned studies is that the notions of green and sustainability have not been involved, which suggests maximizing economic development performance while maintaining environmental quality. Therefore, this study made efforts to investigate the implications of green development and its possible application in the EKC framework, that is, further investigated the existence of the EKC relationship between GTFP and income growth in the agricultural sector. The remainder of this paper is structured as follows. In Section 2, a brief review of the relevant literature on the GTFP and EKC hypothesis applied in the agricultural sector is provided. The study area, empirical methodology, and variables' data and sources are explained in Section 3. In Section 4, the spatial-temporal evolution of agricultural GTFP in China is described, and the green growth index is divided into technical efficiency and technical progress. In Section 5, the empirical results of the relationships between agricultural GTFP and agricultural income in the whole country and the four regions are presented. The key factors affecting agricultural GTFP are discussed in Section 6. In Section 7, conclusions are drawn and a discussion of recommendations for agricultural policy is presented. Literature Review The total factor productivity (TFP), which can measure economic performance that accounts for the influences of technical progress and efficiency, was proposed by Solow (1957) [23]. However, the traditional TFP ignores environmental constraints and thus exaggerates economic performance [24]. As an indicator of green development, GTFP not only takes economic performance but also resource and environmental constraints into consideration. The literature on agricultural GTFP is limited. First of all, using the related data regarding energy from the energy balance sheet, the previous literature calculated agricultural energy consumption by excluding indirect energy consumption; therefore, energy consumption and ECR-GHG emissions of agricultural production were underestimated [25][26][27]. Apart from direct energy consumption, such as electricity and diesel, the production of diesel, electricity, pesticides, chemical fertilizers, agricultural machinery, and plastic films consumes a large amount of energy and creates ECR-GHG emissions. Second, agricultural GTFP must take the constraints of water resources into consideration. Agriculture occupies more than 70% of global water resources and is the largest user of water resources [28]. However, global freshwater is becoming increasingly scarce [29]. At the same time, FAO (2020) highlighted that the growth of income and urbanization was leading to a rising water demand for industry and services, as well as for water-intensive foods [30]. These issues further aggravate the shortage of agricultural water. In addition, China's agricultural sector accounted for approximately 61.39% of total water consumption in 2018, but the contribution rate to GDP was only 4.2%. 
Therefore, the low utilization efficiency of water resources is among the key problems that restrict China's green agricultural development. The application of the EKC hypothesis to agriculture is gaining interest. Scholars have examined the relationship between economic development and carbon emissions in the agricultural sector [14,15]. Nevertheless, green efficiency or productivity in EKC hypothesis literature was rarely analyzed. In addition, the existing literature has ignored the influence of income structure on the green development of agriculture [7]. Undoubtedly, the higher the income from agricultural activities, the more farmers are willing to devote themselves to agricultural production. However, Chinese farmers earn a higher income from non-agricultural activities; at the same time, they care about the urban-rural income gap [31]. Although China's urban-rural income ratio began to decline from the peak of 3.3 in 2009, the relatively lower agricultural income has not reversed the trend of a massive wave of farmers abandoning agricultural production [32]. The net outflow of the rural labor force inevitably must be substituted by agricultural machinery, pesticides, and chemical fertilizers, leading to an increase in energy consumption and ECR-GHG emissions. In conclusion, using an extensive production model to improve agricultural income will cause damage to the environment and reduce agricultural GTFP; on the other hand, agricultural income growth will increase farmers' enthusiasm and improve agricultural GTFP. Therefore, realizing a positive interaction between agricultural income and agricultural GTFP is necessary. As indicated above, a consensus was reached regarding the importance of green agricultural development. However, few studies have quantified the indirect energy consumption of agricultural production and thus underestimate ECR-GHG emissions. One contribution of this study is to consider direct and indirect energy consumption, ECR-GHG emissions, and water resources in the calculation of agricultural GTFP. Although there were studies on the EKC hypothesis in the agricultural sector, the empirical evidence on the green development of agriculture and agricultural income is insufficient. Therefore, this study intended to empirically determine the relationships between agricultural income and GTFP in China to fill this gap. Methods, Indicators, and Data In this section, first, this study calculated direct and indirect energy consumption and ECR-GHG emissions of agricultural production by sorting out the related conversion coefficients in Section 3.1. Second, the global Malmquist-Luenberger (GML) index was introduced to calculate agricultural GTFP in Section 3.2. Furthermore, panel regression models were implemented to investigate the EKC hypothesis relative to the agricultural GTFP and income in Section 3.3. Finally, the data and indicators were described in all necessary detail in Section 3.4. Methods for Determining Agricultural Energy Consumption and ECR-GHG Emissions The demand for energy of agricultural production consists of direct and indirect energy. First, direct energy consumption includes the diesel oil and electricity used to operate agricultural machinery for sowing, irrigation, fertilization, weeding, and harvesting. Second, indirect energy consumption includes the energy used during the production process of agricultural machinery, pesticides, chemical fertilizers, plastic films, diesel oil, and electricity [33,34]. 
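As an orientation for the accounting elaborated in the next paragraph (physical input quantities multiplied by conversion coefficients, then aggregated to standard coal equivalent and CO 2 -equivalent emissions), a minimal sketch is given below; the coefficient values are placeholders for illustration only and are not the values actually taken from the yearbooks and the cited studies.

```python
# Illustrative accounting of agricultural energy use and ECR-GHG emissions.
# Coefficient values below are placeholders, not those used in the study.

ENERGY_COEF_SCE = {          # kg standard coal equivalent per physical unit
    "diesel_kg": 1.4571,
    "electricity_kwh": 0.1229,
    "fertilizer_kg": 1.0,    # placeholder for embodied (indirect) energy
    "pesticide_kg": 1.0,     # placeholder
    "plastic_film_kg": 1.0,  # placeholder
}
EMISSION_COEF_CO2E = {       # kg CO2-equivalent per physical unit (placeholders)
    "diesel_kg": 3.1,
    "electricity_kwh": 0.9,
    "fertilizer_kg": 1.5,
    "pesticide_kg": 4.9,
    "plastic_film_kg": 2.5,
}

def footprint(inputs):
    """inputs: dict of physical quantities, e.g. {'diesel_kg': 1e6, ...}.
    Returns (total standard coal equivalent, total CO2-equivalent emissions)."""
    sce = sum(q * ENERGY_COEF_SCE[k] for k, q in inputs.items())
    co2e = sum(q * EMISSION_COEF_CO2E[k] for k, q in inputs.items())
    return sce, co2e

province_inputs = {"diesel_kg": 2.0e8, "electricity_kwh": 5.0e8, "fertilizer_kg": 1.2e9}
print(footprint(province_inputs))
```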
This paper describes a method that uses the raw data multiplied by related energy conversion coefficients to calculate agricultural energy consumption (except for manpower and animal power) and converts the value of energy consumption into standard coal [31]. The energy conversion coefficients were derived from the China energy statistics yearbook and previous research [35][36][37][38][39]. For the purpose of measuring the environmental pressure caused by agricultural production, ECR-GHG emissions were calculated for each type of energy consumption, including the production or use of diesel oil, electricity, agricultural machinery, pesticides, chemical fertilizers, and plastic films. This study converted the value of ECR-GHG emissions into carbon dioxide equivalents using global warming potential (GWP) parameters [40]. The emissions coefficients were from previous research [41][42][43][44]. Methods for Determining Agricultural GTFP The GTFP has been calculated using the Malmquist-Luenberger (ML) index by several scholars [45]. The ML index and the directional distance function (DDF) were proposed by Chung et al., (1997) to deal with both undesirable outputs and desirable outputs simultaneously [46]. However, the progress of agricultural production is long-term and continuous; meanwhile, the geometric mean of the ML index is not cumulative, meaning that it is unsuitable for measuring long-term changes in GTFP. Moreover, the ML index may face the problems of infeasible linear programming solutions and non-transitivity. On the basis of the ML index, Oh (2010) developed the global Malmquist-Luenberger index, which avoids the infeasibility problem in the linear programming [47]. To measure the dynamic trend of agricultural GTFP and further explore the impact of technical progress and efficiency on agricultural GTFP, this study adopted a GML index. In this study, each province was defined as a decision-making unit (DMU). Based on a panel of $k = 1, \ldots, K$ DMUs and $t = 1, \ldots, T$ periods, the DMUs use $S$ inputs $x = (x_1, x_2, \ldots, x_S) \in \mathbb{R}^S_+$ to produce $N$ desirable outputs $y = (y_1, y_2, \ldots, y_N) \in \mathbb{R}^N_+$ and $M$ undesirable outputs $b = (b_1, b_2, \ldots, b_M) \in \mathbb{R}^M_+$. Let $g = (g_y, g_b)$ be a direction vector, with $g \in \mathbb{R}^N_+ \times \mathbb{R}^M_+$. Then, the DDF was defined as $\vec{D}(x, y, b; g_y, g_b) = \max\{\beta \mid (y + \beta g_y, b - \beta g_b) \in P(x)\}$. Since the indices require a heavy dose of additional notation, this study omitted the direction vector $g = (y, b)$ to save space when defining the indices in the remainder; for example, $\vec{D}(x, y, b; g_y, g_b)$ was replaced by $\vec{D}(x, y, b)$ in all places. In defining the GML index, there were two definitions of the production possibility set (PPS): a contemporaneous PPS and a global PPS. The contemporaneous PPS was defined as $P^t(x^t) = \{(y^t, b^t) \mid x^t \text{ can produce } (y^t, b^t)\}$. Additionally, the global PPS was defined as $P^G(x) = P^1(x^1) \cup P^2(x^2) \cup \cdots \cup P^T(x^T)$. By the definition of the global PPS, the model of the GML index at period $t$ and period $t+1$ can be expressed as follows: where the global DDF was defined as $\vec{D}^G(x, y, b) = \max\{\beta \mid (y + \beta y, b - \beta b) \in P^G(x)\}$. The $GML^{t,t+1}$ represents the productivity at period $t+1$ with respect to period $t$, where a value of $GML^{t,t+1}$ greater (lower) than 1 indicates an increase (decrease) in GTFP.
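Equation (1) itself did not survive the conversion of the source document; a plausible reconstruction, following the standard form of the global Malmquist-Luenberger index in Oh (2010) and consistent with the global DDF defined above (the exact layout in the original may differ), is

\[ GML^{t,t+1}\big(x^{t}, y^{t}, b^{t}, x^{t+1}, y^{t+1}, b^{t+1}\big) = \frac{1 + \vec{D}^{G}\big(x^{t}, y^{t}, b^{t}\big)}{1 + \vec{D}^{G}\big(x^{t+1}, y^{t+1}, b^{t+1}\big)}. \tag{1} \]

With this form, a value greater than 1 means the observation at period t + 1 lies closer to (or beyond) the global frontier than at period t, matching the interpretation given in the text.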
Additionally, the GML t,t+1 was decomposed into technical efficiency (GEC) and technical progress (GTC), as expressed in Equations (2)-(4): Therefore, the changes in GML include the changes in technical efficiency and technical progress: where a value of GEC t,t+1 greater or lower than 1 reflects a technical efficiency improvement or loss, respectively, from period t to period t + 1; a value of GTC t,t+1 greater or lower than 1 reflects technical progress or regress, respectively, from period t to period t + 1. Then, the GML and its decomposition across many periods can be expressed as follows: Methods for Determining the EKC Relating Agricultural GTFP and Agricultural Income This study used panel regression models based on the EKC hypothesis to analyze the relationship between agricultural GTFP and farmers' agricultural income. In addition, the quadratic polynomial model was used flexibly to judge the shape of the EKC, and the logarithmic transformation of certain variables was processed to avoid large differences in the magnitude of the variables [48]. The models were as follows: where i and t indicate the province and year, respectively; the α coefficients represent the parameters to be estimated; µ refers to the individual effect; ε represents the random error term; and GTFP denotes the agricultural GTFP. Given that the GML index is a dynamic index, this study transformed it into a cumulative value for comparability and assumed that the value of GTFP in 2000 was 1, and the GEC and GTC were treated in the same way [49]. Additionally, AIPC denotes the logarithm of agricultural income per capita, and agricultural per capita income was converted into the 2000 base period using the consumer price indices for rural residents; AIPC 2 denotes the logarithm of the square of agricultural income per capita. According to the EKC hypothesis, if α 1 < 0 and α 2 > 0, then a U-shaped curve is observed between agricultural GTFP and agricultural income, whereas if α 1 > 0 and α 2 < 0, then an inverse U-shaped curve is observed. Finally, Z denotes the set of control variables, as shown in Table 1. Control Variable Symbol Description Industrial structure IS The proportion of the added value of the secondary and tertiary industries to regional GDP. Proportion of agricultural labor force PALF The proportion of agricultural labor force to the total labor force. Capital deepening CD The logarithm of the proportion of agricultural real capital stock to the agricultural labor force; the calculation of agricultural real capital stock is based on Zhang et al., (2004) [50] and Zong et al., (2014) [51]. Educational level EL The proportion of the population with a high school degree and above among the population aged 6 years and above to total population. R&D RD The proportion of R&D internal expenditure to regional GDP. Governmental financial support GFS The proportion of agricultural financial expenditure to total financial expenditure. Relative price RP The proportion of the price index of agricultural means of production to the price index of agricultural products. Environmental regulation ER The number of environmental regulations issued. Agriculture tax AT A dummy variable-the timing of the abolition of agricultural tax varies from province to province. When the agricultural tax was completely abolished, AT = 1; otherwise, AT = 0. External dependence ED The proportion of total import and export of agriculture products to regional GDP. 
Natural disaster ratio NDR The proportion of sown area affected by natural disaster to the total sown area of agriculture products. The fixed effect (FE) and the random effect (RE) models are regression models that are used for panel data, where the difference between the two depends on individual effects. Specifically, individual effects may be present in the form of fixed and random effects and are independent of other explanatory variables. Therefore, it was important to introduce the Hausman test to examine whether there were individual effects and whether these effects were associated with other explanatory variables so as to determine whether the FE or RE model fit the data more accurately in this study [52]. The null hypothesis of the Hausman test is that individual effects are not related to other explanatory variables. If the results reject the null hypothesis, the FE model is adopted. In addition, as the value of the dependent variable (GTFP) in this study is non-negative and truncated, the conventional ordinary least squares regression (OLS) would have caused a biased estimation if used. As a censored regression model, the Tobit model can be used to check the regression when the dependent variables are observed only in a restricted way and the explanatory variables are observable [53]. Therefore, this study used the Tobit model to judge whether the empirical results were consistent and observe the reliability of the regression results. Finally, to further analyze the regional differences in the relationship between agricultural GTFP and agricultural income, the FE, RE, and Tobit models were employed for grouped regressions in the same way. Indicators and Data Given the availability and integrity of data, this study used balanced panel data from 2000 to 2018 of 31 provinces in China for the empirical tests, except for Hong Kong, Macao, and Taiwan. According to the standard regional divisions of the National Bureau of Statistics, the 31 provinces are divided into four regions, namely, the northeastern, eastern, central, and western regions (see Figure 1 and Table A2). Figure 1. China's regional divisions and study areas. In this study, the indicators of the agricultural GTFP included agricultural input and output. Specifically, agricultural outputs were divided into desirable outputs and undesirable outputs (see Table 2). Table 2. Input and output indicators of agricultural GTFP. Inputs: energy (direct and indirect energy consumption); water (water for irrigation); land (sown area of agriculture products); labor (agricultural labor force).
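The regression specification referred to above (the equation block was lost in extraction) can be reconstructed in a minimal form consistent with the variable definitions in the text; the exact numbering and any additional terms in the original equations are assumptions:

\[ GTFP_{it} = \alpha_0 + \alpha_1\, AIPC_{it} + \alpha_2\, AIPC_{it}^{2} + \sum_{j} \gamma_j Z_{j,it} + \mu_i + \varepsilon_{it}, \qquad AIPC^{*} = -\frac{\alpha_1}{2\alpha_2}, \]

where a U-shaped curve requires α 1 < 0 and α 2 > 0, and the income threshold in yuan is exp(AIPC*). As a rough illustration of the FE-versus-RE selection step, the Hausman statistic can be computed directly from the two coefficient vectors and their covariance matrices; the sketch below uses numpy only, with made-up estimates standing in for the actual regression output.

```python
import numpy as np
from scipy import stats

def hausman(b_fe, b_re, cov_fe, cov_re):
    """Hausman test: H0 = RE is consistent (individual effects uncorrelated
    with regressors). Returns the chi-square statistic and its p-value."""
    diff = b_fe - b_re
    var = cov_fe - cov_re                       # variance of the difference under H0
    stat = float(diff.T @ np.linalg.pinv(var) @ diff)
    dof = len(diff)
    p_value = 1 - stats.chi2.cdf(stat, dof)
    return stat, p_value

# Hypothetical coefficient estimates for (AIPC, AIPC^2) from FE and RE models.
b_fe = np.array([-1.442, 0.094])
b_re = np.array([-1.300, 0.085])
cov_fe = np.diag([0.20**2, 0.012**2])
cov_re = np.diag([0.18**2, 0.011**2])
stat, p = hausman(b_fe, b_re, cov_fe, cov_re)
print(f"Hausman chi2 = {stat:.2f}, p = {p:.3f}")   # small p -> prefer the FE model
```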
To ensure the reliability of the original data, the data for this study were collected from the official statistical database and governmental reports, including the China Statistical Yearbook, the China Rural Statistical Yearbook, the China Statistical Yearbook on Science and Technology, the China Agricultural Machinery Industry Yearbook, the China Energy Statistics Yearbook, the China Water Resources Bulletin, the China Environmental Yearbook, and the statistical yearbooks of each province. Additionally, this study converted the total values of the import and export of agricultural products into RMB using the average annual exchange rate. To reveal the real economic growth, agricultural income per capita, the added value of the three industries, and regional GDP were all converted into the 2000 base period using the consumer price indices of rural residents and regional gross domestic product indices. The descriptive statistics of all the variables are shown in Table 3. Empirical Analysis Results for Agricultural GTFP In this section, the empirical findings at the overall, regional, and provincial levels are described to confirm the changing trend and driving factors of the agricultural GTFP. Figure 2 shows the values of the GTFP, GEC, and GTC indices. Analysis of the Overall Agricultural GTFP As seen from Table A1 and Figure 2, the overall GTFP of agriculture in China showed a gradual increase from 2000 to 2018, with a total growth of 20.61% and an average annual growth rate of 1.15%. Additionally, the technical progress of agricultural production increased by 21.53%, while technical efficiency decreased by 0.52% during this period. Furthermore, there was a modest increase in the overall GTFP by 5.65% from 2000 to 2009, during which time technical efficiency even showed negative growth; the overall GTFP had a significant increase from 2010 to 2018, during which time technical progress in 2018 was 3.14 times what it was in 2009. The results indicate that technical progress had a positive impact on the overall agricultural GTFP, and the loss of technical efficiency played a negative role. Analysis of the Regional Agricultural GTFP The agricultural GTFP in the northeastern, eastern, central, and western regions increased by 24.12%, 26.39%, 16.49%, and 15.45%, respectively, from 2000 to 2018. Additionally, there were regional differences in the agricultural GTFPs. First, the growth range of the GTFP in the eastern and northeastern regions was higher than that in the central and western regions.
Second, the agricultural GTFP in the eastern region was higher than the overall average, while that in the central and western regions was lower than the overall average, which was related to the imbalance of regional economic improvement. The central region was among the main agricultural areas, but its slow agricultural GTFP increase means that the agricultural achievements were likely at the expense of the environment. Furthermore, the green development of agriculture was shown to be an urgent task for the central and western regions. Third, technical efficiency in the northeastern, eastern, central, and western regions changed by −4.98%, 3.61%, −1.18%, and 0.45%, respectively, from 2000 to 2018. Finally, technical progress in all four regions showed a remarkable increase. Therefore, the growth of regional GTFP was due to the facilitation of technical progress greater than the inhibition of technical efficiency loss. Analysis of the Provincial Agricultural GTFP According to Table A2 and Figure 3, there was one province where the agricultural GTFP had negative growth in 2018, namely, Tibet. In the same year, there were six provinces where the growth range of agricultural GTFP was more than 30%, most of which were in the eastern region. The reasons for this finding were that the eastern region could strengthen technological innovations and rapidly reach the frontier of production technologies. Furthermore, there were provincial differences in agricultural GTFPs. First, due to the greater promotion of technical progress than the inhibition of technical efficiency loss, the agricultural GTFPs in ten provinces showed an increase from 2000 to 2018, namely, Liaoning, Jilin, Hebei, Guangdong, Shanxi, Anhui, Hubei, Sichuan, Yunnan, and Xinjiang. Second, due to technical progress and fixed technical efficiency, the agricultural GTFPs in eight provinces showed positive growth, namely, Beijing, Tianjin, Shanghai, Shandong, Hainan, Guangxi, Chongqing, and Guizhou. In addition, the rising GTFPs in twelve provinces, namely, Heilongjiang, Jiangsu, Zhejiang, Fujian, Jiangxi, Henan, Hunan, Inner Mongolia, Shaanxi, Gansu, Qinghai, and Ningxia, were due to both the promotion of technical progress and technical efficiency gain. Eventually, due to technical efficiency loss and technical regress, the agricultural GTFP in Tibet showed a decrease in 2018. In terms of the contribution of provinces to regions, there were sixteen provinces in which the agricultural GTFPs were lower than the related regional average in 2018, which means that more than half of the provinces did not play a significant role in promoting the region's agricultural GTFPs, including major agricultural provinces in China, such as Jilin, Shandong, Guangdong, Anhui, and Hunan. In conclusion, technical efficiency loss had a mainly inhibitory effect on the provincial GTFPs. Additionally, the main agricultural provinces in China need to improve their agricultural GTFPs.
In terms of the contribution of provinces to regions, there were sixteen provinces in which the agricultural GTFPs were lower than the related regional average in 2018, which means that more than half of the provinces did not play a significant role in promoting the region's agricultural GTFPs, including major agricultural provinces in China, such as Jilin, Shandong, Guangdong, Anhui, and Hunan. In conclusion, technical efficiency loss had a mainly inhibitory effect on the provincial GTFPs. Additionally, the main agricultural provinces in China need to improve their agricultural GTFPs. In this study, there was great synchronization between agricultural GTFP and technical progress, but no strong link between agricultural GTFP and technical efficiency, which is consistent with the conclusions drawn by Kumar (2006) [54] and Choi et al., (2015) [55]. Moreover, the average value of technical efficiency was lower than that of the agricultural GTFP and technical progress. In short, technical progress made major contributions to the improvement of agricultural GTFP in China, but technical efficiency loss played a restrictive role. Analysis of the Overall EKC According to the overall sample regression results in Table 4, regardless of the control variables, the relationship between the agricultural GTFP and agricultural income exhibited a U-shaped curve, which implies a win-win scenario for the green development of agriculture and farmers' agricultural incomes. Additionally, the results of the Hausman tests were at least significant at the 5% level, indicating that the FE models fit the data more accurately than the RE models for this study, and the reliability of FE models was demonstrated by the regression results of the Tobit models. Considering the control variables, the regression equation was obtained as being GTFP = −1.442AIPC + 0.094AIPC 2 , and the threshold of the EKC between the agricultural GTFP and agricultural income was approximately CNY 2143.54, which was obtained via the following calculation: threshold = exp(−β 1 /2β 2 ) = exp[−0.094/2 × (−1.442)]. When the agricultural income per capita was lower than CNY 2143.54, the agricultural GTFP decreased with an increase in agricultural income. Above the threshold, agricultural income increased in step with the agricultural GTFP. In the early stages, farmers made agricultural income gains but ignored protecting the environment, and the demand for water, fossil energy, pesticides, and chemical fertilizers increased. Moreover, farmers' awareness of their effects on the water and air that are public goods was insufficient, further leading to ECR-GHG emissions and resource waste. As agricultural income increased, agricultural GTFP improved as there were more advanced technologies and economic strengths to reduce resource inputs and ECR-GHG emissions. Meanwhile, farmers' awareness of low carbon started to strengthen, driving the sustainable development of agriculture. Note: ***, **, and * indicate that the statistical value was significant at 1%, 5%, and 10%, respectively; the standard errors are in parentheses. From the perspective of provinces, there are several interesting findings. The number of provinces that passed the threshold increased after 2000. In 2018, more than 90% of provinces' agricultural income per capita was greater than CNY 2143.54, except Beijing and Shanghai. 
Specifically, the secondary and tertiary industries and the residents' daily life in Beijing and Shanghai occupied the vast majority of land, labor force, and water resources, which played a negative role in agricultural development. Moreover, the two provinces' agricultural products were highly dependent on the supply of other provinces. Therefore, under the shortage of agricultural labor force and farmland, Beijing and Shanghai were bound to depend heavily on agricultural machinery, chemical fertilizers, and pesticides to maintain agricultural output, which raised energy consumption per unit of output to a high level and reduced their agricultural GTFPs. In conclusion, it is of concern that the industry and services sectors may affect the sustainable development of agriculture. Analysis of the Regional EKC There were regional differences in economic development, industrial structure, and natural environment. Therefore, it was expected that the impacts of agricultural income in the northeastern, eastern, central, and western regions on their agricultural GTFPs would be different. According to Table 5, the relationships between the agricultural GTFP and agricultural income were U-shaped in the regions, which was consistent with the overall regression results. Thus, agricultural GTFP corresponded to the increase in farmers' agricultural income in the northeastern, eastern, central, and western regions. Second, although the shapes of the regional EKCs were similar, the thresholds in the four regions were different, with a gradual increase from the northeastern region to the central, eastern, and western regions. Particularly, the threshold in the western region was higher than those in the other regions. Third, the western region passed its threshold at the latest, in 2014. In contrast, economic development showed a backward trend in the vast majority of provinces in the western region, reflecting the catch-up effect in backward areas. Finally, as China's major agricultural area, the northeastern and central regions realized the simultaneous growth of agricultural GTFP and agricultural income early, which was positive for the green development of China's agriculture. Each of the FE, RE, and Tobit specifications in Table 5 covers 57 observations from the 3 northeastern provinces, 190 from the 10 eastern provinces, 114 from the 6 central provinces, and 228 from the 12 western provinces. Note to Table 5: ***, **, and * indicate that the statistical value is significant at 1%, 5%, and 10%, respectively; the standard errors are in parentheses. Discussion According to Section 4, all four regions in China achieved agricultural GTFP gains during the sample period and had significant heterogeneity, which is consistent with the recent works of Liu et al., (2021) [7] and Liu and Feng (2019) [56]. According to Section 5, the relationship between the agricultural GTFP and agricultural income exhibited a U-shaped curve. Green production is conducive to reducing environmental pollutants, growing agricultural products, and thus increasing farmers' income. At the same time, this perceived gain will encourage farmers to engage in environmentally friendly production. Farmers with higher agricultural income are more likely to invest more time and money to adopt and master new technologies to improve GTFP; then, a win-win scenario can be achieved between agricultural green development and income growth. This study further discussed key factors affecting China's agricultural GTFP from the perspectives of the whole country and the four regions.
From the perspective of the overall sample: The estimated coefficient of IS was −0.566 at the 1% significance level, which means that the rising added value of the industry and services in regional GDP had a significant inhibitory effect on the agricultural GTFP. First, with the rapid growth of industrialization and urbanization, large-scale farmland was occupied. Second, economic growth is associated with the transfer of the labor force from the countryside to the cities and from the agriculture sector to the industrial and service sector, which led to a decline in the quality and quantity of the agricultural labor force. Furthermore, more pesticides and chemical fertilizers were used to relieve the pressure of agricultural labor outflow, increasing energy consumption per unit of output. The findings are consistent with those postulated by Wang et al., (2016) [59]. The coefficients of PALF and CD were −0.281 and 0.028, respectively, which were significant at the 1% level. The results indicate that the outflow of the rural labor force had not caused substantial harm to China's agricultural GTFP. Additionally, capital deepening played a significant positive role due to the research and development of agricultural machinery, equipment, infrastructure, and new breeds with high quality and production. Under the shortage of farmland and the rural labor force, the rational division of agriculture, rising degree of specialization, and intensification of agricultural production typically drove the sustainable growth of agriculture, which all benefited from capital deepening. Therefore, capital deepening could be an effective factor substitution for the outflow of the agricultural labor force. To confirm this speculation, this study used the FE model to further estimate the mediated effect of the variable of CD [60]. According to Table A3, capital deepening played a significant mediating role in the relationship between the proportion of agricultural labor force and agricultural GTFP, approximately 60%, which is consistent with the finding reported by Li et al., (2016) [61]. The coefficient of RD was 0.031 and significant at the 1% level; R&D investment could promote the innovation of green agricultural technology, which is in line with the conclusion of Adetutu and Ajayi (2020) [62]. The coefficient of RP was −0.089 at the 5% significance level; the agricultural production cost reduction and farmers' disposable income growth improved farmers' enthusiasm for farming. However, the variables of EL and GFS did not have a significant positive effect on agricultural GTFP, which was different from our expectations. Farmers in education had more opportunities to hold non-agricultural jobs, which caused the outflow of high-quality rural labor, thus hindering the growth of agricultural GTFP. Additionally, governmental financial support must pay further attention to the field of green and sustainable development of agriculture. Studies by Yang et al., (2017) [63] and Xu et al., (2020) [64] support this study's outcomes. According to Table 4, the coefficients of ER and AT were 0.003 and 0.015, respectively, which were significant. First, environmental regulation was an important means for production management and pollutant supervision, with the advantages of convenient operation and quick effects. Second, the abolition of agricultural tax increased farmers' enthusiasm for agricultural production. 
In addition, the coefficient of NDR was −0.063 and significant at the 1% level, indicating that the rising natural disaster ratio played a significantly negative role in agricultural GTFP. These findings are aligned with those of Zhan and Xu (2019) [65], Wang and Shen (2014) [66], and Xu et al., (2017) [5]. Finally, the variable of ED positively affected the agricultural GTFP, as China could leverage external trade to increase the import of virtual water and land to relieve the native pressure of the resource shortage, which is consistent with the recent work of [67]. From the perspective of the regional sample: First, the coefficient estimates indicating the impact of IS on agricultural GTFP were found to be negative in most regions, but positive and not statistically significant in the central regions, as the urbanization and industrialization were lower than those in the eastern region [7]. Second, the coefficients of PALF in the northeastern, eastern, and central regions were −1.128, −1.159, and −0.539, respectively, at the 1% significance level, which was consistent with the whole country. However, the variable of PALF in the western region played a weak role, which was related to the fact that the western region was the main area of rural labor force output [61]. Moreover, capital deepening played a greater role in agricultural GTFP in western regions than it did in the others, as the western region did not show obvious advantages in infrastructure and technological innovation [7]. In addition, the coefficients of EL and GFS were not significant in the four regions, which agreed with the overall regression results. At the same time, R&D in the eastern and central regions played a significant role in improving their agricultural GTFPs. The role of RP in improving the GTFP was significant in most regions but not in the western region. Although financial subsidies could reduce the price of agricultural means of production, they inevitably caused farmers to use extensive amounts of chemical fertilizers, pesticides, and agricultural plastic films to maintain agricultural output growth, which caused ECR-GHG emissions [12]. Furthermore, the abolition of agricultural tax played a greater role in promoting the GTFP in the central and western regions, due to which the incentive effect on farmers was greater in the backward and major agricultural regions, which is consistent with the finding reported by Xu et al., (2012) [68]. Finally, the rising natural disaster ratio had a significant negative impact on agricultural GTFP, which was corroborated in the four regions. This finding further summarizes that green development is an essential way to realize the sustainable development of agriculture. Conclusions and Suggestions For most countries, energy conservation and GHG emissions reduction and improvement of farmers' agricultural income are the footholds of main policies in the agricultural sector [31]. Thus, it is meaningful to discuss the relationship between green development and income growth in the agricultural sector, especially for China, which is a large agricultural country. In addition, since the regional development in China is imbalanced, a further study of the regional EKCs is necessary to determine their spatial-temporal characteristics. 
Different from the current EKC literature, this study incorporated agricultural GTFP into the EKC framework, providing insights into the planning of effective mitigation measures for diminishing natural resources and a deteriorating environment, together with the driving factors of the regional differences. Using the panel dataset of China's 31 provinces from 2000 to 2018, this study first calculated the direct and indirect energy consumption and ECR-GHG emissions during agricultural production. Next, the GML index was employed to calculate the agricultural GTFP and its decompositions. Based on the EKC hypothesis, this study used panel regression models to analyze the relationship between agricultural GTFP and agricultural income, as well as the key factors affecting agricultural GTFP. The main conclusions drawn from the empirical analysis are as follows:

(1) The overall agricultural GTFP in China increased by 20.61% from 2000 to 2018, indicating that China's agricultural green performance improved during the sample period. There were regional and provincial differences in agricultural GTFPs. The agricultural GTFPs in the central and western regions were lower than the overall average level. Additionally, the provinces with a higher growth of agricultural GTFP were mainly in the eastern region rather than in the main agricultural regions. Hence, the central and western regions should be the focus of efforts to improve the agricultural GTFP. Furthermore, technical progress was the main driving force of China's agricultural GTFP growth, while technical efficiency loss played a restrictive role.

(2) There were U-shaped relationships between the agricultural GTFP and agricultural income in the whole country and in the four regions, indicating that agricultural income gains came at the expense of the environment and the overexploitation of natural resources at the early stage. With economic growth and technical progress, agricultural GTFP and agricultural income growth could achieve a win-win scenario. Furthermore, the turning point of the overall EKC was calculated, corresponding to an agricultural income per capita of CNY 2143.54. The thresholds of the regional U-shaped curves were different, increasing from the northeastern region to the central, eastern, and western regions.

(3) The results suggest that several factors affected agricultural GTFP. First, there was a significant negative correlation between industrial structure and agricultural GTFP, indicating that the rising added value of the secondary and tertiary industries in regional GDP had a significant negative impact on agricultural GTFP. Second, owing to capital deepening, the outflow of the agricultural labor force did not cause substantial harm to the agricultural GTFP. Third, capital deepening promoted agricultural GTFP development; at the same time, it played a mediating role in the relationship between the outflow of agricultural labor and agricultural GTFP. Fourth, R&D investment, governmental financial support, the relative price, external dependence, environmental regulation, and the agricultural tax all positively affected agricultural GTFP, but the growth of the educational level and natural disasters were not conducive to agricultural GTFP development. Finally, due to imbalanced development, there were differences in the factors affecting the agricultural GTFP across the regions.

Based on the above empirical conclusions, China's agriculture sector is on a path to resource conservation and environmental friendliness.
However, different regions should implement agricultural green development measures according to their specific circumstances, as indicated by the differences in GTFP. Furthermore, local governments should strengthen regional cooperation, especially the transfer of emissions-reduction technologies to the central and western regions. Since the rapid development of industrialization and urbanization negatively affected the agricultural sector, it is of great significance to optimize the efficient allocation of resources between industries, build an efficient connection between agriculture and industry, and encourage the technologies and capabilities of the industrial and service sectors to flow into the agriculture sector. Technical progress is the main contributor to agricultural GTFP growth across China and in each region. Therefore, increasing investment and technological innovation can support energy conservation and emissions reduction while sustaining growth in the agricultural economy. China should take measures to optimize agricultural machinery and facilitate the research and development of energy-saving and emissions-reduction technologies. Furthermore, the Chinese government could provide subsidies to encourage farmers to purchase and use agricultural machinery that can save energy and reduce emissions, with the purpose of phasing out energy-intensive and emissions-intensive agricultural machinery. Meanwhile, the government needs to accelerate the production and effective application of energy-efficient agricultural machinery, while also paying attention to the degree of farmers' acceptance of such machinery. Thus, it is necessary to actively cultivate scientific and technological talents, ensure that advanced technologies for agricultural green development are effectively absorbed, and adopt region-specific policies to attract high-quality, skilled agricultural labor. Besides agricultural machinery, the use of chemical fertilizers and pesticides is also a main cause of ECR-GHG emissions. Hence, China should encourage the use of organic fertilizers and pesticides, as well as agricultural plastic film recycling. In addition, China can strengthen international cooperation to expand the use of organic agricultural technologies, such as biological pesticides and fertilizers. Eventually, supervising and providing subsidies for enterprises to produce organic fertilizers will be necessary. This study mainly focused on the relationship between green development and income growth in the agricultural sector based on historical data. However, the trend prediction for the energy consumption and ECR-GHG emissions of agricultural production has not been taken into consideration, although it is of significance in pushing for China's carbon neutrality by 2060. Future research can take up this aspect.

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Conflicts of Interest: The authors declare no conflict of interest.
Bilateral comparison of irradiance scales between PMOD/WRC and PTB for longwave downward radiation measurements In this work, comparison measurements are presented between two independently realised and characterised blackbody cavities which serve as irradiance standards, namely the well-established Tilted Bottom Cavity BB2007 and the new Hemispherical Blackbody (HSBB). Both are used to realise the unit for thermal infrared irradiance. The BB2007 at the Physikalisch-Meteorologisches Observatorium Davos/World Radiation Center has long provided the reference for longwave downward radiation within the Baseline Surface Radiation Network. Longwave downward radiation is constantly measured at multiple stations around the world using specific broadband infrared radiometers with a hemispherical acceptance angle, for example pyrgeometers. The HSBB, developed at the Physikalisch-Technische Bundesanstalt (PTB) in recent years, was specifically designed to calibrate radiometers with a hemispherical acceptance angle which measure longwave downward radiation. The HSBB is directly traceable to the Radiation Temperature Scale of PTB and, in turn, via this scale to the SI. Comparison measurements between the BB2007 and HSBB in this work were carried out with three transfer instruments: the dedicated Temperature Stabilised Radiation Thermometer, an Infrared Integrating Sphere (IRIS) instrument and a Kipp and Zonen CG4 pyrgeometer. The results show good agreement with respect to the target irradiance uncertainty of 0.5 W m−2 provided by the HSBB. This study thus supports and validates the traceability of atmospheric longwave downward radiation to the SI linking measurements performed with the World Infrared Standard Group of pyrgeometers to the traceability of the reference blackbody BB2007 using IRIS instruments as the transfer standard. Introduction Longwave downward radiation refers to the infrared radiation that is emitted in and transmitted through the atmosphere and is incident on the surface of the Earth. It is an important quantity for the surface energy budget of the Earth [1]. Furthermore, it is closely linked to the Earth's greenhouse effect and is therefore particularly interesting for climate research. Longwave downward radiation measurements take place outdoors with specific broadband infrared radiometers that have a hemispherical acceptance angle and are sensitive in the relevant wavelength range from approximately 4 µm to 50 µm. Pyrgeometers are mainly used for this, and measurements take place at many weather stations and research institutes around the globe. These measurements are coordinated, for example, by national weather services or worldwide by the Baseline Surface Radiation Network (BSRN) [2]. The reference for tracing longwave downward radiation measurements within the BSRN to the SI is provided by the Tilted Bottom Cavity BB2007 [3] which was developed at the Physikalisch-Meteorologisches Observatorium Davos/World Radiation Center (PMOD/WRC) and is in operation there. The BB2007 therefore plays a central role in the comparability of longwave downward radiation to other quantities of the surface energy budget of the Earth. Furthermore, PMOD/WRC developed the Infrared Integrating Sphere (IRIS) instruments [4] -windowless transfer standard radiometers featuring a hemispherical acceptance angle and a broad spectral sensitivity -to connect pyrgeometer measurements to the irradiance provided by the BB2007 as the reference. 
In addition, PMOD/WRC operates the World Infrared Standard Group (WISG), a set of four pyrgeometers with longterm stability that serve as an international standard, and carries out various international comparison measurements with the radiometers in operation [5]. The Physikalisch-Technische Bundesanstalt (PTB) is the national metrology institute of Germany and has a long history of top-level radiation thermometry dating back to the end of the 19th century. Non-contact temperature measurements in the range from −170 • C to 962 • C are realised at PTB with traceability to the fixed points of the International Temperature Scale of 1990 and thus to the SI [6] and at temperatures above 962 • C as thermodynamic temperature directly traceable to the SI [7]. The Radiation Temperature Scale of PTB is verified through various international comparison measurements such as within the scope of the Traceability in Infrared Radiation Thermometry project [8] or by comparison to the Radiation Temperature Scale of the National Physical Laboratory [9]. Four heat-pipe blackbodies of different heat-pipe materials are operated at PTB and serve as national standards for the Radiation Temperature Scale at PTB, covering the temperature range from −60 • C to 962 • C. They are traceable via standard platinum resistance thermometers [6]. Monte Carlo simulations to determine the effective emissivity of these blackbody cavities are performed with high precision. This is achieved by characterising the blackbody wall materials via spectral emissivity measurements at PTB which are traceable to the SI [10]. Outline of the measurements The traceability of the BB2007 is provided by contact thermometry and Monte Carlo simulations [3]. The objective of this work is to validate the traceability of the BB2007 by performing comparison measurements against an independent irradiance standard that has a target irradiance uncertainty of 0.5 W m −2 . Using those comparison measurements, a second independent traceability path of the BB2007 is established. The comparison measurements may help re-evaluate and reduce the uncertainties in the calibration chain of global longwave downward radiation measurements. More specifically, comparison measurements between the BB2007 and the Hemispherical Blackbody (HSBB), which was developed, built and brought into operation at PTB [11], are presented. In the first stage of the measurements, the HSBB was calibrated by comparison to an ammonia heat-pipe blackbody which is a national standard for the Radiation Temperature Scale at PTB. In the second stage, the HSBB was brought to PMOD/WRC for blackbody comparison measurements between the BB2007 and HSBB. The chain for tracing longwave downward radiation measurements within the BSRN to the BB2007 is presented in detail in figure 1. Furthermore, the different traceability paths of the BB2007 and HSBB are illustrated. Experimental setup and procedure A detailed description of the BB2007 can be found in [3]. While the BB2007 is a well-established instrument, the HSBB was developed more recently. As the HSBB is a novel blackbody for the measurements presented in this work, a brief description of the HSBB is given in section 3.1. The HSBB The HSBB was specifically designed to calibrate pyrgeometers and other broadband infrared radiometers with a hemispherical acceptance angle. Details of the development, operation and Monte Carlo simulations of the HSBB are described in [11]. 
In short, the HSBB consists of a black coated cone in combination with a highly specular reflecting golden hemisphere. The cone of the HSBB that was employed for the measurements in this work is coated with Nextel 811-21, and this blackbody is also referred to as HSBB1 if it needs to be distinguished from the HSBB2 [11]. For simplicity, the acronym HSBB is used rather than HSBB1 in the following. Four preaged and calibrated Pt100 resistance thermometers are used to The HSBB consists of a black coated cone in combination with a highly specular reflecting golden hemisphere and is designed to specifically calibrate broadband infrared detectors with a hemispherical acceptance angle. Marked are the positions of the Pt100 thermometers at the cone vertex (1), cone edge (2) and the hemisphere (3). In the picture, an IRIS instrument (4) is placed below the opening of the HSBB. measure the temperatures of the HSBB at different positions, namely at the cone vertex, the cone edge, the hemisphere on the left and the hemisphere on the right. The Pt100 thermometers of the HSBB are read out by a resistance readout measurement device. Temperatures are calculated from the resistances according to individual calibration of each thermometer. A rendered sectional view of the HSBB is presented in figure 2. For all measurements presented in this work, the HSBB had an opening aperture diameter of 40 mm. The HSBB was developed to provide a target irradiance uncertainty of 0.5 W m −2 and was specifically designed to show an almost constant effective emissivity for different viewing conditions, i. e., for different opening or acceptance angles. The effective emissivity results for normal incidence and the hemispherical opening angle determined by Monte Carlo simulations are the same within their uncertainties [11]. These results are supported by measurements with a radiation thermometer under angles of observation from 0 • to 30 • with respect to the optical axis of the HSBB. The measurement results show that the radiation temperature of the HSBB is independent of the angle of observation [11]. The difference in radiation temperature between normal incidence and the hemispherical opening angle is considered negligible in relation to other uncertainty contributions in measurements with the HSBB. Dedicated radiation thermometer For the comparison measurements at normal incidence, a Heitronics TRT IV.82-type radiation thermometer was used which measures at normal incidence and is spectrally sensitive in the wavelength range from 8 µm to 14 µm. It was inserted into a dedicated temperature-controlled housing with good thermal isolation which had been specifically built for these measurements. The radiation thermometer is therefore called the Temperature Stabilised Radiation Thermometer (TSRT). It is shown in figure 3(a). Throughout all measurements, the housing was actively temperature-controlled at 23 • C in order to prevent possible changes in the signal of the radiation thermometer due to changes in room temperature and therefore ensure reproducible measurement conditions. It was necessary to use a tilted mirror mounted in front of the radiation thermometer, as shown in figure 3(b), because the BB2007 and HSBB were operated face down and the radiation thermometer needed to detect the radiation beam horizontally. The mirror was also used in the measurements between the HSBB and the ammonia heat-pipe blackbody. 
The calibration of the reflectivity of the mirror in the wavelength range in which the radiation thermometer is sensitive is implicitly included in the calibration of the HSBB against the ammonia heat-pipe blackbody. General measurement procedure Following the outline described in section 2, the actual measurements were carried out in two stages: • First stage, at PTB: calibration of the HSBB by comparison to the ammonia heat-pipe blackbody; performed using the TSRT • Second stage, at PMOD/WRC: comparison between the BB2007 and HSBB; performed using the TSRT, IRIS instrument and pyrgeometer During the measurements, all blackbodies -the ammonia heat-pipe blackbody, the BB2007 and HSBB -were operated between −20 • C and 20 • C, representing the relevant temperature range for atmospheric longwave downward radiation. The corresponding dominant wavelength spectrum ranges from approximately 4 µm to 50 µm. To avoid icing and water condensation from air humidity, all blackbody cavities were purged with dry air or nitrogen. The purging of the HSBB was set to 5 in the unit l min −1 for the measurements with the TSRT. During the relevant measurement periods with the pyrgeometer and IRIS instrument, the purging of the HSBB was briefly turned off to avoid cooling the optical surfaces of the radiometers. For each measurement, the two blackbodies involved -either the ammonia heat-pipe blackbody and HSBB or the BB2007 and HSBB -were operated at the same nominal temperature. Measurements with the TSRT were always performed as enclosed measurements: during the first stage, two measurements at the ammonia heat-pipe blackbody enclosed one measurement at the HSBB, and during the second stage, two measurements at the HSBB enclosed one measurement at the BB2007. This was done to account for possible drifts of the radiation thermometer which served as the transfer instrument. Measurements at PTB To ensure the comparability of the measurements with the TSRT between the ammonia heat-pipe blackbody and HSBB, an additional aperture resembling that of the ammonia heat-pipe blackbody was placed below the HSBB. Both apertures were black coated, actively temperature-controlled at 23 • C and had an opening diameter of 30 mm. This setup improves the comparability as it has the same influence on the measured radiation temperature of both blackbodies due to the size-of-source effect [12] or environment factor which is inherent to any radiation thermometer. Furthermore, the room temperature of the laboratory in which the measurements were carried out was actively temperature-controlled at 23 • C. Thereby, the radiation temperature of the HSBB was determined with respect to the cone vertex temperature of the HSBB. The data points denoted by 'First measurements' and 'Second measurements' refer to the measurements carried out before and after the comparison measurements performed at PMOD/WRC, respectively. To calibrate the HSBB, comparison measurements were performed between the HSBB and the ammonia heat-pipe blackbody at nominal temperatures of −20 • C, 0 • C and 20 • C before measurements took place with the BB2007. After the measurements against the BB2007, measurements of the HSBB against the ammonia heat-pipe blackbody were performed at all nominal temperatures at which the HSBB had been operated during the measurements at PMOD/WRC, including −20 • C, 0 • C and 20 • C. This procedure was done to preclude that transportation might have affected the calibration of the Pt100 thermometers or the coatings of the HSBB. 
In fact, the measurement results from beforehand could be well reproduced afterwards with deviations of less than 0.012 K. The results of the calibration of the HSBB against the ammonia heat-pipe blackbody are shown in figure 4. The fact that the measurement results from beforehand could be well reproduced implicitly verifies the reproducibility of the Pt100 thermometer measurements. Separate comparison measurements of the resistance readout device against a standard for electrical resistance at PTB were carried out before it was brought to PMOD/WRC. The results could also be well reproduced afterwards with deviations of less than 0.010 K after conversion of electrical resistance to temperature for measurements with Pt100 thermometers. The deviations are small in comparison to the measurement uncertainties presented in section 6.

Measurements at PMOD/WRC

The setup for the blackbody comparison measurements between the BB2007 and HSBB with the TSRT is similar to the setup described in section 3.4.

Figure 5. Setup for the comparison measurements between the BB2007 and HSBB. The TSRT measurements were carried out with the additional temperature-controlled aperture below the BB2007 and below the HSBB. In addition to the TSRT measuring at normal incidence, comparison measurements were also performed with an IRIS instrument and a Kipp and Zonen CG4 pyrgeometer which have a hemispherical acceptance angle. The picture is not drawn to scale.

The additional temperature-controlled aperture was used below the BB2007 as well as below the HSBB. Besides the measurements performed using the TSRT at normal incidence and in the wavelength range from 8 µm to 14 µm, measurements between the BB2007 and HSBB were also performed with an IRIS instrument and a Kipp and Zonen CG4 pyrgeometer which served as broadband transfer instruments with a hemispherical acceptance angle. The schematic setup for the measurements is depicted in figure 5. The TSRT measurements between the BB2007 and HSBB were carried out by calibrating the TSRT on site against the HSBB which is the calibrated transfer standard. Thereby, the radiation temperature of the BB2007 was compared to the radiation temperature of the HSBB. In contrast, the comparison measurements performed using the IRIS instrument and pyrgeometer were performed in a slightly different procedure. The IRIS instrument and pyrgeometer were calibrated at the BB2007 in their usual calibration procedure [3,4], and the obtained calibration coefficients of the IRIS instrument and pyrgeometer were used for irradiance measurements at the HSBB. The irradiances of the HSBB, known from the calibration of the HSBB against the ammonia heat-pipe blackbody, were compared to the irradiances detected by the IRIS instrument and the pyrgeometer. In accordance with the calibration procedures of the IRIS instrument and the pyrgeometer at the BB2007, measurements with the IRIS instrument at the HSBB were carried out at nominal blackbody temperatures of 0 °C, 5 °C, 10 °C, 15 °C and 20 °C as well as at −20 °C, −10 °C, 0 °C and 10 °C with the pyrgeometer at the HSBB. Measurements with the TSRT were carried out in the entire relevant temperature range at nominal blackbody temperatures of −20 °C, −10 °C, 0 °C, 10 °C and 20 °C. For redundancy, the measurements with the TSRT were repeated once in an independent measurement series.
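The enclosed (bracketing) TSRT measurement scheme described above can be illustrated with a minimal sketch. The function below is a generic linear-drift correction and is only an assumed illustration of the principle, not the evaluation code actually used; the names and example numbers are invented.

def bracketed_difference(t_ref1, s_ref1, t_dut, s_dut, t_ref2, s_ref2):
    """Signal difference DUT - reference, with the reference reading linearly
    interpolated to the time of the DUT reading so that a slow drift of the
    transfer radiometer (here: the TSRT) cancels to first order."""
    frac = (t_dut - t_ref1) / (t_ref2 - t_ref1)
    s_ref_at_dut = s_ref1 + frac * (s_ref2 - s_ref1)
    return s_dut - s_ref_at_dut

# Example: reference blackbody read at t = 0 and t = 60 min, device under test at t = 30 min
delta = bracketed_difference(0.0, -0.050, 30.0, -0.012, 60.0, -0.046)
print(f"drift-corrected difference: {delta:+.3f} K")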
Contrary to the IRIS instrument, the housing of the pyrgeometer was actively temperature-controlled by a temperature-controlled plate at certain nominal temperatures following the standard calibration procedure at the BB2007. The housing temperatures of the IRIS instrument and the pyrgeometer were read out by internal sensors. The data points showing which HSBB irradiance was present for which housing temperature are plotted in figure 6.

Data evaluation scheme

In the following, the main evaluation procedure is presented with a description of the evaluation equations. For clarity, smaller corrections such as drift corrections of the TSRT between the measurements are not explicitly mentioned in the formulas. The radiation temperature of the HSBB is given by T_rad,HSBB and is calculated in (1) from the comparison against the ammonia heat-pipe blackbody with the TSRT. Thereby, the calibration of the radiation temperature of the HSBB was performed with respect to the cone vertex temperature of the HSBB. The SI-traceable radiation temperature of the ammonia heat-pipe blackbody is denoted by T_90,heat-pipe. The signals of the TSRT at the HSBB and at the ammonia heat-pipe blackbody are denoted by T^HSBB_TSRT and T^heat-pipe_TSRT, respectively. The radiation temperature of the HSBB transferred to the BB2007 is calculated in (2) in analogy to (1). The signals of the TSRT at the BB2007 and at the HSBB during these measurements are denoted by T^BB2007_TSRT and T^HSBB_TSRT, respectively. The data in figure 7, exemplarily shown for a nominal blackbody temperature of 0 °C, indicate the good stability of the temperatures of the Pt100 thermometers of the HSBB over all measurements at PTB and PMOD/WRC within their absolute standard measurement uncertainty of 0.041 K. The cone vertex temperature serves as the reference temperature of the HSBB and is denoted by T_cone-vertex in the following. In the evaluation, the small changes in the cone vertex temperature during the measurements of the BB2007 against the HSBB with the TSRT, IRIS instrument and pyrgeometer compared to the calibration of the HSBB against the ammonia heat-pipe blackbody are accounted for. Thereby, only statistical uncertainties of the cone vertex temperature are used. For typical values corresponding to (2), exemplarily at a nominal blackbody temperature of 0 °C, the uncertainties of T^BB2007_TSRT and T^HSBB_TSRT are statistical (Type A) uncertainties only, while the uncertainty of T_rad,HSBB is an absolute (Type B) uncertainty. Due to the HSBB not having a perfect effective emissivity of 1.0, the radiation emitted by the environment is partly reflected by the HSBB. In fact, the laboratory room temperature at PMOD/WRC showed large variations. However, owing to the additional black coated aperture being consistently temperature-controlled at 23 °C and thus the temperature of most of the area around the blackbodies radiating towards them being well defined, a correction for the TSRT measurements is not necessary. In contrast, it is necessary to correct for the radiation that is emitted by the housings of the IRIS instrument and the pyrgeometer. This radiation is then reflected from the HSBB back to the radiometers. The correction is calculated for the different housing temperatures shown in figure 6. The irradiances onto the IRIS instrument and pyrgeometer are corrected downward by up to 0.74 W m−2, apart from one data point that is negligibly corrected upward.
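A plausible explicit form of the comparison relations (1) and (2) described above, assuming the TSRT acts as a pure comparator whose signal difference is added to the known reference radiation temperature (an assumed reconstruction, not a quotation of the original equations), is

\[ T_{\mathrm{rad,HSBB}} \;=\; T_{90,\mathrm{heat\text{-}pipe}} \;+\; \left( T^{\mathrm{HSBB}}_{\mathrm{TSRT}} - T^{\mathrm{heat\text{-}pipe}}_{\mathrm{TSRT}} \right), \qquad (1) \]

\[ T_{\mathrm{rad,BB2007}} \;=\; T_{\mathrm{rad,HSBB}} \;+\; \left( T^{\mathrm{BB2007}}_{\mathrm{TSRT}} - T^{\mathrm{HSBB}}_{\mathrm{TSRT}} \right). \qquad (2) \]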
To perform the correction in the evaluation, the radiation temperature of the HSBB that is obtained from the comparison against the ammonia heat-pipe blackbody is corrected by T^IRIS_corr,HSBB and T^Pyrg._corr,HSBB for the IRIS instrument and pyrgeometer, respectively. The calculation of the correction terms is given in (3). The conversion from the integrated radiances in (3) to radiation temperatures is done numerically with an accuracy of better than 0.0003 K. The integrated radiances in (3) are defined in (4) and (5). The radiance of the HSBB during the comparison against the ammonia heat-pipe blackbody is given in (4), while the radiances of the HSBB during the measurements of the IRIS instrument and the pyrgeometer are denoted by L^IRIS_Planck,HSBB and L^Pyrg._Planck,HSBB, respectively, and are given in (5). The integrated radiances correspond to the wavelength range in which the TSRT is sensitive because the temperature correction needs to be applied to the results of the comparison measurements against the ammonia heat-pipe blackbody with the TSRT. From these results, the irradiances for the IRIS instrument and pyrgeometers are subsequently calculated. The effective emissivity ε_eff of the HSBB corresponds to normal incidence for TSRT measurements and is obtained from Monte Carlo simulations [11]. Its value amounts to ε_eff = 0.99529 with u(k = 1) = 0.00099 and refers to the TSRT measurements with the additional aperture in the focus of the TSRT. The housing temperatures of the IRIS instrument and the pyrgeometer are denoted by T_housing,IRIS and T_housing,Pyrg., respectively. After correction of the radiation temperature of the HSBB given in (6), the irradiances of the HSBB are obtained according to (7). The irradiances for the IRIS instrument and the pyrgeometer are denoted by E^IRIS_HSBB and E^Pyrg._HSBB, respectively, and the Stefan-Boltzmann constant is denoted by σ. The use of (7) is appropriate due to the HSBB showing the same effective emissivity for normal incidence as well as the hemispherical opening angle and the coatings having spectrally flat optical properties [11]. The Stefan-Boltzmann law is used for the calculation of the irradiance in accordance with the common calibration procedure for IRIS instruments with the BB2007 [3]. While the IRIS instrument is a windowless radiometer with spectrally flat responsivity and therefore very suitable for blackbody calibration, pyrgeometers often have spectral transmission features due to their silicon domes and are typically calibrated outdoors [4]. The correction values of the radiation temperature and of the irradiance of the HSBB can be found in the appendix, listed in tables A1 and A2 for the IRIS instrument and pyrgeometer, respectively. The corresponding uncertainties can be found in section 6.

Results

In this section, the results of the measurements are described. The agreement in radiation temperature and irradiance of the BB2007 and HSBB is explained in this section and is assessed in terms of the target irradiance uncertainty of 0.5 W m−2. The corresponding uncertainty budget is presented in the following section. The HSBB was established as a fit-for-purpose reference that provides an irradiance uncertainty of 0.5 W m−2 or better. This value was considered necessary to obtain meaningful blackbody comparison measurement results and to improve the traceability of the BB2007. In fact, the HSBB achieved even better uncertainties.
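Since (7) applies the Stefan-Boltzmann law and (3)-(5) account for radiation from the radiometer housing that is reflected by the HSBB, the order of magnitude of this correction can be checked with a minimal sketch. The sketch below is a simplified, full-spectrum (broadband) approximation of the band-limited Planck integrals used in the paper; the function name and all temperatures other than ε_eff = 0.99529 are illustrative assumptions.

# Simplified broadband estimate of the housing-reflection correction (assumption:
# full-spectrum Stefan-Boltzmann instead of the band-limited integrals of (3)-(5)).
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def hsbb_irradiance(t_rad_hsbb_k, t_housing_k, eps_eff=0.99529):
    """Irradiance seen by a hemispherical radiometer below the HSBB:
    the emitted part plus the reflected contribution of the radiometer housing."""
    emitted = eps_eff * SIGMA * t_rad_hsbb_k**4
    reflected = (1.0 - eps_eff) * SIGMA * t_housing_k**4
    return emitted + reflected

# Example: HSBB at 0 degC; radiometer housing at 10 degC instead of the 23 degC
# environment assumed during the calibration against the heat-pipe blackbody.
e_cal = hsbb_irradiance(273.15, 296.15)
e_meas = hsbb_irradiance(273.15, 283.15)
print(f"downward correction ~ {e_cal - e_meas:.2f} W m-2")  # a few tenths of W m-2

This yields a correction of a few tenths of W m−2, consistent with the magnitudes quoted above.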
The original target uncertainty of 0.5 W m −2 is considered as a reasonable benchmark value for the scale difference investigated here. The results from the TSRT measurements between the BB2007 and HSBB are shown in figure 8 and are given as the difference in the radiation temperature between the BB2007 and HSBB. Shown are the results of the two independent measurement series. For the TSRT measurements, the differences range from −0.116 K, averaged at a nominal temperature of 20 • C, to 0.086 K, averaged at −20 • C. The differences for the temperatures from −20 • C to 10 • C are within the target temperature uncertainty corresponding to the target irradiance uncertainty of 0.5 W m −2 . The target uncertainty is indicated by the grey area in figure 8. The differences for the temperature of 20 • C are outside the target uncertainty, but the uncertainty bars of the differences extend into the target uncertainty. For each nominal temperature, the corresponding differences are in agreement with each other within their uncertainties and are therefore considered well reproducible. In terms of reproducibility, the largest deviation between two differences for the same nominal temperature was found at −20 • C with a deviation of 0.050 K. The results from the measurements with the IRIS instrument and pyrgeometer are shown in figure 9. Here, the results are obtained by subtracting the irradiances of the HSBB calculated according to (7) from the irradiances of the HSBB which were detected by the IRIS instrument and the pyrgeometer, respectively, with their calibration coefficients obtained from measurements at the BB2007 as described in section 3.5. The differences found in the measurements performed using the IRIS instrument are all within the target irradiance uncertainty of 0.5 W m −2 . Here, the differences range from −0.17 W m −2 with u(k = 1) = 0.55 W m −2 at a 15 • C nominal blackbody temperature to 0.37 W m −2 with u(k = 1) = 0.84 W m −2 at 0 • C. It should be noted that, due to combined uncertainties, the uncertainties here are mostly larger than 0.5 W m −2 . For the measurements performed using the pyrgeometer, the differences range from −0.63 W m −2 with u(k = 1) = 0.46 W m −2 at 10 • C to 0.54 W m −2 with u(k = 1) = 0.50 W m −2 at −20 • C. Four out of six differences found in the pyrgeometer measurements are within the target uncertainty. The remaining differences, namely one each at nominal temperatures of −20 • C and 10 • C, are outside the target uncertainty. The uncertainty bars of these differences, however, extend into the target uncertainty. The differences corresponding to figures 8, 9(a) and (b) can be found in the appendix listed in tables A3, A4 and A5, respectively. Uncertainty budget The main individual uncertainty components are listed in table 1. The dominant uncertainty contribution of the radiation temperature -and subsequently the irradiance of the HSBBcorresponds to the uncertainty associated with the ammonia heat-pipe blackbody against which the HSBB was calibrated. In table 1, 'TSRT measurements between BB2007 and HSBB' corresponds to additional uncertainties for the TSRT comparison measurements between the BB2007 and HSBB, i. e., the influence of the size-of-source effect of the TSRT and the influence of the aperture's position. 
The items 'IRIS measurements between BB2007 and HSBB' and 'Pyrgeometer measurements between BB2007 and HSBB' correspond to the uncertainties associated with the calibration coefficients of the IRIS instrument and pyrgeometer, respectively, which were determined at the BB2007 and used for measurements at the HSBB. To correct the radiation temperature of the HSBB for IRIS and pyrgeometer measurements according to (3), the uncertainty of the effective emissivity of the HSBB amounting to 0.00099 is used. The uncertainty of the temperature measurement with the Pt100 thermometers of the HSBB is 0.041 K, and that of the room temperature is 1.1 K. The calculation of the uncertainty of the integrated radiances in (4) and (5) is based on [13] and is presented in (8). The correction of the reflection term was applied to all values for consistency even though it may have not been necessary for the values with housing temperatures close to 23 • C. Future measurements will reveal, depending on typical housing temperatures, whether the correction is necessary or not or if it would be useful to transform the correction into an uncertainty contribution only. Discussion and conclusion The objective of the presented measurements was to establish a second traceability path and at the same time to independently validate the existing traceability of the well-established BB2007 with the help of the new HSBB. To do so, comparison measurements between the BB2007 and HSBB were carried out. For the comparison measurements performed using the IRIS instrument, very good agreement was found with all differences lying within the target irradiance uncertainty of 0.5 W m −2 of the HSBB. For the comparison measurements performed using the TSRT and the pyrgeometer, the majority of the differences was within the target irradiance uncertainty of the HSBB. Small trends with decreasing values for increasing nominal temperatures of the BB2007 were obtained with the trends crossing the 0 K and 0 W m −2 line between −10 • C and 0 • C nominal temperature for both the TSRT and pyrgeometer measurements, respectively. The trends are insignificant for this work. The cause of the trends is difficult to fathom and will thus be the subject of further investigation. To conclude, the comparison measurements performed using the IRIS instrument as the relevant instrument for the irradiance scale transfer can be regarded as highly successful. Based on these and the very good results from the measurements performed using the pyrgeometer, it can be inferred that the BB2007 and HSBB are equally well suited for the uniform irradiation of radiometers with a hemispherical acceptance angle. Overall, no systematic errors in the existing traceability of the BB2007 could be identified. The results of this study will support the process initiated by the World Meteorological Organization to redefine the traceability chain for atmospheric longwave downward radiation that is currently based on the WISG with an estimated uncertainty of 10 W m −2 to one based on the BB2007 via IRIS instruments as the transfer standard with an uncertainty of 2 W m −2 , as proposed in [5]. As a result, measurement uncertainties associated with the WISG pyrgeometers will be reduced in the future. Several attempts have been made so far to describe the radiation and energy transfers between the Earth, the Sun and the atmosphere by means of simulations. Relatively large discrepancies arise from the results of different simulation approaches for longwave downward radiation [14]. 
Reduced measurement uncertainties may increase the comparability between measurements and simulations, help identify the appropriate simulation approaches and lead to resolving the differences. In the future, more measurements may be undertaken with the HSBB in a round robin-like system in order to improve the overall consistency of longwave downward radiation measurements within the BSRN. Pyrgeometers of different types, IRIS instruments and Absolute Cavity Pyrgeometers [15] may be employed. In summary, comparison measurements were carried out between two independently traceable reference blackbodies, the BB2007 operated by PMOD/WRC and the HSBB operated by PTB. Good agreement was found, representing the successful bilateral comparison of the irradiance scales of PMOD/WRC and PTB for longwave downward radiation measurements. Therewith, the existing traceability of the BB2007 to the SI could be verified to a highly satisfactory overall degree. Through this comparison, a second independent traceability path for the BB2007 was established. The results can be regarded as a major achievement on the road towards reducing measurement uncertainties and improving the significance and impact of longwave downward radiation measurements. Data availability statement Any data that support the findings of this study are included within the article. Acknowledgments This project, which is part of 16ENV03 METEOC-3 and 19ENV07 METEOC-4, has received funding from the EMPIR programme co-financed by the Participating States and from the European Union's Horizon 2020 research and innovation programme.
White Dwarfs as Advanced Physics Laboratories. The Axion Case

The shape of the luminosity function of white dwarfs (WDLF) is sensitive to the characteristic cooling time and, therefore, it can be used to test the existence of additional sources or sinks of energy such as those predicted by alternative physical theories. However, because of the degeneracy between the physical properties of white dwarfs and the properties of the Galaxy, the star formation history (SFH) and the IMF, it is almost always possible to explain any anomaly as an artifact introduced by the star formation rate. To circumvent this problem there are at least two possibilities, the analysis of the WDLF in populations with different histories, like disc and halo, and the search for effects not correlated with the SFH. These procedures are illustrated with the case of axions.

Introduction

Non-standard theories often predict the existence of particles whose existence and properties cannot be tested in terrestrial laboratories as a consequence of the large energies involved. One way to alleviate the problem is the use of stars to constrain their properties, either looking for a decay or a change of these properties during their propagation, or examining the perturbations they introduce into the normal evolution of stars (Raffelt, 1996). Because the evolution of white dwarfs is a relatively simple process of cooling, the basic ingredients necessary to predict their behavior are well identified, and there is a solid observational background to test the theoretical results, these stars have proved to be excellent laboratories for testing new ideas in Physics. This procedure has allowed bounds to be put on the mass of axions (Raffelt, 1986; Isern et al., 1992), on the neutrino magnetic momentum (Blinnikov & Dunina-Barkovskaya, 1994), the secular drift of the Newton gravitational constant (Vila, 1976; García-Berro, E., et al., 1995), the density of magnetic monopoles (Freese, 1984) and WIMPs (Bertone & Fairbairn, 2008), as well as constraints on properties of extra dimensions (Malec & Besiada, 2001), on dark forces (Dreiner et al., 2013), on modified gravity (Saltas et al., 2019), and on the formation of black holes by high energy collisions (Giddings & Mangano, 2008). In this talk only axions will be discussed. According to the standard theory, white dwarfs are the last stage of the evolution of low and intermediate mass stars. Since their core is almost completely degenerate, they cannot obtain energy from nuclear reactions and their evolution is just a process of contraction and cooling. The main sources of energy are the gravo-thermal readjustment of the structure, represented by the first two terms of the r.h.s. of Equation 1.1, the gravitational settling of heavy species like ²²Ne, g_s, the latent heat and sedimentation associated with crystallization, times the crystallization rate ṁ_s, and any other exotic source or sink of energy (ε_e). The l.h.s. of Equation 1.1 contains the sinks of energy, photons and neutrinos. This equation has to be complemented with a relationship connecting the temperature of the core with the luminosity of the star. Typically L ∝ T_c^α with α ≈ 2.5−2.7.

Energy losses by neutrinos

The importance of neutrino losses in the evolution of white dwarfs was recognized early by Vila (1968) and Savedoff et al., (1969).
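A schematic version of Equation 1.1, written here from the description above as an assumed reconstruction of the standard white dwarf energy balance (sign conventions and the exact grouping of terms may differ from the original), is

\[ L_{\gamma} + L_{\nu} \;=\; -\int_{0}^{M_{\rm WD}} C_{v}\,\frac{dT}{dt}\,dm \;-\; \int_{0}^{M_{\rm WD}} T\left(\frac{\partial P}{\partial T}\right)_{V}\frac{dV}{dt}\,dm \;+\; g_{s} \;+\; (l_{s}+e_{s})\,\dot{m}_{s} \;+\; \epsilon_{e}, \qquad (1.1) \]

where the first two terms on the right-hand side describe the gravo-thermal readjustment of the structure, g_s is the energy released by the settling of ²²Ne, l_s and e_s are the latent heat and the sedimentation energy released per unit of crystallized mass at rate ṁ_s, and ε_e stands for any exotic source or sink of energy.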
During the first stages of cooling, when the star is still very hot, the energy losses are dominated by the plasma and photo neutrino processes and, as soon as the temperature decreases, by neutrino bremsstrahlung ( Iben & Tutukov 1984;D'Antona & Mazzitelli, 1989). Besides their role in the cooling, neutrinos force the thermal structures produced by AGB stars to converge towards a unique one, guaranteeing in this way the uniformity of models dimmer than log(L/L ) < −1.5 . However, in spite of the enormous progress experienced by the physics of neutrinos, several questions still remain. For instance, are neutrino Dirac or Majorana particles, do sterile neutrinos exist, which is their mass spectrum, do they have magnetic momentum?. This last problem, for instance, is specially important since the existence of a magnetic dipole momentum can notably enhance the neutrino losses in white dwarfs (Blinnikov & Dunina-Barkovskaya, 1994). Influence of the DA non-DA character The luminosity strongly depends on the properties of the envelope (mass, chemical composition and structure) as well as on the total mass and radius of the white dwarf. The main characteristics of the envelope is its tendency to become stratified, the lightest elements tending to be placed on top of the heaviest ones as a consequence of the strong gravitational field. However, this behavior is counterbalanced by convection, molecular diffusion and other processes that tend to restore the chemical homogeneity. In any case, the ∼ 80% of white dwarfs shows the presence of H-lines in their spectra while the remaining ∼ 20% not. This proportion is not constant along the cooling sequence. The first ones are generically called DAs and the second ones non-DAs. The most common interpretation is that the DAs have a double layered envelope made of H (M H ∼ 10 −4 M WD ) and He (M He ∼ 10 −2 M ) while the non-DAs have just a single He layer or an extremely thin H layer. An additional complication is that the initial conditions at the moment of formation are not well known and for the moment it is not possible to disentangle which part of this behavior is inherited and which part is evolutive, although probably both are playing a role (Althaus et al., 2010). In principle, it is possible to adjust the parameters of the AGB progenitors to obtain 25% of white dwarfs completely devoided of the hydrogen layer. But, since the relative number of DA/non-DA stars changes during their evolution, a mechanism able to transform this character must exist (Shipman 1997). It is commonly accepted that DAs start as the central star of a planetary nebula and asteroseismological data suggest they are born with a hydrogen layer of a mass in the range of 10 −8 −10 −4 M . As the star cools down, the outer convection zone deepens and, depending on the mass, completely mixes the hydrogen layer with the larger helium layer in such a way that DAs turn out into non-DAs and, consequently, the ratio DA/non-DA decreases with the effective temperature. The evolution of non-DAs is more complex. They are born as He-rich central stars of planetary nebulae and, as they cool down they look as PG 1159 stars first and DO after. The small amount of hydrogen present in the envelope floats up to the surface and when the temperature is ∼ 50, 000 K forms an outer layer thick enough to hide the helium layer to the point that the star becomes a DA. 
When the temperature goes below 30,000 K, the convective helium layer engulfs the hydrogen one and the white dwarf recovers the non-DA character, now as a DB, and, as it continues to cool down, it becomes a DC. Notice that a fraction of DCs has a DA origin. Besides the phenomenological differences between the DA and non-DA families, the most important property is that they cool down at different rates since hydrogen is more opaque than helium.

The axion case

The so-called strong CP problem, i.e. the existence in the Lagrangian of Quantum Chromodynamics of a term, not observed in Nature, that violates the charge-parity symmetry, is one of the most important problems that the Standard Model has to face. One possible solution consists of the introduction of a new symmetry that breaks at energies of the order of f_a ∼ 10⁹−10¹¹ GeV and gives rise to a new particle, the axion. This particle is a boson and has a mass m_a = 6 meV (10⁹ GeV/f_a) that is not fixed by the theory. The larger the mass, the larger the interaction with matter (Raffelt, 1996). The interaction with photons and fermions is described by coupling terms in which g_aγγ (GeV⁻¹) and g_ai are the corresponding coupling constants, F is the electromagnetic field tensor and γ₅ is the corresponding Dirac matrix. The values that these constants take are model dependent. There are several models of axions (Dias et al., 2014). Here only the DFSZ one is considered (Dine, Fishler & Srednick, 1981; Zhimitskii, 1980) since it predicts a profuse production of axions in the hot and dense interior of white dwarfs as a consequence of the interaction with electrons. In this model, the adimensional axion-electron coupling constant g_ae is related to the mass of the axion through a linear relation in which cos²β is a free parameter. For white dwarfs in the luminosity range 8.5 ≲ M_bol ≲ 12.5, the production of neutrinos and axions is dominated by bremsstrahlung. The emission of thermal axions is similar to that of thermal neutrinos. There are, however, subtle differences introduced by the boson character of axions. One is that when the white dwarf cools down, neutrino emission, ε̇_bremss ∝ T⁷ (Itoh et al., 1996), is quenched before axionic emission, ε̇_a ∝ T⁴ (Nakagawa et al., 1987; Nakagawa et al., 1988), as can be seen in Fig. 1. The other effect is that when white dwarfs are hot, axion emission modifies the temperature profile and reduces the neutrino losses (Miller Bertolami et al., 2014).

Variable white dwarfs

During the process of cooling, white dwarfs cross some regions of the H-R diagram where they become unstable and pulsate. The multifrequency character and the size of the period of pulsation (10²−10³ s) indicate that they are g-mode pulsators. As a variable white dwarf cools down, the oscillation period, P, changes as a consequence of the changes in the thermal and mechanical structure. This secular drift can be approximated by the relation of Winget et al. (1983), Eq. 3.1, where a and b are positive constants of the order of unity. The first term of the r.h.s. reflects the decrease of the Brunt-Väisälä frequency with the temperature, while the second term reflects the increase of the frequency induced by the residual gravitational contraction. There are at least three types of variable white dwarfs, the DOV, DBV, and DAV. In the first case, gravitational contraction is still significant and the second term of Eq. 3.1 is not negligible, for which reason Ṗ can be positive or negative.
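For orientation, the standard expressions usually quoted for these quantities — given here as assumed forms consistent with the description above, not as quotations of the original equations — are the interaction terms

\[ \mathcal{L}_{a\gamma} = -\frac{g_{a\gamma\gamma}}{4}\, a\, F_{\mu\nu}\tilde{F}^{\mu\nu}, \qquad \mathcal{L}_{ai} = -\,i\, g_{ai}\, a\, \bar{\psi}_{i}\,\gamma_{5}\,\psi_{i}, \]

the commonly quoted DFSZ axion-electron coupling

\[ g_{ae} \simeq 2.8\times10^{-14}\,\cos^{2}\beta\,\left(\frac{m_a}{1\;\mathrm{meV}}\right), \]

and the secular period drift

\[ \frac{\dot P}{P} \simeq -\,a\,\frac{\dot T}{T} \;+\; b\,\frac{\dot R}{R}. \qquad (3.1) \]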
The DBV stars are characterized by the lack of H-layer, their effective temperatures are in the range of 23, 000 − 30, 000 k and the secular drift is always positive and in the range of 10 −13 −10 −14 ss −1 . The DAV white dwarfs, also known as ZZ Ceti stars, are characterized by the presence of a thin atmospheric layer of pure hydrogen. Their effective temperature is in the range of 12, 000−15, 000 K and their period drift, always positive, is of the order of ∼ 10 −15 ss −1 . Therefore, these secular drifts can be used to test the predicted evolution of white dwarfs and, if the models are reliable enough, to test any physical effect able to change the pulsation period of these stars. In order of magnitude, whereṖ obs is the observed period drift, L 0 andṖ 0 are obtained from standard models and L x is the extra luminosity necessary to fit the observed period (Isern et al., 1992) . G117-B15A is a ZZ Ceti star discovered by McGraw & Robinson (1976) that has been monitored since then. The first measurement ofṖ gave a value that was a factor two larger than expected (Kepler et al., 1991). The temperature of such star is low enough to neglect the radial term of Eq. 3.1 and the influence of neutrinos. These two facts led to Isern et al., (1992) to postulate axions of the DSKZ type with g ae ≈ 2.2 × 10 −13 as responsible of the anomalous cooling. The analysis of Córsico et al., (2001) and Bischoff-Kim et al., (2008) showed that the presence of axions accelerates the drift but not changes the period of pulsation. The most recent estimation of the drift of the 215 s pulsation period isṖ obs = (4.07±0.61)×10 −15 s s −1 (Kepler et al., 2012), which suggests g ae = 4.9×10 −13 (Fig. 2) from full model fitting (Córsico et al., 2012a). These analysis indicate that the cooling rate is larger than expected if the pulsation modes are trapped in the outer envelope. This poses a problem since the trapping strongly depends on the detailed chemical gradients and the chemical structure of these layers, which are built during the AGB phase, is very sensitive to the methods used to treat convection and pulses during this epoch. Other problems come from the fact that these regions are partially degenerate and not all the physical inputs, specially axion emissivities, are correctly computed in this regime. Furthermore, there are still many uncertainties in the equation of state and opacities. In the case of DB white dwarfs the drift has been measured in PG1351+489 (Redaelli et al., 2011),Ṗ obs = (2.0 ± 0.9) × 10 −13 s s −1 , which provides a bound of g ae × 10 13 < 3.3 (Battich et al., 2016). See Figure 2. Notice that at these temperatures, neutrinos are still active and their emission can be affected by axions or even by the existence of a hypothetical magnetic momentum. The luminosity function The luminosity function (LF) is defined as the number density of white dwarfs of a given luminosity per unit magnitude interval: where M is the mass of the parent star (for convenience all white dwarfs are labeled with the mass of the zero age main sequence progenitor), t cool is the cooling time down to luminosity L, τ cool = dt/dM bol is the characteristic cooling time of the white dwarf, t ps is the lifetime of the progenitor of the white dwarf and T G is the age of the Galaxy or the population under study, and M u and M l are the maximum and the minimum mass of the main sequence stars able to produce a white dwarf, therefore M l satisfies the condition T G = t cool (L, M l ) + t ps (M l ). 
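Two further relations are used above: the order-of-magnitude estimate of the extra luminosity from the period drift and the definition of the luminosity function. Plausible forms, assumed here following the standard white dwarf cooling literature rather than quoted from the original, are

\[ \frac{\dot P_{\rm obs}}{\dot P_{0}} \;\approx\; \frac{L_{0}+L_{x}}{L_{0}}, \]

and

\[ n(L) \;\propto\; \int_{M_l}^{M_u} \Phi(M)\, \Psi\!\left(T_G - t_{\rm cool}(L,M) - t_{\rm ps}(M)\right)\, \tau_{\rm cool}(L,M)\; dM. \qquad (4.1) \]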
Φ(M) is the initial mass function and Ψ(t) is the star formation rate (SFR) of the population under consideration. Additionally, hidden in these terms, there is an initial-final mass function connecting the properties of the progenitor with those of the white dwarf. In order to compare theory with observations, and since the total density of white dwarfs is not yet well known, the computed luminosity function is usually normalized to the bin with the smallest error bar, usually log(L/L_⊙) ≈ −3. This equation contains three sets of terms: the observational ones, n(L); the stellar ones, t_cool, τ_cool, t_ps, M_u, M_l, plus the initial-final mass function (Catalan et al., 2008); and the galactic ones, Φ and Ψ. The first empirical luminosity function was obtained by Weidemann (1968) and was improved by several authors during the nineties (Figure 3), proving in this way that the evolution of white dwarfs was just a cooling process and that there was a cut-off in the distribution caused by the finite age of the Galaxy. The position of the cut-off is sensitive to the cooling rate and, consequently, it can be used to constrain any new theory or hypothesis implying the introduction of an additional source or sink of energy. However, the low number of stars in the samples, a few hundred, and the uncertainties in the position of the cut-off prevented anything other than obtaining upper bounds.
Figure 3. Observed luminosity functions: Evans (1992), full squares; Oswalt et al. (1996), open triangles; Leggett et al. (1998), open diamonds; Knox, Hawkins & Hambly (1999), open circles.
Figure 4. (Harris et al., 2006), open blue squares; only DAs (DeGennaro et al., 2008); black crosses (Krzesinski et al., 2009); and green stars. Magenta stars were obtained from the SCSS catalogue (Rowell & Hambly, 2011) and contain DA and non-DA stars.
The advent of large cosmological surveys like the Sloan Digital Sky Survey (SDSS) and the SuperCOSMOS Sky Survey, both completely independent, introduced a noticeable improvement in the precision and accuracy of the luminosity function since they allowed the sample size to be increased to several thousand stars. Figure 4 displays both functions normalized to the M_bol ≈ 12 bin. As can be seen, they nearly coincide over a large part of the brightness interval. At large brightness, M_bol < 6, both luminosity functions show a large dispersion, not plotted in the figure, as a consequence of the fact that the proper motion method is not appropriate there. One way to circumvent this problem relies on the UV-excess technique (Krzesinski et al., 2009). The results obtained in this way are represented by black crosses after matching their dim region with the corresponding bright segment of the Harris et al. (2006) distribution. As a complement, the luminosity function of the dimmest white dwarfs obtained by Leggett et al. (1998) has been included (red triangles). The discrepancies at low luminosities are due to the difficulty of separating DAs from non-DAs and to the different behavior of the envelope. The quality of these new luminosity functions allowed, for the first time, their shape to be determined and the slope to be used as a tool to test new physical theories. If an additional source or sink of energy is added, the characteristic cooling time is modified and its imprint appears in the luminosity function, as can be seen in Fig. 4, where a change of slope is evident at magnitudes M_bol ∼ 8. This change is caused by the transition from a cooling dominated by neutrinos to one dominated by photons.
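To make the structure of Eq. (4.1) concrete, the short sketch below evaluates the integral for a few luminosity bins. It is only an illustration: the Salpeter IMF, constant SFR, Mestel-like cooling law, main-sequence lifetime law and all normalisations are assumptions chosen for simplicity, not the models used in the text. It nevertheless reproduces the qualitative features discussed here, a smooth bright branch and a sharp cut-off once the required total age exceeds the age of the population.

```python
import numpy as np

T_G = 11.0          # age of the population [Gyr] (assumed)
M_u = 8.0           # most massive white dwarf progenitor [Msun] (assumed)

def imf(M):                      # Salpeter initial mass function (unnormalised)
    return M**-2.35

def sfr(t):                      # constant star formation rate (arbitrary units)
    return 1.0

def t_ms(M):                     # crude main-sequence lifetime [Gyr]
    return 10.0 * M**-2.5

def t_cool(l, M):                # Mestel-like cooling time [Gyr], l = -log10(L/Lsun)
    return 10**(5.0/7.0 * (l - 3.0))   # ~1 Gyr at log(L/Lsun) = -3

def tau_cool(l, M, dl=0.01):     # characteristic cooling time dt/dl
    return (t_cool(l + dl, M) - t_cool(l - dl, M)) / (2 * dl)

def n(l, n_mass=400):
    """Number density per unit log-luminosity interval (arbitrary normalisation)."""
    masses = np.linspace(0.9, M_u, n_mass)
    # keep only progenitors that have had time to produce a WD of this luminosity
    birth = T_G - t_cool(l, masses) - t_ms(masses)
    ok = birth > 0.0
    if not ok.any():
        return 0.0               # beyond the cut-off set by the age of the population
    integrand = imf(masses[ok]) * sfr(birth[ok]) * tau_cool(l, masses[ok])
    return np.trapz(integrand, masses[ok])

for l in (1.0, 2.0, 3.0, 4.0, 4.5):
    print(f"log(L/Lsun) = -{l:.1f}  ->  n = {n(l):.3e}")
```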
As an example, this technique was used to suggest that axions of the DFSZ type could be contributing to the cooling of white dwarfs. The main problem when using Eq. 4.1 is that the star formation rate has to be obtained independently from white dwarfs in order to break the degeneracy between the galactic properties and the stellar models. Fortunately, the luminosity function has an important property. The shape of the bright branch, stars brighter than M_bol ≈ 13, is almost independent of the assumed star formation rate, as can be seen in Fig. 4, where several theoretical white dwarf luminosity functions are displayed. The two solid black lines have been obtained assuming a constant SFR but two ages of the Galaxy, 10 and 13 Gyr respectively; the dashed line assuming a decreasing exponential SFR, Ψ ∝ exp(−t/τ), τ = 3 Gyr, where t is the age of the Galaxy; and the dotted line assuming an almost constant SFR with an exponentially decreasing tail that represents models where the star formation propagated from the center to the periphery, Ψ ∝ (1 + exp[(t − t_0)/τ])^{-1}, τ = 3 Gyr, t_0 = 10 Gyr, where t is the look-back time. As can be seen, in the region M_bol ≈ 6 − 13 all luminosity functions overlap as long as the SFR is smooth enough. The differences due to the shape of the SFR only appear in the regions containing cool or very bright white dwarfs. Unfortunately, the observational uncertainties in these regions prevent us at present from discriminating among the different possibilities. This behavior can be understood in the following way: Eq. (4.1) can be rewritten in a more convenient form (Eq. 4.2). Restricting this equation to the bright white dwarfs, namely those for which t_cool is small, the lower limit of the integral is set by low-mass stars and, as a consequence of the strong dependence of the main-sequence lifetimes on mass, it adopts a value that is almost independent of the luminosity under consideration. Therefore, if Ψ is a well-behaved function and T_G is large enough, the lower limit is almost independent of the luminosity, and the value of the integral is incorporated into the normalization constant in such a way that the shape of the luminosity function only depends on the averaged physical properties of the white dwarfs. This average is dominated by low-mass white dwarfs and, as long as the mass spectrum is not strongly perturbed by the adopted star formation rate or the initial mass function, it is representative of the intrinsic properties of white dwarfs (Isern et al., 2009). This shape, however, can be modified by a recent burst of star formation since, in this case, low-mass main sequence stars have no time to become white dwarfs and M_l in Eq. 4.2 becomes luminosity dependent. On the contrary, if the burst is old enough, the corresponding luminosity functions are barely modified. Another important property is that, in the bright region considered here, the slopes of the relationship between the luminosity and the core temperature of DA and non-DA white dwarfs almost coincide, and both luminosity functions almost overlap in this luminosity interval after normalization. This is the reason why the luminosity function of DeGennaro et al. (2008), containing only DAs, coincides, after normalization, with those containing DAs and non-DAs, as can be seen in Fig. 4. Therefore, Eq. 4.2 offers the possibility of using the slope of the bright branch of the luminosity function to detect the presence of unexpected additional sinks or sources of energy in white dwarfs.
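As a back-of-the-envelope illustration of why an extra sink steepens the bright branch, the sketch below compares the characteristic cooling time with and without an axion-like term. The temperature scalings follow the text (a Mestel-like photon luminosity and axion losses ∝ T^4), but the reference temperature and the relative strength of the axion term are arbitrary assumptions, not fitted values.

```python
import numpy as np

T = np.logspace(6.5, 7.5, 5)             # core temperatures [K] (illustrative range)
L_gamma = (T / 1e7)**3.5                  # Mestel-like photon luminosity (arbitrary units)
L_axion = 0.3 * (T / 1e7)**4              # bremsstrahlung axion losses, arbitrary strength

# The characteristic cooling time scales as the thermal content divided by the
# total losses; adding an extra sink shortens it and lowers the number of stars
# per luminosity bin, i.e. it changes the slope of the bright branch of the LF.
ratio = L_gamma / (L_gamma + L_axion)     # tau_cool(with axions) / tau_cool(without)
for Ti, r in zip(T, ratio):
    print(f"T = {Ti:.2e} K  ->  tau_cool reduced to {r:.2f} of the standard value")
```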
In the case of axions, this method was used for the first time to obtain g_ae = (1.4^{+0.9}_{-0.8}) × 10^{-13} (see the WDLF08 label in Fig. 2), using a preliminary version of the Harris et al. (2006) luminosity function. Miller Bertolami et al. (2014) reexamined this result using all the luminosity functions of Fig. 4 (but with the consolidated Harris et al. 2006 luminosity function) and with a self-consistent treatment of the neutrino cooling, and concluded that DFSZ axions with a coupling constant g_ae in the range (0.7 − 2.1) × 10^{-13} could exist (WDLF14 label in Fig. 2). All these results, however, have to be regarded as qualitative, since the uncertainties plaguing the determination of both the observed and the computed luminosity functions are still large. Given the degeneracy between the stellar and galactic terms in Eq. 4.2, it is natural to wonder if the changes in the shape of the luminosity function attributed to axions are an artifact introduced by the star formation rate. One way to break this degeneracy and decide whether or not it is necessary to introduce new physical effects is to examine the luminosity function of populations that have different formation histories. If axions exist and can modify the cooling of white dwarfs, their imprint will be present in all the luminosity functions at roughly the same luminosities. Furthermore, it is well known that the adopted white dwarf scale height above the galactic plane, s, has a noticeable effect on the shape of the luminosity function (García-Berro et al., 1988; Harris et al., 2006; Kilic et al., 2017). This argument reinforces the convenience of analyzing the luminosity function of white dwarfs with different scale heights and independent star formation histories (Isern et al., 2018). Rowell & Hambly (2011) provided, for the first time, a luminosity function for the thin and thick disks and noticeably improved that of the halo (Fig. 5). Table 1 shows how the discrepancies between theoretical calculations and observations decrease in all three cases when axions are included. The fit favors axions within the interval g_ae = (2.24 − 4.48) × 10^{-13}, with some tension between the halo and the disk results, although the 2σ bounds are compatible. See Fig. 2, label WDLF(RH). Munn et al. (2017) improved the Harris et al. (2006) luminosity function, resolving the peak and increasing the precision of the bright branch, assuming a constant scale height above the Galactic plane. They also computed the luminosity function for the halo, selecting white dwarfs with 200 ≲ v_tan ≲ 500 km s^{-1}. As before, the inclusion of axions improves the concordance between theory and observations both in the disk and in the halo. The best fit is obtained for g_ae ≈ 2.24 × 10^{-13} (label WDLF(M) in Fig. 2), with 2σ upper bounds of g_ae < (4.2 and 14) × 10^{-13} coming, respectively, from the disk and halo luminosity functions. Kilic et al. (2017) reexamined the Munn et al. (2017) data assuming three variable scale heights above the galactic plane, going linearly from 200 pc now to 900, 700 and 500 pc in the past, respectively, and one with a fixed scale height of 300 pc. They concluded that the slight discrepancies in the region 6 ≲ M_bol ≲ 12.5 were caused by the use of a fixed scale height. This argument is correct but, as shown by Isern et al. (2018) (see Fig. 7), the inclusion of axions improves the agreement.
Figure 5. Luminosity functions of DA and non-DA white dwarfs of the thin and thick discs and the halo (Rowell & Hambly, 2011). The black line represents the case without axions.
The blue and red lines correspond to the cases where DFSZ axions with coupling constants g_ae × 10^{13} = 2.24 and 4.48 are included in the cooling model.
Table 1. Reduced χ² obtained from the fitting of the observed white dwarf luminosity functions in the brightness interval 6 < M_bol < 12.5 with different intensities of the coupling between electrons and axions (Isern et al., 2018).
Figure 6. Luminosity functions of DA and non-DA white dwarfs belonging to the disk (thin and thick) and the halo. The meaning of the solid lines is the same as in Fig. 5.
Figure 7. White dwarf luminosity functions of the disk assuming different scale heights, as proposed by Kilic et al. (2017). From top to bottom: Φ_{200−900} + 3, Φ_{200−700} + 2, Φ_{200−500} + 1 and Φ_{300}. Solid lines correspond to g_ae × 10^{13} = 0, 1.12, 2.24, 4.48 (black, magenta, blue, red, respectively). The dashed line corresponds to a case with no axions and a constant SFR plus an exponential tail, as in Fig. 4.
In these cases the best fit was also around g_ae ≈ 2.24 × 10^{-13} (Table 1), with a 2σ global upper bound g_ae < 6 × 10^{-13} (Fig. 2, label WDLF(K)). Recently a second possibility has appeared. Tremblay et al. (2019) have been able to build a reliable and precise luminosity function of massive white dwarfs that belong to the solar neighborhood (d ≲ 100 pc) using the data provided by Gaia (Fig. 8, left panel). If the luminosity function, Eq. 4.1, is restricted to massive white dwarfs, i.e. those for which it is possible to neglect the lifetime of the progenitor with respect to the cooling time, and Ψ is smooth enough, the age of any bin and the star formation rate corresponding to that time can be computed, in essence, as
$$t \simeq t_{\rm cool}(l, M), \qquad \Psi \propto \frac{n(l)\,\Delta l}{\Delta t_{\rm cool}},$$
with Δt_cool = t_cool(l + 0.5Δl, M) − t_cool(l − 0.5Δl, M), l = −log(L/L_⊙) and Δl the width of the luminosity function bin (Isern, 2019), and this allows the star formation history of the solar neighborhood to be reconstructed† (Fig. 8, right panel). The star formation rate obtained in this way is not constant or monotonically decreasing, as is often assumed. It grew quickly in the past, during the first epochs of the Galaxy, then roughly stabilized and started to decrease 7 to 6 Gyr ago. A noticeable feature is a prominent peak centered around 2.5 Gyr ago, the exact position being model dependent. The existing degeneracy between galactic properties and evolutionary models implies that different stellar models can lead to different star formation histories, for which reason it is necessary to compare these results with others obtained independently. Mor et al. (2018) computed the star formation history of the Galactic disk using main sequence stars from the Gaia DR2 and the Besançon Galaxy Model.
Figure 8. Left panel: luminosity function of the massive white dwarfs of the solar neighborhood (Tremblay et al., 2019). Right panel: in black, the corresponding star formation rate (Isern, 2019); in red, the Galactic disc star formation rate (Mor et al., 2019).
Since this function is expressed in stars per unit of disk surface, it has been divided by an arbitrary and constant scale height above the galactic plane (red line, Fig. 9). As can be seen, both methods, local and galactic, predict a concordant burst of star formation ∼2.5 Gyr ago but diverge at early times. This divergence may have several origins: a local delay in starting the star formation process, a different behavior of the outer and inner disks, a vertical dilution caused by a galaxy collision, or just the conversion of DA white dwarfs into non-DAs.
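The inversion just described can be sketched in a few lines. Everything below, the Mestel-like cooling law, the representative mass, the bin values of the luminosity function and the normalisation, is an illustrative placeholder and not the data or models of Tremblay et al. (2019) or Isern (2019); the point is only to show how each luminosity bin maps to a look-back time and a formation rate.

```python
import numpy as np

dl = 0.5                                   # bin width in l = -log10(L/Lsun)
l_bins = np.arange(1.0, 4.5, dl)           # bin centres
n_obs = np.array([0.2, 0.5, 1.0, 1.8, 2.6, 3.1, 3.0])  # hypothetical number densities

def t_cool(l):                             # toy Mestel-like cooling time [Gyr]
    return 10**(5.0/7.0 * (l - 3.0))

look_back = t_cool(l_bins)                                  # age of each bin (progenitor lifetime neglected)
dt = t_cool(l_bins + 0.5*dl) - t_cool(l_bins - 0.5*dl)      # time spanned by each bin
sfr = n_obs * dl / dt                                       # formation rate (arbitrary units)

for t, s in zip(look_back, sfr):
    print(f"{t:6.2f} Gyr ago   SFR ~ {s:.2f}")
```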
The peak that appears at ∼0.2 Gyr is at the limit of applicability of the method and deserves more attention. Figure 9 displays the star formation rate obtained when including axions (and crystallization sedimentation effects) in the cooling models. As can be seen, the SFRs obtained in this way have the same behavior, but they are displaced towards shorter ages and larger rates, thus providing an additional way to discriminate among theoretical models. Figure 10 displays a consistency test where the SFR obtained from the luminosity function of the massive white dwarfs is used to compute the total one. The agreement between theory and observations is reasonable, taking into account that the observations were obtained independently and that the results are very sensitive to the DA/nDA properties as well as to the adopted initial mass function and initial-final mass relationship. The excesses around M_bol ∼ 9 − 10 could be due to recent bursts of star formation not resolved by the present binning of the massive luminosity function and, as mentioned before, deserve a detailed analysis. Figure 9 also shows that values of g_ae ≳ 3 × 10^{-13} are not compatible with the results obtained by Mor et al. (2019).
Figure 10. The theoretical luminosity functions were obtained with the star formation rates of Fig. 9 with g_ae = 0 (black lines) and 2.24 × 10^{-13} (blue lines).

Discussion and conclusions
The two methods presented here are complementary since they measure the cooling rate of the same objects using different phenomenologies. One, the drift of the pulsation period, applies to individual stars, while the other, the luminosity function, applies to the population ensemble. As is evident from Fig. 2, there is some tension between both sets of data but, given the uncertainties, they can be considered as qualitatively concordant. Axions with the properties described here would not only perturb the cooling of white dwarfs but would also modify in a subtle way the evolution of other kinds of stars. An extensive review can be found in Giannotti et al. (2017), while Sedrakian (2019) provides a recent analysis of the cooling by DFSZ axions of the neutron star in Cas A. Here only the two most restrictive tests are presented. The luminosity of a star on the red giant branch depends on the mass of its core and is due to the hydrogen burning in a thin shell surrounding it (Paczynski, 1970). Thus, the luminosity grows with the core mass until He is ignited in the center. If axions were present, the core would be more massive than in the standard case and, consequently, the tip of the red giant branch would be brighter. Obviously, since H-burning occurs via the CNO cycle, this luminosity depends on the metallicity. The analysis of the red giant branch tip in M5 suggested the necessity of an extra cooling source. If this source were axions, the electron coupling constant should be g_ae ∼ 1.9 × 10^{-13} (Viaux et al., 2013), but a similar analysis performed on M3 did not find the necessity of an extra cooling term and provided a more stringent upper bound of g_ae < 2.57 × 10^{-13} (Straniero et al., 2018). A similar bound has also been obtained using a set of 50 globular clusters (Arceo-Diaz et al., 2019). In Fig. 2 this bound and the hint found in M5 are represented by the RGB-T line. A detailed analysis of the existing uncertainties can be found in Serenelli et al. (2017). The evolution of HB stars provides additional observational tests. The number of stars in a given region of the HR diagram is roughly proportional to the time spent in it.
Since HB stars are the direct descendants of red giant stars, the ratio between the number of HB stars, N_HB, and of red giants, N_RGB, in a cluster should satisfy the relationship R = N_HB/N_RGB = Δt_HB/Δt_RGB and, since the densities and temperatures of both populations are very different, the inclusion of the axion emission should strongly perturb these quantities. This parameter has been measured in a large sample of globular clusters and is fairly constant at low metallicities, R_av = 1.39 ± 0.03 (Salaris et al., 2004). Ayala et al. (2014) and Straniero et al. (2015) examined the influence of axions in this case and found that the best fit to the parameter R was obtained for g_aγ = (0.29 ± 0.18) × 10^{-10} GeV^{-1}, i.e. that an extra cooling term was necessary, with an upper bound of g_aγ < 0.66 × 10^{-10} GeV^{-1}. In terms of the mass of the axion, these values translate to 0.12 eV and < 0.2 eV, respectively, in the case of KSVZ axions. Since they did not include bremsstrahlung with electrons, which strongly affects the evolution of RGB stars, it is not possible to extrapolate these values to the DFSZ case. However, in an updated version taking into account the axion-electron interaction, Straniero et al. (2018) obtained g_aγ < 0.5 × 10^{-10} GeV^{-1} and g_ae < 2.6 × 10^{-13} (these values are represented in Fig. 2). These results, however, strongly depend on the assumed He abundance in the cluster. The Zero Age Horizontal Branch provides an additional test. If the axion-electron interaction is operating in RGB stars, the core is larger and so is the luminosity of HB stars. Straniero et al. (2018) examined this case and found a best fit at g_ae = 0.54 × 10^{-13} and a bound at g_ae < 1.78 × 10^{-13}. These values are plotted in Fig. 2. Certainly, the deviations represented in Fig. 2 have a small statistical significance, but taken together they suggest that stars lose energy more efficiently than expected. Axions of the DFSZ type, with a mass of a few meV, could do the job, although other possibilities are open, for instance an ALP† with g_ae ∼ 1.5 × 10^{-13} and g_aγ ∼ 1.4 × 10^{-11} GeV^{-1} (Giannotti et al., 2016). It is evident from this discussion that a direct detection is necessary to prove the existence of axions and that white dwarfs provide an interesting hint on where to search. IAXO is a future experiment designed to detect the emission of axions by the Sun (Irastorza et al., 2011) that will have enough sensitivity to detect axions of the DFSZ type with mass ≳ 3 meV, according to the calculations of Redondo (2013). Thanks to the data that Gaia is providing and that the LSST as well as other instruments will provide, white dwarfs will become in the near future one of the best-characterized populations of the Galaxy, and this will make them an excellent laboratory to explore new ideas in Physics.
v3-fos-license
2023-04-07T15:26:34.994Z
2023-04-01T00:00:00.000
257991838
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": null, "oa_url": null, "pdf_hash": "375cf59477c5ade83f1e23f548eac858d6c2b6bf", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43948", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "sha1": "bdb06ed46b9ec30619ad4efe2f6fbd25baed7226", "year": 2023 }
pes2o/s2orc
First In Vivo Insights on the Effects of Tempol-Methoxycinnamate, a New UV Filter, as Alternative to Octyl Methoxycinnamate, on Zebrafish Early Development The demand for organic UV filters as active components in sunscreen products has rapidly risen over the last century, as people have gradually realized the hazards of overexposure to UV radiation. Their extensive usage has resulted in their ubiquitous presence in different aquatic matrices, representing a potential threat to living organisms. In this context, the need to replace classic UV filters such as octyl methoxycinnamate (OMC), one of the most popular UV filters reported to be a potential pollutant of aquatic ecosystems, with more environmentally friendly ones has emerged. In this study, using zebrafish, the first in vivo results regarding the effect of exposure to tempol-methoxycinnamate (TMC), a derivative of OMC, are reported. A comparative study between TMC and OMC was performed, analyzing embryos exposed to similar TMC and OMC concentrations, focusing on morphological and molecular changes. While both compounds seemed not to affect hatching and embryogenesis, OMC exposure caused an increase in endoplasmic reticulum (ER) stress response genes, according to increased eif2ak3, ddit3, nrf2, and nkap mRNA levels and in oxidative stress genes, as observed from modulation of the sod1, sod2, gpr, and trx mRNA levels. On the contrary, exposure to TMC led to reduced toxicity, probably due to the presence of the nitroxide group in the compound’s molecular structure responsible for antioxidant activity. In addition, both UV filters were docked with estrogen and androgen receptors where they acted differently, in agreement with the molecular analysis that showed a hormone-like activity for OMC but not for TMC. Overall, the results indicate the suitability of TMC as an alternative, environmentally safer UV filter. Introduction Ultraviolet (UV) filters present in sun care products play a pivotal role in preventing skin damage caused by overexposure to a broad spectrum of UV radiation that reaches the Earth's surface [1]. However, increasing concerns have recently been raised regarding UV filters, as well as other cosmetic and personal care product additives, as several studies have shown that they can cause a series of adverse effects in animal models, spanning from developmental and neurological disorders [2][3][4] to reproductive impairment [5][6][7][8], including decreased semen [9,10] and oocyte quality [11], as well as transgenerational effects [12]. To date, one of the most used UV filters is octyl methoxycinnamate (OMC), also known as ethylhexyl methoxycinnamate (EHMC) or octinoxate, which, over the years, has been used worldwide in more than 90% of commercially available sunscreens and cosmetic products [13]. OMC toxicity has been documented, both in vitro and in vivo, in different animal models, either vertebrates or invertebrates [14][15][16][17][18][19]. Moreover, recently, in medaka, its transgenerational effect has also been stated [20], while, in zebrafish, parental exposure . Effect on Embryo Morphology To verify the possible interference of the two chemicals on the early developmental phases, the embryo and larvae were observed daily, starting just prior to hatching and using a stereomicroscope at 48, 72, and 96 h post-fertilization (hpf). At 48 hpf, the two eye diameters (D), the eye distance (E.D.), and the distance between the eyes (D.B.E.) were checked. 
At 72 and 96 hpf, the length of the larvae (L), the length of the eyes (E.L.), the yolk area (Y.A.), and its perimeter (Y.P.) were recorded. Figure 1A,B show representative images and details regarding the morphological parameters. As shown in Table 1, at 48 hpf, no morphological differences were observed among the embryos exposed to the UV filters and in the control group. No differences occurred either among the embryos exposed to the three OMC and TMC concentrations.
Table 1. Representative images of 48 hpf embryos exposed to DMSO (vehicle control) and to the 3 different concentrations of OMC and TMC (1 = 0.05 mM, 2 = 0.01 mM, and 3 = 0.005 mM); the morphological parameters are those shown in Figure 1.
Similarly, at 72 (Table 2) and 96 (Table 3) hpf, no morphological differences were observed among the control and exposed larvae.
Table 2. Representative images of 72 hpf larvae exposed to DMSO (vehicle control) and to the 3 different concentrations of OMC and TMC (1 = 0.05 mM, 2 = 0.01 mM, and 3 = 0.005 mM); the morphological parameters are those shown in Figure 1.
The hatching rate was monitored daily starting from 48 hpf. At 48 h, in OMC2-exposed embryos, the hatching rate was higher, although not to a significant extent with respect to all the other experimental groups. At 72 hpf, the hatching rate reached 60% and was similar in all groups. At 96 hpf, more than 90% of the embryos were hatched in all groups, and no statistical significances were observed among the experimental fish (Figure 2).
Alcian Blue-Alizarin Red Double Staining The double staining technique allows the simultaneous staining of different tissue types. Alcian blue is used to stain acidic polysaccharides such as glycosaminoglycans in cartilages and other body structures, while alizarin red reacts with calcium, thereby helping in the diagnosis of calcium deposits, and, thus, is commonly used to mark bone tissue. Fish were stained at 96 h, a good time point to monitor cartilage [29]. Images obtained with the optical microscope show that, according to the developmental stage, the head skeleton presents mainly cartilaginous structures. However, it is possible to notice that larvae exposed to every OMC (Figure 3a-d) and TMC (Figure 3e-h) concentration already show signs of the calcified basioccipital articulatory process (BOP), thus suggesting a possible acceleration in development. Analysis of Molecular Targets Involved in Oxidative Stress Response By real-time PCR, the expression of Heat shock protein 70 (hsp70.2) mRNA, which codifies for a chaperone that helps proteins to maintain their folded structure, was analyzed. The data show a significant decrease in hsp70.2 mRNA expression in all groups exposed to OMC and TMC compared to the CTRL and DMSO groups. In particular, the lowest expression of hsp70.2 was measured in embryos treated with the lowest concentration of OMC (OMC3), and a comparable reduction was observed in OMC1-, OMC2-, and TMC1-exposed embryos. The medium and lowest TMC concentrations determined a reduction in expression, which was significantly lower with respect to the control groups but higher with respect to that induced in the OMC-treated groups and the TMC1 one (Figure 4a). Regarding the expression of genes codifying for the antioxidant system, superoxide dismutase (sod) mRNAs were analyzed.
The sod1 mRNA level was significantly increased in the OMC1, OMC2, and TMC3 groups compared to the control groups, while a decrease in expression was observed in the TMC2 group (Figure 4b). Regarding sod2, the mRNA transcript increased in TMC3-treated fish. In contrast, a decrease, although not statistically significant with respect to the controls, was observed in OMC2 and OMC3 larvae. Nevertheless, the TMC1 and TMC2 concentrations caused a significant mRNA reduction (Figure 4c). The glutathione peroxidase (gpx1a) mRNA levels pinpointed significant differences among the exposed and control groups. Specifically, regarding the treatments, in OMC-exposed fish a U-shaped response was observed, with an increase induced by the lowest and highest concentrations, while the medium one did not affect the basal levels. Concerning TMC, only TMC1 and TMC2 exposure upregulated mRNA expression, with values similar to those induced by the OMC1 and OMC2 concentrations (Figure 4d). Thioredoxin (trx) mRNA expression was significantly upregulated only in OMC2-, TMC1-, and TMC2-exposed larvae. Furthermore, in OMC1-treated fish, a slight increase was observed, although not statistically significant (Figure 4e). By a Western blot analysis, the nitrotyrosine (NT) protein levels were assayed. A significant upregulation of the protein levels was observed only in OMC3-treated fish. Noteworthy is the decreasing, although not significant, trend displayed by the TMC treatments (Figure 4f).
Figure 4. (a-e) mRNA and (f) nitrotyrosine protein levels in larvae exposed to the different experimental treatments. mRNA levels were normalized against rplp0 and rplp13, used as reference genes; nitrotyrosine levels were normalized against β-actin. The insert shows representative nitrotyrosine (55 kDa) and β-Act (42 kDa) immunoblots. Data are shown as the mean ± SD. Different letters indicate statistically significant differences among the experimental groups (p < 0.05).
Analysis of Molecular Targets Involved in the Endoplasmic Reticulum Stress (ER Stress) Response, Inflammation, and Apoptosis Among the molecular sensors triggering the onset of ER stress, eif2ak3 mRNA (codifying for the PERK protein) was significantly downregulated in larvae exposed to OMC2, OMC3, and TMC3, while it was upregulated in larvae exposed to TMC1 and TMC2 (Figure 5a). The expression of DNA damage-inducible transcript 3 (ddit3), also known as the C/EBP homologous protein (CHOP), significantly decreased in all groups exposed to OMC (Figure 5b). Nuclear factor erythroid 2-related factor 2 (nrf2) mRNA was significantly upregulated in OMC2-exposed larvae and downregulated in the OMC3 and TMC3 groups (Figure 5c). The expression of NF-kB-Activating Protein (nkap) mRNA, encoding a protein involved in the activation of the ubiquitous transcription factor NF-kB, was significantly upregulated only in TMC2- and TMC3-exposed embryos (Figure 5d). By a Western blot analysis, the active, cleaved caspase 3 protein levels were assayed and were found to be significantly downregulated only in OMC1- and OMC2-exposed larvae (Figure 5e).
Figure 5. (a-d) mRNA and (e) Casp3 protein levels in larvae exposed to the different experimental treatments. mRNA levels were normalized against rplp0 and rplp13, used as reference genes. Cleaved caspase 3 (Casp3, 17 kDa) levels were normalized against β-actin (β-Act, 42 kDa); insert (e) shows representative Casp3 and β-Act blots. Data are shown as the mean ± SD. Different letters indicate statistically significant differences among the experimental groups (p < 0.05).
Analysis of Androgen Receptor (ar) Transcript and Estrogen Receptor α (ERα) Protein Levels The ar transcript levels were significantly downregulated by OMC exposure, while the TMC treatment did not induce significant changes with respect either to the DMSO-treated or to the CTRL fish (Figure 6a). By a Western blot analysis, the ERα protein levels were analyzed, and a significant reduction was observed in OMC1- and OMC2-treated larvae, while TMC1 exposure caused a significant downregulation only with respect to DMSO-exposed larvae (Figure 6b).
Figure 6. (a) ar mRNA and (b) ERα protein levels in larvae exposed to the different experimental treatments. mRNA levels were normalized against rplp0 and rplp13, used as reference genes; the ERα levels were normalized against β-actin (β-Act). Insert (b) shows representative ERα (65 kDa) and β-Act (42 kDa) blots. Data are shown as the mean ± SD. Different letters indicate statistically significant differences among the experimental groups (p < 0.05).
Molecular Docking Results As reported in the Methodological section, TMC and OMC were docked to the estrogen and androgen receptors in order to evaluate their possible interaction and affinity. Estrogenic receptor: for TMC, from the cluster analysis, only one main pose was identified (three clusters; >90% population), located within the estradiol binding cleft (En = −7.74 kcal/mol) (Figure 7).
Comparing this positioning with the estradiol and raloxifene ones (derived from the crystallographic data 1a52 and 1ere, respectively), we can notice that they are totally superimposable, thus suggesting a competitive binding behavior. However, the estradiol binding energy is much lower (i.e., higher affinity), −9.68 vs. −7.74 kcal/mol, implying that TMC cannot really compete significantly with the natural ligand (no interference activity). A different result was obtained, however, for OMC: this compound appears to bind more efficiently to the ER since, from the cluster analysis, 9-10 different orientations were found (all close in energy and significantly populated, in the range −7.2 to −6.80 kcal/mol) that are spread all along the incoming pathway to the binding cleft (Figure 8). Thus, even if its energy is still higher than that of estradiol within the cleft, the other binding sites, due to their location, will obstruct the natural ligand's entrance to its cleft. This suggests a possible interference activity.
Androgen receptor: for TMC, the cluster analysis after the docking experiments reveals the presence of statistically populated clusters all located at the same site, albeit with different spatial molecular orientations, both far from the natural ligand's binding cleft and from the coactivator's (Figure 9). In addition, for OMC (Figure 9), two main poses are identified: one located in the same TMC docking zone, the other located in proximity, bridging two helix regions (aa 843-833 and aa 670-677), which are close to the connection loop of the DNA-binding domain (aa 560-632), not shown in the figures. Thus, even if any agonistic activity can be excluded for this compound, the possibility of some allosteric antagonistic activity must still be considered.
Discussion In this study, the effects of early exposure to one of the most widely used UV filters, OMC, and one of its derivatives, TMC, were analyzed in zebrafish larvae, focusing on the onset of possible adverse developmental effects. OMC toxicity, in fact, has been largely described in different animal models, and the necessity to replace this molecule has become urgent. TMC, one of its recently synthesized derivatives, could represent a valid candidate molecule, but its effects in vivo have not been investigated so far. For this reason, embryos were exposed to OMC at environmentally relevant concentrations, and the same range was also used for TMC, which, considering its chemical features, could represent a valid, safer alternative UV filter. In this regard, the concentrations of either OMC or TMC used did not induce morphological alterations, in agreement with a previously published paper on OMC [21]. These authors observed, in fact, that OMC concentrations in the same range as those used in this study, 1 and 10 µg/L, had no effect on the hatching rate, malformation, and survival of zebrafish [21], while 100 µg/L exposure caused developmental toxicity. Although no significant changes were observed in larval morphometry, evidence was obtained regarding the ability of both compounds to interact with bone formation. In zebrafish, the appearance of the cartilaginous structures can be easily detected 3 or 4 days post-fertilization, and the results obtained herein with the Alcian blue-Alizarin red double staining allowed us to hypothesize that OMC and TMC, similar to other chemical compounds, including metallic elements, pesticides, and drugs [30], interact with bone metabolism and seem to accelerate bone development, suggesting a possible interaction with the endocrine system. Nevertheless, an acceleration in bone formation has been observed not only in the case of xenobiotic exposure but also in the case of probiotic addition to the rearing medium [31][32][33], suggesting that the presence of the cinnamate moiety in both molecules can boost mineralization. Most of the studies published to date show that one of the main problems related to exposure to sunlight, as well as to certain UV filters, including OMC, is the increase in oxidative stress [21,34]. The results obtained herein clearly suggest a prooxidant action of OMC, whereas, in the case of TMC, although it still induces the modulation of similar biomarkers to some extent, it is overall less toxic than OMC. Among the analyzed biomarkers, hsp70 has been largely considered an early warning signal of stress [35,36].
Its primary role, shared with most chaperones, is traditionally linked to protein folding and assembly [37,38] and to the disaggregation of protein aggregates [39]. Later evidence showed that chaperones have a dual function in proteostasis, as they also contribute to key steps in protein degradation [40]. In this light, the downregulation of hsp70 observed herein suggests an increase in unfolded proteins, which contributes to the onset of the unfolded protein response (UPR). Following translation, secreted and transmembrane proteins enter the lumen of the endoplasmic reticulum (ER), where they are post-translationally modified and properly folded in the presence of specialized chaperone proteins.
This process is overwhelmed during ER stress, so, in order to relieve ER stress and restore ER homeostasis, the cell activates the UPR, which is regulated by three molecular sensors, one of them being the PKR-like ER kinase (PERK). Activated PERK directly acts on the eukaryotic translation initiation factor 2 (eIF2α), which reduces the protein overload within the ER of a stressed cell and favors the transcription factor ATF4, which translocates to the nucleus and activates a set of UPR target genes involved in amino acid metabolism, antioxidant responses, autophagy, and apoptosis [41,42]. Among them, CHOP causes a downregulation of Bcl-2 [43], favoring the activation of the intrinsic apoptotic pathway. In this context, in relation to our results, perk mRNA, which, in zebrafish, is encoded by the eif2ak3 gene, shows a differential behavior with the two compounds; OMC, in fact, determines its mRNA downregulation, which possibly also causes the decreased transcription of ddit3 mRNA, codifying for CHOP. This inhibition of the signaling cascade can contribute to the reduction in the cleaved, active caspase 3 protein levels and highlights OMC's ability to interfere with larval physiological processes. It should be considered that, at this stage of development, apoptosis plays a crucial role in correct organ shaping; thus, in the long term, the occurrence of morphological alterations cannot be excluded. On the contrary, eif2ak3 mRNA is upregulated by the higher TMC concentrations, thus allowing us to speculate that, at higher concentrations, the Tempol moiety could exert its beneficial, antioxidant effect and counteract the toxicity of OMC in these fish. The signaling pathways leading to the activation of apoptosis are, in fact, not induced in TMC-exposed fish, as indicated by the lack of changes in the CHOP (ddit3) and caspase 3 levels. Nevertheless, PERK (eif2ak3) can, in turn, also activate Nrf2, a master regulator of detoxification [44]. Nrf2 is a transcription factor that, once in the nucleus, specifically recognizes and binds to the core sequence of the Antioxidant Response Elements (AREs). In this context, the increasing trend observed in OMC-treated fish, especially at the higher concentrations, for sod1, txn, gpx, and nitrotyrosine, despite not always being significant, clearly suggests that the larvae are undergoing oxidative stress. In addition, since both sod mRNA isoforms were not affected by the higher TMC concentrations, this further strengthens the hypothesis that TEMPOL probably mitigates OMC toxicity by exerting an antioxidant action. TMC, in fact, only increases the gpx and txn mRNA levels, and since the GPx/glutathione system is thought to be a major defense in the case of low levels of oxidative stress [45], we can speculate that TMC is less toxic than OMC at this stage of embryogenesis. An increase in gpx has also been found in embryos exposed to another UV filter, benzophenone-3, chronic exposure to which, similar to OMC and TMC, did not affect the survival and hatching rates [46,47]. A similar txn and gpx mRNA trend was observed for both chemicals. In mammalian cell models, it was observed that Trx-1 plays roles in redox regulation, growth promotion, neuroprotection, inflammatory modulation, and the inhibition of apoptosis [48]. In addition, it seems that the glutathione system can serve as a backup system to reduce thioredoxin when the electron transfer pathway from TrxR1 is blocked [49], suggesting a tight crosstalk between the two antioxidant systems.
In zebrafish, txn knockdown led to hydrocephalus [50] and defective liver development [51], mainly due to increased hepatic cell death. Regarding nkap mRNA, it is known that, aside from being directly involved in the activation of NF-kB, a master gene in inflammatory processes [52], it also has a key role in transcriptional repression, immune cell development, maturation, the T-cell acquisition of functional competency, and the maintenance of hematopoiesis [53]. Nevertheless, a role of NF-kB in immune system development has also been described, it being a major transcription factor that regulates the genes responsible for both the innate and adaptive immune responses. Thus, the increase of this transcript in TMC-exposed larvae could be directly associated with an enhanced immune response, which correlates well with the more developed skeletal structure, as pointed out by the morphological observations. Finally, a first attempt to investigate the possible hormone-like activity of TMC was carried out. Our results suggest that TMC affects neither ar transcription nor the ERα protein levels, ruling out a possible hormone-like activity. These findings are supported by the in silico docking studies carried out on both the ER and AR receptors; in fact, in relation to ER activity, TMC cannot compete with estradiol due to its lower affinity and the absence of allosteric sites of interaction. Regarding OMC, different studies so far have demonstrated its ability to interact with steroid hormone receptors; in a yeast assay [54] and in vitro using MCF-7 cells (E-SCREEN), OMC behaved as an estrogen-like compound, upregulating the ER levels. Our in silico studies showed, for OMC, the presence of allosteric sites of interaction that prevent the entrance of the natural ligand into its binding cleft. In addition, considering OMC with the AR, the presence of a possible allosteric site at the beginning of the connection loop with the DNA-binding site could suggest a possible inhibition of AR activity. This site is instead absent for TMC. Further evidence with hormone receptors comes from in vivo studies in rats, where OMC increased the uterine weight [55]. A different scenario was obtained in a study using zebrafish reporting the ability of OMC to inhibit er isoform transcription [21], thus in agreement with the results obtained herein. This variety in the response should be considered when working with in vivo models, which can exhibit different tolerances to xenobiotics; thus, the same molecule, the specific concentration used, and the length of exposure could act on a species-specific basis. The results obtained in this study are summarized in Figure 10. This schematic figure shows that TMC, differently from OMC, which significantly affects both the ar and ERα levels, has scarce hormone-like activity. Only OMC affects apoptosis, which has a key role in embryo shaping at this stage of early development, and, finally, TMC potentiates the larval oxidative stress response, suggesting that the organisms could be better able to counteract the ROS production caused by oxidative stimuli, such as UV light exposure.
Chemicals OMC (98%), TEMPOL (97%), p-methoxycinnamic acid (99%), N,N-dicyclohexylcarbodiimide (DCC) (99%), and 4-dimethylaminopyridine (DMAP) (99%), as well as all the other reagents and solvents, were purchased from Sigma-Aldrich and were used without further purification. TMC was synthesized starting from p-methoxycinnamic acid and TEMPOL by DCC/DMAP-mediated esterification: p-methoxycinnamic acid (1 mmol), DMAP (0.05 mmol), and TEMPOL (1 mmol) were dissolved in 2 mL of anhydrous dichloromethane (CH2Cl2) and cooled to 0 °C; 1.2 mmol of DCC were then added, and the mixture was magnetically stirred for 5 min at 0 °C. The reaction mixture was left for 3 h at room temperature under stirring. The course of the reaction was monitored by thin-layer chromatography using petroleum ether/diethyl ether (8:2) as the eluant. At the end of the reaction, the insoluble N,N-dicyclohexylurea was filtered off, and the filtrate was evaporated. The residue was dissolved in CH2Cl2 and washed with saturated NaHCO3 (3 × 10 mL). The organic layer was dried over anhydrous Na2SO4, and the solvent was removed under reduced pressure, affording a 70% yield of TMC. The purity of the compound was assessed by comparison with an authentic sample, as described in [24].
Exposure
Embryos were obtained by natural spawning, crossing 10 couples of adult zebrafish (D. rerio, AB wild-type strain) maintained at the DiSVA fish facility under controlled conditions (28.0 ± 0.5 °C) and with a 14/10 h light/dark cycle in oxygenated water. Spawned embryos were reared in E3 medium (5 mM NaCl, 0.17 mM KCl, 0.33 mM CaCl2, 0.33 mM MgSO4, and 10⁻⁵% Methylene Blue). All spawned eggs were checked for fertilization and quality, then divided among the experimental groups in 250 mL glass beakers and maintained at a stocking density of 1 embryo/mL (150 embryos/beaker). Each experimental group was set up in triplicate. The experimental groups were: OMC and TMC were dissolved in DMSO. The exposure started immediately after embryo viability was checked and lasted until 96 h post-fertilization (hpf). The fish rearing medium was changed daily, and DMSO, OMC, and TMC were renewed. After 96 h, at least 10 individuals randomly taken from each group were sacrificed, fixed in 4% PFA, and stored in 70% ethanol. The other larvae were collected and stored at −80 °C for molecular analyses.
Morphology
All embryos stored in 70% ethanol were observed and photographed with a stereomicroscope (Leica Microsystems, Wetzlar, Germany). The following parameters were recorded: total length, eye length, yolk area, and yolk circumference.
Alcian Blue-Alizarin Red Staining
Five larvae per experimental group were double-stained according to Maradonna et al. (2013) [31]. Larvae were then observed and photographed with a Zeiss Axio Imager.A2 optical microscope combined with an Axiocam 105 color digital camera (both from Zeiss, Oberkochen, Germany). Images were analyzed using ZEN 2.3 (Carl Zeiss Microscopy GmbH, Oberkochen, Germany).
RNA Extraction and cDNA Synthesis
Total RNA was extracted, and cDNA synthesis was performed as previously described in [56], starting from 5 pools of about 20 larvae for each experimental group. Further details are reported in the Supplementary Materials.
Real-Time PCR
The qRT-PCRs were performed using the dye-based SYBR green assay in a CFX thermal cycler (Bio-Rad, Milan, Italy), as previously described in [56]. For each experimental group, replicates (n = 5) were run in duplicate. The primer list is reported in Supplementary Table S1, and further details are reported in the Supplementary Materials.
Western Blot Analysis
Whole-embryo homogenates were prepared from at least 4 pools of about 40 larvae. Protein extraction, SDS-PAGE electrophoresis, and Western blot procedures are detailed in the Supplementary Materials.
Computational Analysis
Estrogen and androgen receptor LBD (ligand-binding domain) structures were retrieved from the Brookhaven Protein Data Bank (PDB codes 1ere/1a52 and 1e3g/1t5z, respectively; http://www.wwpdb.org, accessed on 24/02/2022) and used in the molecular docking calculations. TMC and OMC structures were built and optimized at the DFT level using the B3LYP/6-311G** basis set. Mulliken charges were calculated and then kept for the subsequent docking simulations with AutoDock 4.2/MGLTools 1.5.7 [57]. For both compounds, a blind docking approach was used first in order to identify, after the cluster analysis, every putative site on the receptor's surface. Subsequently, on the lowest-energy and most populated poses, a focused docking protocol was applied to better refine both the pose and its energy.
For the blind docking, the grid map, centered on the center of mass of the enzyme (120 × 120 × 120 Å³), included the whole protein surface; in the focused docking protocol, the grid map was centered on the ligand and extended around the cleft (40 × 40 × 40 Å³), with points spaced equally at 0.375 Å. The number of GA (genetic algorithm) runs was set to 100, the energy evaluations to 25,000,000, the maximum number of top individuals that automatically survive to 0.1, and the step size for translation to 0.2 Å. All the docking calculations were carried out in triplicate using three different CPU random seeds. The final docked ligand-receptor complexes were ranked according to the predicted binding energy, and all the conformations were processed using the built-in clustering analysis with a 2.0 Å cut-off. Additionally, for purposes of comparison, estradiol was docked to the estrogen receptor LBD in order to compare the TMC/OMC binding energies. For AR, the DHT (dihydrotestosterone) binding site was taken into account in order to locate the TMC/OMC binding zones (PDB code 1t5z). Molecular graphics images were produced using the UCSF Chimera 1.16 package (Resource for Biocomputing, Visualization, and Informatics at the University of California, San Francisco, CA, USA).
Statistical Analysis
qPCR, Western blot, and imaging statistical analyses were performed with GraphPad Prism V9.0.1 (GraphPad Software, Inc., San Diego, CA, USA). Data were presented as the means ± SD and were analyzed using one-way ANOVA, followed by Dunnett's multiple comparison test. Different letters on the histogram bars indicate statistically significant changes among the groups. The p-value was set as p < 0.05.
Conclusions
In conclusion, the results obtained during this in vivo study of TMC, a derivative of the most popular UVB filter, OMC, are promising, as it appears to be less toxic than its parent compound. The integration of the results suggests that TMC could replace OMC in the future as an equally effective photoprotectant endowed with less toxic effects. However, additional in vitro studies, or in vivo ones using other marine species, would help to support the preliminary data obtained here and would aid in better understanding the behavior of this promising compound.
Institutional Review Board Statement: Ethical review and approval were waived, since the study was performed using larval forms not capable of feeding independently; thus, their use is not regulated by Italian DL 26, 2014.
Data Availability Statement: The data presented in this study are available on request from the corresponding author.
Predictors and their prognostic value for no ROSC and mortality after a non-cardiac surgery intraoperative cardiac arrest: a retrospective cohort study Data on predictors of intraoperative cardiac arrest (ICA) outcomes are scarce in the literature. This study analysed predictors of poor outcome and their prognostic value after an ICA. Clinical and laboratory data before and 24 hours (h) after ICA were analysed as predictors for no return of spontaneous circulation (ROSC) and 24 h and 1-year mortality. Receiver operating characteristic curves for each predictor and sensitivity, specificity, positive and negative likelihood ratios, and post-test probability were calculated. A total of 167,574 anaesthetic procedures were performed, including 158 cases of ICAs. Based on the predictors for no ROSC, a threshold of 13 minutes of ICA yielded the highest area under curve (AUC) (0.867[0.80–0.93]), with a sensitivity and specificity of 78.4% [69.6–86.3%] and 89.3% [80.4–96.4%], respectively. For the 1-year mortality, the GCS without the verbal component 24 h after an ICA had the highest AUC (0.616 [0.792–0.956]), with a sensitivity of 79.3% [65.5–93.1%] and specificity of 86.1 [74.4–95.4]. ICA duration and GCS 24 h after the event had the best prognostic value for no ROSC and 1-year mortality. For 24 h mortality, no predictors had prognostic value. Methods inclusion and exclusion criteria. After obtaining approval from the Ethical Committee of the Clinics Hospital, Faculty of Medicine, University of Sao Paulo, Brazil (N° 0822/06) and performing the research in accordance with the Declaration of Helsinki 12 , we retrospectively reviewed the medical records of patients who were subjected to anaesthetic procedures between 2007 and 2014. The Ethical Committee waived the need for informed written consent, since it was a retrospective study. All adult patients (>18 years old) from the Central Institute of the Clinics Hospital of the University of Sao Paulo who suffered ICA were included in the analysis. ICA was defined as the absence of a central pulse associated with chest compressions for more than 10 seconds, as documented in patient medical records 13,14 . ROSC was defined as restoration of a spontaneous perfusing rhythm or arterial waveform for more than 20 minutes 15 . The intraoperative period was defined as the time between the patient´s entrance to the operating room (OR) and the patient's exit from the OR. The study included all elective, urgent/emergent trauma and emergent/urgent non-trauma cases, except cardiac surgery cases. Patients excluded from the study were those who were deceased organ donors, who arrived at the OR in CA, whose medical records were unavailable, and who had had cardiac surgeries. Analysed variables. The data were obtained from institutional patient medical records and from the laboratory system. The acquired parameters included data regarding patient status, surgery, laboratory exams and ICA (Table 1). Based on the unpredictability of the event, the data at admission, before CA, during CA, and immediately after ROSC were obtained from routine institutional records. Outcomes (no ROSC, 24 h mortality and 1-year mortality). The patients were evaluated for no ROSC and 24 h and 1-year mortality after the event. We contacted the patients after discharge by phone calls, emails, and text messages to account for mortality. Statistical analysis. Data were collected and managed using REDCap electronic data capture software hosted at the institution 16 . 
Data were analysed using the statistical software R 3.5.1 (R Foundation for Statistical Computing, Vienna, Austria) and RStudio (RStudio Team, Boston, MA). The library "pROC" 17 was used for the ROC analysis in R. STATA (StataCorp LP, College Station, Texas, USA) software was also used to obtain the graphics for this paper. Considering the high prevalence found in this study for each outcome, the authors estimated the prevalence ratio (PR) and its respective 95% confidence interval (95% CI) from bivariate analysis based on the outcome of each variable. To perform the binomial analysis, continuous and discrete variables were categorized as presented in Table 1. From the binomial analysis, variables with p values lower than 0.20 were selected for multivariate analysis using Poisson regression with robust variance. The variable with the lowest p value from the bivariate analysis was selected first, and then other variables with higher p values were added to the analysis. The authors retained the variables with p < 0.05 for the final model. Finally, the PR for the final model was estimated for each variable with its respective 95% CI. After analysing the final model for each outcome, receiver operating characteristic (ROC) curves were generated for the predictors. The ROC curves were created using the bootstrap methodology (2000 samples). The areas under the ROC curves (AUCs) and their confidence intervals were calculated and compared using the DeLong and Clarke-Pearson method 18 . In addition, sensitivity, specificity, and positive and negative likelihood ratios (LHR), along with their 95% CIs, were calculated. The cut-off values were chosen according to the highest Youden index, calculated as (sensitivity + specificity − 1). The AUC was considered non-discriminant if the 95% CI included 0.5. For the ROC analysis of the GCS, the verbal response category was ignored in order to include patients who were awake and could follow commands but who were intubated or tracheostomized (i.e., a GCS of 10 T). The curve with the highest AUC was selected and compared with the other ROC curves using DeLong's test for two correlated ROC curves. The two-step Fagan nomogram was used for the post-test probability calculation 19 . Continuous variables were categorized according to the Youden index. The pre-test probability was estimated using the sample data. The significance level adopted by this study for all analyses was p < 0.05. The survival curves for the 1-year mortality of patients who had ROSC were obtained by the Kaplan-Meier product-limit estimator.
Results
Sample characterization. There were 167,574 anaesthetic procedures (138,896 elective procedures and 28,678 urgent/emergent procedures), with 160 ICAs. Two patients (1.2%) were transferred to another institution before 24 h after ROSC and were excluded from the analyses (Fig. 1). The overall ICA prevalence was 9.54 cases/10,000 anaesthesia procedures, with 41.5 cases/10,000 anaesthesia procedures for urgent/emergent cases and 2.74/10,000 anaesthesia procedures for elective surgeries. Fifty-six patients did not achieve ROSC (case-fatality rate: 35%).
ROC analysis of the predictors. The specificity, sensitivity, area under the curve (AUC) and thresholds for each predictor of mortality are shown in Table 3. The LHR and post-test probabilities are shown in Table 4.
Discussion
The main findings of this study were that, based on the predictors for no ROSC, a threshold of 13 minutes of ICA yielded a significantly higher AUC than the other predictors.
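The cut-off selection, likelihood ratios and Fagan-style post-test probabilities used throughout this analysis follow a standard ROC workflow. The short sketch below only illustrates these calculations in Python with synthetic placeholder data; it is not the authors' R/pROC pipeline, and the simulated predictor values, group sizes and variable names are assumptions.

```python
# Minimal sketch (not the authors' original R/pROC code): ROC analysis with a
# Youden-optimal cut-off, likelihood ratios and a Fagan-style post-test probability.
# The arrays below are illustrative placeholders, not study data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical predictor (e.g., ICA duration in minutes) and binary outcome (1 = no ROSC).
duration = np.concatenate([rng.normal(8, 4, 100), rng.normal(20, 6, 56)]).clip(min=1)
outcome = np.concatenate([np.zeros(100), np.ones(56)])

fpr, tpr, thresholds = roc_curve(outcome, duration)
auc = roc_auc_score(outcome, duration)

youden = tpr - fpr                      # Youden index = sensitivity + specificity - 1
best = np.argmax(youden)
cutoff = thresholds[best]
sens, spec = tpr[best], 1 - fpr[best]

lr_pos = sens / (1 - spec)              # positive likelihood ratio (+LHR)
lr_neg = (1 - sens) / spec              # negative likelihood ratio (-LHR)

def post_test_probability(pretest, lr):
    """Two-step Fagan nomogram written as an explicit odds calculation."""
    pre_odds = pretest / (1 - pretest)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

pretest = outcome.mean()                # pre-test probability estimated from the sample
print(f"AUC={auc:.3f} cutoff={cutoff:.1f} min sens={sens:.2f} spec={spec:.2f}")
print(f"+LHR={lr_pos:.2f} -LHR={lr_neg:.2f} "
      f"post-test(+)={post_test_probability(pretest, lr_pos):.2f}")
```

The same odds-based calculation is what converts a pre-test probability and a likelihood ratio into the post-test probabilities discussed below.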
For the 24 h mortality, no predictors had prognostic value. For the 1-year mortality, the GCS 24 h after ICA had the highest AUC. The ICA duration was independently associated with no ROSC and 1-year mortality and had lower cut-offs for longer survival. Ray et al. proposed that a biomarker with good diagnostic value would have an accuracy of 0.75-0.90, a +LHR between 5 and 10, a -LHR between 0.1 and 0.2, and an AUC between 0.75 and 0.90 20 . An ICA duration of greater than 13 minutes predicted no ROSC. This means that if an ICA lasts for more than 13 minutes, there is a 93% probability that this patient will not achieve ROSC. Of note, the ICA duration had a direct association with the degree of hypoxia, which is a predictor of poor outcome following out-of-hospital CA 21,22 . In animal studies, the average time of anoxia associated with irreversible neurologic damage is between 4 and 10 minutes 23,24 . However, in humans, when effective CPR is present, the CA duration can be more than 17 minutes while still having a good neurological outcome 21 . One out-of-hospital CA study considered only a CA duration of greater than 25 minutes as a risk of death 22 . In contrast to previous findings, in the 1-year mortality analysis, an ICA duration of 5.5 minutes had the greatest AUC for sensitivity and specificity and was associated with a higher probability of death, an even lower ICA duration cut-off than in previous studies 21,22 . This finding was supported by the fact that, in patients who achieved ROSC, there was a 1% increase in the patient's likelihood of death for each minute of CA according to the 1-year mortality. Several studies linked the cause of ICA to a prognostic factor [25][26][27][28] . In our study, hypovolemia as a cause of ICA was independently associated only with no ROSC. This is in accordance with previous studies on the association of the number of transfused units, the amount of intraoperative bleeding and hypovolemia with in-hospital mortality [26][27][28] . In addition, these studies showed that the cause of ICA was no longer associated with 30-day mortality, which is in accordance with our findings. One may assume that after the first 24 h, hypovolemia is most likely resolved, and organ damage (ischaemia-reperfusion injury) is mainly associated with the outcome. Furthermore, the fact that the ICA duration has a better prognostic value than the cause of ICA for mortality might suggest that organ damage is more time-dependent than aetiology-dependent. The use of the GCS after CA has been linked to neurological prognosis. The higher the GCS is 24 h after CA, the better the neurological outcome 29,30 . It is estimated that 80% of patients will be comatose after CA 31 . Within 24 h, more than 95% of patients who recover consciousness will awake 31 . Within 48 h, if targeted temperature management is present, 78% of patients who recover consciousness will awake 32 . We found that the GCS without a verbal response 24 h after the event was a predictor of 1-year mortality, with a good AUC, a high +LHR and a low -LHR, yielding a positive post-test probability of 82% and a negative post-test probability of 13%. Hence, we can assume that if the value of the GCS is lower than 15 or 10 T after 24 h, there is an 82% probability of 1-year mortality for the patient. Thus, this variable can be classified as having good prognostic value for mortality 20 .
Ultimately, since all patients who survived for one year had a GCS of 15 or 10 T, we could also infer that these conclusions, drawn for mortality, could be extrapolated to neurological outcomes. To the best of our knowledge, this is the first study to evaluate laboratory and clinical data at admission, immediately before ICA, immediately after ROSC, and 24 h after ROSC. This study was also the first to analyse the association of changes in laboratory data during the first 24 h with the 1-year mortality and to include a sensitivity and specificity analysis of each of these variables. Regarding laboratory data, PT/INR at admission was independently associated with a greater risk of death within 24 h, and its increase in the first 24 h was associated with 1-year mortality. In this manner, we could presume that PT/INR should be analysed as both a static and a dynamic variable. During acidosis, hypothermia, and/or haemodilution, which are common situations for patients experiencing or recovering from CA, the coagulation system is greatly affected, resulting in hypocoagulation 33 . Recently, hypocoagulation has been attributed to tissue hypoperfusion, with a direct association between the degree of hypoperfusion and the intensity of changes in the coagulation system 34 . In a previous study, base excess levels lower than −6 were associated with increased PT and aPTT, reinforcing the theory that hypoperfusion can generate coagulopathy 35 . In this study, most of the patients had a base excess lower than −6, either before or after the event, which is a possible explanation for the increase in PT/INR. Although the PT/INR variation had a high specificity and an infinite +LHR for 1-year mortality, the sensitivity was low, resulting in a small AUC and a low −LHR. Lactate, which is another marker of hypoperfusion, did not correlate with the outcomes. This might be because most of the analysed patients (95.6%) had high lactate levels after CA and no patient with normal lactate levels died. Finally, a surprising result of our study is that, contrary to previous studies, the ASA-PS and the comorbidities were neither independently associated with, nor had any prognostic value for, any of the analysed outcomes. The inclusion of trauma patients, who were mostly ASA-PS I and II, might explain this lack of association 2,25,36,37 . The high number of trauma patients might have biased the analysis of ASA-PS as a predictor. Limitations. This study had some limitations. First, we pooled and compared all surgeries: elective, urgent and emergent. This was done due to the low prevalence of ICA, especially in elective surgeries. Thus, the sample in this study consisted of more urgent and emergent surgeries than elective surgeries. This fact, however, allowed greater external validity of the study, especially for tertiary academic hospitals, where there is a mixture of elective, urgent and emergent surgeries. In addition, we could not differentiate between the total numbers of procedures performed for trauma and non-trauma surgeries due to the lack of an electronic medical record. Second, this was a single-centre study performed at a tertiary academic hospital where the patients are clinically more severe, which might have increased the risk of death. The advantage of a single-centre design is that one can assume standardized patient care, including the management of CPR. Third, this was a retrospective study based on patients' charts. There might be some record bias that we were not aware of.
Some anaesthesiologists might have underestimated the duration of the ICA, while others might have misdiagnosed the cause or the electrical rhythm. Fourth, another important limitation is the decision to continue resuscitative efforts during cardiac arrest. A longer ICA duration (especially if longer than 20 minutes) is more prone to termination of resuscitative efforts 38 . In our retrospective study, the treating clinicians were not blinded to the ICA duration at the time of the ICA, and the predictive estimates may be artificially inflated. Although there was some loss to follow-up, it was very low; most of the patients either died during hospitalization or continued to visit the hospital for routine follow-up, so the loss to follow-up was limited. In addition, the institutional electronic system logs every time a patient comes for a consultation, allowing the authors to check for survival. Finally, we acknowledge the limitations caused by the wide confidence intervals of some of the analyses, the large number of factors analysed, and the poor calibration and discrimination of some models.
Conclusion
ICA duration and GCS 24 h after the event had the best prognostic value for no ROSC and 1-year mortality. For 24 h mortality, no predictors had prognostic value. Larger, multicentre studies should be performed to validate these findings.
Information geometry in Gaussian random fields: searching for an arrow of time in complex systems Random fields are characterized by intricate non-linear relationships between their elements over time. However, what is a reasonable intrinsic definition for time in such complex systems? Here, we discuss the problem of characterizing the notion of time in isotropic pairwise Gaussian random fields. In particular, we are interested in studying the behavior of these fields when temperature deviates from infinity. Our investigations are focused in the relation between entropy and Fisher information, by the definition of the Fisher curve. The results suggest the emergence of an arrow of time as a consequence of asymmetrical geometric deformations in the random field model’s metric tensor. In terms of information geometry, the process of taking a random field from a lower entropy state A to a higher entropy state B and then bringing it back to A, induces a natural intrinsic one-way direction of evolution. In practice, there are different trajectories in the information space, suggesting that the deformations induced by the metric tensor into the parametric space (manifold) are not reversible for positive and negative displacements in the inverse temperature parameter direction. In other words, there is only one possible orientation to move through different entropic states along a Fisher curve. Introduction Since the origins of the human race, the concept of time has always intrigued mankind.Along centuries of evolution many philosophers and researchers have studied this fascinating but seemingly obscure topic [1][2][3].What is time?Does it really exist?Why does time seem to flow in one single direction?Is the passage of time merely an illusion?We certainly do not have definitive answers to all these questions.In an attempt to study the effect of the passage of time in complex systems, this paper proposes to investigate a reasonable way to characterize an intrinsic notion of time in random fields composed by Gaussian variables.Our study focuses on an information-theoretic perspective, motivated by the connection between Fisher information and the geometric structure of stochastic models, provided by information geometry [4,5].The proposed framework is mostly based on a data-driven approach, that is, we make use of intensive computational simulations to achieve our conclusions. During the last decades, the notion of information has increasingly become more present and relevant in any scale of modern society, as the volume of data that is being produced by scientific experiments is larger than ever.Being able to decode the symbols in this ocean of data is an essential step to learn, understand and assess the rules governing complex phenomena that are part of our world.A major challenge in dealing with complex systems concerns the mining, identification and further classification of patterns and symbols that convey relevant information about the underlying processes that govern their behavior.After the pieces of information are gathered, the global shape starts to emerge, as if an intricate puzzle had been solved.In this scenario, computational tools for exploratory data analysis are a fundamental component of this data-driven knowledge discovery process. 
However, one drawback of intensive data-driven approaches is that most exploratory data analysis methods usually rely on the independence assumption, that is, there is no relation between any two random variables, making it difficult to quantify the influence of a set of variables over another set of variables in a random sample from a parametric stochastic model.Independent and identically distributed random variables belonging to the exponential family of probability distributions define the basic approach for any classical statistical inference framework [6][7][8]. Random field models arise as a natural generalization of the classical approach by the simple replacement of the independence assumption by a more realistic conditional independence hypothesis [9,10].Basically, in a Markov random field (MRF), knowledge of a finite-support neighborhood around a given variable isolates it from all the remaining variables.A further simplification consists in considering a pairwise interaction model, which means that we are constraining the size of the maximum clique to be two.In other words, a pairwise model captures only binary relationships.Furthermore, if the random field model is isotropic, all the information regarding the spatial dependence structure of the system is conveyed by a single coupling parameter, from now on denoted by β.This parameter is widely known as the inverse temperature of the system.As the value of this parameter deviates from zero, the more our model deviates from the classical statistical scenario (regular exponential family + independence hypothesis).In the Gaussian case for instance, by introducing some degree of dependence between the random variables, assuming that β = 0, we are essentially moving towards a curved exponential family. Basically, the main goal of this investigation is to use information geometry as a mathematical tool to measure the emergence of an intrinsic notion of time in complex systems modeled by random fields in which temperature is allowed to deviate from infinity.Computational simulations validate our claim that the arrow of time is possibly a consequence of asymmetrical geometric deformations in the metric tensor (Fisher information) of the statistical manifold of the random field model, when the inverse temperature parameter is disturbed. Recently, expressions to compute the expected Fisher information regarding the inverse temperature parameter in Gaussian Markov random field models have been proposed by the authors [11].Also, Markov Chain Monte Carlo simulations have shown that expressing Fisher information in terms of tensor (Kronecker) and pointwise (Hadamard) matrix products leads to more efficient and faster computations.Finally, an indirect but fundamental problem involved in the measurement of these information-theoretic quantities is the estimation of the inverse temperature parameter of a random field, given the observation of a single snapshot of the system.More details about the proposed methodology and the obtained results are discussed in later sections of the paper. Fisher Information in MRF models The concept of Fisher information [12,13] has been shown to reveal several properties of statistical procedures, from lower bounds on estimation methods [6][7][8] to information geometry [4,5].In summary, we can think of Fisher information as being a likelihood analog of Shannon entropy, which is a probability-based measure of uncertainty. 
In this paper, our goal is to show that, in an exploratory data analysis context, Fisher information plays a central role in providing tools for measuring and quantifying the behavior of random fields.In this scenario, the most interesting feature of random field models over the classic statistical ones is the possibility to take the dependence between pieces of information into account.Moreover, this underlying dependence structure arises in terms of the system's temperature, which may even be variable along time. The Information Equality It is known from statistical inference theory that information equality holds for independent observations in the regular exponential family of distributions [6][7][8].In other words, we can compute the expected Fisher information of a statistical model p (X|θ) regarding a parameter of interest θ by two equivalent ways, since it is possible to interchange the integration and differentiation operators: where l (θ; X) denotes the likelihood function, that is, the probability density function (pdf) interpreted as a function of the model parameters.In our investigations, we replace the pdf of the model by a local conditional density function, which according to the Hammersley-Clifford theorem [14], characterizes the random field model.In fact, this remarkable result states the equivalence between Markov random Fields (local model) and Gibbs random fields (global model).However, given the intrinsic spatial dependence structure of random field models, information equality is not a natural condition.In general, when the inverse temperature parameter gradually drifts apart from zero (T deviates from infinity), this information "equilibrium" fails.Thus, in random field models we have to consider two different versions of Fisher information, from now on denoted by type-I (due to the first derivative of the likelihood function) and type-II (due to the second derivative of the likelihood function).Eventually, when certain conditions are satisfied, these two values of information will converge to a unique bound.Therefore, in random fields, these two versions of Fisher information play distinct roles, especially in quantifying the uncertainty in the estimation of the inverse temperature parameter. Fisher Information in the Gaussian Markov Random Field Model Gaussian random fields are important models for dealing with spatially dependent continuous random variables once they provide a general framework for studying the non-linear interactions between elements of a stochastic complex system along time.One of the major advantages of these models is the mathematical tractability, which allows us to derive exact closed-form expressions for both maximum pseudo-likelihood estimators of the inverse temperature parameter and expected Fisher information.Due to the Hammersley-Clifford theorem, it is also possible to characterize these random fields by a set of local conditional density functions (LCDF's), avoiding the use of the joint Gibbs distribution.Definition 1.An isotropic pairwise Gaussian Markov random field regarding a local neighborhood system η i defined on a lattice S = {s 1 , s 2 , . . 
., s n } is completely characterized by a set of n local conditional density functions p(x_i | η_i , θ), given by:

p(x_i | η_i , θ) = (1 / √(2πσ²)) exp{ −(1/(2σ²)) [ x_i − µ − β Σ_{j∈η_i} (x_j − µ) ]² }    (2)

with θ = (µ, σ², β), where µ and σ² are the expected value and the variance of the random variables, and β is the inverse temperature or coupling parameter. Note that for β = 0, the model degenerates to the usual Gaussian distribution, which belongs to the regular exponential family. It has been shown that the geometric structure of regular exponential family distributions exhibits constant curvature. It is also known that, from an information geometry perspective [4,5], the natural Riemannian metric of these probability distribution manifolds is given by the Fisher information. However, little is known about information geometry on more general statistical models, such as random field models. In this paper, our primary objective is to study, from an information theory perspective, how changes in the inverse temperature parameter affect the metric tensor of Gaussian Markov random field models, more precisely, the Fisher information regarding the β parameter. By measuring this quantity we are actually capturing and quantifying an important component of a complex deformation process induced by a displacement in the inverse temperature parameter direction.
Maximum Pseudo-Likelihood Estimation in MRF Models
Before we can compute the expected Fisher information in a random field, it is necessary to estimate the model parameters. In this paper, the Gaussian Markov random field parameters µ and σ² are both estimated by the sample mean and variance, respectively (the maximum likelihood estimates). However, maximum likelihood estimation is intractable for the inverse temperature parameter, due to the existence of the partition function in the joint Gibbs distribution. An alternative, proposed by Besag [15], is to perform maximum pseudo-likelihood estimation, which is based on the conditional independence principle. The pseudo-likelihood function is defined as the product of the LCDF's for all the observations in the random field.
Definition 2. Let an isotropic pairwise Markov random field model be defined on a rectangular lattice S = {s 1 , s 2 , . . ., s n } with a neighborhood system η_i . Assuming that X^(t) = {x_1^(t), x_2^(t), . . ., x_n^(t)} denotes the set corresponding to the observations at a time t (a snapshot of the random field), the pseudo-likelihood function of the model is defined by:

L(θ; X^(t)) = ∏_{i=1}^{n} p(x_i^(t) | η_i^(t), θ)    (3)

The pseudo-likelihood function is the product of the local conditional density functions throughout the field. Note that the pseudo-likelihood function is a function of the model parameters.
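As an illustration of Definitions 1 and 2, the sketch below evaluates the log-pseudo-likelihood of a lattice snapshot under the isotropic pairwise GMRF conditional given above. It is a minimal reconstruction for illustration only: the 8-neighbour (second-order) system, the shift-and-crop trick for gathering neighbours, and the synthetic field are assumptions, not part of the original formulation.

```python
# Minimal sketch of the log-pseudo-likelihood of an isotropic pairwise GMRF
# (Definitions 1 and 2), assuming the conditional
#   p(x_i | eta_i) = N(mu + beta * sum_{j in eta_i}(x_j - mu), sigma^2).
import numpy as np

def neighbour_sum(field, mu):
    """Sum of (x_j - mu) over the 8 nearest neighbours of every interior site."""
    c = field - mu
    s = np.zeros_like(c)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            s += np.roll(np.roll(c, di, axis=0), dj, axis=1)
    return s[1:-1, 1:-1]          # discard wrap-around borders

def log_pseudo_likelihood(field, mu, sigma2, beta):
    """log L(theta; X) = sum_i log p(x_i | eta_i, theta) over interior sites."""
    x = field[1:-1, 1:-1]
    resid = x - mu - beta * neighbour_sum(field, mu)
    n = resid.size
    return -0.5 * n * np.log(2 * np.pi * sigma2) - np.sum(resid ** 2) / (2 * sigma2)

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(64, 64))   # beta = 0 snapshot (independent Gaussians)
for b in (0.0, 0.05, 0.1):
    print(f"beta={b:.2f}  logPL={log_pseudo_likelihood(X, 0.0, 1.0, b):.1f}")
```

Maximizing this function over β, which can be done analytically as in the next subsection, yields the MPL estimator of the inverse temperature.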
Estimating the Inverse Temperature in the GMRF Model
To derive the maximum pseudo-likelihood (MPL) estimator of the inverse temperature parameter, we proceed by plugging equation (2) into equation (3) and taking the logarithm:

log L(θ; X^(t)) = −(n/2) log(2πσ²) − (1/(2σ²)) Σ_{i=1}^{n} [ x_i − µ − β Σ_{j∈η_i} (x_j − µ) ]²    (4)

By differentiating equation (4) with respect to β and properly solving the pseudo-likelihood equation, we obtain the following MPL estimator for β:

β̂_MPL = [ Σ_{i=1}^{n} (x_i − µ) Σ_{j∈η_i} (x_j − µ) ] / [ Σ_{i=1}^{n} ( Σ_{j∈η_i} (x_j − µ) )² ]    (5)

Considering that the model is defined on a regular rectangular 2D lattice, the cardinality of the neighborhood system, |η_i|, is spatially invariant. Thus, each variable depends on a fixed number of neighbors in the lattice, and the maximum pseudo-likelihood estimator of the inverse temperature can be rewritten in terms of cross covariances, since equation (5) can be expressed as:

β̂_MPL = [ Σ_{j∈η_i} σ̂_ij ] / [ Σ_{j∈η_i} Σ_{k∈η_i} σ̂_jk ]    (6)

where σ̂_ij denotes the sample covariance between the central variable x_i and x_j ∈ η_i . Similarly, σ̂_jk denotes the sample covariance between two variables belonging to the neighborhood system η_i (note that the definition of a neighborhood system η_i does not include the location s_i). All the information regarding the inverse temperature parameter is conveyed by the covariance matrix of the observable local interaction patterns (second-order statistics), which is somehow expected since we are dealing with Gaussian random variables.
Expected Fisher Information
As mentioned earlier, in the GMRF model it is possible to obtain exact closed-form expressions for the expected Fisher information. In this section, we proceed with the derivation of both the type-I (Φ β) and type-II (Ψ β) expected Fisher information in isotropic pairwise Gaussian Markov random fields. In recent research efforts, the authors have already derived these expressions in a previous investigation. In this paper, we will not cover all the steps involved in the derivation; for more details, the reader is referred to [11]. To derive an expression for Φ β , we plug the LCDF of the isotropic pairwise GMRF model (equation (2)) into the first definition of Fisher information in equation (1), which leads to an expectation involving fourth-order moments of the local observations. Using Isserlis' theorem [16] to compute these higher-order moments of normally distributed random variables, the resulting expression involves only the covariances σ_ij between the central variable x_i and a neighbor x_j ∈ η_i , and the covariances σ_jk , σ_kl , σ_jl , σ_lm , σ_km and σ_jm between pairs of neighboring variables in η_i . Therefore, we can express Φ β in terms of the covariances between the random variables in a local neighborhood system, which means that we can use the covariance matrix of the local patterns to compute it. Following the same approach, it is possible to derive an expression for the type-II expected Fisher information, Ψ β :

Ψ_β = (1/σ²) Σ_{j∈η_i} Σ_{k∈η_i} σ_jk    (9)

where σ_jk is the covariance between two neighboring variables in η_i . Note that, unlike Φ β , Ψ β does not depend explicitly on β (the inverse temperature). In order to simplify the notation and also to make computations faster, the expressions for Φ β and Ψ β can be rewritten in matrix-vector form. Let Σ_p be the covariance matrix of the random vectors p_i , i = 1, 2, . . ., n, obtained by lexicographically ordering the local configuration patterns x_i ∪ η_i . In this work, we choose a second-order neighborhood system, making each local configuration pattern a 3 × 3 patch. Thus, since each vector p_i has 9 dimensions, the resulting covariance matrix Σ_p is 9 × 9.
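A direct implementation of the closed-form estimator in equation (5) takes only a few lines. The sketch below is a hedged illustration: it assumes the same second-order neighbourhood, estimates µ by the sample mean, and uses a synthetic independent field, for which the estimate should stay close to zero.

```python
# Sketch of the closed-form maximum pseudo-likelihood estimator of beta obtained by
# setting d(log PL)/d(beta) = 0, as described above. The toy field is an assumption;
# mu is taken as the sample mean.
import numpy as np

def beta_mpl(field):
    mu = field.mean()
    c = field - mu
    s = np.zeros_like(c)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di or dj:
                s += np.roll(np.roll(c, di, axis=0), dj, axis=1)
    ci, si = c[1:-1, 1:-1], s[1:-1, 1:-1]      # interior sites only
    return np.sum(ci * si) / np.sum(si ** 2)   # numerator / denominator of the estimator

rng = np.random.default_rng(2)
print(beta_mpl(rng.normal(size=(128, 128))))   # close to 0 for an independent field
```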
Let Σ⁻_p be the sub-matrix of dimensions 8 × 8 obtained by removing the central row and central column of Σ_p (these elements are the covariances between x_i and each one of its neighbors x_j ∈ η_i). Also, let ρ be the vector of dimensions 8 × 1 formed by all the elements of the central row of Σ_p , excluding the middle one (which is actually the variance of x_i). Figure 1 illustrates the process of decomposing the covariance matrix Σ_p into the sub-matrix Σ⁻_p and the vector ρ in an isotropic pairwise GMRF model defined on a second-order neighborhood system (8 nearest neighbors). Given the above, we can express equations (8) and (9) in a tensorial form using Kronecker products. The following definitions provide a computationally efficient way to compute both Φ β and Ψ β by exploring tensor products.
Definition 3. Let an isotropic pairwise GMRF be defined on a lattice S = {s 1 , s 2 , . . ., s n } with a neighborhood system η_i of size K (usual choices for K are even values: 4, 8, 12, 20 or 24). Assuming that X^(t) = {x_1^(t), x_2^(t), . . ., x_n^(t)} denotes the global configuration of the system at time t, and both ρ and Σ⁻_p are defined according to Figure 1, the type-I expected Fisher information Φ β for X^(t) is given by equation (10), where A₊ denotes the summation of all the entries of the matrix A (not to be confused with a matrix norm) and ⊗ denotes the Kronecker (tensor) product.
Similarly, it is possible to define Ψ β using a matrix-vector notation and tensor products.
Definition 4. Let an isotropic pairwise GMRF be defined on a lattice S = {s 1 , s 2 , . . ., s n } with a neighborhood system η_i of size K (usual choices for K are 4, 8, 12, 20 or 24). Assuming that X^(t) = {x_1^(t), x_2^(t), . . ., x_n^(t)} denotes the global configuration of the system at time t and Σ⁻_p is defined according to Figure 1, the type-II expected Fisher information Ψ β for X^(t) is given by equation (11).
Figure 1. Decomposition of the covariance matrix Σ_p into the sub-matrix Σ⁻_p and the vector ρ on a second-order neighborhood system (K = 8).
By expressing both Φ β and Ψ β in terms of Kronecker products, it is possible to compute Fisher information in an efficient way during computational simulations. Finally, using this matrix-vector notation, the maximum pseudo-likelihood estimator of the inverse temperature parameter can be rewritten as β̂_MPL = ρ₊ / (Σ⁻_p)₊ .
Entropy in the GMRF Model
Our definition of entropy in the Gaussian Markov random field follows the same process employed to derive Φ β and Ψ β . Knowing that the entropy of a random variable x is defined as the expected value of its self-information, given by −log p(x), we have H_β = E[ −log p(x_i | η_i , θ) ], which can be expanded in terms of the covariances of the local patterns. Using the same matrix-vector notation introduced in the previous sections, we can further simplify the expression for H β .
Definition 5. Let an isotropic pairwise GMRF be defined on a lattice S = {s 1 , s 2 , . . ., s n } with a neighborhood system η_i . Assuming that X^(t) = {x_1^(t), x_2^(t), . . ., x_n^(t)} denotes the global configuration of the system at time t, and ρ and Σ⁻_p are defined according to Figure 1, the entropy H β for X^(t) is:

H_β = H_G − (β/σ²) ρ₊ + (β²/2) Ψ_β    (14)

where H_G = 0.5 [log(2πσ²) + 1] denotes the entropy of a Gaussian random variable with variance σ² and Ψ β is the type-II expected Fisher information. For more details the reader is referred to [11].
Uncertainty in the Inverse Temperature Estimation
In estimating the inverse temperature parameter of random fields via maximum pseudo-likelihood, a relevant question emerges: how can we measure the uncertainty in the estimation of β? Is it possible to quantify this uncertainty? We will see that both versions of Fisher information play a central role in answering this question.
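The matrix-vector quantities of Definitions 3-5 can be assembled directly from the sample covariance matrix of the 3 × 3 local patterns. The sketch below is a reconstruction consistent with the text rather than a copy of the paper's exact tensor expressions: Ψ_β and H_β follow the forms given above, while Φ_β is assembled through the Isserlis decomposition E[A²B²] = E[A²]E[B²] + 2E[AB]², which is an assumption about how the type-I information combines the same statistics.

```python
# Sketch of the information-theoretic measures built from the covariance matrix of the
# 3x3 local configuration patterns. Sigma_p is split into the 8x8 sub-matrix Sigma_m
# (Sigma_p minus the central row/column) and the 8-vector rho, as in Figure 1.
import numpy as np

def local_patterns(field):
    """Lexicographically ordered 3x3 patches (one 9-vector per interior site)."""
    h, w = field.shape
    return np.array([field[i-1:i+2, j-1:j+2].ravel()
                     for i in range(1, h-1) for j in range(1, w-1)])

def information_measures(field, beta):
    p = local_patterns(field)
    sigma2 = field.var()
    Sigma_p = np.cov(p, rowvar=False)             # 9 x 9 covariance of the patterns
    centre = 4                                    # index of x_i inside the 3x3 patch
    rho = np.delete(Sigma_p[centre], centre)      # cov(x_i, x_j), j in eta_i (8-vector)
    Sigma_m = np.delete(np.delete(Sigma_p, centre, 0), centre, 1)  # neighbour block

    rho_sum, Sm_sum = rho.sum(), Sigma_m.sum()
    psi = Sm_sum / sigma2                         # type-II Fisher information
    # Isserlis: E[A^2 B^2] = E[A^2] E[B^2] + 2 E[AB]^2 for zero-mean Gaussians (assumed)
    EA2 = sigma2 - 2 * beta * rho_sum + beta**2 * Sm_sum
    EAB = rho_sum - beta * Sm_sum
    phi = (EA2 * Sm_sum + 2 * EAB**2) / sigma2**2 # type-I Fisher information
    H_G = 0.5 * (np.log(2 * np.pi * sigma2) + 1)
    entropy = H_G - beta * rho_sum / sigma2 + 0.5 * beta**2 * psi
    beta_hat = rho_sum / Sm_sum                   # matrix-vector form of the MPL estimator
    return phi, psi, entropy, beta_hat

rng = np.random.default_rng(3)
print(information_measures(rng.normal(size=(64, 64)), beta=0.0))
```

For an independent field (β = 0) the two information values coincide at approximately K = 8, which is the information equality discussed above.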
It is known from the statistical inference literature that both maximum likelihood and maximum pseudo-likelihood estimators share an important property: asymptotic normality [17,18].It is possible, therefore, to characterize their behavior in the limiting case by knowing the asymptotic variance.A limitation from maximum pseudo-likelihood approach is that there is no result proving that this method is asymptotically efficient (maximum likelihood estimators have been shown to be asymptotically efficient since in the limiting case their variance reaches the Cramer-Rao lower bound).It is known that the asymptotic covariance matrix of maximum pseudo-likelihood estimators is given by [19]: with where H and J denote, respectively, the Jacobian and Hessian matrices regarding the logarithm of the pseudo-likelihood function.Considering the single inverse temperature parameter, β, we have the following definition for the asymptotic variance of the maximum pseudo-likelihood estimator: However, the expected value of the first derivative of log L θ; X (t) (score function) with relation to β is zero: and the second term in the numerator vanishes, leading us to the final expression for υ β as a function of both type-I and type-II Fisher information, Φ β and Ψ β : showing that in the information equilibrium condition (Φ β = Ψ β ) we have the traditional Cramer-Rao lower bound, given by the inverse of the Fisher information.This information equality condition holds for models in the exponential family of distributions under certain regularity conditions (the differentiation and integration operators are interchangeable).Therefore, we can compute the asymptotic variance of the MPL estimator of the inverse temperature parameter in Gaussian Markov random fields.A simple interpretation of this equation indicates that the uncertainty in the estimation of the inverse temperature parameter is minimized when Ψ β is maximized and Φ β is minimized.Essentially, it means that, in average, the local likelihood functions should not be flat (there is a reduced number of candidates for β) and most local patterns must be aligned to the expected global behavior.In the following, we provide a definition for the asymptotic variance of the inverse temperature MPL estimator in the Gaussian Markov random field model (using the matrix-vector notation).Definition 6.Let an isotropic pairwise GMRF be defined on a lattice S = {s 1 , s 2 , . . 
., s n } with a neighborhood system η_i . Assuming that X^(t) = {x_1^(t), x_2^(t), . . ., x_n^(t)} denotes the global configuration of the system at time t, and ρ and Σ⁻_p are defined in the same way as described in Figure 1, the asymptotic variance of the maximum pseudo-likelihood estimator of the inverse temperature parameter β is given by (using the same matrix-vector notation) υ_β = Φ_β / Ψ_β², with Φ_β and Ψ_β computed from ρ and Σ⁻_p as in Definitions 3 and 4.
The Fisher Curve of a System
By computing Φ β , Ψ β and H β we have access to three important information-theoretic measures regarding a global configuration of the random field, X^(t). The motivation behind the Fisher curve is the development of a computational tool for the study and characterization of random fields. Basically, the Fisher curve of a system is the parametric curve embedded in this information-theoretic space obtained by varying the inverse temperature parameter β from an initial value β_I to a final value β_F . The resulting curve provides a geometrical interpretation of how the random field evolves from a lower entropy configuration A to a higher entropy configuration B (or vice-versa), since the Fisher information plays an important role in providing a natural metric for the Riemannian manifold of a statistical model [4,5]. We will call the path from a global system configuration A to a global system configuration B the Fisher curve (from A to B) of the system, denoted by F_A^B(β). Instead of using the notion of time as the parameter to build the curve F, we parametrize F by the inverse temperature parameter β. In geometrical terms, we are measuring the deformation in one component of the metric tensor of the stochastic model induced by a displacement in the inverse temperature parameter direction.
Definition 7. Let an isotropic pairwise GMRF model be defined on a lattice S = {s 1 , s 2 , . . ., s n } with a neighborhood system η_i , and let X^(β_1), X^(β_2), . . ., X^(β_n) be a sequence of outcomes (global configurations) produced by different values of β_i (inverse temperature parameters). The system's Fisher curve from A to B is defined as the function F : ℝ → ℝ³ that maps each configuration X^(β_i) to a point (Φ_{β_i}, Ψ_{β_i}, H_{β_i}) in the information space, that is, F_A^B(β) = (Φ_β , Ψ_β , H_β) for β_I ≤ β ≤ β_F , where Φ β , Ψ β and H β denote the type-I expected Fisher information, the type-II expected Fisher information and the entropy of the global configuration X^(β), defined by equations (10), (11) and (14), respectively. We are especially interested in characterizing random fields by measuring and quantifying their behavior as the inverse temperature parameter deviates from zero, that is, when temperature leaves infinity. As mentioned before, the isotropic pairwise GMRF model belongs to the regular exponential family of distributions when the inverse temperature parameter is zero (T = ∞). In this case, it has been shown that the geometric structure, whose natural Riemannian metric is given by the Fisher information matrix (metric tensor), has constant negative curvature (hyperbolic geometry). Besides, Fisher information can be measured in two different but equivalent ways (information equality).
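Tracing a Fisher curve as in Definition 7 therefore amounts to sweeping β, drawing one outcome per value and recording the triple (Φ_β, Ψ_β, H_β). The sketch below shows this bookkeeping only; sample_gmrf and information_measures are hypothetical placeholders standing in for the Metropolis-Hastings sampler and the information measures sketched elsewhere in this section.

```python
# Sketch of how a Fisher curve (Definition 7) can be traced: sweep beta from beta_I to
# beta_F, draw one field outcome per value, and record (Phi_beta, Psi_beta, H_beta).
# `sample_gmrf` and `information_measures` are assumed callables, not library functions.
import numpy as np

def fisher_curve(beta_values, sample_gmrf, information_measures):
    curve = []
    for beta in beta_values:
        field = sample_gmrf(beta)                        # one outcome X^(beta)
        phi, psi, entropy, _ = information_measures(field, beta)
        curve.append((phi, psi, entropy))
    return np.array(curve)                               # points of F(beta) in R^3

# Example sweep: A -> B (beta increasing) and B -> A (beta decreasing), as in the text.
betas_up = np.arange(0.0, 0.5 + 1e-9, 0.001)
betas_down = betas_up[::-1]
# curve_up = fisher_curve(betas_up, sample_gmrf, information_measures)
# curve_down = fisher_curve(betas_down, sample_gmrf, information_measures)
```

Comparing curve_up and curve_down point by point is exactly the asymmetry test discussed in the results below.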
As the inverse temperature increases, the model starts to deviate from this known scenario, and the original Riemannian metric does not correctly represents the geometric structure anymore (since there is an additional parameter).The manifold which used to be 2D (surface) now slowly is transformed (deformed) to a different structure.In other words, as this extra dimension is gradually emerging (since β not null), the metric tensor is transformed (the original 2 × 2 Fisher information matrix becomes a 3 × 3 matrix).We believe that the intrinsic notion of time in the evolution of a random field composed by Gaussian variables (isotropic pairwise GMRF) is caused by the irreversibility of this deformation process.In this particular study, we are concerned only in measuring the Fisher information regarding the inverse temperature parameter (a single component of the metric tensor).A further investigation concerns the derivation, simulation an analysis of the complete Fisher information matrix in Gaussian random fields (the complete metric tensor).We intend to focus at this problem in future works. Results and Discussion In this section, we present some experimental results using computational methods for simulating the evolution of random fields.All the simulations were performed with Markov Chain Monte Carlo (MCMC) algorithms for generating random field outcomes based on the specification of the model parameters.In this paper, we make use of the Metropolis-Hastings algorithm [20]. Our main objective is to measure Φ β , Ψ β and H β along a MCMC simulation in which the inverse temperature parameter β is controlled to guide the global system behavior.Initially, β is set to β M IN = 0, that is, the initial temperature is infinite.In the following, β is linearly increased, with fixed increments ∆β, up to an upper limit β M AX .After that, the reverse process is performed, that is, the inverse temperature is linearly decreased using the same fixed increments (−∆β) down to zero.We are actually performing a positive displacement followed by a negative displacement along the inverse temperature parameter direction.By sensing a component of the metric tensor (Fisher information) at each point, we are trying to capture part of the deformation in the geometric structure of the manifold defined by the random field's parametric space throughout the process. Our simulation was performed using the following parameter settings: µ = 0, σ 2 = 1 (initial value), A = β M IN = 0, B = β M AX = 0.5, ∆β = 0.001 and 1000 iterations.At each iteration, the values of µ and σ 2 are updated by computing the sample mean and sample variance, respectively.The inverse temperature parameter is updated by computing the maximum pseudo-likelihood estimative.Figures 2 and 3 show some samples of the random field during the evolution of the system, the real values of β (used to generate the random field outcomes) and the estimative βMPL along the an entire MCMC simulation.Note that in this model, the maximum pseudo-likelihood estimator of the inverse temperature parameter underestimates the real parameter.The experimental results suggest that an upper bound for βMPL is a value close to 1/K, where K is the size of the neighborhood system.In all experiments we consider a second-order system, which corresponds to K = 8. 
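For completeness, a simplified single-site Metropolis-Hastings update targeting the local conditional densities is sketched below. It mirrors the stated settings (µ = 0, σ² = 1, β ramped upward in small steps) but is an illustrative reconstruction, not the authors' simulation code; the lattice size, proposal width and coarser β step are assumptions chosen to keep the example fast.

```python
# Illustrative Metropolis-Hastings single-site sampler for the isotropic pairwise GMRF
# used in the simulation described above. Simplified sketch, not the authors' code.
import numpy as np

rng = np.random.default_rng(4)

def local_mean(field, i, j, mu, beta):
    """Conditional mean mu + beta * sum_{j in eta_i}(x_j - mu) on a toroidal lattice."""
    h, w = field.shape
    s = 0.0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di or dj:
                s += field[(i + di) % h, (j + dj) % w] - mu
    return mu + beta * s

def metropolis_sweep(field, mu, sigma2, beta, proposal_sd=0.5):
    h, w = field.shape
    for i in range(h):
        for j in range(w):
            x_old = field[i, j]
            x_new = x_old + rng.normal(0.0, proposal_sd)
            m = local_mean(field, i, j, mu, beta)
            # log acceptance ratio of the local conditional densities (symmetric proposal)
            log_alpha = ((x_old - m) ** 2 - (x_new - m) ** 2) / (2.0 * sigma2)
            if np.log(rng.uniform()) < log_alpha:
                field[i, j] = x_new
    return field

field = rng.normal(0.0, 1.0, size=(32, 32))      # small lattice for speed
for beta in np.arange(0.0, 0.5, 0.005):          # the simulation in the text uses 0.001
    field = metropolis_sweep(field, mu=0.0, sigma2=1.0, beta=beta)
```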
Figure 4 shows the asymptotic variance of the maximum pseudo-likelihood estimator of the inverse temperature parameter, given by equation (21), for the isotropic pairwise GMRF model along the MCMC simulation. Note that the critical issue concerning the bias-variance tradeoff in this model is the large bias (since the asymptotic variance is quite small in comparison to the bias). We now proceed to the analysis of the entropy in the isotropic pairwise GMRF model. Figure 5 shows the behavior of H β along the MCMC simulation. Note that entropy shows some fluctuations for small values of β and starts to increase as the inverse temperature parameter deviates from zero (or T deviates from infinity). In the beginning of the simulation, that is, for small values of β, H β fluctuates around a base level. After a certain moment, the system's entropy shows a completely different behavior: it starts to increase rapidly. The simulation results suggest that this behavior is directly related to variations in Fisher information (components of the metric tensor). Figure 6 shows the plot of both forms of Fisher information, Φ β and Ψ β , along the MCMC simulation. Some observations should be pointed out. First, it is clear that both forms of Fisher information significantly diverge for larger values of the inverse temperature parameter. In other words, the information equality prevails only when β is close to zero. Moreover, the asymmetry in both forms of Fisher information is clearly visible, even though the total displacement in the inverse temperature parameter direction is symmetric and adds up to zero. Note also that the way Φ β and Ψ β deviate from the equilibrium condition is significantly different from the way both of them approach this condition.
Figure 4. Asymptotic variance of the maximum pseudo-likelihood estimator of the inverse temperature parameter along the MCMC simulation. In the isotropic pairwise GMRF model, the critical term in the bias-variance tradeoff is the large bias of β̂_MPL, whose value is upper bounded by 1/K, where K is the size of the neighborhood system.
When the temperature is infinity (β = 0), entropy reaches its minimum value. A close inspection of the graph reveals that the behavior of H β is not symmetric, although the total displacement in the inverse temperature direction adds up to zero (from 0 to 0.5 and back). In the following, a more geometric interpretation is discussed. In the beginning of the simulation, when the inverse temperature parameter is zero, the random field model degenerates to a simple Gaussian model (normal density). It is known that in this scenario the parametric space is a surface with constant curvature. In fact, this curvature is negative and equals minus one (hyperbolic geometry) [4]. The metric tensor, used to measure the deformation of the parametric space locally, is given by the Fisher information matrix. However, when temperature deviates from infinity (β deviates from zero), the original surface that represents the parametric space is transformed into a complex 3D Riemannian manifold, whose geometrical structure is defined by a novel metric tensor. The properties of this manifold are unknown, but by measuring the Fisher information regarding the inverse temperature parameter we are trying to gain insights into the geometry of this manifold. In practical terms, what happens to the metric tensor can be summarized as follows: by moving forward δ units in the β direction we sense an effect that is not compatible with the effect produced by a displacement of δ units in the opposite
direction.In other words, moving towards higher entropy states (β increases) is different from moving towards lower entropy states (β decreases).This idea is illustrated by a plot of the Fisher curve of the random field along the simulation.Figure 7 shows the estimated 2D Fisher curves F B A (β) = (Φ β , Ψ β ) for β = 0, . . ., 0.5 (the blue curve) and F A B (β) = (Φ β , Ψ β ) for β = 0.5, . . ., 0 (the red curve).Note that according to the Fisher curve, the basic notion of an arrow of time start to emerge when the information equilibrium condition vanishes.Suppose the following situation: initially, when temperature is infinite, the random field is at a state A. As temperature is gradually reduced (β is increasing), the system reaches a different state A'.During this period, there is no perception of time yet.Now, let us imagine that temperature is being increased (β is decreasing).In terms of information, the path from A' to A is the same as the path from A to A'.In other words, it is not possible to know whether we are moving forward or backwards in time, simply because at this point the notion of time is not clear (as well as the notions of past and future).Seemingly, time behaves as a space-like dimension since it is possible to move in both directions in this information space (the states A and A' are equivalent in terms of entropy, since there is no significant variation of H β ).Let us suppose now that from the state A the random field has evolved to the state A".In this case, the notion of time is clear, since to take the system back to A it is necessary to go through a different path, that is, in terms of information the path from A to A" is not the same as the path from A" to A. In fact, at this point, the deformations induced by the metric tensor into the parametric space (manifold) are not reversible for opposite displacements in the inverse temperature direction (it seems that the deformations caused in the manifold by the emergence of this extra dimension β are difrerent from the deformations induced by the vanishing of this extra dimension).The same curve illustrated by the previous Figure was plotted now in 3 dimensions.Figure 8 shows the 3D Fisher curve of the random field along the same MCMC simulation (now including entropy information).Once more, note that when the system is moving towards a higher entropy state (from A to B) the path is different from the one obtained by bringing the system back to a lower entropy state (from B to A).This natural orientation in the information space induces an arrow of time throughout the evolution of the random field.In other words, the only way to go from A to B by the red path would be running the simulation backwards.Note, however, that when moving along states whose variation in entropy is negligible (practically zero) the notion of time is not apparent.Again, suppose we move from the state A to the state A' indicated in Figure 8.The path from A to A' is the same as the path from A' to A, since both states are in the same base entropic level.In this case, it is not possible to detect whether we are moving forward or backwards in time (it is difficult to perceive the passage of time). 
In order to emphasize the role of Fisher information in studying the emergence of the arrow of time in random fields, Figure 9 shows the parametric curve obtained by plotting some statistics used in the definition of both Φ β and Ψ β , more precisely, Σ − p + and ρ + .The obtained results show that by simply looking at these two measures it is not possible to capture an arrow of time.By analyzing these measurements we cannot say whether the system is moving forwards or backwards in time at all (even for large variations on the inverse temperature parameter).Note that the path from A (β = 0) to B (β = 0.5) is essentially the same as the path from B to A. Therefore, by monitoring only these two quantities we are not actually measuring the deformations induced by the metric tensor to the parametric space (random field model manifold).This section describes the main results obtained in this paper, focusing on the interpretation of the Fisher curve of a random field.Basically, when temperature is infinite entropy fluctuates around a minimum base value and the information equality prevails.From an information geometry perspective, a reduction in temperature (increase in β) causes a series of changes in the random field, since the metric tensor related to the parametric space is drastically deformed in an apparently non-reversible way, inducing the emergence of an arrow of time. By quantifying and measuring an arrow of time in random fields, a relevant aspect that naturally arises concerns the notions of past and future.Suppose the random field is now in a state A, moving towards an increase in entropy (β is increasing).Within this context, the analysis of the Fisher curve suggests a possible interpretation: past is related to a set of states P = X (β−) whose entropies are lower than the entropy of the current state A. Or equivalently, past is also related to a set of states P = X (β+) whose entropies are higher than A, provided the random field is moving towards a lower entropy state. Again, let us suppose the random field is in a state A and moving towards an increase in entropy (β is increasing).Similarly, future refers to a set of states F = X (β+) whose entropies are higher than the entropy of the current state A (or equivalently, future also refers to the set of states F = X (β−) whose entropies are lower than A, provided that the random field is moving towards a decrease in entropy). Note that according to this possible interpretation, the notion of future is related to the direction of the movement, pointed by the tangent vector at a point (Φ β , Ψ β , H β ) of the Fisher curve.Therefore, if along the evolution of the random field there is no significant change in the system's entropy (as when the random field moves from A to A' in Figure 8), it would be possible to access past by simply moving into the opposite direction (by a displacement in the opposite direction along the inverse temperature parameter), as if time were a spatial dimension (given the stochastic nature of the model, this access to the past does not mean we would actually revisit exactly the same state again, only a equivalent one with the same global properties).However, in this case, where there is no significant changes in entropy, the notions of past and future seem to be meaningless. 
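The inverse temperature estimates underlying these curves come from maximum pseudo-likelihood. As a minimal sketch, and assuming the standard isotropic pairwise conditional model X_i | η_i ~ N(μ + β Σ_{j∈η_i}(x_j − μ), σ²) on a second-order (K = 8) neighborhood system, maximizing the pseudo-likelihood in β reduces to a least-squares ratio between each centered site and the sum of its centered neighbors. The function below is our illustration of that idea, not the authors' implementation.

import numpy as np
from scipy.ndimage import convolve

def beta_mpl(field, mu=None):
    """Maximum pseudo-likelihood estimate of beta for an isotropic pairwise
    GMRF on a 2-D lattice with a second-order (8-neighbor) system."""
    x = np.asarray(field, dtype=float)
    mu = x.mean() if mu is None else mu
    centered = x - mu
    kernel = np.ones((3, 3))
    kernel[1, 1] = 0.0                                # exclude the central site
    s = convolve(centered, kernel, mode="reflect")    # sum of centered neighbors
    return float((centered * s).sum() / (s * s).sum())

# Sanity check: for pure white noise (infinite temperature, beta = 0) the
# estimate should be close to zero; the paper reports an upper bound of 1/K
# (here 0.125) for this estimator.
rng = np.random.default_rng(0)
print(beta_mpl(rng.normal(size=(256, 256))))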
Conclusions In this paper, we addressed the problem of measuring the emergence of an arrow of time in Gaussian random field models.To intrinsically investigate the effect of the passage of time, we performed computational simulations of random fields in which the inverse temperature parameter is controlled to guide the system behavior throughout different entropic states.Investigations about the relation between two important information-theoretic measures, entropy and Fisher information, led us to the definition of the Fisher curve of a random field, a parametric trajectory embbeded in an information space, which characterizes the system behavior in terms of variations in the inverse temperature parameter.Basically, this curve provides a geometrical tool for the analysis of random fields by showing how different entropic states are "linked" in terms of Fisher information, which is, by definition, the metric tensor of the underlying random field model parametric space.In other words, when the random field moves along different entropic states, its parametric space is actually being deformed by changes that happen in Fisher information (the metric tensor).In this scientific investigation we observe what happens to this geometric structure when the inverse temperature parameter is modified, that is, when temperature deviates from infinity, by measuring both entropy and Fisher information.An indirect subproblem involved in the solution of this main problem was the estimation of the inverse temperature parameter of a random field, given an outcome (snapshot) of the system.To tackle this subproblem, we used a statistical approach known as maximum pseudo-likelihood estimation, which is especially suitable for random fields, since it avoids computations with the joint Gibbs distribution, often computationally intractable.Our obtained results show that moving towards higher entropy states is different from moving towards lower entropy states, since the Fisher curves are not the same.This asymmetry induces a natural orientation to the process of taking the random field from an initial state A to a final state B and back, which is basically the direction pointed by the arrow of time, since the only way to move in the opposite direction is by running the simulations backwards. Another observation regarding the analysis of the Fisher curves and the computational simulations suggest that the proposed intrinsic notion of time as the rate at which the parametric space is being deformed is highly non-linear.Apparently, the passage of time seems to be "faster" when entropy has an intermediate value (∆H is large for small displacements in β) and "slower" when its value is extremal (very low or very high).In a Gaussian random field, it takes a greater effort to increase the system's entropy around a minimum base level in comparison to an intermediate value.The same observation is valid when decreasing the system's entropy: to move the system away from a maximum entropy state demands "more time" (∆H is small for small displacements in β) since the deformations in the metric tensor are smooth and not abrupt (it is not so clear to perceive the passage of time). 
Future works include the study of the Fisher curve in other random field models, such as the Ising and q-state Potts models. Moreover, we are currently working on the full specification of the Fisher information matrix of the Gaussian random field model (derivation of all 9 components of the metric tensor) to completely characterize the geometrical structure of its manifold and to gain deeper insight into the problems discussed here. Finally, a possible investigation concerns the analysis of the spectrum of the metric tensor throughout MCMC simulations. We expect that the eigenvalues of the
Figure 1. Decomposing the covariance matrix Σ p into Σ −p and ρ on a second-order neighborhood system (K = 8). By expressing both Φ β and Ψ β in terms of Kronecker products, it is possible to compute Fisher information in an efficient way during computational simulations.
Figure 2. Global system configurations along a Markov chain Monte Carlo (MCMC) simulation. Evolution of the random field as the inverse temperature parameter, β, is changed in order to control the expected global behavior.
Figure 3. Variations in β and β_MPL along the MCMC simulation in the GMRF model. The MPL estimates of the inverse temperature parameter in the isotropic pairwise GMRF model underestimate the real parameter values. The obtained results show that the upper bound for β_MPL is 1/K, where K is the size of the neighborhood system.
Figure 4. Asymptotic variance of the maximum pseudo-likelihood estimator of the inverse temperature parameter along the MCMC simulation. In the isotropic pairwise GMRF model, the critical term in the bias-variance tradeoff is the large bias of β_MPL, whose value is upper bounded by 1/K, where K is the size of the neighborhood system.
Figure 5. Entropy in the isotropic pairwise GMRF model along the MCMC simulation. When the temperature is infinite (β = 0), entropy reaches its minimum value. A close inspection of the graph reveals that the behavior of H β is not symmetric, although the total displacement in the inverse temperature direction adds up to zero (from 0 to 0.5 and back).
Figure 6. Fisher information in the isotropic pairwise GMRF model along the MCMC simulation. When the temperature is infinite (β = 0), the information equality prevails; however, for larger values of β, Φ β and Ψ β diverge.
Figure 7. 2D Fisher curve of the random field along the MCMC simulation. The parametric curve was built by varying the inverse temperature parameter β from β_MIN = 0 to β_MAX = 0.5 and back. The results show that moving along different entropic states causes the emergence of a natural orientation in terms of information (the arrow of time).
Figure 8. 3D Fisher curve of the random field along the MCMC simulation. Note that, from a differential geometry perspective, as Φ β deviates from Ψ β the torsion of the curve becomes evident, since it leaves the plane of constant entropy.
Figure 9. Variations in the statistics Σ −p + and ρ + of an isotropic pairwise GMRF model along the MCMC simulation. The parametric curve was built by varying the inverse temperature parameter β from β_MIN = 0 to β_MAX = 0.5 and back. Note that in this case the arrow of time is not evident, since the two curves, F B A (β) and F A B (β), are essentially the same.
Figure 10. The Fisher curve, an arrow of time, and the notions of past and future in the evolution of a Gaussian random field.
Metabolism of IMM-H004 and Its Pharmacokinetic-Pharmacodynamic Analysis in Cerebral Ischemia/Reperfusion Injured Rats

IMM-H004, a derivative of coumarin, is a promising candidate for the treatment of cerebral ischemia. The pharmacodynamic mechanisms of IMM-H004 are still under exploration. The present study was conducted to explore the pharmacoactive substances of IMM-H004 from the perspective of drug metabolism. Four metabolites of IMM-H004 including demethylated metabolites M1 and M2, glucuronide conjugate IMM-H004G (M3), and sulfated conjugate M4 were found in rats in vivo. IMM-H004G was the major metabolite in rats and cultured human hepatocytes, and uridine diphosphate-glucuronosyltransferase (UGT) was found to catalyze the metabolism of IMM-H004 in human liver microsomes (HLMs) and rat liver microsomes (RLMs) with high capacity (V max at 3.25 and 5.04 nmol/min/mg protein). Among 13 recombinant human UGT isoforms, UGT1A7, 1A9, 1A8, and 1A1 appeared to be primarily responsible for IMM-H004G formation. The exposure and duration of IMM-H004G (28,948 h × ng/ml of area under the plasma concentration–time curve (AUC), 6.61 h of t 1/2β) was much higher than that of the parent drug (1,638 h × ng/ml of AUC, 0.42 h of t 1/2β) in transient middle cerebral artery occlusion/reperfusion (MCAO/R) rats, consistent with the malondialdehyde (MDA) inhibition effect for at least 10 h. Further pharmacological study revealed that IMM-H004G exhibited a similar neuroprotective activity to that of the parent drug on both oxygen-glucose deprivation injured PC12 cells and transient MCAO/R injured rats. These results demonstrate that both prototype and IMM-H004G are the active pharmaceutical substances, and IMM-H004G, at least in part, contributes to the maintenance of anti-cerebral ischemia efficacy of IMM-H004.
Keywords: drug metabolism, pharmacokinetic/pharmacodynamics (PK/PD), UDP-glucuronosyltransferases, cytochromes P450, cerebral ischemia, neuroprotection, IMM-H004 INTRODUCTION Stroke is one of the leading causes of disability and death worldwide (Meschia et al., 2014). According to the World Health Organization, 15 million people suffer stroke worldwide each year. Of these, 5 million die and another 5 million are permanently disabled. About 87% of all strokes are ischemic stroke. Up to now the only drug that has been approved by the Food and Drug Administration for the treatment of ischemic stroke is the thrombolytic tissueplasminogen activator. However, due to the short-term treatment time window and hemorrhage transformation, only a few patients benefit from the tissue-plasminogen activator (Peisker et al., 2017). Therefore, it is essential to develop other therapies for patients with acute ischemic stroke. In addition to thrombolytic, neuroprotection is considered as another strategy for the treatment of stroke (Neuhaus et al., 2017). The approval of edaravone for treating stroke patients brings the hope for the development of novel neuroprotective agent (Lee and Xiang, 2018). Despite intensive investigations into its pharmacological activities and mechanisms, the biotransformation of IMM-H004 has not been addressed. The identification of drug metabolic pathways and the characterization of the enzymes involved in drug metabolism are important aspects in drug discovery and development. Detecting and characterizing metabolites in both experimental animals and humans are not only critical to evaluate potential risks for drug development but also helpful to understand the mechanism of drug action and identify pharmacokinetic (PK) properties to further improve and optimize the compound design. Therefore, the objectives of the present study were 1) to identify the major metabolites of IMM-H004 and the enzymes responsible for IMM-H004 metabolism in vivo and in vitro; and (2) to evaluate the pharmacokinetic/pharmacodynamics (PK/PD) relationship of IMM-H004 and the neuroprotective activities of major metabolites in vivo and in vitro. Animals Male Sprague-Dawley rats (260-280 g) were purchased from Vital River Experimental Animal Co., Ltd (Beijing, China). Standard pelleted laboratory chow and water were allowed ad libitum. All experiments were approved by the Animal Care and Welfare Committee of Peking Union Medical College and were strictly taken in accordance with guidelines regarding the use and care of laboratory animals issued by the Institute Animal Care and Welfare Committee. Metabolites in Rats After intravenous (iv) injection with IMM-H004 citrate (6 mg/ kg, dissolved in saline to yield a concentration of 1.2 mg/ml), rats were housed individually in metabolic cages to allow separate collection of urine and feces until 72 h postdose. Additional rats were bile cannulated under light ether anesthesia. Each rat was housed individually in a metabolic cage and allowed to recover from anesthesia for 2 to 3 h. Then bile duct cannulated rats were iv injected with IMM-H004 citrate (6 mg/kg, dissolved in saline to yield a concentration of 1.2 mg/ml), bile samples were collected until 24 h postdose. β-glucuronidase and sulfatase were dissolved in physiological saline to 3 and 10 mg/ml, respectively. Urine and bile samples (50 μL) were mixed with β-glucuronidase or sulfatase solution (200 μL) and incubated at 37°C for 1 h. The incubations were quenched with two volumes of ice-cold acetonitrile. 
Samples without β-glucuronidase or sulfatase were used as controls. The mixtures were centrifuged at 18,800×g for 5 min. A 1 μL aliquot of supernatant was injected into liquid chromatography tandem mass spectrometry (LC-MS/MS) for analysis. Feces samples were homogenized in 10-fold solvent of water and methanol (1:1) and diluted 10-fold in water. Bile and urine samples were diluted 100-fold in water. The concentrations of IMM-H004, M1, M2, and IMM-H004G in diluted bile, urine, and feces samples were determined by LC-MS/MS. In Vitro Incubations Cytochrome P450 (CYP450)-mediated metabolism of IMM-H004 was conducted in RLMs or HLMs. IMM-H004 (10 μM) was incubated with RLMs/HLMs (0.5 mg protein/ml) in a final volume of 0.2 ml Tris-HCl buffer (50 mM, pH 7.4) containing 5 mM MgCl 2 . After 2 min preincubation at 37°C, the reactions were initiated by the addition of reduced nicotinamide adenine dinucleotide phosphate (NADPH) regeneration system (10 mM β-nicotinamide adenine dinucleotide phosphate, 100 mM glucose-6-phosphate and 10 U/ml 6-G-P dehydrogenase). After incubation for 30 min, the reactions were terminated by adding two volumes of ice-cold acetonitrile. Samples without NADPH were used as controls. The incubation mixture was vortexed and centrifuged at 18,800×g for 5 min. A 5 μL aliquot of the supernatant was injected into LC-MS/MS for analysis. Glucuronidation reactions were characterized in RLMs or HLMs. The glucuronidation incubation mixture contained IMM-H004 (10 μM), RLMs/HLMs (1 mg protein/ml), alamethicin (50 μg/ mg protein), and UDPGA (3 mM) in Tris-HCl buffer (50 mM, pH 7.4) containing 5 mM MgCl 2 at a final volume of 200 μL. After preincubation on ice for 15 min, the reactions were initiated by the addition of UDPGA and were incubated at 37°C for 30 min before being quenched with two volumes of ice-cold acetonitrile. Samples without UDPGA were used as controls. The incubation mixture was vortexed and centrifuged at 18,800×g for 5 min. A 1 μL aliquot of the supernatant was injected into LC-MS/MS for analysis. Sulfation reactions were carried out using rat or human liver cytosol. The incubation mixture contained IMM-H004 (10 μM), rat/human liver cytosol (1 mg protein/ml), and PAPS (3 mM) in potassium phosphate buffer (100 mM, pH 7.4) at a final volume of 200 μL. After 5 min preincubation at 37°C, the reactions were initiated by the addition of PAPS. After incubation for 30 min, the reactions were terminated by the addition of two volumes of ice-cold acetonitrile. Samples without PAPS were used as controls. The incubation mixture was vortexed and centrifuged at 18,800×g for 5 min. A 1 μL aliquot of the supernatant was injected into LC-MS/MS for analysis. Structural Identification of Metabolites by Liquid Chromatography Tandem Mass Spectrometry and Nuclear Magnetic Resonance (NMR) Identification of M1 and M2 was based on comparing the retention time and fragmentation mass spectrum with standards. Chromatographic separation was performed on a CAPCELL PAK ADME (absorption, distribution, metabolism, and excretion) column (3 μm, 2.1 mm × 100 mm, Shiseido, Tokyo, Japan). The mobile phases were water with 0.5% formic acid (mobile phase A) and methanol with 0.5% formic acid (mobile phase B) pumped at 0.25 ml/min. The elution condition was 20% mobile phase B for 1.0 min, ascending to 80% B in 8.0 min, holding for 1.0 min, and reequilibrating to 20% B within 0.5 min and maintained for 3.5 min. IMM-H004G was separated on a Zorbax C18 column (5 μm, 4.6 mm × 150 mm, Agilent, USA). 
The mobile phases were water with 0.5% formic acid (mobile phase A) and acetonitrile with 0.5% formic acid (mobile phase B) pumped at 1 ml/min. The elution condition was started with 2% mobile phase B, ascending to 30% B in 9.0 min, ascending to 90% B within 0.5 min and maintained for 4 min, and descending to 2% B within 0.5 min and maintained for 5 min. Fraction eluted at 7.0 to 7.5 min was collected, concentrated, and dried. 1 H NMR and 13 C NMR spectra of samples were performed at 600 and 150 MHz on Bruker AVIIIHD 600 NMR spectrometer (Bruker, Germany) respectively. The chemistry shifts were recorded in δ (ppm) and referenced to the solvent peaks (dimethyl sulfoxide (DMSO)-d 6 ). Comparison of IMM-H004 Metabolism in Human and Rat Liver Microsomes To compare the metabolic capability of IMM-H004 demethylation in rat and human liver microsomes, the kinetics of M1 and M2 formation in HLMs or RLMs were determined. Enzyme kinetic experiments were performed in triplicate. The incubation mixtures contained HLM/RLM (1 mg protein/ml), IMM-H004 (0.1-1,800 μM), and NADPH (1.2 mM) in a final volume of 0.2 ml Tris-HCl buffer (50 mM, pH 7.4) containing 5 mM MgCl 2 . The reaction was incubated at 37°C for 20 min and stopped by adding 200 μL of ice-cold acetonitrile containing IS (100 ng/ml). After vortex-mixing and centrifugation, M1 and M2 in the supernatant were analyzed by LC-MS/MS. To explore glucuronidation difference of IMM-H004 between humans and rats, the kinetics of IMM-H004G formation in UDPGA-supplemented HLMs and RLMs were determined. The incubation mixtures consisted of HLMs or RLMs (0.025 mg protein/ml), alamethicin (50 μg/mg protein), IMM-H004 (0.5-2,500 μM), and UDPGA (5 mM) in a final volume of 0.2 ml Tris-HCl buffer (50 mM, pH 7.4) containing 5 mM MgCl 2 . HLM or RLM was preincubated with alamethicin on ice for 15 min. The reaction was incubated at 37°C for 10 min and quenched by adding 200 μL of ice-cold acetonitrile containing IS (100 ng/ml). After vortex-mixing and centrifugation, IMM-H004G in the supernatant was analyzed by LC-MS/MS. IMM-H004 Metabolism by Human Hepatocytes Primary human hepatocytes were used to predict the metabolism of IMM-H004 in humans. Cryopreserved human hepatocytes were obtained from Bioreclamation IVT (Baltimore, MD, USA). Three different hepatocyte preparations (Lot CDP, ZHL, and DSX) were pooled in this study. Hepatocytes were resuspended and seeded in 96-well plates at a density of 0.7 × 10 6 cells/ml. After 24 h of culture, cells were washed with Hanks' balanced salt solution, followed by addition of 30 nM IMM-H004 dissolved in Hanks' balanced salt solution. Incubations were carried out at 37°C up to 3 h under gentle shaking. Cell culture supernatant was collected within 3 h and determined by LC-MS/MS. All experiments were conducted in triplicate. Identification of Metabolizing Enzymes To identify metabolizing enzymes, IMM-H004 was incubated with different types of human recombinant enzymes. PK and PD Study in Rats The plasma pharmacokinetics (PK) and pharmacodynamics (PD) of IMM-H004 and IMM-H004G were investigated in a cerebral ischemia-reperfusion rat model produced by middle cerebral artery occlusion/reperfusion (MCAO/R) after iv administration of IMM-H004. In brief, rats underwent a 1-h MCAO and then received IMM-H004 citrate (10 mg/kg, dissolved in saline to yield a concentration of 2 mg/ml) immediately after reperfusion, an effective dosing time as reported (Zuo et al., 2015;Yang et al., 2017). 
For sham surgery, all the arteries were exposed for the surgical period, but the filament was not inserted into the MCA. Blood samples were collected through external jugular vein cannula into heparinized tubes at 0. 033, 0.167, 0.5, 1, 2, 4, 6, 8, 12, 24, 36, and 48 h postdose for IMM-H004 and IMM-H004G detection (50 μL of blood at each time point) and at 0.5, 1, 2, 4, 7, and 10 h postdose for malondialdehyde (MDA) detection (80 μL of blood at each time point). Plasma was immediately separated by centrifugation at 900×g for 5 min. Plasma concentrations of IMM-H004 and IMM-H004G in ischemiareperfusion rats were determined by LC-MS/MS analysis. The PK parameters were calculated by noncompartmental analysis using WinNonlin Version 6.1 (Pharsight, Mountain View, CA). Plasma MDA concentrations in ischemia-reperfusion and sham rats within 24 h were estimated by enzyme-linked immunosorbent assay. Liquid Chromatography Tandem Mass Spectrometry Analysis The LC-MS/MS system consisted of Shimadzu 30A UPLC and API 4000 triple quadruple mass spectrometer (AB SCIEX, USA). Data acquisition and analysis were accomplished using Analyst 1.5.2 software. The analytes and IS were chromatographed by injection of 1 μL sample. The mobile phases consisted of solvent A (0.5% formic acid in water) and solvent B (0.5% formic acid in methanol) at a flow rate of 0.3 ml/min on an Eclipse Plus C18 column (2.1 mm × 50 mm, 3.5 μm, Agilent, USA), and the operating temperature was 40°C. The elution condition was 10% solvent B for 0.7 min, ascending to 98% B in 2.3 min, holding for 3.0 min and reequilibrating to 10% B within 0.1 min and maintained for 2.4 min (Jiang JW, 2018). The specific transitions monitored were 305→248 for IMM-H004, 291→234 for M1, 291→248 for M2, 481→305 for IMM-H004G, 385→305 for M4, and 260→183 for IS. In Vitro The neuroprotective activity of IMM-H004 and IMM-H004G was evaluated on PC12 cells damaged by oxygen-glucose deprivation (OGD). PC12 cells were purchased from the American Type Culture Collection. Cultures were maintained at 37°C in 5% CO 2 in a humidified incubator. For cell viability assay, PC12 cells were incubated in 96-well plates at a density of 5 × 10 4 /ml for 24 h. In the sham group, PC12 cells were cultured in Dulbecco's modified eagle medium supplemented with 5% fetal bovine serum and 5% equine serum (Gibco, USA). In the OGD group, PC12 cells were cultured in glucose-free Earle's balanced salt solution supplemented with 15 mM Na 2 S 2 O 4 for 2 h and then incubated with compounds of nerve growth factor (NGF, 10 μM), edaravone (10 μM, 50 μM), IMM-H004 (1 μM, 10 μM), and IMM-H004G (1 μM, 10 μM), respectively. After 24 h of incubation, MTT (5 mg/ml) was added into the cell cultures and incubated for an additional 4 h. Then the supernatant was removed and 100 μL DMSO was added. Absorbance was measured using an Ultramark microplate reader at a wavelength of 562 nm. The cell viability was expressed as a percentage of the absorbance density value of control cultures. In Vivo Transient MCAO/R was applied to compare the neuroprotection of IMM-H004 and IMM-H004G. Rats were fasted overnight with free access to water and randomly assigned to different groups. Rats were anesthetized with a mixture of 5% isoflurane and 95% oxygen and maintained with a mixture of 3% isoflurane and 97% oxygen during the surgical procedure. 
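The noncompartmental parameters reported for IMM-H004 and IMM-H004G (AUC and the terminal half-life t 1/2β) were calculated with WinNonlin; as a rough, generic sketch of the same calculations, the snippet below computes AUC(0-last) by the linear trapezoidal rule and the terminal half-life from a log-linear fit of the last sampling points. The concentration values are invented for illustration and are not the study data or the WinNonlin settings.

import numpy as np

def nca_params(t, c, n_terminal=3):
    """AUC(0-last) by the linear trapezoidal rule and terminal half-life
    from a log-linear regression of the last `n_terminal` points.
    t: sampling times (h); c: plasma concentrations (ng/ml)."""
    t = np.asarray(t, dtype=float)
    c = np.asarray(c, dtype=float)
    auc_last = np.trapz(c, t)                                  # h x ng/ml
    slope, _ = np.polyfit(t[-n_terminal:], np.log(c[-n_terminal:]), 1)
    lambda_z = -slope                                          # terminal rate constant (1/h)
    return auc_last, np.log(2) / lambda_z                      # AUC, t1/2 (h)

# Illustrative i.v. profile (not the measured IMM-H004 concentrations)
times = [0.033, 0.167, 0.5, 1, 2, 4, 6, 8]
conc = [9000, 6200, 3500, 1800, 520, 60, 12, 3]
auc, t_half = nca_params(times, conc)
print(f"AUC(0-last) = {auc:.0f} h*ng/ml, t1/2 = {t_half:.2f} h")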
A 4-0 nylon thread, the tip of which was burned (diameter 0.36 mm), was inserted into the right internal carotid artery and advanced until the origin of the right MCA was occluded. After 60 min of the occlusion, the thread was withdrawn to allow reperfusion, and then the rats were returned to the chamber. Saline or edaravone at dose of 6 mg/kg or IMM-H004 citrate and IMM-H004G at dose of 10 mg/kg (dissolved in saline to yield a concentration of 2 mg/ml) were administered by intravenous injection immediately after reperfusion. The rats were assessed for neurologic deficits at 24 h after reperfusion according to Zea Longa's five-point scale. A score of 0 indicates no neurological deficit; a score of 1 indicates failure to extend left forepaw fully; a score of 2 indicates circling to the left; a score of 3 indicates falling to the left; a score of 4 indicates did not walk spontaneously and had a depressed level of consciousness; and a score of 5 indicates death. The animals without symptoms of neurological impairment or dying after the surgery were rejected. All animals were killed 24 h after reperfusion. Brains of the animals were removed and cut into 2 mm-thick slices, for a total of six slices per animal. The slices were immersed in a 2% solution of 2,3,5-triphenyltetrazolium chloride in phosphate buffered saline at 37°C for 20 min and then fixed in 4% formaldehyde overnight. Images of the slices were obtained with a scanner and a computer. The infarct area and the total area were calculated by tracing the areas on the computer screen with Image J. The percentage of the infarct volume was expressed as the infarct volumes (white parts)/ the whole volume of the cortex. Statistical Analysis Data are presented as mean ± SD or SEM. Statistical evaluation was performed using one-way analysis of variance. Significant difference was further performed in conjunction with the Student-Newman-Keuls method. Statistical significance was accepted at p < 0.05. Identification of IMM-H004 Metabolites In Vivo and In Vitro The protonated IMM-H004 molecule (m/z 305) was detected under positive ion mode, which further dissociated in MS 2 to produce the fragment ions at m/z 248, m/z 220, m/z 192, and m/z 177 ( Table 1). The metabolic profile of IMM-H004 in rat urine, feces, bile, and in vitro incubations was analyzed by LC-MS/MS. Four metabolites (M1, M2, IMM-H004G, and M4) of IMM-H004 were detected (Figure 1). M1 and M2 could be detected in rat urine, bile, feces, RLMs, and HLMs incubations. Both of them generated a protonated molecule of m/z 291 which was 14 Da smaller than that of the protonated parent compound, suggesting the loss of methyl. Upon collision-induced dissociation, the fragment ions m/z 234, m/z 206, and m/z 163 of M1 were 14 Da smaller than those of the parent drug, and the fragment ions 248, 220, 192, and 177 of M2 were consistent with those of the parent drug. Since the retention times and fragmentation profiles of M1 and M2 were consistent with those synthesized reference compounds, M1 and M2 were identified as 6-demethylation and N-demethylation IMM-H004, respectively (Figures 2 and 3). M4 was found in rat urine, bile, and human/rat liver cytosol incubations. It showed a protonated molecular ion at m/z 385, 80 Da higher than that of the parent drug, indicating the formation of a conjugate. The collisional activated decomposition product ion spectrum of m/z 385 showed an ion at m/z 305, a loss of 80 Da from the protonated molecular ion, suggesting the presence of a sulfate conjugate. 
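The metabolite assignments above follow from characteristic nominal mass shifts relative to the protonated parent ion ([M+H]+ at m/z 305): -14 Da for loss of a methyl group, +176 Da for a glucuronide, and +80 Da for a sulfate. A small sketch using the m/z values reported in this work is shown below; the dictionary layout is only illustrative.

# Protonated molecular ions ([M+H]+) reported for IMM-H004 and its metabolites
PARENT_MZ = 305
METABOLITES = {"M1": 291, "M2": 291, "IMM-H004G (M3)": 481, "M4": 385}

# Common nominal mass shifts relative to the parent (Da)
SHIFTS = {
    -14: "demethylation (loss of CH2)",
    +176: "glucuronide conjugate (addition of C6H8O6)",
    +80: "sulfate conjugate (addition of SO3)",
}

for name, mz in METABOLITES.items():
    delta = mz - PARENT_MZ
    print(f"{name}: m/z {mz}, shift {delta:+d} Da -> {SHIFTS.get(delta, 'unassigned')}")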
Treatment of urine with sulfatase resulted in the disappearance of M4, further suggesting that M4 was a sulfated conjugate of IMM-H004. Further quantitative analysis of rat urine, bile, and feces by LC-MS/MS showed that the total urinary IMM-H004G and IMM-H004 recovered over the 72-h sampling period was 72.5% of intake, and urinary IMM-H004G was 69.7% of intake. The recovery of IMM-H004G and IMM-H004 in bile accounted for 76.1% of intake, and the recovery of IMM-H004G accounted for 75.9% of intake. The fecal specimens mainly contained IMM-H004, accounting for 18% of the dose. Therefore, IMM-H004G was the main excretion form of the drug in vivo. IMM-H004 Metabolism in Liver Microsomes and Hepatocytes Michaelis-Menten kinetic parameters clearly demonstrated the difference between demethylation and glucuronidation of IMM-H004 in RLM and HLM (Figure 4). The estimated apparent K m and V max values for demethylation and glucuronidation together with the intrinsic clearance are summarized in Table 3. Obviously, the maximal rate of N-demethylation (0.07-0.12 nmol/min/mg protein) was much lower than that of 6-demethylation (1.34-1.99 nmol/min/mg protein) and glucuronidation (3.25-5.04 nmol/ min/mg protein) in both species. Besides, glucuronidation pathway exhibited at least 40-fold V max /K m value compared with 6-demethylation. Therefore, glucuronidation was a highcapacity pathway. The glucuronidation V max in rat samples (5.04 nmol/min/mg protein) was 1.6-fold higher than that in human samples (3.25 nmol/min/mg protein); this finding indicated that rats would have a greater capacity than humans for glucuronidation. IMM-H004G and trace of M4 could be detected in cell supernatant after IMM-H004 was incubated with hepatocytes, while M1 and M2 were not detectable ( Figure 5A). After 3 h of human hepatocytes incubation, more than 70% of IMM-H004 was converted to IMM-H004G, and the total concentration of IMM-H004 and IMM-H004G was nearly equal to the initial concentration of IMM-H004 (30 nM) added in the cell supernatant ( Figure 5B). The results indicated that IMM-H004G was the major metabolite of IMM-H004 in human hepatocytes, and the metabolic profile of IMM-H004 in humans was probably consistent with that of rats. Metabolism of IMM-H004 by cDNA-Expressed Human Metabolizing Enzymes To identify the enzymes involved in the formation of M1 and M2, a panel of cDNA-expressed recombinant CYP450 enzymes was screened for their activities. As shown in Figure 6A, conversion of IMM-H004 to M2 was catalyzed by CYP1A1, 2C9, 2D6, and 3A4, and to a lesser extent by CYP1A2, 2C8, 2C19, and 2J2. To make a better estimate of the relative contribution of each CYP450 enzyme to the overall clearance of IMM-H004 in humans, the enzyme activities were normalized by the average content of each enzyme in HLM (Gong et al., 2012). As a result, CYP2C9 and CYP3A4 were predicated to be the major contributors for the formation of M2. Meanwhile, M1 could not be detected in all CYP450 enzymes incubations, suggesting less possibility that these enzymes are involved in the formation of M1. So other phase I metabolizing enzymes responsible for M1 formation remain to be discovered. To evaluate the activities of UGT enzymes for the formation of IMM-H004G, IMM-H004 was incubated with individual human cDNA-expressed UGT enzyme in the presence of UDPGA. As indicated in Figure 6B, conversion of IMM-H004 to IMM-H004G was catalyzed by UGT1A7, 1A9, 1A8, and 1A1 and to a lesser extent by UGT1A3, 1A10, and 2B15. 
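The apparent K m and V max values summarized in Table 3 are the parameters of the Michaelis-Menten equation, v = V max · S / (K m + S), fitted to the measured formation rates, with the intrinsic clearance taken as V max/K m. The sketch below shows such a fit on invented rate data, not the measurements underlying Table 3.

import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """v = Vmax * S / (Km + S)"""
    return vmax * s / (km + s)

# Illustrative data: substrate concentration (uM) vs. formation rate
# (nmol/min/mg protein); these are not the study's measurements.
s = np.array([0.5, 2.0, 10.0, 50.0, 200.0, 800.0, 2500.0])
v = np.array([0.05, 0.20, 0.90, 2.10, 2.90, 3.20, 3.30])

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=[v.max(), 50.0])
cl_int = vmax / km   # intrinsic clearance, ml/min/mg protein (since 1 uM = 1 nmol/ml)
print(f"Vmax = {vmax:.2f} nmol/min/mg, Km = {km:.0f} uM, Vmax/Km = {cl_int:.4f} ml/min/mg")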
In addition, we examined the metabolism of IMM-H004 with individual human cDNA-expressed SULT enzyme in the presence of PAPS. The highest activity was observed with SULT1E1, followed by 1A3 and 1A1. Activities were markedly lower with SULT1A2, 1C4, and 2A1 and were negligible with SULT1C2 and 1B1 (Figure 6C). PK and PD Study of IMM-H004 in Rats The mean plasma concentration-time profiles of IMM-H004 and IMM-H004G are presented in Figure 7A, and major PK parameters are shown in Table 4. After iv injection of IMM-H004 to MCAO/R rats, IMM-H004 eliminated rapidly with a short plasma elimination half-life (t 1/2β ) of 0.42 h. The concentration of IMM-H004G in plasma increased to a maximum of 13,020 ng/ml within 15 min after dosing, then declined slowly with a t 1/2β value of 6.61 h. The mean area under curve of IMM-H004G in plasma was 11.22-fold higher than that of IMM-H004. Meanwhile, no significant difference of PK was found between sham and MCAO/R rats (data not shown). The effect of IMM-H004 on MDA levels is shown in Figure 7B. After 0.5 h of reperfusion, the plasma MDA level in MCAO/R rats was significantly higher than that in the sham group. IMM-H004 was shown to significantly lower the increased MDA level at 2 h and maintained for at least 10 h after administration compared to the MCAO/R group. So the PK-PD data indicated that, compared with the parent drug, IMM-H004G exhibited a longer exposure time and higher exposure level and therefore had a better correlation with the duration of drug efficacy. Neuroprotection of IMM-H004 and IMM-H004G in Oxygen-Glucose Deprivation-Injured PC12 Cells The neuroprotective activity against OGD-induced neuronal injury in PC12 cells by IMM-H004 and IMM-H004G at 1 and 10 μM were evaluated by cell viability assay. As shown in Figure 8, cell viability of the model group decreased to 64.9% of the control group (p < 0.001). Cells treated with IMM-H004 and IMM-H004G at concentrations of 1 and 10 μM had significant (p < 0.05) preservation of viability. In addition, only trace of IMM-H004 could be detected in PC12 cultures after 24 h of incubation with IMM-H004G, much lower than the effective concentration of 1 μM in vitro. The result supports the neuroprotective activity of IMM-H004G. Likewise, IMM-H004 and IMM-H004G significantly decreased brain infarct volume and neurological deficits following MCAO/R. At 24 h after ischemia-reperfusion, model rats exhibited visible intracerebral damage (infarct volume, 22.8%) and major neurological deficits. In rats treated with IMM-H004 and IMM-H004G, the infarct volume was markedly reduced (p < 0.01) (Figure 9). It was accompanied by a significant improvement (p < 0.01) of the neurological test score (Figure 10). DISCUSSION Drug metabolism research is an essential part of drug discovery and development. A comprehensive understanding of metabolic pathways and analysis of the concentration-effect relationship of new candidates and major metabolites can help us to reveal the pharmacoactive substances of a drug and provide useful information for drug development and clinical application. IMM-H004 has been shown to have significant neuroprotective effect on a variety of pathological models. It could block chemokine-like factor 1-C27-induced calcium mobilization and chemotaxis, decrease the toxicity of Aβ, reduce inflammatory response, and improve blood-brain barrier function (Li et al., 2010;Song et al., 2013;Ji et al., 2014;Song et al., 2014;Zuo et al., 2014;Zuo et al., 2015;Chu et al., 2017;Niu et al., 2017). 
Unexpectedly, IMM-H004 showed an extremely short elimination half-life (0.19 h) in rat plasma and brain. However, due to the low exposure and short elimination half-life of IMM-H004, it is difficult to explain its pharmacological activity. As such, it is very meaningful to investigate how the effect of IMM-H004 lasts for a long time and whether pharmacologically active metabolites are present. Subsequently, the metabolism of IMM-H004 was systematically investigated in vitro (with HLMs, human hepatocytes, and RLMs) and in rats in vivo. Four metabolites of IMM-H004, including the demethylated metabolites M1 and M2, the glucuronide conjugate IMM-H004G, and the sulfated conjugate M4, were identified in vitro. Enzyme kinetics analysis demonstrated that demethylation of IMM-H004 was a low-capacity and low-affinity pathway. In contrast to demethylation, glucuronidation offered high affinity and capacity in both RLM and HLM. These results indicate that glucuronidation may play an important role in the metabolism of IMM-H004. The major metabolites of IMM-H004 observed in vivo in rats were consistent with those in vitro. After iv administration of IMM-H004, most of the drug was recovered in bile and urine as IMM-H004G, indicating that IMM-H004G was the major metabolite of the drug in rats. Many functional groups can react with UDPGA to form O-, N-, and S-glucuronides, respectively (Bock, 2015). IMM-H004 contains a hydroxyl group and a tertiary amine on the piperazine ring in its structure. Therefore, there are two possibilities for glucuronide metabolites of IMM-H004. In fact, we detected only one glucuronide metabolite of IMM-H004 by LC-MS/MS. Our NMR results confirmed that the glucuronide metabolite was the phenolic glucuronide. As with other compounds, the hydroxyl group appears to be the more reactive group for glucuronide formation. It is well known that conjugation of hydroxyl groups of phenols can occur with both glucuronate and sulfate, and that there are species differences in glucuronidation or sulfation rates between animals and humans due to different expression levels of UGT and SULT subtypes (Vaidyanathan and Walle, 2002). Due to the lack of an M4 reference standard, the kinetics of the sulfated conjugate of IMM-H004 in liver microsomes could not be compared with that of the glucuronide conjugate, so the metabolite profile of IMM-H004 was further investigated in primary human hepatocytes to predict the metabolism of IMM-H004 in humans. The results showed that although a small amount of sulfated conjugate can be detected, IMM-H004 was predominantly metabolized to IMM-H004G in human hepatocytes. Therefore, glucuronidation may be the primary metabolic pathway of IMM-H004 in humans, similar to coumarin, a precursor compound of IMM-H004 (Ford et al., 2001; Wang et al., 2006; Leonart et al., 2017). But considering that the glucuronidation clearance in RLM is much higher than that in HLM, the role of the glucuronide conjugate in the metabolism and disposition of IMM-H004 in humans needs to be further investigated in vivo.
Glucuronidation is known to be catalyzed by a family of UGT enzymes. Human UGTs, consisting of at least 22 proteins, are divided into five subfamilies, UGT1A, 2A, 2B, 3A, and 8A, on the basis of sequence identity. Members of the UGT1A and 2B subfamilies play a key role in drug metabolism (Rowland et al., 2013). Since the induction/inhibition and genetic polymorphism of UGTs are associated with drug therapy strategy and safety, it is important to identify the major metabolizing enzymes that act on a drug in order to predict interindividual variability in drug exposure and the potential for drug-drug interactions. To determine the UGTs involved in IMM-H004 glucuronidation, recombinant human UGTs were applied. Our results indicated that UGT1A7, 1A9, 1A8, and 1A1 were the most active enzymes toward the glucuronidation of IMM-H004. Other UGTs also catalyzed the reaction, albeit less efficiently. Therefore, multiple UGT isoforms are involved in IMM-H004 glucuronidation. When a drug is metabolized by a single enzyme, changing the activity of that enzyme is more likely to have a marked effect on the overall PK of the compound, and coadministration of drugs results in an increased likelihood of drug interactions. In contrast, the involvement of multiple UGT enzymes in the metabolism of IMM-H004 correspondingly makes it less likely that the PK profile of IMM-H004 would be affected by other drugs. Since the abundant isoforms of UGT expressed in human liver are UGT2B7, 1A4, 2B4, 1A1, 2B15, and 1A9, followed by UGT1A6 and 1A3, while UGT1A7 and 1A8 are mainly expressed in extrahepatic tissues (Izukawa et al., 2009; Achour et al., 2017; Tourancheau et al., 2018), it is anticipated that UGT1A1 and 1A9 are the major contributors to the formation of IMM-H004G in humans. It has been reported that hydroxycoumarin derivatives are good substrates of UGT. For example, 4-methylumbelliferone has been widely used as a nonspecific probe substrate for the evaluation of recombinant human UGT activity (Uchaipichat et al., 2004). Although a variety of UGTs participate in the glucuronidation of hydroxycoumarins (Dong et al., 2012; Shan et al., 2014), UGT1A9, with its greater capacity for bulky and complex phenol glucuronidation, generally exhibits the highest catalytic activity (Ethell et al., 2002; Liang et al., 2010; Xia et al., 2014). Our data are consistent with these reports.
FIGURE 9 | Effect of IMM-H004 and IMM-H004G on infarct volume of MCAO/R rats. (A) Representative brain slices stained by 2,3,5-triphenyltetrazolium chloride; (B) quantitative evaluation of infarct volume. Error bars represent SEM (n = 10). One-way analysis of variance was used, ### p < 0.001 vs. Sham, **p < 0.01 vs. Model group.
PK/PD studies can reveal the relationship between drug concentration and efficacy, which helps to understand the mechanism of drug action. MCAO/R rat models were applied for the IMM-H004 PK-PD study. MDA, the earliest indicator to reflect the efficacy of IMM-H004, served as a plasma biomarker for PD, in part because a correlation between plasma MDA levels and disease severity has been reported in both acute ischemic stroke patients and MCAO/R rats (Awooda et al., 2015; Jena et al., 2017). In line with previous reports, plasma MDA levels in our experiment were significantly increased in MCAO/R rats. IMM-H004 treatment was able to reduce MDA levels in MCAO/R rats, and the beneficial effect persisted for at least 10 h after treatment.
As IMM-H004 is eliminated rapidly with a short plasma elimination half-life (0.42 h at 10 mg/kg, longer than t 1/2β 0.19 h at 6 mg/kg reported before, indicating that the drug elimination may be saturated at 10 mg/kg and there may be differences between experiments in different batches of animals), the exposure of IMM-H004 cannot adequately explain the duration of PD effect. IMM-H004G has a longer half-life and greater exposure in blood circulation than IMM-H004. Our previous brain microdialysis study also indicated that the exposure of IMM-H004G in rat brain extracellular fluid was 10.5-fold higher than that of IMM-H004 (Jiang et al., 2018). Therefore, IMM-H004G is the predominant form present in the body and drug target tissue. As the PK profile of IMM-H004G was consistent with the MDA inhibition curve, we speculated that IMM-H004G was likely to be an active metabolite. Glucuronidation is generally considered as a process of detoxification and inactivation, because glucuronides usually possess less intrinsic biological or chemical activity than their parent forms and exhibit higher polarity and excretability. But there are exceptions; some glucuronide conjugates are active and contribute to pharmacological activities. The most typical examples with in-depth research were morphine-6-Oglucuronide and quercetin-3-O-glucuronide: Analgesic effect of morphine-6-O-glucuronide was achieved via activation of mu-opioid receptors, a G-protein-coupled receptor (Frölich et al., 2011). In addition, morphine-6-O-glucuronide was predominantly trapped in the extracellular fluid of brain with a high AUC value and therefore durably available to bind at opioid receptors inducing more potent central analgesia than morphine (Stain-Texier et al., 1999). Quercetin-3-Oglucuronide was also reported as an active compound which could inhibit intracellular reactive oxygen species in mouse fibroblast 3T3 cells induced by H 2 O 2 attack and suppress invasion of MDA-MB-231 breast cancer cells and matrix metalloproteinase 9 (MMP-9) induction (Shirai et al., 2002;Yamazaki et al., 2014). Although hydroxycoumarin derivatives exhibit various biological activities and are susceptible to glucuronidation, there is no report on the biological activity of coumarin glucuronide conjugates. We investigated the neuroprotective effect of IMM-H004G. IMM-H004G exhibited similar neuroprotection to that of parent compound in vitro and in vivo, which provided evidence that IMM-H004G may play a role in the neuroprotection of IMM-H004. Previous studies showed that IMM-H004 with high lipophilicity and low molecular size could easily cross the blood-brain barrier (T max = 0.21 h) (Jiang et al., 2018). Therefore, we speculated that IMM-H004 may perform its activity rapidly, whereas IMM-H004G, with slower elimination and greater exposure in plasma and brain (Jiang et al., 2018), may contribute to the maintenance of anticerebral ischemia efficacy of IMM-H004 at least partly. In addition, the secondary peak of IMM-H004G in Figure 7, more than 70% recovery of the drug in both biliary and urine mainly as IMM-H004G and undetectable IMM-H004G in feces, suggested together the existence of drug enterohepatic circulation. The enterohepatic circulation ensured reabsorption and persistence of the drug, and glucuronidation was the basis of enterohepatic circulation process. As a result, IMM-H004G may also contribute to anti-cerebral ischemia efficacy indirectly by improving the PK behavior of the drug. 
Meanwhile, it was worth noting that the metabolite M1 also showed neuroprotective activity in cell cultures. Despite low exposure in vivo, M1 had an IC 50 value of 2.12 × 10 −8 M as an antagonist of the potent stroke target chemokine-like factor 1 in previous research (Li et al., 2010;Kong et al., 2011;Sun et al., 2013;Kong et al., 2014). And we cannot rule out the existence possibility of other unknown active metabolites, the lagged effect of IMM-H004 by the time required from cell signaling to biological effects, and the generation of IMM-H004 from IMM-H004G in certain microenvironment of target tissue. So the duration effect of IMM-H004 may be the result of a collective effect of IMM-H004, IMM-H004G, and other active metabolites. Wherefore, more comprehensive and in-depth research needs to be done in the future. CONCLUSION In conclusion, four metabolites of IMM-H004 including demethylated metabolites, glucuronide conjugate IMM-H004G, and sulfated conjugate were detected in vitro and in vivo. Multiple drug metabolizing enzymes, including CYPs, UGTs, and SULTs, are involved in IMM-H004 metabolism. IMM-H004G is the major metabolite of IMM-H004 in rats and in human hepatocyte. The exposure and duration of IMM-H004G in MCAO/R rats are greater than that of IMM-H004. Notably, IMM-H004G exhibits a similar neuroprotective activity to that of the parent drug both in vitro and in vivo. IMM-H004G, at least in part, contributes to the maintenance of anticerebral ischemia efficacy of IMM-H004. DATA AVAILABILITY STATEMENT All datasets generated for this study are included in the manuscript and/or the supplementary files. ETHICS STATEMENT This study was carried out in accordance with the recommendations of Guide for the use and care of laboratory animals, the Institute Animal Care and Welfare Committee.
Microbiota profile in new-onset pediatric Crohn's disease: data from a non-Western population

Background: The role of microbiota in Crohn's disease (CD) is increasingly recognized. However, most of the reports are from Western populations. Considering the possible variation from other populations, the aim of this study was to describe the microbiota profile in children with CD in Saudi Arabia, a non-Western developing country population.
Results: Significantly more abundant genera in children with CD included Fusobacterium, Peptostreptococcus, Psychrobacter, and Acinetobacter; whereas the most significantly-depleted genera included Roseburia, Clostridium, Ruminococcus, Ruminoclostridium, Intestinibacter, Mitsuokella, Megasphaera, Streptococcus, Lactobacillus, Turicibacter, and Paludibacter. Alpha diversity was significantly reduced in stool (p = 0.03) but not in mucosa (p = 0.31). Beta diversity showed a significant difference in community composition between control and CD samples (p = 0.03).
Conclusion: In this developing country, we found a pattern of microbiota in children with CD similar to the Western literature, suggesting a role of recent dietary lifestyle changes in this population on microbiota structure.
Background
Crohn's disease (CD) is the most common phenotype of inflammatory bowel disease (IBD). Although the incidence is highest in Western populations, an increasing time trend is reported worldwide in adults and children [1][2][3]. Despite extensive research, the causes of all phenotypes of IBD remain unknown. However, a multifactorial theory is most likely: in a genetically susceptible individual, environmental factors trigger uncontrolled inflammation [4]. Diet and microbiota are the most likely environmental factors, acting as triggers. An increased risk of CD has been found with a high intake of polyunsaturated fatty acids, omega-6 fatty acids, saturated fats, and meat, and a decreased risk with a high intake of dietary fiber, omega-3 fatty acids, vegetables, and fruits [5][6][7][8][9][10]. The role of microbiota in IBD in general, and CD in particular, has been increasingly recognized. Several studies documented reduced diversity of the microbial community and defined associations of certain taxa with CD, suggesting a role of beneficial and harmful microbes [11][12][13][14]. Almost all the literature on microbiota in IBD concerns Western populations, who have well-defined environmental, cultural, and dietary lifestyles, which are different from populations in developing countries. Since IBD is newly recognized in these populations, they are commonly referred to as "new populations" with IBD. The study of the characteristics of IBD in these populations may increase our understanding of this condition. In previous reports, we have defined the incidence and clinical profile of pediatric CD in Saudi Arabia [15,16]. The objective of this report is to describe the bacterial microbiota profile in a cohort of newly-diagnosed treatment-naïve children in Saudi Arabia, a non-Western developing country population.
Patients characteristics
All children were Saudi nationals (17 with CD and 18 controls). The median (range) age was 15 (7.3-17.8) years for the children with CD and 16.3 (3.9-18.6) years for controls. Gender distribution indicated that 11/17 (65%) of the CD and 12/18 (67%) of the control subjects were males.
The clinical presentation of the 18 children classified as controls included recurrent abdominal pain in 9 (50%) children, finally diagnosed as functional abdominal pain, diarrhea in 5 (28%) children, finally diagnosed as nonspecific diarrhea, and 4 (22%) children with rectal bleeding, finally diagnosed as juvenile polyps. Hemoglobin, platelets, ESR, and CRP were normal in all controls. CD-associated microbiota The CD-associated taxa for the family, genus, and species phylogenetic levels in stool and mucosa are presented in Tables 1 and 2 respectively. Significantly-abundant taxa in children with CD compared with controls at the genus level included Fusobacterium, Peptostreptococcus, Psychrobacter, and Acinetobacter, and at the species level, Fusobacterium nucleatum, Bacteroides clarus, and Psychrobacter pulmonis. The most significantly-depleted genera in children with CD included Roseburia, Clostridium, Ruminococcus, Ruminoclostridium, Intestinibacter, Mitsuokella, Megasphaera, Streptococcus, Lactobacillus, Turicibacter, and Paludibacter; whereas, significantly-depleted species included Roseburia inulivorans, Clostridium disporicum, Blaucia ruminococcus spp., Eubacterium seraeum, Intestinibacter bartelitii, Eubacterium spp., Streptococcus salivarius, Turicibacter spp., Bacteroides xylanolyticus, Clostridium perfringens, and Bifidobacterium catenulatum. It is to be noted that no significantly more abundant taxa were found in mucosal samples. By contrast, a large number of taxa were depleted from stool and mucosa samples as detailed in Tables 1 and 2. The direction of change of most taxa (gain or loss) is depicted in Fig. 1 and the rank abundance distribution of the 20 most abundant genera in stool and mucosa samples is illustrated in Fig. 2. Microbiota diversity Alpha diversity, as measured by the Shannon index, is shown in Fig. 3, indicating significantly-reduced alpha diversity in the stool of children with CD compared to controls (p = 0.03); whereas, the difference in CD mucosa was not significant (p = 0.32). Beta diversity as evaluated by the Bray-Curtis distance indicated a statistically significant community dissimilarity between control and CD samples in stool (p = 0.03). Abundance and diversity in inflamed and un inflamed mucosa Comparison of species abundance between inflamed and uninflamed mucosa of children with CD for the four most different species (Bacteroides nordii, Escherichia coli, Eisenbergiella tayi, Bacteroides caccae), indicated no significant difference (p > 0.9). In addition, there was no significant difference in alpha diversity between inflamed and uninflamed mucosa in children with CD (p = 0.31). Discussion The national incidence of pediatric IBD in Saudi Arabia (0.47/10 5 ), including CD (0.27/10 5 ), has been reported recently, indicating a lower but steady increase in incidence similar to that in the Western literature [15]. In addition, the clinical, laboratory, endoscopic, and histopathologic characteristics have been reported, indicating similar patterns to descriptions from Western countries [16,17]. In Saudi Arabia, marked socioeconomic improvement which led to improved education, nutrition, and health care, was accompanied by changes in lifestyle. For example, in a larger report on Saudi children with CD, only 10% and 30% consumed fruits daily and twice weekly respectively; whereas about 50% and 30% consumed fast food and sweetened soft drinks daily and twice weekly respectively [18]. 
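The two diversity measures used in the results above can be computed directly from taxon count vectors: the Shannon index for alpha diversity and the Bray-Curtis distance for beta diversity. The genus counts in the sketch below are invented for illustration and do not come from the study's sequencing data or its actual analysis pipeline.

import numpy as np
from scipy.spatial.distance import braycurtis

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over taxa with non-zero counts."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0]
    p = p / p.sum()
    return float(-(p * np.log(p)).sum())

# Illustrative genus count vectors for one control and one CD stool sample
control_sample = np.array([120, 80, 60, 40, 30, 20, 10, 5])
cd_sample = np.array([300, 10, 5, 2, 1, 0, 0, 0])

print("Shannon index (control):", round(shannon_index(control_sample), 2))
print("Shannon index (CD):     ", round(shannon_index(cd_sample), 2))
print("Bray-Curtis distance:   ", round(braycurtis(control_sample, cd_sample), 2))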
This dietary lifestyle pattern, indicating low consumption of fruits and high consumption of fast food and sweetened soft drinks, is similar to descriptions in the Western literature [19,20]. IBD in Saudi Arabia evolved from a rare to a commonly-diagnosed condition with increasing incidence, suggesting a role of recent changes in environment, including dietary lifestyle. It is well known that dietary components, acting directly or through alteration of the intestinal microbiota, have a significant role in triggering inflammation [21][22][23][24]. This report, to our knowledge, is the first description of the microbiota profile in newly-diagnosed, treatment-naïve children with CD from a non-Western population. We identified a large number of taxa in the CD fecal and mucosal samples from the order to the species level in the phylogenetic tree. However, it should be noted that taxonomic species designations based upon 16S rRNA gene sequences are tentative assignments, and caution is advised in interpretations related to species classification. In view of the variation of the microbiome between newly-diagnosed and treated patients with IBD [25], comparison will be mainly with the few most similar reports of newly-diagnosed and treatment-naïve children with CD.

CD-associated microbiota
All of the 20 most abundant taxa from the order to the species levels in this study have been reported in the Western literature [4,14,26,27]. However, the significance of associations of some taxa with CD contrasts with reports from Western populations. For example, Enterobacteriaceae, reported to be significantly more abundant in CD [14], was not found to be significantly associated with CD in our study. Similarly, Faecalibacterium prausnitzii, which is reported as a significantly depleted bacterium with possible anti-inflammatory properties [28], was not found to be significantly associated with CD in our samples. Variations in the significance of associations exist between studies even from the same Western populations. For example, at least two reports found Faecalibacterium prausnitzii to be more abundant in mucosal samples of children with CD, contrasting with other reports of the protective role of this bacterium in patients with CD [29,30]. These observations are in line with the well-known variability of microbiota both within and between subjects in the same population. Finally, it is important to note that associations described in this study, as well as in the literature, do not imply functional or causal effects. Specifically, it is still unclear whether changes in microbiota in children with CD were the cause or the result of inflammation.

Microbiota diversity
In this study, alpha diversity was reduced in mucosa and stools of children with CD relative to controls, a finding similar to that in the literature [31]. However, this reduction was statistically significant only in stools (p = 0.03) and not in tissue samples (p = 0.32). Beta diversity in our cohort indicated an overall statistically significant distance difference between CD and control samples (p = 0.03), a finding similar to reports from Western populations [14].

Lack of difference between inflamed and uninflamed CD mucosa
In this study, the lack of a significant difference in the four most abundant bacterial species and in alpha diversity between sites with inflamed and uninflamed mucosa is consistent with most reports and suggests an unlikely role of bacteria in the pathogenesis of the patchy distribution of lesions in CD [32,33].
However, one report suggested that uninflamed tissue forms an intermediate bacterial population between controls and inflamed tissues [34], and another reported a significant difference in microbial community structure between inflamed and uninflamed mucosal sites, but there was great variation between individuals, suggesting no obvious bacterial signature that was positively associated with the inflamed gut [35]. It appears, therefore, that our findings of no significant difference in the bacterial community between inflamed and uninflamed mucosa in children with CD are consistent with most reports. In this study from a developing country population, the finding of a microbiota profile similar to that in Western populations was unexpected in view of different culture and lifestyle. However, recent changes to a more westernized dietary lifestyle, affecting microbiota structure, explains at least in part this similarity. Study limitations The most important limitation is the small sample size. However, Crohn's disease is evolving in this part of the world and the characterization of the microbiota associated with new onset Crohn's is unique. Conclusions In this developing country, we found a pattern of microbiota in children with CD similar to Western literature, suggesting an effect of recent dietary lifestyle changes on microbiota structure. This report suggests a possible role of dietary lifestyle related to alteration of microbiota and the increasing incidence of CD in the Saudi population. The study population In addition to controls, the study population included all children diagnosed with CD according to standard guidelines [36]. The children referred for colonoscopy were enrolled prospectively. Two hospitals participated in the study. These were King Khalid University Hospital, King Saud University (a free-access primary and tertiary care hospital), and Al Mofarreh Polyclinics (a private gastroenterology institution). Demographic information, socioeconomic family status, nutritional history, drug history, history of the present illness, past medical and surgical history, including any medications, physical examination, laboratory, imaging, endoscopic, and histopathological findings were recorded at presentation. Controls were enrolled if they had no evidence of IBD or other causes of inflammation proven by endoscopy and histopathology. Sample collection, storage, and processing In view of the known variations in the microbial community along the gastrointestinal (GI) tract [37][38][39][40][41], mucosal samples were collected from the ileum and different colonic sites to minimize the effects of these variations. Samples were collected from 17 children with CD and 18 controls. A total of 44 tissue samples from the children with CD (8 from the Ileum, 6 from each of the cecum, ascending, transverse, descending, sigmoid colon and rectum) and 14 from controls (6 from the Ileum, 3 transverse colon, 2 sigmoid colon and 3 from the rectum). For logistic reasons, mucosal samples were not taken from the ileum and all colonic segments of each subject. Similarly, not all subjects gave stool samples. A total of 20 stool samples were collected from children with CD (10) and controls (10) before bowel preparation (75%), or from the first stool passed after the start of bowel preparation (25%), to minimize washout effects [41]. All samples were collected in cryovials (no fixatives or stabilizers), immediately placed in ice, transported to the laboratory, and stored at − 80 °C within 5 to 20 min. 
The average storage duration before analysis was 3 years. At the time of microbiota analysis, all samples were shipped on dry ice by express mail to the USA (MR DNA, Shallowater, TX, USA). The samples were received frozen in about 36 h.

DNA extraction and sequencing methods
DNA was extracted using the MoBio PowerSoil Kit as per the manufacturer's instructions (MO BIO, Carlsbad, CA, USA). Amplicon sequencing (bTEFAP®) was performed at MR DNA (Shallowater, TX, USA) and used for bacterial analysis [42]. The primers 515F (GTGCCAGCMGCCGCGGTAA) and 806R (GGACTACHVGGGTWTCTAAT) were used to evaluate the microbial ecology of the samples on the Illumina MiSeq with methods based upon bTEFAP®. A single-step 28-cycle polymerase chain reaction (PCR) with the HotStarTaq Plus Master Mix Kit (Qiagen, Valencia, CA, USA) was employed under the following conditions: 94 °C for 3 min, followed by 28 cycles of 94 °C for 30 s, 53 °C for 40 s, and 72 °C for 1 min, with a final elongation step at 72 °C for 5 min. Following PCR, all amplicon products from different samples were mixed in equal concentrations and purified using Agencourt AMPure beads (Agencourt Bioscience Corporation, MA, USA). Samples were sequenced utilizing Illumina MiSeq chemistry following the manufacturer's protocols. The Q25 sequence data derived from the sequencing process were processed using a standard analysis pipeline (http://www.mrdnalab.com; MR DNA, Shallowater, TX, USA). Paired sequences were merged and depleted of barcodes and primers; sequences shorter than 150 base pairs (bp), sequences with ambiguous base calls, and sequences with homopolymer runs exceeding 6 bp were then removed. Sequences were then denoised and chimeras were removed. Operational taxonomic units (OTUs) were defined after removal of singleton sequences, clustering at 3% divergence (97% similarity) [43][44][45][46]. OTUs were then taxonomically classified using the Nucleotide Basic Local Alignment Search Tool against a 16S database derived from the National Center for Biotechnology Information (NCBI) (http://www.ncbi.nlm.nih.gov, http://rdp.cme.msu.edu) and compiled at each taxonomic level into both 'counts' and 'percentage' files. Counts files contain the actual number of sequences, and percentage files contain the relative (proportional) percentage of sequences within each sample that map to the designated taxonomic classification.

Statistical analysis
The analysis was performed using Python and R software [47,48]. To increase statistical power, taxa with low representation in the samples were excluded from the analysis. Specifically, we excluded samples with fewer than 1000 reads as well as taxa absent from more than 50% of both CD and control samples. Custom functions implementing the permutation test were written to detect the taxa whose abundances were significantly different between two sample categories, e.g., CD and control or inflamed and uninflamed. When more than one sample was available from the same patient for the analysis, the log-relative abundances from these samples were averaged. We performed all statistical analyses on log-transformed data after adding pseudocounts of 1 read for each taxonomic group.
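As an illustration only (a hedged sketch, not the authors' actual pipeline; the count-table layout, group labels and function names are assumptions), the filtering, log transformation, permutation test and Benjamini-Hochberg correction described above could be implemented in Python roughly as follows:

import numpy as np
import pandas as pd

def filter_counts(counts, labels, min_reads=1000, max_absent_frac=0.5):
    """Drop low-depth samples, then drop taxa absent from >50% of both groups.

    counts : DataFrame (rows = samples, columns = taxa) of raw read counts
    labels : Series of 'CD' / 'control' labels indexed like counts
    """
    deep = counts.sum(axis=1) >= min_reads
    counts, labels = counts.loc[deep], labels.loc[deep]
    keep = []
    for taxon in counts.columns:
        absent = counts[taxon] == 0
        if not (absent[labels == "CD"].mean() > max_absent_frac
                and absent[labels == "control"].mean() > max_absent_frac):
            keep.append(taxon)
    return counts[keep], labels

def log_relative_abundance(counts, pseudocount=1):
    """Add a pseudocount of one read and take log relative abundances."""
    padded = counts + pseudocount
    return np.log(padded.div(padded.sum(axis=1), axis=0))

def permutation_test(x, y, n_perm=10_000, seed=0):
    """Two-sided permutation test on the difference in group means."""
    rng = np.random.default_rng(seed)
    observed = x.mean() - y.mean()
    pooled = np.concatenate([x, y])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(pooled[:len(x)].mean() - pooled[len(x):].mean()) >= abs(observed):
            hits += 1
    return (hits + 1) / (n_perm + 1)

def benjamini_hochberg(pvals):
    """Convert raw p-values into Benjamini-Hochberg q-values (FDR)."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    scaled = p[order] * len(p) / (np.arange(len(p)) + 1)
    q = np.minimum.accumulate(scaled[::-1])[::-1]   # enforce monotonicity
    out = np.empty_like(q)
    out[order] = np.clip(q, 0, 1)
    return out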
Association analysis
To understand which members of the bacterial microbiota might contribute to CD, we examined the difference in microbial abundance between CD stool and controls, CD mucosa and controls, and inflamed and uninflamed mucosa in CD. First, we compared uninflamed and inflamed mucosa in children with CD and found no significant changes in microbial abundances (p > 0.9). Given this lack of difference, all CD mucosal samples were included in the analysis. Associations were determined based on the difference in the mean log-relative abundance. Statistical significance was assessed via an exact permutation test followed by a correction for multiple hypothesis testing. Specifically, the permutation test yielded raw, uncorrected p-values, which were corrected to q-values following the Benjamini-Hochberg procedure to account for the false discovery rate (FDR) [49]. Although less significantly associated taxa may be biologically important, we report only associations for which the FDR-corrected p-value (q-value) was < 0.05.

Diversity analysis
Diversity analysis was used to study the richness of taxa and the evenness of habitat composition, as well as the community dissimilarity between samples. Alpha diversity, a measure of taxa richness, was evaluated by the Shannon index. This measure quantifies the number of taxa and their representative proportions in the habitat; a high alpha diversity indicates a high number of taxa with similar abundance. The differences in diversity between CD mucosa and controls, CD stool and controls, and inflamed and uninflamed mucosa were analyzed. The sample-wise difference in community composition (beta diversity) was quantified by the Bray-Curtis dissimilarity, which accounts for both patterns of presence-absence of taxa and changes in their relative abundance between samples. Beta diversity separations were analyzed by analysis of similarities (ANOSIM). The ANOSIM statistic compares the mean of ranked dissimilarities between groups to the mean of ranked dissimilarities within groups; the significance of the statistic was determined by an exact permutation test.
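For illustration, the diversity measures described here can be computed along the following lines (again a sketch rather than the authors' implementation; in practice packages such as scikit-bio provide these measures, and ties in the ranked dissimilarities are ignored in this simplified ANOSIM):

import numpy as np
from itertools import combinations

def shannon_index(counts):
    """Shannon alpha diversity for one sample (1-D array of taxon counts)."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum()

def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two samples (count vectors)."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return np.abs(u - v).sum() / (u + v).sum()

def anosim(dist, groups, n_perm=999, seed=0):
    """ANOSIM R statistic and permutation p-value.

    dist   : square symmetric matrix of pairwise dissimilarities
    groups : sequence of group labels, one per sample
    """
    rng = np.random.default_rng(seed)
    groups = np.asarray(groups)
    pairs = list(combinations(range(len(groups)), 2))
    d = np.array([dist[i][j] for i, j in pairs])
    ranks = d.argsort().argsort() + 1.0      # rank dissimilarities, 1 = smallest
    denom = len(pairs) / 2.0                 # M / 2, with M = n(n-1)/2 pairs

    def r_stat(g):
        within = np.array([g[i] == g[j] for i, j in pairs])
        return (ranks[~within].mean() - ranks[within].mean()) / denom

    observed = r_stat(groups)
    extreme = sum(r_stat(rng.permutation(groups)) >= observed for _ in range(n_perm))
    return observed, (extreme + 1) / (n_perm + 1)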
Percutaneous treatments of hepatocellular carcinoma: Improving efficacy, applicability and extending ablation criteria The main curative treatments of early hepatocellular carcinoma (HCC) are liver resection, liver transplantation and percutaneous ablation. Monopolar radiofrequency ablation (RFA) was the most widely used percutaneous treatment but has limitations in terms of applicability and efficacy. These limitations could be responsible for downgrading the treatment of early HCC not amenable to usual monopolar RFA, transplantation or resection and to a shift to palliative treatment. However, improvement in ablation techniques during the last 10 years including the development of microwave ablation, multibipolar RFA, irreversible electroporation but also new technical tricks for ablation allowed to optimize the efficacy and promote the wide application of percutaneous treatments in patients with early HCC. It helped also to explore the ability of percutaneous ablation to treat HCC outside current guidelines in order to ablate more lesions of larger sizes. In this review, we aim to describe how the improvement of ablation methods helps to maximize the number of patients treated for early HCC and to discuss if we could extend the usual ablation criteria in order to allocate more patients in a curative setting. (RFA, MWA and CA) are based on thermal ablation with either heating (RFA and MWA) or freezing (CA) the tumour. In contrast, IRE is a mainly non-thermal ablation method generating short electric pulses of high intensity between two electrodes that have the advantage to decrease the risk of injury of the adjacent structure. 6 Thermal percutaneous ablation has been widely used since the beginning of the 2000s initially to treat patients with small uni-nodular HCC of <3 cm not amenable to liver resection or liver transplantation. Progressively, more data have shown that RFA could compete with liver resection in front of a small tumour and new technical approaches have enlarged the indication of ablation in patients with HCC. 7,8 In the present review, we aimed to describe briefly the current state of the art regarding the percutaneous treatments for HCC thus focusing on the role of percutaneous ablation in specific clinical situations and finally discuss the extension of ablation criteria. | Current indications and related outcomes after ablation for HCC Thermal percutaneous ablation, mainly monopolar RFA, has replaced percutaneous ethanol injection after several randomized controlled trials showing a better local control rate for RFA in all studies and increased overall survival (OS) in two of them. Currently, monopolar RFA is considered the standard for percutaneous ablation and was safely used to treat patients with portal hypertension and/or mild liver failure that are classical contra-indication to liver resection. The OS varies between 40% and 70% at 5 years with 50%-80% of tumour recurrence at 5 years after ablation of HCC within Milan criteria. [9][10][11] In patients harbouring HCC with a diameter of <2-3 cm with good performance status, without portal hypertension and with a preserved liver function, surgical resection and RFA could be both considered as equivalent as first-line therapy. 9,12 In contrast, for larger nodules (>2-3 cm), the results of RFA are less performant because of the decreased local control rate with usual monopolar RFA because of the difficulty to achieve a sufficient ablation area with a peritumoural margin. 
13 A comparison between surgery and locoregional treatments has been often questioned considering that the indication of one treatment above the other is influenced by patients' comorbidities, tumour burden, liver function and presence of portal hypertension. Some retrospective studies suggested that the two treatments were comparable for nodules <2-3 cm. 9,12 Three randomized controlled trials have compared monopolar RFA to resection in the treatment of HCC; one of these trials showed an advantage for resection in terms of OS, whereas the two others showed no difference in terms of oncological outcomes between the two procedures. RFA was associated with less morbidity than surgery in all trials. [14][15][16] To note, these clinical trials were monocentric, only performed in Asia, where the rate of cirrhosis was lower than in Western countries and have several biases impairing their conclusions. 17 A meta-analysis with a Markov model, including 17 studies and comparing resection and RFA for early HCC, suggested that in patients with two or three nodules lesser than 3 cm, RFA was more cost-effective than resection. In contrast, for a single HCC between 3 and 5 cm surgical resection was considered as the best option. 18 Monopolar RFA has been compared to MWA in several trials for the treatment of HCC and a recent meta-analysis showed that despite a similar local recurrence rate, the distant recurrence rate was significantly lower in the MWA group. Moreover, disease-free survival at 5 years was higher in patients treated by MWA (risk ratio = 3.66, CI 95% = 1.32-42.27) compared to RFA. 19 Currently, European Association for the Study of the Liver (EASL) guidelines suggest that both percutaneous ablation and surgery could be used in HCC of <2 cm and then surgery should be proposed when possible in case of HCC > 2 cm. 3,[14][15][16] When facing patients eligible for both percutaneous ablation and liver transplantation, liver transplantation is frequently considered as the best treatment as it cured both the tumour and the cirrhosis. However, graft shortage and the increased time on the waiting list led to propose a different strategy with first-line ablation followed by salvage transplantation if tumour recurred. This strategy tested in 67 patients was associated with an 84% of OS and 58% of tumour recurrence at 5 years and allowed to avoid liver transplantation in 27 patients that were alive without tumour recurrence. 20 21 Overall, it seems that a proportion of patients could avoid liver transplantation after a first curative percutaneous ablation, even if this strategy remains to be better assessed in long term and the target population needs to be refined. Other percutaneous approaches have been proposed for local ablation of HCC such as laser ablation (LA) and CA. One randomized trial testing non-inferiority has compared RFA with LA in patients within Milan criteria and suggested that LA was not inferior to RFA in terms of tumour ablation, time to local progression and OS. 22 This technique has been tested also in solitary large HCCs (>40 mm) in comparison with TACE and the results of this pilot case-control study showed superiority of LA especially in nodules between 51 and 60 mm with a complete response rate post-LA and post-TACE of 75% and 14.3%, respectively, associated with a higher OS (55.4% vs 48.8% at 36 months). 23 Moreover, CA was compared to RFA on patients affected by HCC smaller than 40 mm in a multicentre randomized controlled trial performed in China. 
In this study, local tumour progression rates were lower after CA (3%, 7% and 7% at 1, 2 and 3 years) than after RFA (9%, 11% and 11% at 1, 2 and 3 years) with a similar 5-year OS rate. 24 These two techniques have encouraging results but require more data on long-term oncological outcomes. 3,7

Limitations of the classical technique of percutaneous monopolar RFA: efficacy, applicability and consequences in clinical practice
Two major types of limitations exist with the classical monopolar RFA: limitations in terms of efficacy (leading to the risk of treatment failure) (Figure 1) and limitations in terms of applicability (impossibility to perform the ablation) (Figure 2).

Efficacy
The limitations of the classical monopolar RFA are related to the decrease of temperature with the distance from the electrode (usually 2-3 cm from the needle) or when blood flow is present in the vicinity of the tumour. 7,8 Firstly, percutaneous ablation is associated with a risk of local recurrence that is related to an insufficient ablation of the target tumour and its peri-tumoural margins and/or to tumour aggressiveness. The rate of local recurrence can reach up to 21%-30% and, even if most patients can be ablated again, 25 it requires additional procedures and, in some cases, the local recurrence can be aggressive and not amenable to another curative treatment. 26 Moreover, while monopolar RFA is able to ablate tumours between 2 and 3 cm, its ability to ablate larger tumours remains variable because of an insufficient thermal effect leading to a non-predictable ablation area. Finally, several studies have underlined that thermal ablation of tumours in the vicinity of major vessels is less effective in terms of complete ablation and has a higher risk of local tumour recurrence owing to the decrease of temperature induced by the blood flow, the so-called heat sink effect. 27 In this line, a retrospective analysis of 283 patients compared liver resection to monopolar RFA to treat HCC located in the vicinity of vascular structures. Liver resection showed a better outcome than monopolar RFA, with longer progression-free survival (PFS) and OS: PFS at 5 years was 58.0% for liver resection vs 25.4% for RFA, with an OS rate at 5 years of 93.5% vs 82.3% after resection and RFA, respectively. 28 The type of tumour recurrence should also be categorized: local and distant recurrence as well as the aggressiveness of tumour recurrence (size, number, infiltrating form, portal invasion and metastasis), as it could impact survival. 26

A proportion of Barcelona Clinic Liver Cancer (BCLC) stage 0 or A patients did not receive curative treatment and were treated outside guidelines using palliative treatments, mainly TACE. 30,31 As expected, a retrospective study showed that TACE for the treatment of BCLC 0 or A HCC was associated with a lower local control rate and lower survival than percutaneous RFA. 32 Overall, undertreatment of these patients has a clear impact on long-term oncological outcomes. As only a limited proportion of these patients are amenable to liver transplantation or liver resection, the efficacy and the good safety profile of percutaneous ablation even in older patients, patients with comorbidities, and patients with portal hypertension or mild liver failure make ablation a good option in order to widely treat patients with early HCC. Increasing the efficacy of percutaneous ablation and bypassing its limitations
will help to maximize the number of patients with HCC receiving a curative treatment. 33 (Figure 1 summarizes the main limitations in terms of efficacy of percutaneous ablation for patients with early hepatocellular carcinoma and how to bypass them in order to reduce local tumour recurrence and treat larger tumours with adequate local control. IRE, irreversible electroporation; MWA, microwave ablation; RFA, radiofrequency ablation; TACE, transarterial chemoembolization.)

Improving efficacy
Failure to achieve complete ablation and local tumour recurrence are mainly the consequence of insufficient treatment of the tumour with an insufficient margin, together with tumour aggressiveness. Several new ablation modalities have been developed to improve the efficacy of percutaneous treatments, such as MWA or multibipolar radiofrequency ablation (mbp-RFA). MWA is a method of ablation that creates an electromagnetic field around a monopolar needle (centrifugal ablation), inducing heating and coagulation necrosis. 34 One of the advantages of MWA is that it reaches its target temperature more quickly than RFA, allowing a shorter ablation time. It has been advocated that MWA could ablate a larger area than RFA but, even if several retrospective studies report good long-term outcomes after treatment of early HCC with percutaneous MWA, its superiority to classical monopolar RFA remains unproven. [34][35][36] Multibipolar radiofrequency ablation consists of the insertion of several electrodes, up to six, in the periphery of the nodule and not in its centre. The needles can be positioned either into the lesion (intra-tumoral) or outside the lesion (No-Touch). The electrodes are activated in pairs, alternately, in order to deliver the energy in the whole tumour periphery, converging in the centre of the treatment zone (centripetal ablation), so as to better control the shape and the extent of the ablation area and increase the ablation margin. 37,38 This technique has proved to be safe with a low rate of local tumour recurrence in HCC within Milan criteria, with 3- and 5-year local and overall tumour PFS of 96%, 94%, 52% and 32%, respectively. 37 Similar results were also found in a large Asian cohort of 516 patients, reporting a high rate of complete ablation (99%) with 1-, 3- and 5-year OS of 99.42%, 83.97% and 68.42%, respectively; the severe complication rate was 1.74%. 39 Monopolar RFA and No-Touch mbp-RFA were compared in a retrospective study analyzing cirrhotic patients with a single HCC of <3 cm. Patients treated by monopolar RFA had a shorter time to recurrence compared to mbp-RFA, with tumour-free survival in the No-Touch mbp-RFA group significantly higher than in the monopolar RFA group. 40 Mbp-RFA has also been compared to surgical resection in nodules between 2 and 5 cm, resulting in higher morbidity in the surgical group and a higher rate of local recurrence with mbp-RFA. However, tumour recurrences in the mbp-RFA arm were often accessible to re-treatment, resulting in similar OS: 86.7% vs 91.4% at 3 years after mbp-RFA and resection, respectively. 41 (Figure 2 summarizes the main limitations in terms of the applicability of percutaneous ablation for patients with early HCC and how to bypass them. CEUS, contrast-enhanced ultrasonography; IRE, irreversible electroporation; MWA, microwave ablation; RFA, radiofrequency ablation.)
Data also suggest that mbp-RFA may be helpful to treat tumours in the vicinity of major vessels. As this technique allows the placement of one or several electrodes between the tumour and the vessel, it seems to be less sensitive to the heat sink effect. Among 362 cirrhotic patients with nodules <5 cm treated either by mbp-RFA or monopolar RFA, size >30 mm and the vicinity of a large vessel were independent factors of local tumour progression only in patients treated by monopolar RFA, and not in patients treated by mbp-RFA. 42 Monopolar RFA and mbp-RFA were also compared in a multicentre retrospective trial in the treatment of HCC ≤ 5 cm abutting large vessels: local tumour progression was 50.5% with monopolar RFA vs 16.3% with multi-bipolar RFA. 43 Finally, a Western monocentric study reported no difference in terms of local recurrence between liver resection and mbp-RFA for the treatment of single perivascular HCC between 2 and 5 cm. 41,44 These data suggest that mbp-RFA could be used to efficiently treat HCC close to major vessels.

Thrombocytopenia
Thrombocytopenia due to hypersplenism and portal hypertension could be a limitation to ablative therapies due to the potential risk of bleeding. 45 One proposed strategy for patients with severe thrombocytopenia (platelet count below 50 G/l) undergoing invasive procedures is the oral administration of either avatrombopag or lusutrombopag (5 or 7 days, respectively) followed by the procedure (9th-14th day from the beginning of treatment); if a platelet count considered safe for performing the procedure is not reached, platelet transfusion is recommended. 47 The best strategy between the use of a thrombopoietin-receptor agonist and platelet transfusion alone is unknown.

Presence of pacemakers
Electromagnetic interference with cardiac devices such as pacemakers and defibrillators could occur in patients treated with RFA. 53 A report including various types of ablation treatments (monopolar RFA and MWA for lung cancer, primary liver cancer and liver metastases) described interference in 10% of the patients, without any clinical relevance. 54 In contrast, bipolar RFA was considered a safe procedure, as the energy is conducted between the two needles, and did not induce any interference with the pacemaker.

Child-Pugh B patients
Child-Pugh B patients are not optimal candidates for liver resection owing to the risk of post-operative liver failure. In patients who are not candidates for liver transplantation or who face a long waiting period, percutaneous ablation could be performed with a good safety profile, even if Child-Pugh B status is often associated with a lower OS due to the occurrence of complications of cirrhosis during follow-up. 55 Two studies suggested that percutaneous IRE could be used in Child-Pugh B patients with fewer adverse events than RFA, maybe owing to the lower and more progressive non-tumour parenchymal sacrifice, thanks to the non-thermal ablation effect. 56,57

Bilio-enteric anastomosis and sphincterotomy
Liver abscess following RFA is a rare complication occurring in <2% of cases. However, in patients with a pre-existing biliary abnormality favouring ascending biliary infection, such as an enterobiliary anastomosis or endoscopic sphincterotomy, the risk is considerably higher. A liver abscess developed in two (22.2%) of nine patients with biliary abnormality (hepaticojejunostomy for the first patient and endoscopic papillotomy for the second one).
58 In another monocentric series of patients with enterobiliary anastomosis undergoing RFA for HCC, six patients out of eight (75%) developed a liver abscess. 59 Interestingly, in patients with bilioenteric anastomosis, prolonged antibiotic prophylaxis with piperacillin/tazobactam for 10 days, reduced the risk of liver abscess, with only case reported among the 10 patients treated by RFA. 60 Some authors suggested that previous transcatheter arterial chemoembolization was associated with a higher risk of liver abscess after RFA; however, these results were not confirmed by other larger studies leaving place for discussion. 58,60 | At-risk location At-risk locations, such as subcapsular or subdiaphragmatic lesions, HCC closed to the gallbladder or to the colon/small bowel, are no longer considered as contra-indication to percutaneous ablation as good long-term outcomes could be achieved using specific technical approaches during ablation procedure. 61 For example, hydrodissection (creation of artificial ascites) has proven to protect the colon/small bowel/stomach or the diaphragm from thermal injury. IRE could be also used in this situation as it avoids the thermal lesions observed with RFA or MWA. Subcapsular lesions could be treated using mbp-RFA with a No-Touch technique in order to avoid the direct puncture of HCC. 62 HCC close to the biliary convergence should not be treated by thermal ablation and is therefore a good indication for IRE. A series of 58 patients treated by IRE not accessible to thermal ablation because of their at-risk location or due to liver dysfunction reported a complete ablation of the lesion in 92% of the case, with a local tumour PFS of 70% at 1 year. 56 1.4.6 | Invisible HCC at ultrasonography Up to 30% of patients with small HCC evaluated for ablation were considered not treatable mainly because the HCC were not visible at ultrasonography. 31 However, several technical approaches are available in order to propose ablation in these patients: the creation of artificial ascites or pleural effusion, a fusion between ultrasonography and pretherapeutic computed tomography (CT) scan or magnetic resonance imaging, CT scanner guidance, lipiodol staining to guide ablation. 6 | Increasing the number of ablated lesions In the EASL guidelines, percutaneous ablation is recommended in up to two or three HCC of <3 cm. If binodular or trinodular HCC was often associated with a higher rate of tumour recurrence, percutaneous ablation could achieve long-term survival in these patients. 55 Several teams have reported their experiences of percutaneous treatments up to four to five HCC with a good safety profile. 34,63 However, treatments of such high numbers of HCC remain timeconsuming, and in the context of a multifocal liver carcinogenesis, it is not clear if there is a benefit compared to TACE. In non-infiltrative mass forming HCC without portal vein invasion, the 3-and 5-year OS was 63.4% and 30%. 64 Percutaneous ablation of large HCC of more than 5 cm seems to be technically feasible using Mbp-RF with a morbidity and mortality higher than in ablation of smaller lesions but remaining tolerable and manageable ( Figure 3). The best candidate seems to be a patient with a uninodular mass-forming HCC between 5 and 8 cm but more data on the wide applicability of these technics and the refinement of patient's selection are still warranted. 
| Treating HCC with macrovascular invasion The presence of a portal vein tumour thrombosis is usually considered as a contraindication to percutaneous treatments. However, some retrospective studies propose to move this line. Giorgio et al (2009) reported a series of 13 patients with an HCC nodule and portal vein thrombosis (diameter 3.7-5 cm). The rate of complete necrosis was 77% following RFA without major complications. 71 Another study suggested that the combination of percutaneous RFA with sorafenib was associated with longer OS than sorafenib alone. However, more data are needed in terms of the safety and efficacy of percutaneous ablation of limited tumour portal vein thrombosis in regards to other available treatments such as trans-arterial radioembolization or systemic treatments 72,73 (Figure 3). | Treating extrahepatic metastasis Systemic treatments such as atezolizumab/bevacizumab or tyrosine kinase inhibitors (TKIs, sorafenib, regorafenib, cabozantinib, lenvatinib) are recommended by all scientific societies to treat patients with extrahepatic spread of HCC ( Figure 3). Even if we agree that the use of locoregional treatments is limited nowadays in the treatment of extra-hepatic metastases, we think that in patients with uninodular metastasis, it is sometimes an option discussed in multidisciplinary tumour board. Several retrospective monocentric studies have suggested that percutaneous RFA of unique metastasis or in oligometastatic patients (mainly with adrenal or lung metastasis) was associated with a good safety profile and showed potential signals of efficacy. [74][75][76][77] Percutaneous ablation using RFA has been also used to treat patients with node metastasis. 78 The best candidate for this strategy seems to be a patient without any active HCC in the liver and with a unique metastasis. However, it is unknown if this strategy is better than systemic therapy alone and if systemic treatment should be initiated after complete ablation of a unique metastasis. | Combination with systemic treatments An adjuvant treatment by TKI such as sorafenib has failed to improve the survival of patients with HCC in a curative setting (surgery and RFA). 79 In contrast, preclinical studies have suggested that percutaneous ablation of HCC induces modifications in the immune microenvironment stimulating an immune response owing to the liberation of antigen following tumour necrosis. 80 Therefore, it was hypothesized that adding immunotherapy in neo-adjuvant or adjuvant settings could improve survival after locoregional treatments. There are nowadays several ongoing Phase II or III trials combining locoregional treatment and immunotherapy in patients with curable HCC but at high risk of relapse (multiple nodules or >3 cm). 33,81 Moreover, percutaneous ablation has been associated in phase 2 clinical trials with immunotherapy (anti-CTLA4 antibody) in patients with advanced HCC in order to foster the response to immunotherapy. 82 | CON CLUS ION The enrichment of knowledge and advances in the technology in the field of percutaneous ablation helped to propose a personalized approach for patients with early HCC. In the end, the competition between liver resection, liver transplantation and percutaneous ablation in the treatment of small HCC seems to be useless. 
The main goal remains to propose a curative treatment to the maximum number of patients with early HCC, whatever the treatment, taking into account age, comorbidities, tumour features, portal hypertension and liver failure, in the dramatic context of graft shortage in Western countries. Wide dissemination of all techniques of percutaneous ablation would help to reach this goal. (Figure 3 shows a proposed extension of the ablation criteria within the Barcelona Clinic Liver Cancer (BCLC) classification: the BCLC stages and the treatment recommendations adapted from the EASL guidelines are shown, with the potential extension of the criteria of ablation within the BCLC classification in the lower part of the figure. IRE, irreversible electroporation; MWA, microwave ablation; RFA, radiofrequency ablation; TACE, trans-arterial chemoembolization.) More data are also warranted on the extension of ablation criteria in terms of size, number or metastatic disease and on their potential role compared to systemic treatment or trans-arterial radioembolization. Moreover, the combination of percutaneous ablation and systemic treatments in neoadjuvant or adjuvant situations, in early HCC but also in advanced HCC, in order to increase tumour response and survival is currently an important field of research.

Conflict of interest
Jean Charles Nault received a research grant from Bayer for Inserm UMR1138 and a research grant from Ipsen; Olivier Seror received grants or honoraria from Bayer, Ipsen, General Electric, Quantum Surgical; Pierre Nahon received grants or honoraria from Astra Zeneca, Bayer, BMS, EISAI, Ipsen, Roche.
Protecting against disaster risks: Why insurance and prevention may be complements We examine mechanisms as to why insurance and individual risk reduction activities are complements instead of substitutes. We use data on flood risk reduction activities and flood insurance purchases by surveying more than 1000 homeowners in New York City after they experienced Hurricane Sandy. Insurance is a complement to loss reduction measures undertaken well before the threat of suffering a loss, which is the opposite of a moral hazard effect of insurance coverage. In contrast, insurance acts as a substitute for emergency preparedness measures that can be taken when a loss is imminent, which implies that financial incentives or regulations are needed to encourage insured people to take these measures. We find that mechanisms leading to preferred risk selection are related to past flood damage and a crowding out effect of federal disaster assistance as well as behavioral motivations to reduce risk. Introduction Seminal theoretical papers highlight that insurance and risk reducing protective measures are substitutes (Ehrlich and Becker 1972;Arnott and Stiglitz 1988). According to the theory, insurance would discourage individuals from investing in loss reduction measures unless they are rewarded with a reduction in their premiums. This behavior may lead to moral hazard when individuals take fewer risk-reducing measures after purchasing insurance, and to adverse selection when it is mainly individuals with a high risk who demand insurance but the insurer cannot distinguish between high and low risk individuals. These two problems arise from information asymmetries in the sense that the higher risk-taking by the insured is not observed by the insurer and, therefore, not reflected in a higher risk based premium (Akerlof 1970;Rothschild and Stiglitz 1976). Recent work over the past decade reveals that some insured individuals may view additional measures to limit risk ex ante as complements to insurance and thus implement more of these measures than the uninsured do, for example, because they are (highly) risk averse. This behavior can lead to both insurance purchases and investments in risk reduction; this has been termed advantageous selection (de Meza and Webb 2001) or preferred risk selection (Finkelstein and McGarry 2006). Finkelstein and McGarry (2006) find that individuals with private long term care insurance in the United States are more likely to engage in activities that reduce health risks, which in turn makes it less likely that they will ever use long-term care. Cutler et al. (2008) also show that positive relationships exist between individuals in the United States purchasing term life, annuities, medigap and long term care insurance and adopting risk reduction activities. The datasets used in these studies do not allow for examining the exact behavioral mechanisms behind the observed preferred risk selection, though. Other empirical research has also shown that there can be significant heterogeneity in the relation between insurance coverage and risk reduction (Cohen and Siegelman 2010). Einav et al. (2013) found that some individuals who engage in moral hazard have a higher demand for health insurance coverage, except for highly risk averse individuals with a high perceived health risk who do not engage in moral hazard and have a high willingness-to-pay for insurance coverage. 
The lack of empirical evidence on adverse selection in some insurance markets may be consistent with the hypothesis that buyers are not maximizing their expected utility, but it is also consistent with the hypothesis that little information asymmetry exists. In this paper we examine the relationship between individual risk reduction activities and natural disaster insurance coverage as our field case by identifying behavioral mechanisms that may explain preferred risk selection. In particular we use the U.S. flood insurance market and the decisions by homeowners to reduce flood risk by investing in loss reduction measures as our context. Floods are the most costly source of natural disasters in the U.S., 1 and there is an expectation that the frequency and severity of flooding will increase in the future as a result of climate change and the accompanying sea level rise (IPCC 2014). Flood insurance for residential properties is almost exclusively purchased through the federally-run National Flood Insurance Program (NFIP), which covers more than $1.2 trillion of assets. This makes the NFIP the largest flood insurance program worldwide. We make a temporal distinction between risk reduction activities that are normally adopted well before the risk (i.e. a flood disaster) materializes, such as dry proofing walls of a building to make them impermeable to water, and emergency preparedness measures that are undertaken during an imminent threat of a disaster, such as moving contents to higher floors to avoid them suffering flood damage. Loss reduction measures often have a high upfront cost with an uncertain benefit, while emergency preparedness measures are generally less costly and have more certain risk reduction benefits. Recent experiences with low-probability/high-impact events have given rise to an increasing interest in the economics literature in how people prepare for, and respond to, disasters (Barberis 2013). In particular, Hurricane Katrina in 2005 and Hurricane Sandy in 2012, which combined caused more than $150 billion in economic losses in the United States, showed the importance of undertaking protective measures to reduce future disaster damage (Munich Re 2015). Moreover, such disasters highlight the need to provide recovery funds through insurance should one suffer losses from a disaster and to find ways to improve disaster preparedness in the future. Only a handful of studies have examined the relationship between investment in risk reduction and insurance purchase decisions for natural disasters (for a review see Hudson et al. 2017). Carson et al. (2013) show that homeowners in Florida who have high deductibles on their windstorm insurance are also more likely to take windstorm risk reduction measures. This suggests that insurance coverage and wind risk reduction measures act as substitutes, at least in terms of the deductible amount. On the other hand, Petrolia et al. (2015) find a positive relation between the decision to purchase windstorm coverage and investment in measures that limit windstorm damage based on a sample of U.S. households along the Gulf coast. A follow-up study by Hudson et al. (2017) using a different U.S. sample from the mid-Atlantic and Northeastern U.S. revealed that households with homeowner's or flood insurance that are threatened to be hit by a hurricane are also more likely to engage in activities that minimize windstorm risks. With respect to flood risk, Thieken et al. (2006) and Hudson et al. 
(2017) find that insured households in Germany take more flood risk mitigation measures than households without flood insurance. These studies, however, did not identify the behavioral mechanisms behind the relations between insurance and risk reduction activities, which we aim to do here. Our study uses data from a survey we conducted of more than 1000 homeowners who live in flood-prone areas in New York City (NYC). This dataset includes individual level information on implemented flood risk reduction measures and flood insurance purchases from the NFIP as well as a range of variables that influence these decisions, such as psychological characteristics, risk perceptions, experience of past flood damage and receipt of federal disaster assistance. Our individual level data is especially suitable for determining whether the relationship between insurance purchase and risk reduction activities by individuals are substitutes or complements, and for identifying the behavioral mechanisms behind these relationships. The NFIP regulates development in the 1 in 100 year flood zone via specific elevation requirements and by restricting new construction in floodways (Aerts and Botzen 2011;Dehring and Halek 2013). Here we focus on voluntary flood risk reduction measures that households can undertake to prevent flood water from entering a building or to minimize damage once water has entered the structure. These floodproofing measures can be especially attractive for existing structures that are very expensive to elevate. Cost-benefit analyses have shown that elevation can be costeffective for new structures, but not for existing buildings (Aerts et al. 2014). The paper is organized as follows. Section 2 presents the data and the empirical methods. Section 3 presents the results. We find that insurance and long-term risk reduction measures taken ex ante a flood threat are complements, which is opposite to a moral hazard effect. In contrast, we find that people with insurance coverage are less likely to take short-term emergency preparedness measures during a flood threat. An examination shows that individuals both insure and take risk reduction measures for financial reasons, like experiencing high flood damage in the past and not having received federal disaster assistance for damage. Interestingly, we find that behavioral motivations to reduce risk outside of the standard economic model also play a role. Section 4 concludes and provides policy recommendations. Data and empirical methods The databases 2 of our survey consists of a random sample of homeowners in NYC that face flood risk who live in a house with a ground floor. This means that renters and those living in apartments above the ground floor level are not included in our sample. The survey was implemented over the phone by a professional survey company about six months after NYC was flooded by Superstorm Sandy in October 2012. 1035 respondents completed the survey (73% completion rate). See the Electronic Supplementary Material (ESM) for more details about the survey method and survey questions. We use probit models of (flood) insurance purchases and explanatory variables of flood risk reduction activities to examine the relationship between flood insurance purchases and flood risk reduction measures. Treating the purchase of insurance as a dependent variable is consistent with related research that examines the relationship between insurance coverage and risk reduction by policyholders in the context of health risks (e.g. 
Finkelstein and McGarry 2006; Cutler et al. 2008) and natural hazard risks (Hudson et al. 2017). Voluntarily purchasing flood insurance is a personal decision that is made annually and can be cancelled at any time, which also justifies having it as the dependent variable. Most flood risk mitigation measures are long-term adjustments to the structure of a home, with some being implemented by a previous owner of the house. On the other hand, some are a temporary response to a known threat, such as emergency preparedness measures. This is why we also estimate models with implemented flood risk mitigation measures as the dependent variable and insurance and other relevant factors as explanatory variables, to examine whether the main results for our main hypotheses are robust to this alternative specification. Our basic probit models are:

Pr(Y_i = 1) = Φ(α_0 + α_1 M_i^RR + α_2 M_i^EP)   (1), (2)

where Y_i is a binary variable indicating whether respondent i has flood insurance (Y_i = 1) or no flood insurance at all (Y_i = 0), 3 and M_i are variables of implemented risk reduction measures. Two empirical models are estimated for the subgroups for which Y_i = 1, since some individuals are required to have flood coverage (1) if they have a federally insured mortgage and live in a designated high-risk flood zone (the 1 in 100 year flood zone defined by the Federal Emergency Management Agency (FEMA)), while others purchase it voluntarily (2). The relationship between insurance coverage and risk reduction may differ between people who bought flood insurance voluntarily or mandatorily, which is why we make this distinction. The mandatory insurance model also enables us to determine whether homeowners who are required to purchase flood insurance implement more or fewer risk reduction measures than those who are uninsured. M_i consists of two separate variables: the number of implemented ex ante risk reduction measures at the household level (M_i^RR; i.e. structural measures implemented in the home to limit flood damage) and the number of emergency preparedness measures (M_i^EP; measures which also limit flood damage but require a behavioral response of the homeowner in the immediate time period before the flood occurs). We make a distinction between these two types of protective measures throughout our analysis because the decision processes for taking these measures are likely to be different. Risk reduction measures taken ex ante a flood threat have relatively high upfront costs with uncertain risk reduction benefits that materialize if a flood occurs. Emergency preparedness measures taken during a flood threat, like moving contents to higher floors and installing flood shields, are often relatively less expensive, but require the household to take action in a situation of emergency when the likelihood of a flood occurring is almost certain. We specify two hypotheses (H1 and H2) depending on whether coefficients α_1 and α_2 are negative, pointing toward households viewing insurance and risk reduction measures as substitutes (H1), or positive, revealing that these measures are viewed as complements (H2). Although our research design cannot directly prove causality between insurance and risk reduction measures, evidence supporting H1 is consistent with a moral hazard effect, while evidence supporting H2 is consistent with a preferred risk selection effect. Estimating (1) and (2) provides insights into how the insurance purchase decision is related to investments in loss prevention.
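Purely as an illustrative sketch (the column names and file are hypothetical, not the study's actual data or code), models (1) and (2) could be estimated in Python with statsmodels along these lines:

import pandas as pd
import statsmodels.api as sm

# Assumed layout of the survey data, one row per respondent:
#   flood_ins   1 = has flood insurance, 0 = uninsured
#   mandatory   1 = coverage was mandatory (federally insured mortgage in the
#                   1-in-100-year zone), 0 = voluntary purchase or uninsured
#   n_ex_ante   number of ex ante (structural) risk reduction measures
#   n_emerg     number of emergency preparedness measures
df = pd.read_csv("nyc_flood_survey.csv")

def probit(data):
    X = sm.add_constant(data[["n_ex_ante", "n_emerg"]])
    return sm.Probit(data["flood_ins"], X).fit(disp=False)

# Model (1): mandatorily insured respondents versus the uninsured
m1 = probit(df[(df["mandatory"] == 1) | (df["flood_ins"] == 0)])
# Model (2): voluntarily insured respondents versus the uninsured
m2 = probit(df[((df["mandatory"] == 0) & (df["flood_ins"] == 1)) | (df["flood_ins"] == 0)])

print(m1.summary())
print(m2.summary())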
To examine how other variables influence the decision to buy insurance and potentially affect the relationship between Y_i and M_i, two other models are estimated: a traditional economic model and a behavioral economic model, respectively. The traditional economic model is:

Pr(Y_i = 1) = Φ(β_0 + β_1 M_i^RR + β_2 M_i^EP + β_3 R_i + β_4 F_i + γ'X_i)   (3), (4)

The following new variables are introduced in Eqs. 3 and 4: R_i reflects either homeowners' risk perceptions (model variants 3a and 4a) or experts' estimates of the flood risk (model variants 3b and 4b), F_i characterizes federal disaster assistance received by homeowner i, and X_i are socio-demographic variables, including income, which we capture by including dummy variables for four categories of total household income, whose coefficients show the effects on insurance purchases relative to the excluded dummy variable of the very high income category (>$125,000). Those required to purchase flood insurance reside in areas designated by FEMA to have a higher flood risk than those having the option to buy coverage. A principal reason for distinguishing between these two groups is to determine whether the relationship between having flood insurance coverage and investing in risk reduction measures (M_i) differs between them, as well as whether their own perceptions of the risk and the experts' estimates play different roles in their insurance purchase decisions. Definitions and coding of all variables are provided in ESM Table 1. If a question was not answered by a respondent, then this resulted in missing observations for the variable that is based on this question. This implies that the number of observations in our statistical models varies depending on the number of non-missing observations for the explanatory variables included in that model. 4 R_i are either variables of risk perception, measured as perceived flood probability and consequences in line with subjective expected utility theory (Savage 1954), or expert estimates of these variables. These expert indicators of the flood risk faced by the respondents have been derived from a probabilistic flood risk model developed for NYC. A detailed description of this model and all flood modelling results can be found in Aerts et al. (2014, including online material). Based on a large set of 549 simulated hurricanes from a coupled hurricane hydro-dynamic model (Lin et al. 2012), the probability that the property of each survey respondent will experience inundation from a flood has been derived (Aerts et al. 2014). An indicator of flood damage was calculated for each respondent based on the mean expected flood inundation level at the respondent's location and the value of the respondent's property, which are input to a depth-damage function for each specific building category derived from the HAZUS-MH4 methodology (HAZUS stands for Hazards United States). Such depth-damage curves are commonly used in flood risk assessments, and represent the fraction of a building and its content value that is lost in a flood based on the flood water level present in the census block (Aerts et al. 2013, 2014). F_i is a dummy variable representing respondents who have received federal disaster assistance for flood damage in the past. They may expect the government to compensate them for damage suffered from a future flood, which can lower their demand for insurance and risk reduction measures. This crowding out effect has been called the Samaritan's dilemma or charity hazard (Buchanan 1975; Browne and Hoyt 2000; Raschky et al. 2013).
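For intuition, the expert damage indicator described above can be thought of as an interpolation on a depth-damage curve scaled by the property value; the sketch below uses made-up curve values for a single building category, not the HAZUS-MH4 numbers used in the study:

import numpy as np

# Placeholder depth-damage curve: flood depth (metres) -> fraction of the
# building and contents value lost. Illustrative values only.
DEPTHS = np.array([0.0, 0.5, 1.0, 2.0, 3.0])
DAMAGE_FRAC = np.array([0.00, 0.15, 0.30, 0.55, 0.75])

def expected_flood_damage(mean_inundation_m, property_value):
    """Interpolate the damage fraction at the expected depth and scale by value."""
    frac = np.interp(mean_inundation_m, DEPTHS, DAMAGE_FRAC)
    return frac * property_value

# Example: a home worth $400,000 facing a mean expected inundation of 1.5 m
print(expected_flood_damage(1.5, 400_000))   # -> 170000.0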
The traditional economic model is also estimated for mandatory flood insurance purchases, because budget constraints due to low income and the receipt of federal disaster assistance in the past may be reasons for not adhering to the mandatory purchase requirements of the NFIP. Next, a behavioral economic model is estimated to examine factors and motivations that are likely to lead to purchasing insurance voluntarily: Variables M i , F i and X i are similar to those for Eq. 4. A difference in (5) is that R i in (4) is now represented by T i which is a variable indicating respondents who think that the flood probability is below their threshold level of concern. This variable is included since other studies have found that individuals may use a threshold model in assessing low-probability/high-impact risk (Slovic et al. 1977;McClelland et al. 1993;Kunreuther et al. 2001;Botzen et al. 2015). This model implies that many individuals choose not to insure because they ignore the flood risk. B i consists of behavioral variables that examine whether individuals purchase insurance because it gives them peace of mind, and whether their decision to purchase coverage is affected by their locus of control, their own internal values or a social norm. The peace of mind variable is included because affect and emotion-related goals appear to have an important influence on decision making under risk (Loewenstein et al. 2001). Individuals may purchase insurance to reduce anxiety, and to avoid anticipated regret not to have bought it should a disaster happen and consolation (Krantz and Kunreuther 2007), which is captured by this variable. Locus of control is a personality trait which reflects a belief about the degree to which an individual exerts control over his or her own life, in contrast to external environmental factors, such as fate or luck (Rotter 1966). It has been shown that locus of control influences economic decision making in various domains such as earnings (Heineck and Anger 2010), entrepreneurship (Evans and Leighton 1989), investments in education (Coleman and Deleire 2003) and in health (Chiteji 2010). It can be expected that individuals with an external locus of control 5 think they have little influence over outcomes in their life and are less likely to prepare for disasters and purchase flood insurance (Baumann and Sims 1978;Sattler et al. 2000). 5 The external locus of control variable is defined by responses to the question "Some people feel they have completely control over their lives, while other people feel that what they do has no real effect on what happens to them. Please indicate on a scale from 1 to 10 where 1 means "none at all" and 10 means "a great deal" how much control you feel you have over the way your life turns out." which is based on the U.S. World Values Survey (see ESM). The dummy variable of an external locus of control equals 1 if the respondent answered 1 through 5 on this scale and 0 otherwise. Moreover, norms may be a motivation for people to prepare for disasters, as has been shown for the influence of norms on other economic decisions, like consumption, work effort, and cooperation in public good provision, and perceived fairness of income distributions and uses of (public) money, as reviewed by Elster (1989). Being adequately prepared for a specific risky situation may be regarded as a social norm, so that households do not need to rely on others for assistance during and after a disaster. 
In another context, namely recycling decisions by individuals, Viscusi et al. (2011) and a more detailed follow up study by Huber et al. (2017) show that it is important to distinguish between a person's behavior due to the actions of others (i.e. social norms) and private values. We realize that recycling decisions can follow a different behavioral process than flood preparedness decisions, but the relevance of distinguishing between different types of social norms has been found in a variety of contexts such as littering and energy savings (see the review in Huber et al. 2017). Whether private values are stronger predictors of behavior than social norms may depend on the type of decision. Since in principle, both social norms and private values may be positively related to taking flood risk reduction measures and purchasing flood insurance, we examine the influence of both variables. 6 In our study, a social norm refers to approval of others of being well prepared for flooding, while a private value refers to behavior that the respondent finds to be personally important. 7 Furthermore, we estimate two variants of (5) which include interaction terms with ex ante risk reduction measures in order to examine how behavioral and financial mechanisms relate to the adoption of both risk reduction measures and purchase of insurance. First, we examine how the behavioral characteristics B i influence the decision to both purchase flood insurance and adopt risk reduction measures by creating interactions terms of these Bi variables with the ex ante risk reduction variable. In particular, the norm and locus of control variables can reflect internal preferences of the individual with regard to risk preparedness which may affect decisions to both insure and implement risk reduction measures. Second, a model is estimated to examine how the interaction between risk reduction and flood insurance purchases is related to financial incentives through previous flood damage and past federal disaster assistance. Experiencing severe flood damage may trigger the adoption of ex ante risk reduction measures and the purchase of insurance when individuals perceive that insurance coverage alone is insufficient for coping with future flood events. Individuals who have received federal disaster assistance may expect the federal government to cover their future losses. They therefore are less likely to invest in risk reduction measures and purchase insurance than if they believed they would be responsible for the costs of repairing their damage after a disaster. 6 See ESM for examples of social norms and private values in the context of the flood preparedness. 7 The private value was measured using the question "Please tell me if you strongly agree, agree, neither agree nor disagree, disagree or strongly disagree with the following statement: I would be upset if I noticed that someone who got flooded was insufficiently prepared for flooding and needed to request federal compensation for flood damage he suffered." For eliciting the social norm, the text was: "Other people would be upset if they noticed that someone who got flooded was insufficiently prepared for flooding and needed to request federal compensation for flood damage he suffered" (see ESM). The private value and social norm variables take on the value 1 if the respondent agreed or strongly agreed with the statement and zero otherwise. 
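To make the interaction specifications described above concrete, the sketch below estimates a probit of voluntary insurance purchases with an interaction between ex ante risk reduction and a behavioral characteristic, and checks the correlation of the interaction term with its components (cf. footnote 18 later in the text). It is an illustration only: the file and column names are hypothetical placeholders, not the authors' actual variable coding, and statsmodels is assumed as the estimation library.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey data; all column names below are illustrative placeholders.
df = pd.read_csv("nyc_flood_survey.csv")

# Interaction of ex ante risk reduction with a private value of flood preparedness,
# mirroring the "risk reduction measures x strong private value" term in the text.
df["rr_x_private_value"] = df["risk_reduction"] * df["private_value"]

# Correlation of the interaction term with one of its components
# (the paper reports values of about 0.7 for such correlations).
print(df["rr_x_private_value"].corr(df["risk_reduction"]))

exog = sm.add_constant(df[[
    "risk_reduction", "emergency_prep", "rr_x_private_value",
    "external_locus", "peace_of_mind", "below_threshold",
    "received_assistance", "low_income", "high_education",
]])
probit = sm.Probit(df["voluntary_insurance"], exog, missing="drop").fit()

print(probit.summary())
# Average marginal effects, comparable in spirit to those reported in the tables.
print(probit.get_margeff(at="overall").summary())
```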
Descriptive analyses Of our total number of respondents, 44% purchased flood insurance because doing so was mandatory, 21% purchased it voluntarily, 33% did not have flood insurance and 2% did not know whether they had flood coverage. Figure 1 shows the relation between having flood insurance coverage and the implementation of specific (structural) risk reduction measures, which are taken ex ante a flood threat. These measures often have substantial upfront investment costs and limit damage during flood events. If insurance and risk reduction measures are substitutes then one would expect that individuals with flood insurance coverage would undertake fewer risk reduction measures than individuals without flood insurance, because FEMA does not give premium discounts for policyholders who adopt such measures. The one exception is if a homeowner elevates his home and therefore this measure is not considered in our analysis. 8 As shown in Fig. 1, individuals who voluntarily and mandatorily purchased flood insurance are also more likely to take ex ante risk reduction measures than the uninsured, which is statistically significant for building with water-resistant materials, having a water-resistant floor, and elevating utility and electric installations. This (positive) relationship between having flood insurance coverage and undertaking flood risk reduction measures is less clear for emergency preparedness measures such as moving contents to a higher floor (Fig. 2). Compared with the uninsured, individuals with mandatory flood insurance are statistically significantly less likely to place flood shields and sandbags to limit damage during a flood, while they are slightly more likely to move home contents away from flood-prone parts of the house, but this relationship is insignificant. The percentage of people with voluntary and no flood insurance coverage taking these measures is similar and does not differ significantly from actions taken by individuals who are uninsured. Results of statistical models The results of simple probit models (Eqs. 1 and 2) of relations with insurance purchases and risk reduction measures are shown in ESM Table 2. The results confirm the positive significant relation with (mandatory and voluntary) flood insurance purchases and ex ante risk reduction measures, while the coefficient of emergency preparedness measures is negative, albeit insignificant. This negative coefficient is due to the controlling for implemented risk reduction measures. A separate probit model with only emergency preparedness measures yields a positive (insignificant) coefficient for this variable, which becomes negative in a model when risk reduction measures are included as explanatory variable. 9 8 About 16% of the respondents without flood insurance indicated that they have elevated their home above potential flood levels. This percentage is the same for respondents with mandatory flood insurance coverage and is 15% for households who purchased flood insurance voluntarily. An evaluation of the NFIP concluded that the discounts are not high enough (Jones et al. 2006), which may explain why people with NFIP coverage are not more likely to elevate their house than people without coverage. 9 This sign change is not caused by multi-collinearity. The correlation between these two variables is only 0.31. In general we checked all correlation coefficients of explanatory variables in all of our models to make sure they are not too high and problems with multi-collinearity do not occur. 
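The models discussed next report marginal effects rather than raw probit coefficients. For reference, these are conventionally computed as follows (a standard econometric expression, not taken from the paper itself): for a continuous regressor the effect scales the normal density, and for a dummy variable such as the risk reduction indicator it is the discrete change in the predicted probability.

```latex
\frac{\partial \Pr(Y_i=1)}{\partial x_{ik}} \;=\; \phi\!\left(x_i'\beta\right)\beta_k,
\qquad
\Delta_k \;=\; \Phi\!\left(x_i'\beta \mid x_{ik}=1\right) - \Phi\!\left(x_i'\beta \mid x_{ik}=0\right),
```

where φ and Φ are the standard normal density and distribution functions.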
Fig. 1 Percentage of respondents who implemented specific ex ante risk reduction measures, for individuals who purchased flood insurance voluntarily, mandatorily or not at all. Note: ** indicates a significant difference at the 5% level with the no flood insurance group.
Fig. 2 Percentage of respondents who implemented specific emergency preparedness measures, for individuals who purchased flood insurance voluntarily, mandatorily or not at all. Note: ** indicates a significant difference at the 5% level with the no flood insurance group.

Table 1 shows the results of probit models of flood insurance purchases with explanatory variables motivated by a traditional economic model of decision making under risk (Eqs. 3 and 4). Our main interest here is to measure the relation between insurance coverage and the adoption of risk reduction activities. A consistent picture emerges in both models: individuals with flood insurance coverage are more likely to have invested in measures that flood-proof their building ex ante a flood threat (positive and significant marginal effect). The probit model in Table 1 also finds a significant negative relation between having insurance and undertaking emergency preparedness measures shortly in advance of or during flood events, while this negative effect was statistically insignificant in the simple model without control variables (Eqs. 1 and 2, shown in ESM Table 2). In other words, when accounting for standard economic explanatory variables that influence insurance purchases, individuals with that coverage are less likely to have flood shields or sandbags available or to move contents out of flood-prone parts of their house. This points towards a moral hazard effect, at least for emergency preparedness measures, which are generally taken just before a flood occurs, or even as it does occur. The opposite relationships with insurance coverage for longer-term risk reduction measures and shorter-term emergency preparedness measures indicate that the decision processes for taking long-term vs. short-term disaster preparedness measures differ.

The flood risk faced by the respondent could influence both the decision to purchase flood insurance and the decision to undertake damage mitigation measures. The perceived flood probability 10 is positively related to mandatory purchases, but this effect is not very strong (p value < 0.1), while it is insignificant in the model of voluntary flood insurance purchases. The perceived flood damage is insignificant in both models. Model variants 3b and 4b in Table 1 include experts' estimates of the flood probability 12 and expected flood damage instead of the perceived flood probability and perceived flood damage. The expert flood probability has an insignificant effect on voluntary flood insurance purchases, while mandatory flood insurance coverage is positively related to experts' estimates of the flood probability. This is to be expected since mandatory purchase requirements apply only in FEMA high risk zones, for which flood probabilities are high.

10 The perceived flood probability is measured as a dummy variable of respondents who expect that their flood probability is higher than 1/100. The main results are similar if instead this variable is specified as a continuous variable of the respondent's best estimate of the flood probability, which is not included in the reported model in Table 1 because of its large number of missing observations (N drops to 169 for voluntary purchases).
The expert flood damage only has a significant effect on voluntary flood insurance purchases, which suggests that individuals in lower risk areas purchase insurance coverage because they focus on potential losses, while those in high risk areas do not think about the damage because they are required to buy coverage. These findings, that hazard severity plays a larger role in voluntary flood insurance demand than hazard probability, are in line with research showing that many individuals do not seek or use probabilistic information in making decisions (e.g., Kunreuther et al. 2001). The main relations between flood risk mitigation activities and flood insurance coverage are similar when perceptions or objective indicators of the flood probability and consequences are included as explanatory variables.

Another important finding is that households who have received federal disaster assistance to compensate for uninsured flood damage suffered in the past are less likely to carry both mandatory and voluntary flood insurance. 13 This is not necessarily what we expected a priori. On the one hand, households without flood coverage who claimed compensation from federal assistance are required to carry flood insurance. On the other hand, if households expect that the government will bail them out again during a future flood event, they may drop this coverage again, which would be a typical charity hazard effect. We find that the latter effect dominates. The marginal effect of this variable is rather large; the probability of having flood insurance is between 0.1 and 0.2 lower after receiving disaster relief. A variable of having received disaster assistance in the form of a loan turned out to be statistically insignificant (not shown in Table 1), suggesting that compensation through loans is not a substitute for insurance, which confirms earlier results by Kousky et al. (2013).

A significant income effect is present for flood insurance purchases. A dummy variable of households with a very low total household income (<$25,000) is statistically significant in the models of voluntary purchases, which means that these individuals are less likely to buy flood coverage than those with a very high income (the excluded baseline) due to affordability concerns. A significantly lower purchase of mandatory insurance is also observed for people with a low income. For individuals with a very low income and a middle-high income, a significant effect appears only in model 3b. Overall, our findings imply that concerns about insurance affordability need to be considered when addressing the pricing of flood insurance and when providing financial assistance to incentivize the purchase of coverage and investments in cost-effective mitigation measures (Kousky and Kunreuther 2014). Voluntary insurance purchases are positively related with having a high education level. 14 These results for socioeconomic variables suggest that more vulnerable social groups with a low income and low education level are less likely to purchase flood insurance and, thereby, have worse financial protection against flood damage.

11 This variable is measured as the absolute value of expected flood damage. In addition, we estimated a model with a variable of the expected flood damage relative to the respondent's property value as an explanatory variable, which is an indicator of perceived severity of flooding and resulted in similar main findings. In other words, our main results are robust to this alternative specification. 12 The U.S. Federal Emergency Management Agency (FEMA) is charged with mapping flood risk in hazard-prone areas and releasing this information to the public. Different FEMA flood zones correspond to different levels of risk. Usually, high risk areas are defined as those where there is a higher than 1% chance of being flooded in any given year. Also, the publicly available FEMA flood zone classifications do not have a significant influence on voluntary flood insurance purchases (results not shown in Table 1). This may not be surprising since the accuracy of the NYC FEMA flood zone classification has been highly debated (Aerts et al. 2013). 13 About 37% of the respondents received federal disaster assistance for flood damage they experienced in the past. The average federal disaster compensation received was $21,908, which is substantial, but lower than the average compensation of $34,766 that respondents with flood coverage received in insurance payments. 14 Respondent's age and gender are not statistically significant (results not shown in Table 1).

Next, we examine in more detail the behavioral motivations for voluntarily purchasing insurance and for combining this insurance coverage with risk reduction measures. Table 2 shows the results of a model with explanatory variables that are motivated by a range of behavioral economic theories that postulate that individuals base decisions under risk on intuitive thinking and other psychological decision processes (Eq. 5). In particular, the model includes a threshold variable of perceived risk that represents respondents who think that the probability their house will suffer a flood is below their threshold level of concern, and variables of a private value of preparing for flooding, individual locus of control, and purchasing flood insurance because it gives peace of mind. Moreover, we examined whether, in addition to these variables, flood experience 15 significantly influences voluntary flood insurance purchases. This did not turn out to be the case (results not shown in Table 2), which may be due to the large share of our respondents (about 75%) that had been flooded in the past.

The main relations between flood risk mitigation activities and flood insurance coverage are similar when the aforementioned behavioral variables are included in the model. The behavioral economic model provides some important additional insights, though, compared to the standard economic models. Individuals who think that the flood probability is below their threshold level of concern are less likely to purchase flood insurance (p value = 0.06). Individuals with an external locus of control, who think they have little control over outcomes in their life, are less likely to demand flood insurance. Moreover, flood insurance demand is positively related to a strong private value 16 of being well prepared for flooding (see the survey question in footnote 7). A separate model was estimated including the external social norm variable (not shown here), which turned out to have an insignificant effect on voluntary flood insurance purchases. This is in line with findings from Viscusi et al. (2011) and Huber et al. (2017), who show that recycling behavior in the United States is influenced by private values and not external social norms, and conclude that it is important to distinguish these two types of variables as we do here. Moreover, we find that peace of mind is an important motivation for individuals to purchase flood insurance. About 70% of the respondents indicated that they were similar to a person who buys insurance (in general) because it gives her/him peace of mind. Our probit model results (Table 2) show that these individuals are significantly more likely to purchase flood insurance. (Table 2 note: pseudo R2 = 0.12; *, **, *** indicate significance at the 10%, 5%, and 1% levels, respectively.)

15 We examined the influence of flood experience by testing different models with the following indicators of flood experience: a dummy variable of whether a respondent experienced flooding in the past (=1) or not (=0), a variable of the number of times a respondent has been flooded, and a variable of the damage the respondent suffered from the last flood event. These variables were statistically insignificant and have been excluded from the model in Table 2. It should be noted that the main relations between flood insurance purchases and implementation of damage mitigation measures remain similar in these models that control for flood experience. 16 We examined whether individuals are more likely to have a strong private value for flood protection when they received federal disaster assistance in the past or experienced high flood damage, and found that these relations are statistically insignificant.

The results of the previous models show that, when we control for relevant independent variables that influence flood insurance purchases, a consistent positive relation between flood insurance and the implementation of ex ante risk reduction measures is found. For households in our sample, insurance and risk reduction measures are complements. As a robustness check, we also estimate models 17 with the number of implemented emergency preparedness or risk reduction measures as the dependent variable and mandatory and voluntary purchases as explanatory variables, along with the other control variables noted in Tables 1 and 2. The results of these alternative models, reported in ESM Tables 3 and 4, reveal the same relationships between the flood damage mitigation measures as those in Tables 1 and 2. That is, insurance and emergency preparedness measures are substitutes, while insurance and risk reduction measures are complements.

Next we examine whether the aforementioned behavioral variables that were found to influence voluntary flood insurance purchases in Table 2 directly influence the relationship between insurance purchases and the implementation of risk reduction measures, by adding interaction terms with the risk reduction variable. 18 In particular, the left column of Table 3 shows a model with interaction terms of behavioral characteristics. We examined a model that included interactions of the risk reduction measures variable with peace of mind, the threshold level of concern, the private value of preparing for floods and external locus of control. The interactions of risk reduction with peace of mind and the threshold level of concern are insignificant (not shown here) and model fit is better when these variables are included independently, as is done in the model in Table 3. The interaction term risk reduction measures × strong private value of preparing for floods is significant at the 5% level.
The positive sign implies that individuals with a high private value to prepare for flooding are more likely to take both risk reduction measures and purchase flood insurance. The negative marginal effect of the interaction risk reduction measures × external locus of control suggests that individuals with an external locus of control are less likely to take both flood insurance and ex ante risk reduction measures, but this effect is only weakly significant (p value = 0.08). The right column of Table 3 shows a model that adds interaction terms of the risk reduction measures variable with variables of important financial incentives for implementing both ex ante risk reduction measures and purchasing flood insurance, which are the flood damage the respondent suffered in the past and whether or not s/he received federal disaster assistance. The main results of the previous model remain similar. Additional insights of this model are that the interaction risk reduction measures × experienced flood damage is highly significant and positive. This suggests that individuals who experienced severe flood damage in the past are more likely to take both flood insurance and ex ante flood-proofing measures. This observation can reflect availability bias, which implies that individuals who experienced severe flood damage in the past find it easy to imagine this could occur again in the future and, therefore, prepare well for flooding. Moreover, a reason for this may be that insurance reimbursements for flood damage in the past were insufficient, which is why insured individuals also take flood damage mitigation measures. As an illustration, our respondents who received compensation for flood damage they experienced in the past indicate that the amount of compensation they received was on average only about 45% of the total damage they suffered to their building and home contents. This past high level of uncompensated flood damage by insurance can also reconcile our finding in Table 3 that past flood damage influences the decision to both purchase insurance and take risk reduction measures with the previous result that past flood damage did not significantly influence the purchases of flood insurance independently. Taking both insurance and risk reduction measures for the uncovered risk may be seen as an effective strategy to deal with high flood damage by our respondents. The negative significant marginal effect of the variable risk reduction measures × received disaster assistance implies that individuals who have received federal disaster assistance for flood damage in the past are less likely to both take flood insurance and ex ante risk reduction measures. 18 These variables are included only as interaction terms and not separately to prevent potential problems with multicollinearity. Correlation statistics of these separate variables with the interaction term are about 0.7. Conclusions While the economics literature often assumes that insurance purchase and adoption of preventive measures are substitutes, several empirical studies on the purchase of health-related insurance, have shown this is not the case. Explanations put forward for those findings are that individuals select into buying insurance based on risk preferences which also cause them to undertake other risk reduction measures. We offer an examination of this behavior in the context of low-probability/high-impact disaster risks, specifically, floods. 
We focused on floods since they have caused hundreds of billions of dollars in losses in the United States alone over the last decades and have also affected more people and caused more economic losses worldwide than any other natural disaster (CRED 2015). With the growing concentration of population and assets in flood-prone areas in a number of countries, this risk is likely to become an even more important issue in the years to come. The overall pattern of results shows that, long before individuals are faced with a threat of experiencing a loss from flooding, they are likely to both have flood insurance and take risk reduction measures in their home to limit future flood damage. This behavior implies preferred risk selection, thus supporting H2 rather than H1 (which suggests that the relationship between insurance and risk reduction is negative, i.e., that they are substitutes). With respect to emergency preparedness, insured individuals are less likely to take measures to limit damage than uninsured individuals, behavior which supports H1 and implies a moral hazard problem. These findings indicate that the nature and timing of the adoption of protection measures should be more closely considered than has been done so far in previous studies. Our statistical models reveal that purchasing flood insurance is negatively related to obtaining federal disaster assistance, an illustration of charity hazard. We have extended the usual economic model to test how a variety of behavioral mechanisms can explain flood insurance demand: that is the case for peace of mind, having an external locus of control, which implies that individuals expect to have little influence over the risks they face, and having a strong private value of preparing for flooding. The positive relationship we find for the latter variable is consistent with previous findings that private values are a stronger predictor than social norms in the context of recycling decisions (Viscusi et al. 2011; Huber et al. 2017). Individuals who purchase flood insurance and flood-proof their building exhibit private values of preparing for floods and experienced high amounts of flood damage in the past. Individuals who received federal disaster assistance in the past are less likely to both purchase insurance and take risk reduction measures. A perceived lack of control over life in general has a negative impact on flood insurance demand. The NFIP has been facing problems with a low uptake of coverage and increasing flood losses that have caused large deficits in the program; the NFIP is undergoing reforms 19 to address these issues. Several of our findings are relevant for these reforms. Our finding of a negative relationship between purchasing flood insurance and undertaking emergency preparedness implies that financial incentives or regulations are needed to encourage insured people to invest in these measures. For instance, it has been proposed that offering premium discounts to policyholders who reduce their risk can stimulate individuals to better prepare for flooding. Even though we find that risk reduction measures and insurance can be complements, there is a large group of people who do not take these measures, for whom such financial incentives to encourage risk reduction may also be relevant. Moreover, we find low income to significantly limit insurance purchase.
This suggests that addressing affordability should be given attention in efforts of the NFIP to increase the market penetration of flood insurance. The finding that federal disaster assistance crowds out private flood risk reduction and insurance demand suggests that individual flood preparedness can be improved by limiting federal disaster relief, or by offering alternative forms of relief, like loans instead of grants. Our findings that behavioral characteristics, such as locus of control and private values, influence flood insurance purchases as well as the joint decision to mitigate risk, opens up avenues for further research to study how these may be activated to stimulate flood preparedness such as using information campaigns.
v3-fos-license
2022-03-25T15:24:42.427Z
2022-03-22T00:00:00.000
247638520
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2079-6374/12/4/186/pdf", "pdf_hash": "7cfcf3a03af2602e2abae2ef154d4f100add8b4f", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43957", "s2fieldsofstudy": [ "Biology" ], "sha1": "3211128c022ba500de7c80c3dcdfaa351ac17643", "year": 2022 }
pes2o/s2orc
3D Printing in Solid Dosage Forms and Organ-on-Chip Applications 3D printing (3DP) can serve not only as an excellent platform for producing solid dosage forms tailored to individualized dosing regimens but can also be used as a tool for creating a suitable 3D model for drug screening, sensing, testing and organ-on-chip applications. Several new technologies have been developed to convert the conventional dosing regimen into personalized medicine for the past decade. With the approval of Spritam, the first pharmaceutical formulation produced by 3DP technology, this technology has caught the attention of pharmaceutical researchers worldwide. Consistent efforts are being made to improvise the process and mitigate other shortcomings such as restricted excipient choice, time constraints, industrial production constraints, and overall cost. The objective of this review is to provide an overview of the 3DP process, its types, types of material used, and the pros and cons of each technique in the application of not only creating solid dosage forms but also producing a 3D model for sensing, testing, and screening of the substances. The application of producing a model for the biosensing and screening of drugs besides the creation of the drug itself, offers a complete loop of application for 3DP in pharmaceutics. Introduction Environmental and genetic variations between individuals result in differences in treatment response to the same targeted agent [1]. Unique means have been developed, such as 3D printing (3DP), to personalize therapies to alleviate the challenges associated with standard medicine dosing. 3DP, described as additive manufacturing, is an emerging individualized oral drug delivery system that researchers are delving to produce pharmaceutical products with individualized doses [2]. Continuous research efforts in this technology resulted in FDA approval of an anti-epileptic drug, spritam (levetiracetam). The significant advantage of this technology is that it provides the room to adjust the dosage according to the unit on demand by altering the object's geometry or other physical dimensions under observation [3]. The prerequisite of designing any 3D printed dosage form is to develop the substance via Computer-Aided Design software (CAD). CAD provides a pathway to understanding the potential of 3DP for personalized drug therapy of active pharmaceutical ingredients [4]. The CAD programs are used to convert the 3DP file into a stereolithography file (STL file) that possesses the necessary information for the spatial geometry of the object to be printed. After the initialization, the STL file is cut into different segments, one of which is the slice file (SLI segment), which is then uploaded to the 3D printer for printing. The 3D printer acts as a guide for the motion to build the necessary parts. 3DP is one of the novel techniques that allows us to fabricate the oral dosage form with an exact formulation for organ-specific delivery of active pharmaceutical ingredients (APIs) to patients [5]. Although 3DP is in its very early age for being used in the field of personalized drug therapy, it can provide an unprecedented advantage in healthcare settings for designing customized pharmaceutical products and extemporaneous dosage 3DP Techniques There are three major types of 3DP techniques: laser-based printing systems, inkjetbased printing systems, and extrusion-based printing systems [13]. Laser-Based 3D Printing Systems Laser-based 3DP is primarily of two major types 1. 
Stereo-Lithography Apparatus (SLA). SLA was one of the first 3DP technologies, invented by Hull in 1986, in which radiation is applied to photo-sensitive polymers to initiate the process of photopolymerization [14]. Digitally controlled UV-light emitters are usually utilized to scan the surface of liquid polymers and plastic resins, which are photo-polymerizable. After polymerization, the 3D printer creates a layer of solid resin equivalent to the depth of the previous layer of polymer [15]. The excellent penetration potential of UV light causes the fusion of multiple layers of polymers. These cycles are repeated several times to achieve the intended design of a dosage form [13]. Figure 1 shows the design of the stereo-lithographic apparatus. Continuous liquid interface production (CLIP) is a modified version of SLA. Unlike SLA, the polymerization process in this technique is ongoing and continuous instead of following a layer-by-layer pattern, and it requires a pool of liquid photopolymer resin [16]. In CLIP, the printing speed and resolution are very high compared to traditional SLA [17], and it can create objects nearly 100 times faster than the other 3DP methods commercially available [16]. This method has a limited application in the pharmaceutical business due to the increased energy input from the laser [18,19].

Selective Laser Sintering (SLS). SLS is one of the emerging technologies in the field of 3D printing. It involves only a single step in which a laser selectively sinters powders into layers to achieve the intended 3D structures. This technique consists of using focused lasers on the surface of powders to draw specific patterns by stacking powder materials. As the layers are being sintered, the powder beds move downward, and the reservoir beds move upward to make new layers, and the new layers are then stacked up on the previous layers (Figure 2). The process is repeated several times to achieve the intended dosage designs. Different types of polymers have been employed to produce pharmaceuticals, such as thermoplastic materials like PA12 (Nylon) and polyether ether ketone (PEEK) [20]. The SLS process has been used to make orally disintegrating tablets such as ondansetron [21].
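Each of these techniques (and the inkjet- and extrusion-based systems described next) builds the dosage form from the layer data produced when the CAD/STL model is sliced, as outlined in the introduction. A minimal sketch of that slicing step is given below; it assumes the numpy-stl package and a hypothetical "tablet.stl" file, and is meant only to illustrate how a chosen layer height translates into the number of printed layers, not any particular printer's workflow.

```python
# Minimal illustration of the CAD -> STL -> slicing step; assumes the
# numpy-stl package ("pip install numpy-stl") and a hypothetical tablet.stl.
import numpy as np
from stl import mesh

tablet = mesh.Mesh.from_file("tablet.stl")

# Extent of the model along the build (z) axis.
z_min, z_max = float(tablet.z.min()), float(tablet.z.max())

layer_height = 0.1  # mm; an illustrative setting, not a recommended value
n_layers = int(np.ceil((z_max - z_min) / layer_height))

# Mid-plane z position of each layer, i.e., where a slicer would generate tool paths.
layer_z = z_min + layer_height * (np.arange(n_layers) + 0.5)

print(f"Object height: {z_max - z_min:.2f} mm -> {n_layers} layers of {layer_height} mm")
```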
Inkjet-Based 3D Printing Systems. Inkjet-based printing is one of the most-used 3DP techniques, which is further subdivided into two major types: Drop-on-Powder (DoP) deposition and Drop-on-Drop (DoD). DoP deposition uses either a powder bed covered with unbound powder material or inkjet printing to jet a liquid binder onto a powder bed to generate 3D structures (Figure 3). On the other hand, in DoD, the liquid droplets are the building materials placed on the surface of a substrate in a coherent pattern (Figure 4). The API can be dissolved in a liquid medium that acts as a binder or is formulated into powders that serve as the powder bed. The principle of interaction between the binding liquid and the powder bed is like the wet granulation technique [22]. The Drop-on-Powder technique is more suitable to produce pharmaceuticals [23].

Extrusion-Based 3D Printing Systems. Extrusion-based printing systems are also known as nozzle-based printing systems, an 'additive' technology used in modeling, prototyping, and production applications. The printing process is quite similar to that of laser-based and inkjet-based systems. It requires a plastic filament as the primary printing material and lays down plastic material layer-by-layer to form a 3D object utilizing a bottom-up construction approach. These systems are classified based on whether or not a heating system is needed to melt the material and can be of two major types: (i) pressure-assisted micro-syringes (PAM) and (ii) fused filament fabrication (FFF) (Figure 5) [13].

PAM. PAM is the technique in which the powder and binder are mixed to make a semisolid material that is extruded at a pressure of around 3-5 bars. The material is not immediately solidified; rather, it requires exposure to light or air to completely harden [24]. This is one of the reasons why there are risks of shrinking or deformation of the intended structure, or, if the material is not hardened enough, chances of the whole structure collapsing [18]. In PAM, certain solvents are utilized to produce the semi-solid material. After evaporation at room temperature, the solvents generate the intended final product. These solvents are often toxic and sometimes may create unnecessary damage to the API by altering its stability profile [15].

FFF. Many authors also recognize FFF as fused deposition modeling (FDM) (Table 1). This technique has gained widespread acknowledgment in both pharmaceutical and non-pharmaceutical fields. It is mainly employed in the pharmaceutical field to produce oral dosage forms by thin-layer deposition of the material [25]. A continuous filament of thermoplastic material is utilized as a solid filament and is fed to a moving, heated printer extruder head via a gear system. The material is converted into a soft substance in the print head before being extruded via a nozzle system. After extrusion, the extruded material solidifies virtually immediately. FFF/FDM 3DP technology, although expensive, is user-friendly and extremely simple [18].

Polymers Used in 3D Printing of Pharmaceutical Solid Dosage Form. 3DP utilizes different polymers and polymer combinations to produce novel solid dosage forms. Polyvinyl alcohol (PVA) and polyvinylpyrrolidone (PVP) are the most widely used polymers. Polymers used in 3DP are usually classified as non-biodegradable, such as PVA, polyethylene glycol (PEG), Eudragit L 100, etc.; biodegradable, such as poly L-lactic acid (PLLA), polycaprolactone (PCL), etc.; and amalgams. Polymer amalgams combine two or more polymers, such as Eudragit RL PO plus PLA [26]. A list of polymers and their combinations used in 3DP is provided in Table 2.

3DP Solid Dosage Forms. Researchers have been employing 3DP to produce pharmaceutical solid dosage forms and other products for several decades. However, currently, there is only one formulation of levetiracetam (Spritam) available on the market. The FDA approved Spritam in 2015. The drug is formulated as fast disintegrating tablets available in 4 different strengths, i.e., 250 mg, 500 mg, 750 mg, and 1000 mg. In 2020, Giomouxouzis et al.
published a study and used the FDM 3DP method in which they utilized diltiazem as the model drug for preparation of diltiazem caplets using PVA and cellulose acetate (CA) as the polymers for ink. Thermal analysis techniques (TGA, DSC) are used to assess the physicochemical properties of the prepared caplets and X-ray diffraction (XRD) and scanning electron microscopy for analyzing the morphological features. The XRD analysis shows amorphization of diltiazem inside the polymer [49]. In 2019, Gültekin et al. utilized FDM as the 3DP method and Eudragit EPO + POLYOXTM WSR N10 and Eudragit EPO + POLYOXTM N80 as polymers to prepare tablets and filaments of pramipexole dihydrochloride monohydrate. Scanning electron microscope (SEM), differential scanning calorimetry (DSC), and filament disintegration tests were used to assess the characteristics of prepared filaments and tablets [50]. In 2020, ondansetron and anti-emetic drugs were used as model drugs to prepare orodispersible printlets in which Kollidon VA-64 was used as the major polymer, and selective laser sintering (SLS) was the method of choice for 3DP [21]. Many drugs have been investigated to convert them into a novel solid dosage form by utilizing 3DP technology. Khaled et al. used a pressurized micro syringe technique (PAM) as the printing technology to develop a novel solid dosage form involving three different drugs-captopril, nifedipine, and glipizide. Each of the three medications has a unique release profile. Captopril and nifedipine are well-known antihypertensive drugs, while glipizide is typically used to treat type 2 diabetes [24]. Goole and Amighi in 2016 [15] further investigated the study mentioned above. Hydroxypropyl cellulose (HPMC) was used as a primary polymer for glipizide and nifedipine. Both drugs were dispersed in HPMC but in separate compartments. On the other hand, for captopril, the polymers used were PEG 6000 and cellulose acetate (CA). These polymers are used to create a porous system through which the drug is released via osmotic diffusion. The only drawback of this technique was an increased tablet size, which creates difficulties in swallowing [15]. Recently, Tabriz et al., 2021 developed a novel solid dosage form (tablets) for isoniazid and rifampicin. Both isoniazid and rifampicin are used to treat tuberculosis as the first line of therapy. HPMC is the polymer of choice for isoniazid, whereas hydroxymethyl propyl cellulose acetate succinate (HMPCAS) is the polymer of choice for rifampicin. Both drugs are printed in two different layers, and then those two layers are fused to make a single tablet [51]. Ibrahim M et al., 2019 used metformin HCl as a model drug to prepare metformin tablets using FDM as 3DP technology in which PVA filaments are used as a polymer of choice. To enhance the solubility of the drug, ethanol is utilized as a solvent to prepare the solution of metformin HCl (low water content, i.e., 10% v/v). Afterward, the PVA filaments are soaked in metformin HCl/ethanol solution for a specified time. Then, the solutions are aliquoted in several vials and continuously stirred for 1, 3, 6, and 10 days to achieve the maximum drug loading. SEM, X-ray powder diffraction (XRPD), Fourier Transform Infrared Spectroscopy (FTIR), DSC, and dissolution studies are used to physiochemically characterize the prepared metformin-PVA (ML-PVA) filaments [52]. Similarly, in another study by Saviano M et al., 2019, FDM 3DP is used to prepare ciprofloxacin HCl + PVA's drug-loaded filaments. 
The dried powder of both the drug and the polymer is mixed to prepare the physical blends. Dibutyl sebacate is added along with drug and polymer to increase drug adhesion on the pellets and facilitate the extrusion process. The drug-loaded filaments of diameter 2.85 ± 0.15 mm are used to feed the extruder, resulting in flat-faced cylindrical printlets [53]. Table 3 includes a list of drugs where 3DP technology is utilized to convert into novel solid dosage forms. 3D Printing for Organ-on-Chip Application and Drug Sensing After producing oral dosage forms, it is essential to test the drugs in a model, such as conventionally used animal models. However, the animal model often does not recapitulate the entire physiology of a human body. It is also challenging to study cell-cell interactions with animal models. There are also increased ethical concerns about using an animal model in society [66]. With the current advancement in microfabrication such as photolithography and 3D printing, scientists have engineered 3D models called organ-on-chip (OoC) systems, which give the capacity of not only mimicking the cellular/tissue level in its microenvironment but also of acting as a systematically analytical tool for disease progression. For a definition, OoC is the technology that aims to create artificial living organs, which are then used to mimic the physiological responses of the actual organs. This technology offers a realm for drug testing, sensing and accurately manipulating cells in an in-vivo-like manner. 3D Printing for Microfluidics Microfluidics refers to the technology of liquid handling in tiny channels with dimensions in the order of one to ten micrometers [67,68]. This field has emerged in the last 30 years due to its application in several diverse fields, namely medicine, biology, chemistry and physics. The microfabrication techniques used to create these microchannels mainly originated from microelectronic areas, i.e., silicon technology. At first, the microfluidic chips can be made from silicon wafers, then glass [69] and fused silica [67] using dry and wet etchings, and currently polymers [70,71] using polymer injection molding and hot embossing. 3D printing offers a new platform to create microstructures and channels with dimensions in the order of 10 microns. It is an innovative add-on fabrication technique that grants the engineering of lab-on-a-chip devices that are occasionally difficult to pattern using traditional approaches such as micromachining or molding [72]. 3D Printing for Tissues and Organs 3D bioprinting is a fabrication process in which tissues are printed three-dimensionally. The ink used for bioprinting is usually called bio-ink. The bio-ink comprises living cells when printing tissues or organs, and in the case of printing scaffolds, it contains biomaterials such as agarose, alginate, collagen, cellulose and so on [73]. 3D printing has been a promising candidate for the fabrication of an OoC platform due to its precise control of layer-by-layer assembly of biomaterials, such as an extracellular matrix (ECM), cells, etc. [74] While it is still in the early stages, 3D cell printing has displayed promising utilities in screening, testing drugs by modeling tissues and diseases [75], including skins [76], cancers [77], liver [78], lung [79], etc. 
For example, Nguyen and co-authors reported [78] bio-printed 3D human liver tissues composed of primary human parenchymal (hepatocyte) and non-parenchymal (endothelial and hepatic stellate) cell populations, which were then assessed for their possible use as substantial, multi-cellular models of human liver tissue. The authors showed the primary histologic, biochemical, and metabolic properties of the 3D liver tissues. Furthermore, to investigate the ability of the tissues to be used as a model of drug-induced liver injury, the authors tested the model's response to the known hepatotoxicant trovafloxacin in comparison to its harmless counterpart levofloxacin. In general, the results emphasized that the 3D liver tissues formed by the 3D bioprinter can be a beneficial add-on to pre-clinical toxicity studies. In another example, Kang and co-authors recently succeeded in producing an artificial lung model using 3D printing [79]. They claimed that this 3D alveolar barrier model can be used as a replacement for conventional models for pathological and pharmaceutical applications.

3D Printing for a Complete Organ-on-Chip. 3D printing usually serves as a tool to print either microfluidic devices or tissues, as shown in the previous sections. An organ-on-chip device should have both microfluidic channels, which function as a microenvironment, and engineered tissues [80] to mimic the physiology of human organs. It has recently become possible to print both channels and cells using 3D printers. For example, in a single-step fabrication, all components of a 3D tumor model, such as the microfluidic channels, the body of the chip, and the 3D tumor tissues, are produced straight from the inputs of a user. This can be manifested by using multiple printing heads, allowing both 3D printing and 3D bio-printing [81]. Figure 6 presents an illustration of 3D bio-printing for organ-on-chip applications, e.g., drug screening/testing. 3D printing technology has hence not only offered the tools for producing oral dosage forms, but has also been shown to be capable of providing platforms such as OoC for drug screening, sensing, and testing, making it a perfect candidate for pharmaceutical applications.

Figure 6. An illustration of 3D bioprinting, which provides 3D cell culture OoC devices for drug screening/testing. Adapted from [74].

Pharmaceutical Application. Drug development is time-consuming and expensive via clinical trials [82]. The essential goal of the OoC field is to boost and enable assessment in drug discovery and development [83]. This ultimate desire has fueled the establishment of many start-ups and spin-off companies that have conveyed the research realm, mainly in academia, towards efficient and reliable commercial availability on a product or service basis.
One of the examples in the pharmaceutical productions of the 3D bioprint OoC is the use in testing the toxicity of pre-clinical drug candidates as doing so by Organovo Inc., in San Diego, CA, USA. Organovo Inc. has put up a robust business as a contract research organization that evaluates experimental drug compounds on the 3D-printed liver. Top global pharmaceutical companies, such as Merck, Bristol-Myers Squibb, and Roche are now using the service of Organovo Inc. Limitations of 3DP Technology 3DP technology is no doubt a revolutionary technology in the field of both pharmaceutical and non-pharmaceutical industries. With each passing year, breakthroughs are being made, such as techniques being improved to solve flaws and new materials being tested to overcome material limitations for the cost-effective production of pharmaceuticals using any 3DP methods. At present, 3DP technology is experimental. Still, it has one significant advantage over conventional processes, i.e., with this technology, it is possible to bring the production of personalized medicines closer to patients in local small-scale pharmacies and hospital settings. However, every technology has its own merits and demerits. One of the significant demerits of 3DP is that it takes a lot of time to produce only a modest amount of product. For instance, the tableting process produces~15,000 tablets per minute using a single press machine with conventional techniques. On the other hand, 3D printing of tablets is time-consuming. Usually, the production time for a single tablet varies from 2 min to 2 h [15]. Due to this limitation, there are very few industrial applications of this technology since producing a large number of products is time-consuming. Moreover, the process is energy-dependent, i.e., the time taken for each batch production is directly related to energy consumption. Due to these challenges, industrial applications are mini-mal at present. Consistent efforts are being made to overcome the barriers that hinder the applications of 3DP in the pharmaceutical industry. Conclusions For several years now, 3DP technology has gained significant attention from researchers. Researchers are now starting to understand the potential of this technology for the production of individualized novel solid dosage forms. Spritam (Levetiracetam), one of the 3D printed solid dosage forms, was authorized by the FDA in 2015. Out of the many 3D printed methods, FDM has proven to be more significant to produce pharmaceuticals and has been vastly researched. This technology can revolutionize the pharmaceutical industry by creating a doorway for the possibility of achieving personalized medicines. As this technology has many advantages, it also possesses significant demerits that cannot be overlooked. Restricted material usage, slow production process, and unreacted material in the final product are some of the significant drawbacks of this technology. In the future, consistent efforts are needed to alleviate these shortcomings. Overall, the potential of this technology in the healthcare sector is undeniable as it can help us achieve the seemingly difficult task of personalized medicine. It will be interesting to see how much time and work it takes to make it available to all healthcare settings and to the pharmaceutical sector. Conflicts of Interest: The authors declare no conflict of interest. The funding agency had no role in the writing of the manuscript.
v3-fos-license
2022-10-06T15:14:54.132Z
2022-09-29T00:00:00.000
252730288
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://reports-vnmedical.com.ua/index.php/journal/article/download/1036/991", "pdf_hash": "68cf254850aeb3d521bf528f64694d5f4d2daf99", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43958", "s2fieldsofstudy": [ "Medicine" ], "sha1": "0738a4fd85e5b6d0c28f02c12432c4db8b5ab22f", "year": 2022 }
pes2o/s2orc
CHARACTERISTICS OF THE POST-INFARCTION PERIOD IN OBESE PATIENTS AFTER PERCUTANEOUS CORONARY INTERVENTION

Annotation. The study of the metabolic profile in the long-term period after myocardial infarction with comorbidity is relevant. The aim of the work was to examine the metabolic profile and echocardiographic parameters in patients with ST-elevation myocardial infarction (STEMI) and obesity following percutaneous coronary intervention (PCI) after a 1-year follow-up. A total of 60 patients with STEMI and obesity were examined. The first subgroup consisted of 20 patients with medicamentous therapy, and the second subgroup of 38 patients with PCI. Adropin, irisin, fatty acid-binding protein 4 (FABP4), and C1q/tumor necrosis factor-related protein-3 (CTRP3) were measured by enzyme-linked immunosorbent assay. The statistical processing of the study results was carried out using the software package IBM SPSS Statistics 27.0. The following parameters were increased in patients who received combined medicamentous and PCI therapy before and after the treatment (p<0.05): end-diastolic size (EDS) (by 16.83% and 10.89%, respectively), end-diastolic volume (EDV) (by 45.95% and 18.92%, respectively), end-systolic volume (ESV) (by 40.0% and 27.69%, respectively), stroke volume (SV) (by 33.85% and 18.46%, respectively), left ventricular myocardial mass index (LVMMI) (by 18.93% and 10.06%, respectively), adropin (by 27.13% and 47.21%, respectively), irisin (by 2.07 times and 2.75 times, respectively) and CTRP3 (by 15.98% and 31.96%, respectively), while the following parameters were decreased: systolic blood pressure (by 16.0% and 16.67%, respectively), diastolic blood pressure (by 15.56% and 14.44%, respectively), insulin (by 40.38% and 48.59%, respectively), glucose (by 10.97% and 15.74%, respectively), and atherogenic index (by 6.03% and 12.33%, respectively). Thus, patients with post-infarction cardiosclerosis and obesity have been found to have increased echocardiographic parameters and imbalanced energy and adipokine metabolism.

Introduction

Obesity is a global health problem worldwide and a risk factor for cardiovascular disease (CVD). It is known that obesity plays an important role in the development of atherosclerosis and coronary heart disease (CHD); it is also involved in the structural and functional alterations of the heart and the progression of heart failure (HF), and it is responsible for the risk of atrial fibrillation and sudden cardiac death [2]. A study has demonstrated that stratification by a change in body mass index (BMI) after percutaneous coronary intervention (PCI) can help predict adverse events in patients with CHD [16]. Scientists deal with the issue of the obesity paradox, which indicates reduced mortality rates among high-BMI patients following PCI [10]. PCI has the same effect on STEMI patients with or without obesity, and the low risk of side effects in obese patients cannot be explained by a lower severity of myocardial infarction [6,11]. Meanwhile, metabolic changes in obese patients with STEMI after PCI remain insufficiently studied to this day. The purpose of this study was to examine the metabolic profile and echocardiographic parameters in obese STEMI patients following PCI after a 1-year follow-up.

Materials and methods

In total, 60 patients with STEMI and obesity were enrolled in the study, which continued from September 1, 2018 to December 31, 2021.
The first subgroup consisted of 20 patients who received standard drug therapy alone, and the second subgroup was composed of 38 patients after PCI. All the patients were diagnosed with STEMI; diagnosis and treatment were carried out according to the European recommendations of cardiologists [5]. The study design was approved by the Ethics Commission of Kharkiv National Medical University (Protocol No. 2 dated April 2, 2018). All the patients included in the study were notified and signed a voluntary informed consent to participate in the study. Myocardial revascularization was not performed owing to anatomical difficulties in performing coronary artery stenting, hospitalization of patients in a time window incompatible with reperfusion (more than 24 hours after the onset of myocardial infarction) or without pain syndrome manifestations upon admission, and in patients who refused stent implantation. Serum levels of all biochemical indicators were measured prior to the treatment. Adropin, irisin, fatty acid-binding protein 4 (FABP4) and C1q/TNF-related protein 3 (CTRP3) were detected by the enzyme-linked immunosorbent assay using commercially available reagents "Human adropin", "Human Fibronectin type III domain-containing protein 5", "Human FABP4" (Elabscience, Houston, USA) and "Human CTRP3" (Aviscera Bioscience Inc, Santa Clara, USA), respectively. Serum total cholesterol (TC) and high-density lipoprotein (HDL) cholesterol were quantified by the peroxidase enzymatic method with the "Cholesterol liquicolor" test (Human GmbH, Germany) and the "HDL Cholesterol liquicolor" test (Human GmbH, Germany), respectively. Serum triglyceride (TG) levels were determined by the enzymatic colorimetric method using a "Triglycerides liquicolor" reagent (Human GmbH, Germany). The atherogenic index (AI) was calculated in accordance with the standard formula proposed by A. M. Klimov. The levels of very low-density lipoprotein (VLDL) and low-density lipoprotein (LDL) cholesterol were estimated based on the Friedewald equation. Fasting blood glucose levels were tested by the glucose oxidase method with a commercial test system "Human Glucose" (LLC NPP "Filisit-Diagnostics", Ukraine). Doppler echocardiographic examination was performed according to the conventional technique on a Radmir ULTIMA Pro30 ultrasound scanner. End-diastolic size (EDS), end-systolic size (ESS), end-diastolic volume (EDV), end-systolic volume (ESV), left ventricular ejection fraction (LV EF), stroke volume (SV), interventricular septal thickness (IVST), aorta diameter, left atrial size (LA), and posterior wall thickness of the left ventricle (LV PWT) in diastole were evaluated. LV myocardial mass (LVMM) and the LVMM index (LVMMI = LVMM / body surface area (m2)) were calculated. LV hypertrophy (LVH) was considered present at an LVMMI value of more than 110 g/m2 for women and more than 125 g/m2 for men. The study also entailed calculation of the LV relative wall thickness (RWT) (LV RWT = (LV PWT + IVST) / LV EDS) as well as determination of the LV remodeling type. LV RWT ≥ 0.45 with a normal LVMMI was classified as concentric LV remodeling. Obesity was diagnosed by BMI following the established formula: weight (kg) / height (m2).
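The derived indices above all follow simple closed-form expressions. A minimal Python sketch of this bookkeeping is given below; the Friedewald constant (2.2 for mmol/L), the (TC − HDL)/HDL form of the Klimov atherogenic index and the Du Bois body-surface-area formula are standard-literature assumptions rather than details quoted from this paper, and the numbers in the usage lines are purely illustrative.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) / height (m)^2."""
    return weight_kg / height_m ** 2

def atherogenic_index(tc: float, hdl: float) -> float:
    """Klimov atherogenic index: (TC - HDL) / HDL, lipids in mmol/L (assumed form)."""
    return (tc - hdl) / hdl

def ldl_friedewald(tc: float, hdl: float, tg: float) -> float:
    """Friedewald estimate (mmol/L): LDL = TC - HDL - TG/2.2 (not valid for TG > 4.5)."""
    return tc - hdl - tg / 2.2

def bsa_dubois(weight_kg: float, height_cm: float) -> float:
    """Du Bois body surface area in m^2 (assumed BSA formula)."""
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

def lvmmi(lvmm_g: float, bsa_m2: float) -> float:
    """LV myocardial mass index: LVMM / body surface area."""
    return lvmm_g / bsa_m2

def lv_rwt(pwt_cm: float, ivst_cm: float, eds_cm: float) -> float:
    """LV relative wall thickness: (LV PWT + IVST) / LV EDS."""
    return (pwt_cm + ivst_cm) / eds_cm

if __name__ == "__main__":
    print(bmi(95.0, 1.70))                  # >30 kg/m2 indicates obesity
    print(atherogenic_index(5.8, 1.0))      # AI
    print(ldl_friedewald(5.8, 1.0, 2.2))    # estimated LDL cholesterol
    print(lvmmi(230.0, bsa_dubois(95.0, 170.0)))
    print(lv_rwt(1.1, 1.2, 5.0))            # >= 0.45 with normal LVMMI: concentric remodeling
```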
Statistical processing of the data obtained was carried out using the computer program IBM SPSS version 27.0 (2020) (IBM Inc., USA, license No. L-CZAA-BKKMKE). The analysis of the examined parameters regarding the normality of the distribution was carried out according to the Shapiro-Wilk test. Quantitative variables were used in the statistical analysis. Quantitative data were presented as percentage, median, and interquartile range (25th and 75th percentiles). The non-parametric Mann-Whitney rank test was used to compare quantitative indicators between two independent groups, and the Wilcoxon T test was used for two dependent groups. The limit value of significance for testing statistical hypotheses in the study was set at a level of p<0.05. This paper is a part of the scientific-research works "Ischemic heart disease in polymorbidity: pathogenetic aspects of development, course, diagnostics and improvement of treatment", No. 0118U000929, valid term 2017–2019, and "Prediction of the course, improvement of diagnosis and treatment of ischemic heart disease and arterial hypertension in patients with metabolic disorders", No. 0120U102025, valid term 2020–2022.

Table 1 shows the dynamics of anthropometric indicators and structural and functional parameters of the LV in patients with obesity after myocardial infarction before treatment and 1 year after myocardial revascularization. The following indicators demonstrated significant differences before treatment and 1 year after standard medicamentous therapy and PCI: systolic blood pressure (SBP) and diastolic blood pressure (DBP), EDS, EDV, ESV, SV, and LVMMI (p<0.05). Table 2 shows the dynamics of indicators related to carbohydrate, energy, lipid, and adipokine metabolism in obese patients after myocardial infarction before treatment and a year after myocardial revascularization. Significant differences were found for such indicators as glucose, insulin, AI, adropin, irisin, FABP4, and CTRP3 (p<0.05) before treatment and after the one-year standard medicamentous therapy and PCI.
Table 2. Metabolic profile of patients with obesity and myocardial infarction after the 1-year follow-up.
Comparing the studied indicators between the subgroups after medicamentous treatment and PCI, a significant decrease in EDV by 18.52%, SV by 11.49%, LA by 9.53%, glucose levels by 5.36%, and insulin levels by 13.76% was detected, along with an increase in the serum levels of adropin by 15.79%, irisin by 33.13%, and CTRP3 by 13.78%, respectively (p<0.05).

Discussion

The study of the clinical course characteristics in patients after myocardial infarction who underwent an invasive intervention should be detailed, because the restoration of coronary blood circulation does not exclude the further progression of atherosclerotic lesions and the recurrence of major cardiovascular events. Adropin is principally involved in cardiac energy metabolism and may be a presumed candidate for the treatment of heart diseases associated with insulin resistance [1]. Serum adropin levels were significantly lower in overweight/obese individuals as compared to those in normal-weight ones, suggesting a probable role of this hormone in the pathogenesis of obesity [14]. Serum adropin levels were decreased to a greater extent in patients with STEMI than in patients without CHD. In addition, the levels of serum adropin decreased with worsening severity of coronary artery damage, indicating the severity of CHD [9]. Irisin regulates mitochondrial energy, glucose metabolism and fatty acid oxidation. Cardiomyocytes produce irisin, which influences various functions of the cardiovascular system.
At different stages of heart failure, the impact of irisin varies widely with regard to mitochondrial dysfunction, oxidative stress, metabolic imbalance, energy expenditure, and the prognosis of heart failure [4]. Patients with CHD and a high degree of coronary artery lesion severity had lower concentrations of serum irisin as compared to patients with less severe lesions of the coronary arteries [3]. Serum FABP4 concentrations have been shown to be associated with prognosis in stable angina patients following PCI, suggesting that serum FABP4 levels may be useful indicators for secondary prevention assessment [15]. According to M. Obokata et al. (2018), FABP4 attained a maximum level at hospital admission or immediately after PCI in patients with AMI [8]. Kyung Mook Choi et al. (2014) have reported a correlation between lower serum CTRP3 levels and both higher weight and waist circumference [7]. According to M. Sawicka et al. (2016) [12], serum levels of CTRP3 were decreased in patients with AMI. M. Shanaki et al. (2020) have noted attenuation of post-infarction cardiac fibrosis and inhibition of myofibroblast differentiation through AMP-activated protein kinase and Akt signaling pathways mediated by CTRP3 [13]. We have found increased values of EDS, EDV, ESV, SV, adropin, irisin, and CTRP3 and decreased values of SBP and DBP, parameters of carbohydrate metabolism, the adipokine FABP4, and AI in patients who received both medicamentous therapy and PCI. Following PCI, the patients had a slow-moving tendency of echocardiographic changes and a significant decrease in HR and TG levels as compared to those before treatment. In the study process, changes in energy and adipokine metabolism have been determined.

Conclusions and prospects for further development

1. An imbalance of energy and adipokine metabolism indicators has been revealed, as evidenced by the low serum levels of adropin, irisin, and CTRP3 and increased concentrations of FABP4 in obese patients after myocardial infarction.
2. A slowing of structural and functional changes in the LV myocardium has been found in patients following PCI compared to patients receiving medicamentous therapy alone.
3. Higher values of energy metabolism markers have been demonstrated by patients after PCI. The profile of adipokine homeostasis markers has also been found to be improved, namely increased serum levels of CTRP3 and decreased serum levels of FABP4.

This study had some limitations. First, the sample size was relatively small (n = 58); therefore, it should be larger in future studies to confirm the conclusions. Second, since only patients with STEMI and obesity were included in the study, it would be interesting to examine patients with non-ST-segment elevation myocardial infarction and obesity following PCI.
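As a worked illustration of the comparisons used in this study (Shapiro-Wilk normality check, Mann-Whitney U test for two independent subgroups, Wilcoxon signed-rank test for paired before/after values, medians with interquartile ranges), a minimal SciPy sketch with simulated data is shown below; it stands in for the IBM SPSS workflow and uses no actual patient values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
medicamentous = rng.normal(55, 8, 20)   # hypothetical EDV-like values, drug therapy subgroup (n=20)
pci = rng.normal(50, 8, 38)             # hypothetical values, PCI subgroup (n=38)
before = rng.normal(60, 8, 38)          # paired follow-up values, before treatment
after = rng.normal(52, 8, 38)           # paired follow-up values, after treatment

print(stats.shapiro(pci))                       # normality of the distribution
print(stats.mannwhitneyu(medicamentous, pci))   # two independent groups
print(stats.wilcoxon(before, after))            # two dependent (paired) groups

# Median and interquartile range, as reported in the paper
print(np.median(pci), np.percentile(pci, [25, 75]))
```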
v3-fos-license
2018-12-05T07:29:00.424Z
2015-01-10T00:00:00.000
55620911
{ "extfieldsofstudy": [ "Environmental Science" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://lirias.kuleuven.be/bitstream/123456789/489113/1/Effectiveness%20of%20selected%20soil%20conservation.pdf", "pdf_hash": "6759351c13c047c1e4ed8ca6efe28d9409e17d35", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43959", "s2fieldsofstudy": [ "Environmental Science", "Agricultural and Food Sciences" ], "sha1": "e5d1df204963a52eeba634abb6351ccffa38c241", "year": 2015 }
pes2o/s2orc
Effectiveness of Selected Soil Conservation Practices on Soil Erosion Control and Crop Yields in the Usambara Mountains, Tanzania

Indigenous soil conservation measures such as miraba have been widely used in the Usambara Mountains for controlling soil erosion, but with little success. On-farm runoff experiments were set up from 2011–2014 on Acrisols in Majulai and Migambo villages with contrasting agro-ecological conditions in the Usambara Mountains, Tanzania. The aim was to investigate the effectiveness of miraba and miraba with various mulching materials in reducing runoff, soil and nutrient losses and improving productivity of maize (Zea mays) and beans (Phaseolus vulgaris). Results show that mean annual runoff coefficients (mm mm-1) ranged from 0.72 for cropland with no soil conservation measure (control) to 0.15 for cropland with miraba and Tithonia (Tithonia diversifolia) mulching in Majulai village, and respectively from 0.68 to 0.13 in Migambo village. Soil loss was significantly (P = .05) higher under control than under miraba with either Tughutu (Vernonia myriantha) or Tithonia mulching, e.g. 184 vs. 20 in Majulai and 124 vs. 8 Mg ha-1 year-1 in Migambo village in 2012. The P factors were significantly (P = .05) higher under miraba sole than under miraba with mulching in Majulai village (0.18 vs. 0.11) and in Migambo village (0.10 vs. 0.05). The annual nutrient losses in kg ha-1 yr-1 were significantly (P = .05) higher under control than under miraba with mulching: 367 vs. 37 total N, 0.8 vs. 0.1 P and 14 vs. 4 K for Majulai village; 474 vs. 26 total N, 0.7 vs. 0.1 P and 20 vs. 1.2 K for Migambo village in 2012. Maize and bean yields were significantly (P = .05) higher under miraba with Tughutu mulching than under control (e.g. 2.0 vs. 0.7 Mg ha-1 for maize in Majulai in 2012). Thus miraba with Tughutu mulching is more effective in improving crop yields than miraba with Tithonia and miraba sole.

INTRODUCTION

Soil erosion has been reported as a serious problem facing agricultural production all over the world [1][2][3][4][5]. Soil erosion by water is a major factor causing land degradation in the Usambara highlands of Tanzania and severely affects soil functions, resulting in low crop productivity [6,7]. Soil erosion by water is defined as the detachment and displacement of soil particles by water, resulting in the development of rills and gullies [8]. To overcome the problem of soil deterioration, the Usambara farmers have developed local soil and water conservation (SWC) measures such as miraba (rectangular grass-bound strips that do not necessarily follow contour lines [9]), micro-ridges and stone bunds as an integral part of their farming systems [7,9,10]. Most of the introduced measures have often been rejected or minimally adopted because such measures, e.g. bench and Fanya Juu terraces (hillside ditches made by throwing excavated soil on the upslope side of the ditch, built along the contour lines at appropriate intervals depending on slope), were expensive in terms of labour and money, while their promoters also paid little attention to indigenous practices.
Miraba are widely practised by farmers in the Usambara Mountains. Miraba as a SWC measure are traditionally characterized by a wide spacing of grass strips across the slope, and usually the spacing depends on the size of the farm plots. For decades these SWC technologies were never a subject of scientific writing that would allow improvements to be made to effectively address problems of soil degradation and low crop productivity [10]. On the other hand, farmers have not been able to adjust these indigenous SWC techniques to rapidly changing farming systems and increasing intensity of land use [11,10]. On steep slopes like those in the Usambara Mountains, bench terraces are highly recommended as the most effective soil and water conservation measure in cropland [12,7,10,13]. However, due to low adoption rates in the Usambara Mountains, the solution would be to improve and use indigenous SWC technologies such as miraba for sustained agricultural productivity in the area. In the Usambara Mountains miraba are established by using either Napier or Guatemala grass. Grass strips forming miraba serve as barriers which capture soil particles that have been detached and transported with runoff from the cultivated land. Napier grass is mostly preferred because it is also used as forage for stall feeding, while Guatemala grass is appreciated for its drought resistance and to some extent is also used as forage for stall feeding. Studies on the effectiveness of some SWC technologies such as bench terraces, Fanya Juu terraces, grass strips [7,14] and miraba [15,16] on soil erosion control and agricultural productivity have recently been carried out in the Western Usambara Mountains. However, the contribution of indigenous SWC technologies, including the miraba mostly practised in the study area, has not fully been investigated for sustained agricultural productivity [10,15]. Moreover, it has been reported that establishment of miraba is far cheaper than the construction of bench terraces. Therefore efforts towards improving this technique are warranted [14,15,16]. Several erosion models are available to predict soil loss and to assess soil erosion risk [4,9]. However, RUSLE, the Revised Universal Soil Loss Equation [17], is widely used for estimating potential soil erosion by water, especially at regional and national level, because of its relative simplicity and robustness [4,18,16]. Likewise, this study applied the RUSLE model to investigate the effectiveness of miraba and miraba with Tithonia (Tithonia diversifolia) and Tughutu (Vernonia myriantha) mulching materials in reducing runoff, soil and nutrient losses, using maize and beans as test crops. Specifically, the study intended to: (i) quantify soil and nutrient losses under selected soil conservation practices, (ii) determine rainfall-runoff responses under selected soil conservation practices, (iii) select the best soil conservation practice using the Revised Universal Soil Loss Equation (RUSLE) and (iv) determine the influence of selected soil conservation practices on crop yield.

Description of the Study Sites

The study was conducted in Migambo and Majulai villages, which represent different agro-ecological zones in the Western Usambara Mountains, Lushoto District, Tanzania (Fig. 1), located between longitudes 38º15' to 38º24' E and latitudes 4º34' to 4º48' S. The area is highly dissected with steep slopes ranging from 20% to over 50% and altitudes of about 1402 m a.s.l. in Majulai and 1682 m a.s.l. in Migambo village.
Migambo is humid and cold, with a mean annual air temperature of 12−17ºC, and annual precipitation is 800–2300 mm [16]. Majulai is dry and warm, with a mean annual air temperature between 16 and 21ºC and annual precipitation of 500–1700 mm [13,16]. The monthly reference evapotranspiration (ETo) as estimated by the local climate estimator software (New_LocClim) [19] ranges from 100 mm to 145 mm. Majulai and Migambo villages support a large population density of more than 120.4 persons/km2 [20]. According to the World Reference Base (WRB) [21], the soil type in the Majulai site classifies as Chromic Acrisol (Humic, Profondic, Clayic, Cutanic, Colluvic), whereas in the Migambo site the soil is Haplic Acrisol (Humic, Profondic, Clayic, Colluvic). The main land uses include cultivation on slopes and in valleys, settlements on depressions, ridge summits and slopes, and forest reserves on ridge summits and upper slopes. Vegetables such as carrots, onions, tomatoes, cabbages, and peas are grown as sole crops in valleys under rain-fed or traditional irrigation. Beans are mainly grown during the long rainy season, while maize is grown during the short rains. Round potatoes and fruits, namely peaches, plums, pears, avocado and banana, are grown on ridge slopes under rain-fed mixed farming. Round potatoes are also grown in valleys as a sole crop or intercropped with maize.

Miraba Establishment in Runoff Experiments

Miraba were established using Napier grass (Pennisetum purpureum) barriers in runoff experiments in April 2011, about nine months before data collection started. Napier grass barriers forming miraba were established by planting tillers in a single row at 10 cm spacing perpendicular to the slope and were maintained as strips about 50 cm wide. In the current study Napier grass barriers across the slope were spaced 5 m apart to mimic the recommended maximum effective width of hand-made bench terraces [12]. Along the slope the Napier grass barriers were set 3 m apart. It has been documented that soil conservation measures such as Fanya Juu and stone bunds tend to progressively form bench terraces when they are at narrow spacing [12,22].

Experimental Design

Closed runoff plots of 22 m x 3 m in a randomized complete block design (RCBD) were set along lower ridge slopes at 50% slope in Majulai and 45% slope in Migambo village, respectively. The plots were enclosed by miraba and bounded by pieces of wood that protruded 15 cm above the soil surface to prevent inflow and outflow across the plot borders. The pieces of wood were connected to three collector drums (each 220 litres) with hinged lids. Maize (Zea mays) and beans (Phaseolus vulgaris) were planted in rotation as test crops in the 2012 and 2013/14 rainy seasons. Maize was planted during the short rains (vuli), while beans were planted during the long rains (masika). The treatments included runoff plots under bare soil, cropland with no SWC measure (control), cropland with miraba sole, and cropland with miraba plus Tithonia or Tughutu mulching.

Rainfall Data Collection

Daily rainfall was measured from 1st January 2012 to 16th February 2014 using standard rain gauges and tipping buckets with a CR10 data logger (Campbell Scientific, Logan UT) installed at the experimental sites in Migambo and Majulai villages.
Runoff, Sediment and Nutrient Loss Determination

Runoff and sediment were collected daily from 1st January 2012 to 16th February 2014. Beans were grown during the long rains, weeds were left to grow in the field during the off season, and maize was grown in the short rains. Runoff volume was estimated by measuring the depth of water in cm in the collecting drums, which was then converted to a volume of water in litres. Sediment load was estimated by sampling water in the collecting drums after vigorously stirring the suspension. Soil losses from heavy sediments and from suspended materials from each runoff event were added to compute the total soil loss for the events. These losses were finally added to compute the total soil loss per annum. The soil samples for nutrient loss determination were collected by decanting the suspended sediment in buckets. In each runoff experimental site a soil profile was excavated and soil samples were collected from each horizon for pedological characterization. Undisturbed core soil samples were taken at 0–5 cm, 45–50 cm and 95–100 cm soil depth with Kopecky core rings (100 cm3) for bulk density, gravimetric moisture and available moisture determinations. The soil was classified to Tier-2 according to the World Reference Base for Soil Resources (WRB) [21].

Crop Yields

Maize (Zea mays) variety PAN 67 and beans (Phaseolus vulgaris) variety Kilombero were planted in the runoff plots during the 2012 and 2013/14 rain seasons, with maize in the short rains (vuli) and beans in the long rains (masika), at the recommended spacing of 30 cm within rows and 75 cm between rows for maize and 25 cm within rows and 50 cm between rows for beans. Beans were always planted one month before the maize was harvested in Migambo and two weeks before harvesting maize in Majulai village. Farmyard manure with 1.7% N, 0.4% P and 1.9% K was basal and spot applied at the rate of 3.6 Mg ha-1 air-dry weight; diammonium phosphate (DAP, 18:46:0 NPK ratio) and urea (46% N) were applied at the rate of 80 kg ha-1, but urea was not applied for beans. At maturity maize and bean grains were harvested and dried to about 13% moisture content.

Soil Analysis

Soil analysis was done following Moberg's [28] Laboratory Manual. Organic carbon (OC) was measured using the dichromate oxidation method, total nitrogen (TN) by the Kjeldahl method, available phosphorus by the Bray-I method, exchangeable bases (Ca2+ and Mg2+) by atomic absorption spectrophotometer, exchangeable Na+ and K+ by flame photometer, and pH (water) with a normal laboratory pH meter.

Determination of the RUSLE Factors

The RUSLE equation expresses the average annual soil loss (Mg ha-1 year-1) caused by sheet and rill erosion as

A = R × K × LS × C × P ... (1)

where A is the long-term average soil loss (Mg ha-1 year-1), R is the rainfall erosivity factor (MJ mm ha-1 h-1 year-1), K is the soil erodibility factor (Mg h MJ-1 mm-1), LS is a dimensionless factor combining slope steepness (S) and slope length (L), and C and P are dimensionless factors accounting respectively for crop cover and management and for conservation practices. Rainfall erosivity was estimated from rainfall records using the equation developed by Vrieling et al., where p is the average monthly rainfall (mm) and P is the average annual rainfall (mm). In the absence of any cover crop or soil protection measure, as for the bare plot, the C and P factors are equal to 1.
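A minimal sketch of the runoff and soil-loss bookkeeping described in the Runoff, Sediment and Nutrient Loss Determination subsection above is given below. The drum radius and all numeric readings are assumptions for illustration only; the plot dimensions of 22 m x 3 m come from the Experimental Design subsection.

```python
import math

PLOT_AREA_M2 = 22.0 * 3.0          # closed runoff plot, 22 m x 3 m
DRUM_RADIUS_M = 0.286              # assumed radius of a 220-litre collector drum

def drum_volume_l(depth_cm: float) -> float:
    """Convert the measured water depth in a drum (cm) to a volume (litres)."""
    return math.pi * DRUM_RADIUS_M ** 2 * (depth_cm / 100.0) * 1000.0

def runoff_depth_mm(total_volume_l: float) -> float:
    """Runoff expressed as a water depth (mm) over the plot: 1 L per m^2 equals 1 mm."""
    return total_volume_l / PLOT_AREA_M2

def runoff_coefficient(runoff_mm: float, rainfall_mm: float) -> float:
    """Event or annual runoff coefficient (mm mm-1)."""
    return runoff_mm / rainfall_mm

def soil_loss_mg_per_ha(sediment_kg: float) -> float:
    """Plot-level sediment mass (kg) scaled to Mg ha-1."""
    return sediment_kg / 1000.0 * (10_000.0 / PLOT_AREA_M2)

# Illustrative daily readings from the three drums of one plot (cm of water)
volume = sum(drum_volume_l(d) for d in (32.0, 18.0, 7.5))
depth = runoff_depth_mm(volume)
print(depth, runoff_coefficient(depth, 45.0))   # runoff depth and coefficient for a 45-mm event
print(soil_loss_mg_per_ha(12.4))                # soil loss for 12.4 kg of collected sediment
```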
The K factor was thus calculated from an expression in which s is the slope gradient in %, l is the plot length in m, and a constant of ½ is used where the slope steepness is ≥ 5%. The effectiveness of soil conservation practices in reducing soil loss was determined by the use of C and P factors compared to the bare plots. The C factor in the long rain season was a function of the bean crop cover; in the off season the C factor was determined by weed cover, while in the short rains maize cover was considered. The C factor was calculated as the ratio between the seasonal or annual soil losses of the control plot and the seasonal or annual soil losses of the bare plot. The P factor was calculated as the ratio between the seasonal or annual soil losses under miraba plots and the seasonal or annual soil losses under control plots:

C(CO plot) = A(CO plot) / A(BA plot) ... (6)
P(miraba plot) = A(miraba plot) / A(CO plot) ... (7)
P(miraba + mulching plot) = A(miraba + mulching plot) / A(CO plot) ... (8)

where A(CO plot) and A(BA plot) are the soil losses (Mg ha-1) under the control and bare plots, respectively, and the miraba terms are the soil losses (Mg ha-1) under miraba sole, miraba with Tithonia mulching and miraba with Tughutu mulching. The effectiveness of soil conservation practices in reducing soil loss was determined by the percentage change of the C and P factors with reference to the bare plots. The effectiveness of soil conservation practices in reducing nutrient losses was also calculated in percentages with respect to the bare plots.

Statistical Analysis

Bartlett's test for homogeneity of variance was conducted on the data using GenStat software [31]. The relationships between daily rainfall and daily runoff were determined by linear regression analysis, with threshold runoff values obtained from the X-axis intercept. Analysis of variance (ANOVA) in GenStat statistical software [31] was performed, and the least significant difference (LSD 0.05) was used to detect mean differences between treatments.

Rainfall Erosivity between the Two Villages with Contrasting Climatic Conditions

The annual and seasonal rainfalls recorded during the two consecutive years are presented in Table 1, while the rainfall distribution is shown in Fig. 3. It can be seen that, as the rainfall depth was higher in Majulai village than in Migambo village in 2012, the rainfall erosivity R factor was also higher in Majulai, while in 2013 higher values of rainfall depth and R factor were observed in Migambo village (Table 1).

Soil Loss in Relation to SWC Measures in the Two Villages with Contrasting Climatic Conditions

From our results (Table 1), Majulai village had significantly (P<.001) higher annual soil losses than Migambo in 2012, but in 2013 annual soil losses were significantly (P<.001) higher in Migambo than in Majulai village. The difference in soil losses between the two villages can partly be attributed to the rainfall depth (Table 1), as it can clearly be seen that the higher the rainfall depth, the higher the soil losses in the studied villages. Similar observations were reported by Kabanza et al. [32], where soil losses on the Makonde plateau were much higher than in inland plains and rainfall depth was identified as the main contributing factor. The relatively steeper slopes in Majulai than in Migambo could also explain the soil loss differences.
This is supported by the work of Liu et al. [33], where slope gradients were found to be strong determinants of soil loss. On the other hand, soil losses differed significantly (P<.001) between SWC measures in both villages. Soil losses followed the trend: bare plots > cropland with no SWC measures > cropland with miraba sole > cropland with miraba and Tithonia or Tughutu mulching. The reduced soil losses under miraba and miraba with mulches could be explained by the effect of the grass barriers forming miraba, which captured some of the soil sediments that were carried with runoff. This observation is also supported by Wanyama et al. [34], who reported grass strips to effectively trap more than 70% of sediments under natural rainfall. Besides, miraba were progressively forming bench terraces, such that the terrace height reached about 1 m in Migambo and 0.7 m in Majulai village after two years of experimentation. The terraces so formed reduced the slope steepness, thereby resulting in reduced runoff velocity and an increased rate of infiltration. This ultimately reduced runoff volume and sediment losses. Similarly, mulches also reduced runoff velocity, thereby increasing the rate of infiltration and reducing runoff volume and sediment losses. Such observations were also reported by Bajracharya et al. and Tiwari et al. [35,36] in Nepal, where mulching was found to reduce annual soil loss by 60 to 90% in a maize-mustard cropping system as compared to conventional farmers' practices.

Rainfall-Runoff Responses under Selected Soil Conservation Practices

The slope of the regression line was used as a measure of the rainfall-runoff response. The rainfall-runoff response varied between the villages and between soil conservation measures (Fig. 4a). The differences can be explained by the influences of the studied soil conservation measures; bare plots had the highest annual runoff coefficient, while miraba with Tithonia and miraba with Tughutu mulching had the lowest (Table 2 & Fig. 4). The rainfall threshold values to initiate runoff varied between the soil conservation measures and between the studied villages. These differences were also directly associated with the effects of soil conservation measures and the differences in climatic conditions between the villages (Fig. 3).

Effectiveness of Selected Soil Conservation Practices in Relation to RUSLE Factors

The relative effectiveness of the selected soil conservation practices with reference to soil losses from cropland with no SWC measures is presented in Fig. 5. It can clearly be seen that miraba sole, miraba with Tithonia and miraba with Tughutu mulching were more effective in reducing soil loss in Migambo than in Majulai village. This can be attributed to the differences in rainfall distribution: the poor rainfall distribution in Majulai village (Fig. 3) causes the Napier grass in the miraba to die during dry spells, while the reliable rainfall in Migambo allows the Napier grass barriers that form miraba to persist throughout the year and thus form denser grass strips than in Majulai village. It is evident from Fig. 4 that miraba reduced soil losses by about 80% in Majulai and 90% in Migambo village relative to soil losses from cropland with no SWC measures. On the other hand, miraba with Tithonia and miraba with Tughutu mulching reduced soil losses by 90% in Majulai and 95% in Migambo village relative to soil losses from cropland with no SWC measures (Fig. 5).
Based on the work by Kabanza et al. [32], RUSLE factors were found to provide better insight than other attributes when assessing the effectiveness of soil conservation measures. The observed K factors were 0.0016 (Mg h MJ-1 mm-1) for the Chromic Acrisol in Majulai and 0.0018 (Mg h MJ-1 mm-1) for the Haplic Acrisol in Migambo village (Table 4). The observed K factor values are very low, indicating high susceptibility of the studied soils to erosion. More erodible soils such as silt loams have K factor values ranging from 0.03–0.05 (Mg h MJ-1 mm-1) [37,38]. The P factor values are much higher in Majulai than in Migambo village (Tables 3 & 4), indicating that the studied soil conservation practices have a stronger effect in Migambo than in Majulai village. This can be explained by the good rainfall distribution in Migambo as compared to Majulai village, which experiences long dry spells (Fig. 3) resulting in natural death of the miraba Napier grass and thus reducing its effectiveness. Similarly, significant differences were observed between soil conservation practices, where miraba with Tithonia and miraba with Tughutu mulching were more effective in reducing soil loss than miraba sole and the control (plots with maize or bean crop). This is due to the fact that the grass barriers forming miraba and the mulches tend to reduce runoff speed, thereby increasing the rate of infiltration. This tendency was also reported by Durán et al.

Soil Nutrient Losses under the Studied SWC Measures

Soil nutrient losses under the soil conservation practices are presented in Fig. 6. Soil nutrient losses were significantly (P<.001) different between SWC practices. The differences in soil nutrient losses can directly be associated with the effects of soil conservation practices (Table 5). Soil losses followed the trend: bare plots > cropland with no SWC measure (control) > cropland with miraba sole > miraba with Tughutu and miraba with Tithonia mulching (Table 5). Similarly, Msita [16] in Migambo village, Tanzania, reported lower losses of total N, P and K+ in plots with miraba, farmyard manure and Tithonia mulching than in cropland with no SWC measures. The relative effectiveness of soil conservation practices with reference to soil losses from cropland with no soil conservation measures is presented in Fig. 6. There are obvious differences in soil nutrient loss control between soil conservation measures. It is clear that miraba with mulching reduced soil nutrient losses by about 95% in Migambo and 85% in Majulai village, while miraba sole reduced nutrient losses by 90% in Migambo and about 80% in Majulai village (Fig. 6).
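Tying these results back to the methods, the sketch below illustrates the factor bookkeeping of Eq. (1) and Eqs. (6)-(8) and the relative-effectiveness percentages used for Figs. 5 and 6; the numeric soil-loss values are illustrative placeholders, not measurements from the trial.

```python
def rusle_soil_loss(R: float, K: float, LS: float, C: float, P: float) -> float:
    """Eq. (1): A (Mg ha-1 yr-1) = R * K * LS * C * P."""
    return R * K * LS * C * P

def c_factor(loss_control: float, loss_bare: float) -> float:
    """Eq. (6): C = A(control plot) / A(bare plot)."""
    return loss_control / loss_bare

def p_factor(loss_treatment: float, loss_control: float) -> float:
    """Eqs. (7)-(8): P = A(miraba or miraba + mulching plot) / A(control plot)."""
    return loss_treatment / loss_control

def effectiveness_pct(loss_treatment: float, loss_control: float) -> float:
    """Soil-loss reduction relative to cropland with no SWC measure."""
    return (1.0 - loss_treatment / loss_control) * 100.0

# Illustrative annual soil losses (Mg ha-1 yr-1) for one village
loss_bare, loss_control, loss_miraba, loss_mulch = 420.0, 184.0, 33.0, 20.0
print(c_factor(loss_control, loss_bare))            # crop cover and management factor
print(p_factor(loss_miraba, loss_control))          # conservation practice factor, miraba sole
print(effectiveness_pct(loss_mulch, loss_control))  # ~89 % reduction under miraba with mulching
print(rusle_soil_loss(5800.0, 0.0016, 14.0, 0.32, 0.11))  # predicted loss for assumed factor values
```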
Impact of Selected Soil Conservation Practices on Crop Productivity in the Two Studied Villages

The yields of maize and beans are presented in Table 6. The results show that there is a significant (P = .05) difference in crop yields between the selected soil conservation practices and between the two studied years in both villages. In Majulai village maize grain yields were higher under miraba with Tughutu mulching than under miraba with Tithonia, miraba sole and control in 2012, but there was no maize yield in 2013 due to drought. Bean grain yields followed the trend: miraba with Tughutu > miraba with Tithonia > miraba sole > control. The trend was similar in Migambo village, where miraba with Tughutu > miraba with Tithonia > miraba sole > control for both maize and bean grain yields (Table 6). Maize grain yields were significantly (P = .05) higher in 2013 than in 2012, but there were no significant (P = .05) differences in bean grain yields between the two years of study except under miraba with Tithonia and miraba with Tughutu in Majulai village. There were also some differences in maize and bean yields between the two villages, with higher yields in Migambo than in Majulai (Table 6). The observed crop yields under the studied SWC practices (Table 6) were higher than the average yields according to FAO [42] of 1.5 Mg ha-1 for maize and 0.7 Mg ha-1 for beans in Tanzania. It is clearly observed that the crop yield differences are highly influenced by the SWC practices (Table 6) and could partly be explained by differences in the climatic conditions of the two villages. The rainfall in Majulai is unreliable, while Migambo village experiences reliable rainfall with a fair distribution during the growing seasons (Fig. 3). Msita [15,16] reported increased maize yields by 57% under miraba as compared to control in Migambo village, while bean yields did not differ, and the maize yield differences were associated with improved soil properties due to the effects of miraba. Bean grain yields in 2012 and 2013 respectively increased by 48% and 70% under miraba with Tughutu, 41% and 58% under miraba with Tithonia, and 27% and 37% under miraba sole when compared with the control. It is clear that soil conservation measures contribute to higher crop yields by reducing the loss of plant nutrients and assuring a better water supply to the crop. The study by Wickama et al. [13] in the Usambara Mountains observed average maize and bean yields 270% and 583% higher in well-managed farms with good-quality terraces, well-maintained grass strips, good-quality seed for crops, and adequate use of manure or fertilizer as compared to the control, i.e. farms with no terracing, no grass strips, use of local seed material, little use of manure, no use of fertilizer and no tree cover. The yield differences were reported to be influenced by the sustainable land management categories studied.
CONCLUSIONS AND RECOMMENDATIONS

Rainfall erosivity R and soil erodibility K factors did not differ significantly between the studied villages. Soil loss was significantly (P = .05) higher under cropland with no soil conservation measures (control) than under miraba with mulching. The P factors were significantly (P = .05) higher under miraba sole than under miraba with mulching. The annual nutrient losses were significantly (P = .05) higher under control than under miraba with mulching. Maize and bean yields differed significantly (P = .05) between soil conservation practices in the following order: miraba with Tughutu mulching > miraba with Tithonia mulching > miraba sole > control. Whereas miraba with either Tughutu or Tithonia mulching showed greater potential in reducing soil and nutrient losses than miraba sole, miraba with Tughutu mulching was more effective in improving crop yields than miraba with Tithonia and miraba sole. Although the soils of the Usambara Mountains are susceptible to erosion, the C and P factors indicate that these soils are responsive to soil conservation measures. More local shrubs and grasses should be investigated for use as both green manure and a soil conservation measure under miraba. Further research needs to be conducted to investigate the effectiveness of the studied soil conservation practices at the watershed scale to mitigate river stream sedimentation. It is strongly recommended that Tithonia and Tughutu shrubs be planted in the borders of the farm plots along the slope for easy availability. It is also recommended in Majulai village that drought-resistant grasses such as Guatemala be used for establishing miraba, since Napier grass, which is mostly preferred for fodder, is sensitive to drought.

Table 6. Impact of selected soil conservation practices on crop yields in Majulai and Migambo villages.
109% and 147% under miraba with Tithonia and 70% and 90% under miraba sole when compared to control.
v3-fos-license
2020-11-25T14:06:55.528Z
2020-11-01T00:00:00.000
227157845
{ "extfieldsofstudy": [ "Medicine", "Biology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://academic.oup.com/gigascience/article-pdf/9/11/giaa132/35531019/giaa132.pdf", "pdf_hash": "adc892d704c224da9edd7af2503c54e8a9be0c44", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43961", "s2fieldsofstudy": [ "Biology" ], "sha1": "4cd545ec1a1842c579e9de0059af6f530eb6c3b4", "year": 2020 }
pes2o/s2orc
Chromosomal genome of Triplophysa bleekeri provides insights into its evolution and environmental adaptation Abstract Background Intense stresses caused by high-altitude environments may result in noticeable genetic adaptions in native species. Studies of genetic adaptations to high elevations have been largely limited to terrestrial animals. How fish adapt to high-elevation environments is largely unknown. Triplophysa bleekeri, an endemic fish inhabiting high-altitude regions, is an excellent model to investigate the genetic mechanisms of adaptation to the local environment. Here, we assembled a chromosomal genome sequence of T. bleekeri, with a size of ∼628 Mb (contig and scaffold N50 of 3.1 and 22.9 Mb, respectively). We investigated the origin and environmental adaptation of T. bleekeri based on 21,198 protein-coding genes in the genome. Results Compared with fish species living at low altitudes, gene families associated with lipid metabolism and immune response were significantly expanded in the T. bleekeri genome. Genes involved in DNA repair exhibit positive selection for T. bleekeri, Triplophysa siluroides, and Triplophysa tibetana, indicating that adaptive convergence in Triplophysa species occurred at the positively selected genes. We also analyzed whole-genome variants among samples from 3 populations. The results showed that populations separated by geological and artificial barriers exhibited obvious differences in genetic structures, indicating that gene flow is restricted between populations. Conclusions These results will help us expand our understanding of environmental adaptation and genetic diversity of T. bleekeri and provide valuable genetic resources for future studies on the evolution and conservation of high-altitude fish species such as T. bleekeri. Introduction The Qinghai-Tibetan Plateau (QTP), the largest and highest plateau in the world, is one of the most important world biodiversity centers [1].The environments of QTP and its peripheral areas have been affected significantly by the continuing uplift, which is one of the most important driving forces for the biological evolution of organisms on the plateau [2].The endemic species of the QTP present high adaptability to the harsh environmental conditions, such as low temperature, low oxygen supply, and high UV radiation, by exhibiting cold tolerance, hypoxia resistance, enhanced metabolic capacity, and increased body mass [3][4][5][6]. An investigation into the biological evolution of organisms residing on the QTP and its peripheral regions will broaden our understanding of essential evolutionary questions regarding mechanisms of environmental adaptation and speciation.Phenotype comparisons were frequently used to study environmental adaptations in previous studies [7,8].In recent years, advancing genomic technology, especially third-generation sequencing techniques, has presented novel opportunities to explore the genetic basis of environmental adaptations.Many genomic studies of terrestrial animals on the QTP and its peripheral regions revealed that genes involved in hypoxia response, energy metabolism, and DNA repair were under positive selection and rapid evolution [9][10][11].In those studies, high-quality genome and population resources are essential to understand critical biological processes for adaptations [11][12][13]. 
The QTP boasts many highland fish species, especially in the family Sisoridae, subfamily Schizothoracinae, and genus Triplophysa [14].To date, there have only been several high-quality highland fish genomes reported on the basis of long-read sequencing data, including Glyptosternon maculatum in the family Sisoridae, Schizothorax o'connori and Oxygymnocypris stewartii in the subfamily Schizothoracinae, and Triplophysa tibetana and Triplophysa siluroides in the genus Triplophysa [15][16][17][18][19]. Triplophysa is a highly diverse genus and the largest group of the subfamily Nemacheilinae [20].There are 152 records for Triplophysa species in FishBase, and the majority are distributed on the QTP and its adjacent drainage areas from an elevation of 100 to >5,200 m [21].Given the broad elevation distributions and species diversity, the Triplophysa genus offers an attractive study model not only to investigate the adaptive mechanisms of fish in high altitudes but also to examine the similarities and differences between the adaptive mechanisms in different Triplophysa species.Previous studies have reported the genomic data of T. siluroides and T. tibetana without any emphasis on the genetic basis of high-altitude adaption [17,18].To date, environmental adaptations of Triplophysa species to high altitudes are not fully understood, and the genetic resources for the reference genome and population data remain insufficient.Triplophysa bleekeri, another member of the Nemacheilidae family, is mainly distributed in the stem streams and tributaries of the Yangtze and Jinsha rivers [22].It exhibits different ecological and physiological characteristics compared with its relatives, T. siluroides and T. tibetana [23].T. bleekeri has a wide distribution, from 200 to 3,000 m [24], whereas T. tibetana and T. siluroides occur at elevations of ∼4,000-5,000 and ∼3,000 -4,000 m, respectively [17,25].Apart from altitude of habitation, there is a significant difference in habitat environments.T. bleekeri lives in the fast-flowing rivers, whereas T. tibetana and T. siluroides inhabit lakes and slow-flowing rivers [14].Reproduction biology in these Triplophysa species is also different; T. tibetana and T. siluroides spawn once a year (June-July and July-August, respectively), whereas T. bleekeri can spawn twice a year, with peak breeding seasons occurring from October to December and March to April [24].The primary food source of T. bleekeri and T. tibetana is Chironomus larvae, caddisfly larvae, and diatoms, whereas T. siluroides feeds on smaller fishes [25].The genome resource for T. bleekeri will contribute to understanding its evolution and environmental adaption and explore the convergent genetic mechanisms of Triplophysa species in high-elevation adaption. In this study, we generated the first chromosomal genome sequence of T. bleekeri using combined Illumina, PacBio, and Hi-C technology.Evolutionary and comparative genomic approaches were applied to clarify the origin of T. bleekeri and to investigate the potential signals of adaption.Furthermore, the population genetics of T. bleekeri were also investigated to reveal the genetic divergence among different populations. Samples and tissue collection T. bleekeri individuals (Fig. 1a; NCBI:txid595395; fishbase ID: 56059) were obtained from the Daning River (31.157383 N,109.892133E.), a tributary in the upper reaches of the Yangtze River, using brail nets (Fig. 
1b). The fish were then transferred to the Aquaculture Laboratory of Southwest University and reared in indoor tanks. To collect enough tissues for the genome and transcriptome sequencing, the largest female individual was used for library construction and sequencing. The fish was anesthetized with tricaine MS-222 and was immediately dissected to collect 12 types of tissues, viz. brain, eye, skin, gill, heart, liver, trunk kidney, spleen, gut, muscle, gallbladder, and gonad. Tissues were quickly frozen in liquid nitrogen for >1 hour and then stored at −80 °C. Among these tissues, muscle tissue was used for genomic DNA sequencing and Hi-C library construction. Meanwhile, all tissue samples were used for transcriptome sequencing to comprehensively characterize the transcriptome. To elucidate the population structures of T. bleekeri, a total of 28 individuals were collected from 3 different reaches of the Daning River, i.e., 11, 11, and 6 individuals from Lianghekou (LHK), Xixi (XX), and Baiyang (BY), respectively (Fig. 1b). These individuals were anesthetized with tricaine MS-222, and muscle tissue samples of each fish were collected in the aforementioned manner.

Genome DNA extraction and sequencing library construction

DNA was extracted from muscle tissue using the phenol-chloroform DNA extraction method [26]. The Qubit (Thermo Fisher Scientific, Waltham, MA, USA) and Agilent Bioanalyzer 2100 (Agilent Technologies, Palo Alto, CA, USA) were used for evaluating the quantity and quality of DNA. For sequencing based on the Illumina HiSeq technology, a short-read sequencing library with an insert size of 250 bp was constructed using 1 μg of DNA. For sequencing on the PacBio Sequel platform (Pacific Biosciences [PacBio], Menlo Park, CA, USA), the muscle DNA was used to construct the long-read sequencing library. Briefly, 10 μg of T. bleekeri genomic DNA was used for 20-kb library preparation following the manufacturer's protocol (PacBio), and the BluePippin Size Selection system (Sage Science, Beverly, MA, USA) was used for library size selection. DNA molecules from the largest individual were sequenced using the PacBio and Illumina platforms for genome assembly, and the other samples were subjected to short-read whole-genome resequencing on the Illumina platform.

RNA extraction and sequencing library construction

RNA sequencing data provide important evidence for gene prediction in the genome [27]. To include as many expressed genes as possible, the 12 aforementioned tissue types were used for RNA sequencing library construction. RNA was isolated from the 12 tissue samples using TRIzol reagent (Invitrogen, USA). The quantity and quality of the extracted RNA were determined using the Nanodrop ND-1000 spectrophotometer (LabTech, Holliston, MA, USA) and 2100 Bioanalyzer (Agilent Technologies, Palo Alto, CA, USA). Samples with a total RNA amount ≥10 μg and an RNA integrity number ≥8 were used for sequencing. RNA molecules extracted from the tissues were mixed in equal proportions for the following RNA library construction. The RNA sequencing library was constructed following the protocol of the Paired-End Sample Preparation Kit (Illumina Inc., San Diego, CA, USA), which was identical to that used in our previous study [28].
DNA and RNA library sequencing

The short-read DNA and RNA sequencing libraries were sequenced in 150 bp paired-end (150PE) mode on the Illumina HiSeq X Ten platform (Illumina Inc.). The 20-kb SMRTbell long-read genomic DNA library was sequenced on the PacBio Sequel platform. The raw sequencing data were quality checked before the bioinformatics analysis. The HTQC v0.90.8 package [29] was used to filter low-quality bases and reads, and sequences with adapters or low quality (average quality score < 20) were removed.

Genome size estimation

The genome size was estimated on the basis of the Illumina sequencing data using the k-mer method before genome assembly. Raw Illumina reads were processed to remove adapter sequences, reads with >10% N bases, and reads with >50% low-quality bases (quality ≤5). All filtered reads were used for k-mer frequency analysis [30]. Using a k-mer size of 17, the k-mer frequencies were obtained with Jellyfish v2.0 software [31]. k-mers with a frequency <3 were eliminated because they likely resulted from sequencing errors. The genome size was estimated on the basis of the following formula: G = (L − K + 1) × n_base / (C_k-mer × L), in which G is the estimated genome size, n_base is the total count of bases, C_k-mer is the expectation of the k-mer depth, L indicates the read length, and K represents the k-mer size. The revised genome size was calculated as follows: revised genome size = genome size × (1 − error rate).

De novo assembly of the T. bleekeri genome

Long reads generated from the PacBio sequencing platform were used for T. bleekeri genome assembly with the Falcon v0.3.0 package [32]. The assembled genome sequences were further polished with Arrow using the long-read sequencing data [33]; thereafter, 2 rounds of polishing using next-generation sequencing short reads were performed with Pilon (Pilon, RRID:SCR 014731) v1.23 [34]. Finally, redundant genomic sequences were eliminated using Redundans v0.14a with an overlap parameter of 0.95 and an identity of 0.95 [35]. Completeness of the assembled genome was evaluated using BUSCO (BUSCO, RRID:SCR 015008) v3.0 [36]. The database actinopterygii_odb9 was used for the BUSCO analysis.

Chromosome assembly using Hi-C technology

Muscle tissue (1 g) from the T. bleekeri individual used for PacBio sequencing was also used for Hi-C library construction. The Hi-C processes, including cross-linking, lysis, chromatin digestion, biotin marking, proximity ligation, cross-linking reversal, and DNA purification, were performed using the protocol described in previous studies [37]. The purified and enriched DNA was used for sequencing library construction. The library was sequenced on the Illumina HiSeq X Ten platform (Illumina), and the short reads were then mapped to the polished genome of T. bleekeri with Bowtie (Bowtie, RRID:SCR 005476) v1.2.2. The chromosomal assembly using the interaction frequency matrix extracted from the Hi-C read mapping was performed according to a previously reported methodology [37].
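A minimal sketch of the k-mer genome-size formula from the Genome size estimation subsection above is given below; the read length and k follow the text, the total base count is taken from the reported short-read yield, and the peak k-mer depth is an assumed placeholder rather than the actual Jellyfish output.

```python
def genome_size(n_base: float, read_len: int, k: int, kmer_depth: float) -> float:
    """Estimated genome size in bp: G = (L - K + 1) * n_base / (C_kmer * L)."""
    return (read_len - k + 1) * n_base / (kmer_depth * read_len)

def revised_genome_size(size_bp: float, error_rate: float) -> float:
    """Revised genome size = genome size * (1 - error rate)."""
    return size_bp * (1.0 - error_rate)

n_base = 81.69e9   # total Illumina short-read bases (81.69 Gb)
L, K = 150, 17     # read length and k-mer size used in the study
C_kmer = 115.0     # assumed expectation of the k-mer depth (peak of the k-mer histogram)

g = genome_size(n_base, L, K, C_kmer)
print(g / 1e6, "Mb")                          # rough haploid genome size
print(revised_genome_size(g, 0.003) / 1e6)    # after subtracting an assumed error rate
```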
Repetitive element annotation

De novo prediction and homology-based prediction were combined to annotate the repetitive sequences in the T. bleekeri genome. RepeatModeler (RepeatModeler, RRID:SCR 015027) v2.0.1 [38] was used for the detection of de novo repetitive elements in the T. bleekeri genome. The detected genome repeats were combined with the RepBase library [39] into a comprehensive library for the final prediction of repetitive elements in the T. bleekeri genome using the RepeatMasker (RepeatMasker, RRID: SCR 012954) v4.1.1 software [40]. Transposons were predicted using ProteinMask, and tandem repeats were identified in the genome using Tandem Repeats Finder v4.10 [41]. Only genes with complete sequences and 70% overlaps among the different gene model prediction methods were retained as high-quality gene models.

Gene family clustering and phylogenetic analysis

Coding sequences annotated from the whole-genome sequences of the closely related species were extracted from the genome sequences. Gene family clustering was performed for T. bleekeri with 8 fish species living in non-QTP regions, viz. zebrafish, Japanese medaka, elephant shark (Callorhinchus milii), spotted gar (Lepisosteus oculatus), Atlantic cod (Gadus morhua), platyfish (X. maculatus), tiger puffer (Takifugu rubripes), and large yellow croaker (Larimichthys crocea), with the OrthoMCL v1.2 pipeline [57] using default settings. The single-copy orthologs across all species were selected for gene family, phylogenetic, and evolutionary analyses. Briefly, the proteins of these genes were aligned with MUSCLE (MUSCLE, RRID:SCR 011812) v3.8.31 [58] and were then transformed into alignments of nucleotide sequences with pal2nal [59] on the basis of the corresponding coding sequences. Next, non-conserved regions were removed using Gblocks (Gblocks, RRID:SCR 015945) [60] with default settings, and the conserved regions were concatenated and fed into RAxML (RAxML, RRID:SCR 006086) v8.2.10 [61] to deduce the phylogenetic relationships of these species using a GTRGAMMA model. Rapid bootstrap runs (100 times) were performed to test the robustness of the topology [62]. On the basis of the topology and the alignment matrix, divergence times were deduced using MCMCTREE included in the PAML (PAML, RRID:SCR 014932) v1.3.1 package [63]. To investigate the evolutionary relationships within the genus Triplophysa, we also added another 4 Triplophysa fish species to the phylogenetic analysis. Because the genomes of T. xichangensis and T. scleroptera have not been reported, we downloaded the short reads of the transcriptomes of those 2 species from the NCBI SRA and conducted de novo assembly using Trinity (Trinity, RRID:SCR 013048) v2.11.0 [64] with default settings. The longest transcript of each gene was used in the following phylogenetic analysis. The single-copy orthologs across all species were used for phylogenetic tree reconstruction and divergence time estimation following the aforementioned method.

Gene family expansion and contraction in the T. bleekeri genome

To identify expanded and contracted gene families in the T. bleekeri genome, we compared gene families in the T. bleekeri genome to those of fish species living in non-QTP regions, viz. elephant shark, spotted gar, zebrafish, Japanese medaka, platyfish, tiger puffer, large yellow croaker, Atlantic cod, green spotted puffer, and three-spined stickleback. CAFE v4.2.1 [65] was used to analyze the expansion and contraction of gene clusters in the T. bleekeri genome using a probabilistic model. A GO enrichment analysis was performed on expanded and contracted genes using the topGO v2.40.0 package [66]. The enrichment of genes in KEGG pathways was also analyzed using KOBAS (KOBAS, RRID: SCR 006350) v1.2.0 [67].

Positively selected genes in genomes of Triplophysa species

MUSCLE v3.8.31 was used for multi-protein sequence alignment among the T. bleekeri genes and their orthologs in the 8 fish species living in non-QTP regions used in the gene family clustering analysis.
bleekeri genes and their orthologs, and compared to 8 fish species living in non-QTP regions used in the gene family clustering analysis.Conserved coding sequence (CDS) alignments of each single-copy gene family were extracted using Gblocks [68] and used for further identification of positively selected genes (PSGs).The ratios of nonsynonymous to synonymous substitutions (K A /K S , or ω) were estimated for each single-copy orthologous gene using the CodeML program with the branch-site model as implemented in the PAML package.A likelihood ratio test was conducted, and the false discovery rate correction was performed for multiple comparisons.Genes with a corrected P-value <0.05 were defined as PSGs.The genes putatively influenced by positive natural selection of T. tibetana and T. siluroides were also identified using the identical method.The functional annotation of PSGs for T. bleekeri, T. tibetana, and T. siluroides was also conducted using the same approach with the gene family expansion and contraction analysis. Whole-genome resequencing and population genetics Raw reads of samples subjected to resequencing were quality controlled as mentioned previously.The filtered short reads were mapped using BWA mem (BWA, RRID:SCR 010910) v0.7.17-r1188 with default settings for each individual, followed by the marking of duplicates with Picard (Picard, RRID:SCR 006525).Regions near INDELs were thought to be poorly aligned and were identified and realigned using GATK (GATK, RRID:SCR 001876) v4.1.8.1 [69].GATK was also used to call single-nucleotide polymorphisms (SNPs) and INDELs based on the alignments.The SNPs and INDELs were then filtered by these parameters: QUAL (phred quality) > 30, QD (quality score divided by depth to comprehensively evaluate the quality and depth) > 2, DP (read depth) > 5, FS (Phred-scaled P-value using Fisher exact test to detect strand bias for reads) < 60, MQ (mapping quality to evaluate read alignment) > 40, SOR (strand odds ratio to evaluate strand bias for reads) < 4.0.The identified SNPs were filtered using SNPhylo v20180901 [70] with default settings, except for LD threshold and Minimum depth of coverage, which were set to 0.8 and 5, respectively.Next, the principal component analysis (PCA) clusters and population structure for these individuals were deduced with Plink (PLINK, RRID:SCR 001757) v1.9 [71] and Admixture [72,73] with default settings, respectively.Their phylogenetic relationships were recovered using the neighbor-joining method with MEGA4 [73], and bootstrap resampling (100 times) was performed to test the robustness of the tree topology. Historical effective population size inference for T. bleekeri Historical effective population size of T. bleekeri was estimated using Pairwise Sequentially Markovian Coalescent (PSMC) v0.6.5 software [74].We used the data for whole-genome variants of individuals for the genome assembly.The consensus sequences were generated using vcfutils.pl(vcf2fq -d 10 -D 300).The fq2psmcfa tool was used to create the input file for PSMC modeling.Sequences were used as the input for the PSMC estimates using "psmc" with the options -N25 -t15 -r5.The reconstructed population history was plotted using "psmc plot.pl" with the generation time of 2 years and rate of 4 × 10 −9 substitutions per synonymous site per year.The mutation rate was estimated from the gene comparison of T. bleekeri and D. rerio.Bootstrapping was conducted by randomly sampling with replacement 5-Mb sequence segments and 100 bootstrap replicates were performed. 
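As a rough illustration of the variant hard-filtering thresholds listed above (QUAL > 30, QD > 2, DP > 5, FS < 60, MQ > 40, SOR < 4.0), the following Python sketch screens a VCF produced by a GATK-style workflow. It is a simplified stand-in, not the command actually used: the file names are assumptions, and records missing one of the INFO annotations are simply discarded here.

```python
# Sketch: apply the hard-filter thresholds from the Methods to a VCF file.
# Assumes QD, DP, FS, MQ, and SOR are present as INFO annotations (as emitted by GATK).
THRESHOLDS = {"QD": (">", 2.0), "DP": (">", 5.0), "FS": ("<", 60.0),
              "MQ": (">", 40.0), "SOR": ("<", 4.0)}
MIN_QUAL = 30.0

def parse_info(info_field):
    """Turn the semicolon-separated INFO column into a dict of numeric annotations."""
    out = {}
    for item in info_field.split(";"):
        if "=" in item:
            key, value = item.split("=", 1)
            try:
                out[key] = float(value)
            except ValueError:
                pass
    return out

def passes(fields):
    qual = float(fields[5]) if fields[5] != "." else 0.0
    if qual <= MIN_QUAL:
        return False
    info = parse_info(fields[7])
    for key, (op, cutoff) in THRESHOLDS.items():
        if key not in info:        # conservative choice: drop records missing an annotation
            return False
        value = info[key]
        if op == ">" and not value > cutoff:
            return False
        if op == "<" and not value < cutoff:
            return False
    return True

with open("raw_variants.vcf") as vcf, open("filtered_variants.vcf", "w") as out:
    for line in vcf:
        if line.startswith("#") or passes(line.rstrip("\n").split("\t")):
            out.write(line)
```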
Selection sweep analysis for populations
To identify genome-wide selective sweeps among populations, we calculated the genome-wide distribution of fixation index (FST) values and θπ ratios using SNPs from different populations. The FST values were Z-transformed as follows: Z(FST) = (FST − μFST)/σFST, in which μFST is the mean FST and σFST is the standard deviation of FST. The θπ ratios were log2-transformed. Subsequently, we scanned the genome in 1-kb sliding windows and estimated and ranked the empirical percentiles of Z(FST) and log2(θπ ratio) in each window. We considered the windows in the top 1% of Z(FST) and log2(θπ ratio) as candidate outliers under strong selective sweeps. Genes residing in the outlier regions were considered candidate functional genes. GO and KEGG enrichment analyses were carried out with clusterProfiler v3.14.3 [75] and DAVID v6.8 [76].
DNA and RNA library sequencing
We generated 81.69 Gb of genomic (∼120×) and 10.6 Gb of transcriptomic short reads for the subsequent genome size estimation and annotation (Table 1). We also obtained 100.87 Gb of genomic long reads from the PacBio platform, giving a rough coverage of 160× for the T. bleekeri genome (Table 1). The mean and N50 lengths of the long reads were 5.8 and 16 kb, respectively (Table 1 and Supplementary Fig. S1).
Genome size estimation
To assess possible sample contamination, 10,000 next-generation sequencing short reads were randomly selected for an NCBI nt database search. Cyprinus, Danio, and Sinocyclocheilus represented the top 3 sources of best hits, ruling out obvious contamination during library construction and sequencing. Using genomic short reads generated on the Illumina platform, 59.8 million k-mers were obtained. The genome of T. bleekeri was estimated at 632.5 Mb, with a heterozygosity ratio of 0.26% and a repeat content of 42.2% (Supplementary Fig. S2). Based on the above genome character estimation, the genome of T. bleekeri is mid-sized with low heterozygosity.
De novo assembly of the T. bleekeri genome
Using genomic PacBio long reads for T. bleekeri, we assembled a 628-Mb genome with 856 contigs and an N50 length of 3.82 Mb (Table 2). Among these contigs, the longest was 15.5 Mb. The completeness of the assembled genome was evaluated using BUSCO v3.0 [36] with the actinopterygii odb9 database, indicating that 92.9% of BUSCO genes were identified in the assembled genome (Supplementary Fig. S3).
Chromosome assembly using Hi-C technology
Hi-C technology recruits interaction information among different chromosome regions and assumes that interactions between nearby regions are more prevalent than between distant regions. In this study, 82.9 Gb of sequencing data were obtained via Hi-C library sequencing. On the basis of the interaction information, a chromosome assembly of 628 Mb with a scaffold N50 length of 22.9 Mb was obtained (Supplementary Fig. S4). More than 596.9 Mb of sequence was anchored onto 25 chromosomes, corresponding to a high chromosome anchoring rate of 96.2% at the base level.
Repetitive element annotation
The annotation pipeline showed that >17.9 Mb of the genome sequence was predicted as tandem repeats, covering ∼2.8% of the genome, and finally 203.2 Mb, accounting for ∼32.4% of the genome, were annotated as repetitive elements in the T.
bleekeri genome (Supplementary Table S1 Protein-and non-coding gene prediction, and functional annotation For predicting protein-coding genes in the de novo assembled genome, 10.6 Gb short-read transcriptome data from 12 tissues were generated.Based on the de novo, homolog, and RNAseq data methods, a total of 20,274, 27,243, and 15,875 proteincoding genes were predicted, respectively.After integration and redundancy elimination, 21,198 protein-coding genes were predicted in the T. bleekeri genome (Supplementary Table S2). Of the 21,198 protein-coding genes, roughly 93.0%, 96.9%, and 90.9% displayed homologous sequences in the NCBI NR, TrEMBL, and Swissprot databases, respectively.Additionally, 89.2% contained InterPro domains, and 46.9% were assigned GO terms.Overall, >97.3% of the protein-coding genes were functionally annotated by ≥1 method (Supplementary Fig. S5).Non-coding genes have received increased attention in recent years because accumulating evidence suggests that many of them play crucial roles in a variety of biological processes [77].In this study, all the possible non-coding DNA sequences were predicted based on the de novo prediction strategies and are summarized in Supplementary Table S3. Gene family clustering and phylogenetic analysis of T. bleekeri Using the whole-genome and transcriptome data of 4 other Triplophysa species, viz., T. tibetana, T. siluroides, T. scleroptera, and T. xichangensis, and the 8 other fish species living in non-QTP regions, we performed gene family clustering for those species.As a result, we identified 1,364 single-copy orthologs among those fish species. We then investigated the evolutionary relationship of T. bleekeri with respect to other Triplophysa and the non-QTP species.Using single-copy genes among species, a concatenated alignment matrix was generated with a total length of 73,887 bp, which was used for the phylogenetic analysis and divergence time estimation.The result showed that Triplophysa species are phylogenetically closer to D. rerioand that T. siluroides is a basal species within the Triplophysa group.Divergence time estimation showed that T. bleekeri diverged from their common ancestor, T. scleroptera and T. xichangensis, ∼25.2 million years ago (Mya) (Fig. 2). Genes under natural positive selection We identified 788 PSGs in the T. bleekeri genome.The functional analysis on the KEGG and GO parameters showed that several categories associated with nucleotide metabolism and DNA repair were significantly enriched (Supplementary Tables S4 and S5).Additionally, the PSGs were also enriched in immune response, such as MyD88-dependent Toll-like receptor signaling pathway (Supplementary Table S4).Concomitantly, 969 and 1,253 PSGs were identified for T. tibetana and T. siluroides, respectively.Among those genes, 197 genes were identified as shared PSGs for the 3 Triplophysa species (Fig. 3a). To detect candidate PSGs for Triplophysa ancestral lineage, we also performed PSG identification for the common ancestor of the Triplophysa with the branch-site model in PAML.As a result, we identified 439 PSGs for the Triplophysa ancestral lineage.Interestingly, we found that only 35 shared PSGs for the 3 Triplophysa species were identical to Triplophysa lineage PSGs (Fig. 3b).The functional analysis with respect to biological pathways for the 3 Triplophysa species showed that those genes were significantly enriched for various processes including protein digestion and absorption, Fanconi anemia pathway, and salivary secretion (Fig. 
3c).Twenty-five biological pathways, including peroxisome, autophagy, non-homologous end-joining, homologous recombination, basal transcription, ribosome biogenesis, and spliceosome, were enriched for PSGs of Triplophysa ancestral lineage (Fig. 3c).The homologous recombination and basal transcription factor pathways were both enriched for Triplophysa lineage PSGs and the 3 Triplophysa species shared PSGs (Fig. 3c). Gene family expansion and contraction in the T. bleekeri genome Following the Orthomcl pipeline, 21,862 ortholog groups were obtained after gene family clustering with 10 fish species from non-QTP regions.Gene family analysis showed that 1,533 and 2,401 gene families were significantly expanded and contracted in T. bleekeri, respectively (Supplementary Fig. S6).The functional enrichment of expanded gene families was analyzed using GO and KEGG.The expanded gene families were primarily enriched in categories of metabolism and immune regulation (Supplementary Tables S6 and S7).The categories of metabolism include fatty acid metabolism (arachidonic acid metabolism and glycosphingolipid biosynthesis), carbohydrate metabolism (glycosaminoglycan biosynthesis and glycan degradation), and amino acid metabolism (RNA transport).The categories of immune regulation include the Hippo signaling pathway (corrected P-value = 2.40E−03), necroptosis, and vitamin B 6 metabolism (corrected P-value = 8.90E−03).The contracted gene families were mainly made up of several signaling pathways, including the MAPK signaling pathway, calcium signaling pathway, adrenergic signaling in cardiomyocytes, GnRH signaling path-way, and retrograde endocannabinoid signaling (Supplementary Tables S8 and S9). Historical effective population size for T. bleekeri during formation of the QTP We used the whole-genome short-read sequencing data based on the sample used for genome assembly to obtain the genomewide genotype data.Then, those variants were used to probe the profiles of historical effective population size for T. bleekeri during the formation of the QTP.We used gene comparison between T. bleekeri and D. rerio to estimate the mutation rate.As a result, we estimated a mutation rate of 4 × 10 −9 for T. bleekeri.PSMC analysis performed using the above data showed that the effective population size of T. bleekeri increased >0.7 Mya and reached a peak of 70 × 10 4 ∼0.6-0.7 Mya.However, the T. bleekeri population size experienced a dramatic decrease afterwards to 1 × 10 4 from 0.6 Mya to 60,000 years ago (Fig. 4).The effective population size decline was consistent with the accelerating QTP uplift ∼1 Mya [78] and the quaternary glaciation spanning the Pleistocene (2.6-0.11Mya) and Holocene (0.11-0 Mya) [19,79].We speculate that both the geotectonic movements and temperature fluctua- tions during the period exerted intense survival pressure on the ancient T. bleekeri populations, leading to the ∼70 times effective population size drop during the period. Population genetics analysis of T. bleekeri The high-quality SNPs were obtained according to the filtering criteria set previously and were used to deduce the population structures of T. bleekeri.As a result, >34 million short reads were obtained for 28 individuals, and >3 million SNPs were detected for all individuals.The phylogeny reconstruction analyses based on whole-genome SNPs showed that individuals from populations LHK and XX clustered together forming 2 neighboring groups, whereas individuals from population BY formed another cluster (Fig. 
5a).The PCA clusters (Fig. 5b) also suggested that the first 2 principal components could successfully separate the in- dividuals in population BY from those in populations LHK and XX.In addition, genetic structure analysis also indicated that gene flow between population BY and the other 2 populations might be limited (Fig. 5c). To identify putative signals of differential selection among populations, we also performed selective sweep analysis for the BY, LHK, and XX populations (Fig. 6a).Based on Fst comparison among those populations (Supplementary Table S10), we identified genomic regions (∼1 kb in length) that scored in the top 1% (Supplementary Fig. S7).As a result, 1,734, 3,009, and 3,244 regions (1 kb) harboring 474, 878, and 957 functional candidate genes were identified to be significantly genetically differentiated for LHK-XX, LHK-BY, and XX-BY comparisons, respectively.Genomic regions with less differentiation identified in LHK-XX comparison were consistent with the above phylogenetic analysis.The GO and KEGG pathway functional analyses showed 20, 25, and 31 significant biological pathway enrichments for LHK-XX, XX-BY, and LHK-BY comparisons, respectively (Fig. 6b, Supplementary Tables S11-S13).Six enriched biological pathways were shared in the LHK-BY and XX-BY comparisons but not in LHK-XX, viz., ubiquitin-mediated proteolysis, tight junction, starch and sucrose metabolism, melanogenesis, longevityregulating pathway-mammal, and circadian rhythm (Fig. 6c, Supplementary Tables S12 and S13).Five enriched biological pathways were shared for all comparisons, viz., axon guidance, long-term potentiation, Rap1 signaling pathway, circadian entrainment, and calcium signaling pathway (Fig. 6b).Besides, the α-trehalose glucohydrolase (treh), β-catenin (ctnnb1), and lymphoid enhancer-binding factor 1 (lef1) genes exhibited significant genetic differentiation in the LHK-BY and XX-BY comparisons but not in LHK-XX, implying that these genes might be related to the living environments for BY (Fig. 6d). Discussion In this study, we presented the chromosome-level genome assembly of T. bleekeri with a contig N50 of 3.1 Mb and a scaffold N50 of 22.9 Mb.The N50 lengths of contigs of the T. bleekeri genome assembly were much longer than previously reported genome assemblies of T. tibetana [17].Twenty-five chromosomes were obtained with the mounting rate up to 96.2%, and the assembled chromosome number was consistent with the karyotype of T. bleekeri (unpublished data), which suggests that the present analysis resulted in successful assembly of the T. bleekeri genome to the chromosome level.The completeness of the genome was also evaluated, confirming the high quality of the assembled T. bleekeri genome.The combined results of the homology-based and de novo predictions showed that repetitive sequences accounted for 32.4% of the genome.Among them, DNA transposons represented the most abundant tandem repeats, which was also observed in T. tibetana [17].Within the genome, 21,198 protein-coding genes were predicted, of which 97.3% could be functionally annotated.Overall, this genome assembly and annotation provides valuable data to the genomic resources currently available for the study of phylogeny and environmental adaptations of Triplophysa species. The phylogenetic analysis results indicated that the Triplophysa genus formed a clade with D. rerio and that T. bleekeri was most closely related to T. tibetana and T. scleroptera.The divergence time estimation indicated that T. 
siluroides diverged from their common ancestor roughly 38.8 Mya, occupying a basal position in the Triplophysa genus.The extensive QTP was elevated by >4,000 m ∼40 Mya [80], and this time is consistent with the divergence of T. siluroides.Therefore, we speculated that the speciation of Triplophysa was likely triggered by the uplifting of the QTP [81].Uplift of the QTP induced profound climatic and environmental changes to the plateau and its peripheral regions, including low oxygen and low temperature [82].The oxygen content of air is inadequate in the QTP, while investigations into water quality indicated that a high dissolved oxygen concentration exists in the QTP water [83][84][85][86].This led us to speculate that thermal stress may present a major factor in natural selection for fish species in the QTP and its peripheral regions.Although Triplophysa species are widely distributed in different regions, these regions are all generally characterized by a cold environment [23,87].However, to our knowledge, only a few studies have been conducted to explore the genetic basis of adaptation of Triplophysa species to low temperatures.Through the comparative analysis of the genome with other fish species, we found that the expanded gene families of T. bleekeri were significantly (P < 0.05) enriched in fatty acid metabolism, including glycosphingolipid biosynthesis and arachidonic acid metabolism pathways.The glycosphingolipid located in the bilayer lipid membrane is a major structural component of cell membranes [88], whereas arachidonic acid, an integral constituent of biological cell membranes, aids in the maintenance of cell membrane fluidity even at low temperatures [89].Our results suggest that the increased number of genes related to fatty acid metabolism might be responsible for maintaining membrane structure and improving membrane fluidity under cold environments. In the genome of T. bleekeri, significant expansion was also observed in the Hippo signaling pathway gene family, which participates in regulating innate immunity [90,91].These results suggest that T. bleekeri may tend to increase gene numbers in certain families related to immune response for improving the defense against pathogens.It is notable that genes involved in innate immunity, such as Toll-like receptor signaling pathway genes, all underwent positive selection in T. bleekeri, T. tibetana, and T. siluroides.Similar results were also observed in previous transcriptomic studies of Tibetan Schizothoracinae species, Gymnocypris przewalskii, and G. przewalskii ganzihonensis [92,93].These results indicated that the adaptive evolution of innate immunity might play crucial roles in the highland adaptation of fish. 
Low temperatures and UV radiation can cause DNA damage [94], and DNA damage response and repair pathways may show functional adaptation.Within the 3 Triplophysa species, the PSGs were enriched in the functional categories of nucleotide excision repair, non-homologous end-joining, homologous recombination, and Fanconi anemia pathways (Supplementary Table S4 and S5).These pathways all participate in DNA repair, of which non-homologous end-joining and homologous recombination are the 2 main pathways for repairing double-strand break [95], and the Fanconi anemia pathway is essential for the repair of DNA interstrand crosslinks [96].PSGs influencing DNA repair may contribute to DNA integrity and genomic stability under high-altitude environments with low temperatures and intense UV radiation.Our results suggest that Triplophysa species have evolved an integrated DNA-repair mechanism to adapt to high-altitude environments.The previous studies also showed that genes involved in DNA repair were under positive selection pressure in many species living at high altitudes, such as the snub-nosed monkey [97] and the Tibetan hot-spring snake [11].This indicated that DNA damage caused by the environment is a common stress that animals in high-altitude regions need to cope with.We also identified 197 PSGs shared by the 3 Triplophysa species (Fig. 3a), indicating that those naturally selected genes might have originated from their common ancestor and that Triplophysa species were genetically convergent on PSGs.We found many species-specific PSGs for the 3 Triplophysa species.The result implies the requirement of a distinct ecological niche for T. bleekeri, T. tibetana, and T. siluroides.On the basis of the generally used genomic comparison methods, hundreds of PSGs for Triplophysa species were identified in this investigation.However, a previous study has shown that ancient demographic fluctuation could generate severe overestimation of selective signatures [98].Therefore, PSG identification in this work might have been influenced by the demographic scenarios of Triplophysa species.It is worth estimating the demographic fluctuation to PSG identification, and examining the present methods for potential biases. In addition to comparative genomics analyses, the relationships among populations of T. bleekeri were analyzed to probe possible differences in genetic structures.Population structure analysis divided 28 T. 
bleekeri samples into 2 clusters, with individuals from the LHK and XX population grouped together, and individuals from BY population forming the other cluster.Both PCA and structure analyses corroborated these findings.The BY population was separated from the LHK and XX population, and the observed admixture of genetic lineages was limited (K = 3).These results could be because LHK and XX are directly connected by the river, and gene flow between individuals residing in the 2 places occurs more frequently.The difference between the BY population and the LHK and XX populations might be attributed to the relatively limited gene flow caused by natural and artificial barriers among those populations.The Daning River measures a height of up to 1,648 m, which flows through many narrower canyons [99].Therefore, the geographical barriers formed by canyons and shallows could contribute to the diminished interaction among those populations.More importantly, artificial barriers, such as cities and dams, could also weaken the migrations between the BY and LHK/XX populations.Therefore, the whole-genome resequencing data of T. bleekeri provided a valuable genetic resource to reveal that geographical and artificial barriers could distinctly influence genetic exchange among populations. The selective sweep analysis showed that genomic differentiation of LHK-XX was nonintensive compared to that of the BY population, which is consistent with the above population phylogenetic analysis.Notably, we identified 6 shared enriched biological pathways for LHK-BY and XX-BY comparisons but not in LHK-XX.The natural gorge might change the water flow and biodiversity of environments, and human activity could as well influence the nutrition supplies and circadian rhythm for local fish populations directly. Conclusions We present a chromosomal-scale genome assembly of T. bleekeri, a representative high-altitude fish.Evolutionary, comparative, and population genomic analyses were performed to investigate the evolution, environmental adaptation, and genetic diversity of T. bleekeri.Our results provide insights into how fish adapt to the high-altitude environment, and the genomic data serve as a valuable resource for further study on functional validation of candidate genes contributing to environmental adaptation. Figure 1 : Figure 1: Morphology and geographic distribution of T. bleekeri.(a) T. bleekeri used in this study.(b) Geographic distribution of the sampling locations for T. bleekeri.The red circles, green triangle, yellow trapezoid, and dotted ellipse represent the sampling sites, gorge, artificial dam, and Wuxi Town, respectively. Figure 2 : Figure 2: Phylogenetic relationships and divergence time estimation for T. bleekeri and other fish species.All nodes were completed and supported by 100 cycles of bootstrap resampling.Numbers near the nodes (shown in blue) indicate the estimated divergence times with a 95% confidence interval.Divergences used for the recalibration of time estimation are indicated with red dots. Figure 3 : Figure 3: Natural positively selected gene (PSG) identification and functional analysis for T. bleekeri, T. tibetana, and T. siluroides.(a) Venn diagram for PSGs for the 3 fish species.(b) Venn diagram for PSs identified from species-and lineage-based method.(c) Enrichment analysis on the biological pathways for candidate PSGs identified from the species-and lineage-based method.mRNA: messenger RNA. 
Figure 4: Historical effective population size profile deduced from the whole-genome sequencing data. One hundred bootstrap replicates were performed for the effective population size estimation.
Figure 5: Population genetics analysis for T. bleekeri. (a) Neighbor-joining phylogenetic tree of individuals based on whole-genome SNP loci. Samples from populations LHK, XX, and BY are labeled in red, green, and blue, respectively. (b) Principal component (PC) analysis plots of the first 2 components. The fraction of the variance obtained was 14.5% for PC1 and 6.4% for PC2. (c) Population structure plots of T. bleekeri. Samples from populations LHK, XX, and BY are represented in red, green, and blue, respectively. We assume 3 populations for the analysis (K = 3). The y-axis quantifies the proportion of each individual's genome derived from the inferred ancestral populations, and the x-axis shows the different populations.
Figure 6: Selective sweep analysis to identify candidate selected functional genes among populations. (a) Manhattan plot showing the genome-wide differentiation between the LHK and BY populations. (b) Venn plot of shared enriched biological pathways for candidate selected functional genes from the selective sweep analysis among population comparisons. (c) The shared enriched biological pathways from the LHK-BY and XX-BY comparisons. (d) The Fst profiles for genomic regions containing the treh, ctnnb1, and lef1 genes.
Table 1: A summary of sequencing data used in genome assembly and gene annotation.
Table 2: Length statistics for contig assembly of the T. bleekeri genome.
Concordant Gene Expression and Alternative Splicing Regulation under Abiotic Stresses in Arabidopsis The current investigation endeavors to identify differentially expressed alternatively spliced (DAS) genes that exhibit concordant expression with splicing factors (SFs) under diverse multifactorial abiotic stress combinations in Arabidopsis seedlings. SFs serve as the post-transcriptional mechanism governing the spatiotemporal dynamics of gene expression. The different stresses encompass variations in salt concentration, heat, intensive light, and their combinations. Clusters demonstrating consistent expression profiles were surveyed to pinpoint DAS/SF gene pairs exhibiting concordant expression. Through rigorous selection criteria, which incorporate alignment with documented gene functionalities and expression patterns observed in this study, four members of the serine/arginine-rich (SR) gene family were delineated as SFs concordantly expressed with six DAS genes. These regulated SF genes encompass cactin, SR1-like, SR30, and SC35-like. The identified concordantly expressed DAS genes encode diverse proteins such as the 26.5 kDa heat shock protein, chaperone protein DnaJ, potassium channel GORK, calcium-binding EF hand family protein, DEAD-box RNA helicase, and 1-aminocyclopropane-1-carboxylate synthase 6. Among the concordantly expressed DAS/SF gene pairs, SR30/DEAD-box RNA helicase, and SC35-like/1-aminocyclopropane-1-carboxylate synthase 6 emerge as promising candidates, necessitating further examinations to ascertain whether these SFs orchestrate splicing of the respective DAS genes. This study contributes to a deeper comprehension of the varied responses of the splicing machinery to abiotic stresses. Leveraging these DAS/SF associations shows promise for elucidating avenues for augmenting breeding programs aimed at fortifying cultivated plants against heat and intensive light stresses. Introduction RNA splicing is a pivotal post-transcriptional phenomenon that orchestrates the maturation of precursor messenger RNA (pre-mRNA) transcripts into mature messenger RNA (mRNA) by excising intervening sequences, termed introns [1].This process predominantly unfolds within pre-mRNA molecules through a series of reactions mediated by the spliceosome-a multiprotein complex composed of five small nuclear ribonucleoproteins (snRNPs) [2].Types of spliceosomes can be major or minor, differing in the structure of snRNPs, where they are composed of U1, U2, U4, U5, and U6 in the first type, while, respectively, composed of U11, U12, U4atac, and U6atac in the second [3].Essential for splicing are three intron recognition sites: the 5 ′ donor site, the branch site proximal to the 3 ′ terminus, and the 3 ′ acceptor site [4,5].These sites are delineated by consensus sequences, including G-G-[cut]-G-U-R-A-G-U. ..intron, intron. ..Y-U-R-A-C. ..intron (situated 20-50 nucleotides upstream of the acceptor site), and intron. ..Y rich -N-C-A-G-[cut]-G [6][7][8].In some rare events, certain pre-mRNA introns undergo self-splicing, obviating Genes 2024, 15, 675 2 of 24 the necessity for spliceosomal involvement and leading to the classification of these RNA molecules as ribozymes [9]. 
Alternative splicing (AS) dynamically responds to developmental cues based on spatiotemporal requirements, including tissue specificity and environmental stimuli [10].AS engenders multiple isoforms from a single multiexonic gene, thereby augmenting proteome diversity as an evolutionary mechanism.Major modes of AS encompass intron retention, exon skipping, an alternate 5 ′ donor site, and an alternate 3 ′ acceptor site [11].Functionally, AS modulates protein or protein domain sequences [12], facilitates the emergence of novel protein-protein interactions [13], influences mRNA turnover including RNA stability and decay [14], and impacts translational processes [15].Consequently, generated isoforms often exhibit distinct functions, occasionally displaying opposing functionalities [16][17][18]. The equilibrium between expression levels and functionalities of diverse gene isoforms is finely tuned [19,20], with variations across tissues, developmental stages, and environmental conditions [21,22].Empirically, discerning the function of novel isoforms proves challenging, given that many of them exhibit distinctions as subtle as a single amino acid alteration [17,18].Structural approaches, primarily leveraging protein structure prediction methods based on amino acid sequences and conserved protein domains, offer insights into isoform functionalities [17,23,24].Notably, the majority of isoforms of a given gene retain common active domains [17], thereby identifying isoforms harboring distinct or absent active domains as likely artifacts. The nascent methodologies within high-throughput mRNA sequencing and computational biology facilitate the detection and quantification of the prevalence of alternative splicing isoforms [25].These advancements enable the prediction of functions attributed to the newly generated differentially expressed alternatively spliced (DAS) gene isoforms, predicated upon their expression profiles within specific stress conditions.Integration of transcriptomic datasets stemming from disparate environmental conditions, facilitated by suitable bioinformatics tools, shows promise in discerning the authenticity of novel isoforms versus artifacts.Moreover, employing cluster analysis on RNA-Seq datasets may unveil regulatory elements influencing the architectures of both previously annotated and novel isoforms, contingent upon their synchronized expression across diverse environmental conditions [16][17][18]. In this study, we utilized clean RNA-Seq datasets retrieved from National Center for Biotechnology Information (NCBI), derived from stress experiments conducted on Arabidopsis thaliana seedlings subjected to various stress combinations, including salt, intensive light, and heat stresses [26].Our investigation aimed to elucidate DAS genes under diverse environmental stresses and identify splicing factors (SFs) exhibiting concordant expression with isoforms of these DAS genes, with the intention of subsequent experimental validation of their inter-relationship.We expect this investigation to enhance our understanding of the diverse reactions exhibited by the splicing apparatus under abiotic stresses.Harnessing the associations between DAS and SF offers potential pathways for elucidating strategies to enhance breeding programs targeted at fortifying cultivated plants against abiotic stresses. 
Materials and Methods RNA sequencing datasets were acquired from the publicly available repository of BioProject PRJNA622644 within the NCBI, detailing a recent multifactorial stress experiment [26].Accession numbers corresponding to distinct samples and replicates under varying abiotic stress conditions are itemized in Table 1. Table 1.Information available for RNA-Seq datasets of 10-day-old A. thaliana (wild-type Col-0) seedling samples exposed to different multifactorial stress combinations.C = control (0 mM NaCl, 21 • C, 50 µmol m −2 s −1 ), S = salt stress (50 mM NaCl, 21 • C, 50 µmol m −2 s −1 ), H = heat stress (0 mM NaCl, 33 • C, 50 µmol m −2 s −1 ), L = intensive light stress (0 mM NaCl, 21 • C, 700 µmol m −2 s −1 ).Further information is available in Table S1, and further growth and abiotic stress conditions were recently reported [26].High-throughput RNA sequencing reads were retrieved and subsequently aligned to the A. thaliana reference genome (TAIR10) utilizing HISAt2 software (version 2.2.1), following established protocols [27].This alignment process facilitates the precise mapping of reads to known genomic loci.Subsequent analysis of the mapped reads was conducted using StringTie software (version 1.3.3b),enabling the identification of transcripts potentially absent in the existing gene annotation repository of the NCBI.These newly identified transcripts, denoted as "new isoforms", were compared with previously annotated isoforms within the same loci using GffCompare software (v0.11.2) to discern novel alternative splicing events.Then, transcript sequences corresponding to both previously annotated and newly discovered isoforms were extracted from the A. thaliana TAIR10 genome using a Perl script, agat_sp_extract_sequences.pl.The merged sequences were compiled into a unified FASTA file, and transcript abundance for both isoform types was quantified utilizing RSEM software (v1.1.17)based on the RNA sequencing reads shown in Table 1.Differential expression analysis was executed employing EdgeR (R version 2.1.5)employing stringent criteria, including a fold change of ≥4 (log2(fpkm +1 )) and a false discovery rate (FDR) of ≤10 −3 [28], to identify transcripts exhibiting significant alterations in expression across diverse stress conditions.The fold change of ≥4 is typically not log2-transformed.Subsequently, differential gene expression profiles underwent Blastx analysis, with the establishment of significant Pearson correlations corroborated through permutation analysis.Following differential expression profiling, a focused inquiry targeted differentially expressed alternatively spliced (DAS) genes exhibiting consistent expression patterns and harboring multiple regulated isoforms, aiming to discern potential concordant expression with splicing factors (SFs).Clusters featuring DAS/SF gene pairs meeting the aforementioned criteria were subjected to rigorous quality control assessments, with candidate DAS/SF pairs surveyed for conceivable relationships. 
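As an illustration of the differential-expression cut-offs above (fold change ≥ 4 and FDR ≤ 10−3), the following Python sketch filters an edgeR result table. It is only a sketch under assumptions: the file name and column labels (logFC, FDR, transcript_id) are placeholders, and the ≥4-fold threshold is interpreted here as |log2 fold change| ≥ 2.

```python
# Sketch: apply the differential-expression cut-offs described above to an edgeR result table.
# Assumes a tab-separated file with a header containing "logFC" and "FDR" columns.
import csv
import math

FOLD_CHANGE = 4.0              # minimum fold change
LOG2_FC = math.log2(FOLD_CHANGE)
MAX_FDR = 1e-3                 # false discovery rate cut-off

regulated = []
with open("edger_results.tsv") as fh:      # file and column names are assumptions
    for row in csv.DictReader(fh, delimiter="\t"):
        log_fc = float(row["logFC"])
        fdr = float(row["FDR"])
        if abs(log_fc) >= LOG2_FC and fdr <= MAX_FDR:
            direction = "up" if log_fc > 0 else "down"
            regulated.append((row.get("transcript_id", ""), direction, log_fc, fdr))

print(f"{len(regulated)} differentially expressed transcripts retained")
```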
Hierarchical Clustering Analysis
In this investigation, 1D and 2D hierarchical clustering heatmaps were constructed to delineate the transcriptomic responses of Arabidopsis subjected to diverse stress conditions (Figure 1 and Figure S1, respectively). The 1D heatmap served to discern prevailing expression patterns within these transcriptomic datasets, while the 2D heatmap facilitated the identification of closely associated stress combinations at the level of gene regulation. Figure 1 illustrates the prevalent occurrence of upregulated transcript expression patterns in response to individual and combined heat and light stresses (H/L/HL↑), control conditions and salt stress (C/S↑), combined heat and light stresses (HL↑), and intensive light stress (L↑). C/S↑ and C/S↓ denote transcripts that are, respectively, downregulated and upregulated under all individual and combined stresses other than salt stress. The heatmap in Figure S1 reveals overlapping clusters of differential gene expression in response to the various stresses and stress combinations. The limited change in response to salt stress is likely due to the low concentration of added salt (50 mM NaCl). Expounding upon the expression profiling, an abundance of gene clusters exhibiting regulatory dynamics under conditions of heightened luminosity was discerned. To further investigate these observations, we searched the transcripts for light-responsive genes and found upregulation of four isoforms of the gene encoding phytochrome A under intensive light stress and its combinations (Figure 2). The upregulation of this gene underscores the plant's adaptive resilience in coping with intensive light stress.
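The 1D and 2D clustering described above can be reproduced in outline with standard tooling. The sketch below is illustrative only and uses pandas/seaborn, which were not necessarily the tools used for the published heatmaps; the input file name and the log2(FPKM + 1) matrix layout are assumptions.

```python
# Sketch: 1D and 2D hierarchical clustering of a log2(FPKM + 1) expression matrix.
# Assumes `expression_matrix.tsv` has transcripts as rows and samples as columns.
import pandas as pd
import seaborn as sns

expr = pd.read_csv("expression_matrix.tsv", sep="\t", index_col=0)

# 1D clustering: group transcripts only, keep the sample order fixed.
one_d = sns.clustermap(expr, row_cluster=True, col_cluster=False,
                       method="average", metric="euclidean", z_score=0)
one_d.fig.savefig("heatmap_1d.png")

# 2D clustering: also cluster samples to reveal related stress combinations.
two_d = sns.clustermap(expr, row_cluster=True, col_cluster=True,
                       method="average", metric="euclidean", z_score=0)
two_d.fig.savefig("heatmap_2d.png")
```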
Detection of Concordantly Expressed DAS/SF Genes
The analysis of the RNA-Seq datasets revealed the existence of 1974 clusters (Table S1), among which 250 exhibited consistent or discernible expression patterns (Figure S2 and Table S1). Consistent expression, denoting similar expression levels across sample replicates under the same stress condition within a given cluster, was a criterion for the selection of these clusters. These selected clusters encompassed the eight most prominent expression patterns observed in our stress experiment (Figure 3). Subsequently, these clusters were surveyed for the presence of splicing factors (SFs) exhibiting concordant expression with differentially expressed genes, resulting in the identification of six clusters meeting this criterion, i.e., 101, 102, 223, 279, 569, and 929. Within these latter assemblages, a comprehensive examination was conducted to pinpoint differentially expressed genes exhibiting regulatory variation across isoforms within the same or disparate clusters. This discernment effectively shortened the transcriptomic repertoire within these six delineated clusters to 71 distinct differentially expressed alternatively spliced (DAS) gene isoforms (Figure 4 and Table S2). These gene isoforms collectively correspond to a total of 35 DAS genes. Through the utilization of the StringTie and GffCompare computational tools, loci harboring either previously annotated or novel isoforms of these genes were strictly delineated, with their respective expression profiles illustrated in Figure S3 and detailed in Table S3. Intriguingly, amidst the cohort of selected SFs, a subset of three exhibited discernible alternative splicing patterns, in which they harbored multiple isoforms of their respective encoding genes, as depicted in Figure S4 and detailed in Table S4.
(Figure caption fragment) H = heat stress (0 mM NaCl, 33 °C, 50 µmol m−2 s−1), L = intensive light stress (0 mM NaCl, 21 °C, 700 µmol m−2 s−1). Further information is available in Table S1. ↑ = upregulation, ↓ = downregulation. Colored lines refer to regulated transcripts.
(Figure caption fragment) Detailed information on gene-SF concordant expression within the selected clusters is available in Table S2, while detailed information on the DAS or SF gene isoforms is shown in Tables S3 and S4, respectively. Single or double asterisks refer to isoforms of the same genes.
Analysis of DAS and SF Gene Isoforms
The array of isoforms across all loci within Arabidopsis, as cataloged by the aforementioned computational tools, is exhaustively detailed in Table S5, whereas those corresponding to the concordantly expressed DAS and SF gene variants are elaborated upon in Table S6. The surveyed results shown in Figures S3 and S4 delineate instances where certain isoforms of DAS and SF genes exhibit a lack of distinct regulatory modulation amidst the multifarious stress milieu. Nevertheless, we deemed it imperative to include these inconsistently expressed gene variants within our analysis to discern the prevailing splicing modalities governing their regulated isoforms under stress conditions. Subsequently, a comprehensive examination was undertaken to survey the splicing architectures governing both previously annotated and novel isoforms across the spectrum of 38 disparate DAS and SF genes, thereby providing insights into the prevailing splicing preferences of DAS genes amidst stress conditions (Figures S5-S42). Across these divergent isoforms, encompassing both pre-existing and novel variants, we discerned four types of alternative splicing, namely intron retention, exon skipping, an alternate 5′ donor site, and an alternate 3′ acceptor site, each manifested at varying frequencies.
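A simple way to assign the four event types above is to compare the exon coordinates of two isoforms from the same locus. The sketch below applies naive rules and is not the classification logic of StringTie or GffCompare; real classifiers are considerably more elaborate, and whether a shifted boundary is a donor or an acceptor depends on the strand.

```python
# Sketch: naive classification of an alternative-splicing event by comparing exon
# coordinates of two isoforms from the same locus (exons as (start, end) tuples on
# the same strand).
def classify_event(reference_exons, alternative_exons):
    ref = sorted(reference_exons)
    alt = sorted(alternative_exons)
    alt_set = set(alt)

    # Exon skipping: an internal reference exon is absent from the alternative isoform.
    for exon in ref[1:-1]:
        if exon not in alt_set:
            return "exon skipping (candidate)"

    # Intron retention: one alternative exon spans two consecutive reference exons.
    for i in range(len(ref) - 1):
        merged = (ref[i][0], ref[i + 1][1])
        if merged in alt_set:
            return "intron retention (candidate)"

    # Alternate donor/acceptor: one exon boundary is shared, the other is shifted.
    for (rs, re) in ref:
        for (as_, ae) in alt:
            if re == ae and rs != as_:
                return "alternate boundary (exon start shifted)"
            if rs == as_ and re != ae:
                return "alternate boundary (exon end shifted)"
    return "unclassified"

# Example: the second exon is skipped in the alternative isoform.
print(classify_event([(1, 100), (200, 300), (400, 500)], [(1, 100), (400, 500)]))
```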
Validation of New Isoforms and Documented Functionalities of DAS and SF Genes Among the 35 DAS genes under study, a subset of seven genes underwent further analysis, predicated upon the alignment between their expression profiles in this experimental milieu and their documented functionalities (as depicted in Table S2).The ensuing exploration of isoforms pertaining to these selected genes unveiled a prevalence of six exon skipping events, alongside instance occurrences of five, two, and one of an alternate 3 ′ acceptor site, an alternate 5 ′ donor site, and intron retention, respectively (refer to Figures S5, S9, S13, S24, S29, S30 and S38). Nevertheless, we approached the outcomes pertaining to the novel isoforms with caution, recognizing a tendency for several among them, across a multitude of genes, to potentially harbor artifacts.This speculation is rooted in the intrinsic limitations of the analytical software, which fails to discern both frameshift mutations and premature termination codons within transcript open reading frames (ORFs).This inference was drawn following an exhaustive survey of isoforms derived from a singular gene, namely the ABC transporter B family member 11 (AT2G43500), localized within locus XLOC_008527 (Figure S20).Extensive examination of the aligned amino acid sequences encompassing the five distinct isoforms of this gene revealed instances wherein certain splicing events within the novel isoforms were deemed artifactual (refer to Figure 5 and Figures S43-S48). Genes 2024, 15, x FOR PEER REVIEW 8 of 24 isoforms, encompassing both pre-existing and novel variants, we discerned four types of alternative splicing, namely intron retention, exon skipping, an alternate 5′ donor site, and an alternate 3′ acceptor site, each manifested at varying frequencies. Validation of New Isoforms and Documented Functionalities of DAS and SF Genes Among the 35 DAS genes under study, a subset of seven genes underwent further analysis, predicated upon the alignment between their expression profiles in this experimental milieu and their documented functionalities (as depicted in Table S2).The ensuing exploration of isoforms pertaining to these selected genes unveiled a prevalence of six exon skipping events, alongside instance occurrences of five, two, and one of an alternate 3′ acceptor site, an alternate 5′ donor site, and intron retention, respectively (refer to Figures S5, S9, S13, S24, S29, S30 and S38). Nevertheless, we approached the outcomes pertaining to the novel isoforms with caution, recognizing a tendency for several among them, across a multitude of genes, to potentially harbor artifacts.This speculation is rooted in the intrinsic limitations of the analytical software, which fails to discern both frameshift mutations and premature termination codons within transcript open reading frames (ORFs).This inference was drawn following an exhaustive survey of isoforms derived from a singular gene, namely the ABC transporter B family member 11 (AT2G43500), localized within locus XLOC_008527 (Figure S20).Extensive examination of the aligned amino acid sequences encompassing the five distinct isoforms of this gene revealed instances wherein certain splicing events within the novel isoforms were deemed artifactual (refer to Figures 5 and S43-S48). Figure 5.A schematic overview delineating the alternative splicing outcomes discerned within the annotated and novel isoforms originating from the A. 
thaliana locus XLOC_008527, manifested across varying multifaceted stress contexts.Herein, the gene isoform AT2G43500.11,encoding the ABC transporter B family member 11, serves as the foundational sequence for comparative analysis.Within this context, three distinctive splicing events transpired, encompassing the skipping of exons 2 and 3, alongside intron retention occurring within exon 5. Scrutiny at the amino acid sequence level of disparate exons suggests a probable exon skipping event within exon 3, while the remaining events (highlighted in yellow boxes) appear indicative of artifacts.Noteworthy are the active conserved motifs, notably RWP-RK and PB1-NLP, identified within the resultant protein, as informed by recent scholarly contributions.The RWP-RK motif (pfam02042), localized at the C-terminus of this transporter protein, plays a pivotal role in nitrogen-mediated developmental processes, as corroborated by extant literature.Similarly, the PB1 motif (cd06407) is characteristic of NIN-like proteins (NLP), pivotal regulators involved in mediating symbiotic relationships between legumes and nitrogen-fixing bacteria, alongside other critical biological processes.Detailed elucidation pertaining to the functional attributes and expression profiles of distinct isoforms of this gene can be found in Tables S2 and S3, while insights regarding isoform structure are expounded upon in Figures S20 and S43.A schematic overview delineating the alternative splicing outcomes discerned within the annotated and novel isoforms originating from the A. thaliana locus XLOC_008527, manifested across varying multifaceted stress contexts.Herein, the gene isoform AT2G43500.11,encoding the ABC transporter B family member 11, serves as the foundational sequence for comparative analysis.Within this context, three distinctive splicing events transpired, encompassing the skipping of exons 2 and 3, alongside intron retention occurring within exon 5. Scrutiny at the amino acid sequence level of disparate exons suggests a probable exon skipping event within exon 3, while the remaining events (highlighted in yellow boxes) appear indicative of artifacts.Noteworthy are the active conserved motifs, notably RWP-RK and PB1-NLP, identified within the resultant protein, as informed by recent scholarly contributions.The RWP-RK motif (pfam02042), localized at the C-terminus of this transporter protein, plays a pivotal role in nitrogen-mediated developmental processes, as corroborated by extant literature.Similarly, the PB1 motif (cd06407) is characteristic of NIN-like proteins (NLP), pivotal regulators involved in mediating symbiotic relationships between legumes and nitrogen-fixing bacteria, alongside other critical biological processes.Detailed elucidation pertaining to the functional attributes and expression profiles of distinct isoforms of this gene can be found in Tables S2 and S3, while insights regarding isoform structure are expounded upon in Figures S20 and S43. 
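A minimal check of the kind the analysis software lacks, flagging isoforms whose splicing would truncate the open reading frame, can be sketched as follows. The 70% length threshold and the toy sequences are assumptions introduced purely for illustration.

```python
# Sketch: flag a candidate isoform whose ORF ends prematurely relative to the
# annotated protein length (a simple proxy for frameshifts/premature stop codons).
STOP_CODONS = {"TAA", "TAG", "TGA"}

def orf_length(cds_sequence):
    """Return the number of codons translated before the first stop codon."""
    seq = cds_sequence.upper()
    for i in range(0, len(seq) - 2, 3):
        if seq[i:i + 3] in STOP_CODONS:
            return i // 3
    return len(seq) // 3

def looks_truncated(candidate_cds, reference_protein_length, min_fraction=0.7):
    """Flag isoforms translating to less than min_fraction of the reference protein length."""
    return orf_length(candidate_cds) < min_fraction * reference_protein_length

# Example with toy sequences: the candidate hits a stop codon after 2 codons.
print(looks_truncated("ATGGCTTAAGGG", reference_protein_length=100))  # True
```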
Furthermore, our investigation identified the upregulation of the 1-aminocyclopropane-1-carboxylate synthase 6 (ACS6) gene under heat/light stress; this gene plays a pivotal role in the ethylene biosynthesis pathway, thereby facilitating adaptive responses and environmental stress tolerance (Figure 6). Additionally, we uncovered two other genes (AT5G40910 and AT4G01850) involved in the ethylene biosynthesis pathway exhibiting expression patterns supportive of upregulation under various stress conditions (Figures S49 and S50). While AT5G40910 displayed alternative splicing with two regulated isoforms, AT4G01850 remained unaltered and non-alternatively spliced despite its confirmed involvement in ethylene biosynthesis. Interestingly, the AS event observed in AT5G40910 did not affect the protein structure/size, as it occurred at the 5′ untranslated region (UTR) of the gene (Figure S50). Such instances, where AS events occur at UTRs without altering protein structure, were recurrently observed across isoforms of various genes (Figures S6, S12, S14, S19-S21, S25, S27, S31, S36, S37, and S40).
Discussion
The process of precursor-mRNA (pre-mRNA) splicing in plants is intricately linked to the epigenetic chromatin landscape, which influences splice site selection and subsequent post-transcriptional alternative splicing events [29][30][31]. In the realm of Arabidopsis genetics, the prevalence of alternative splicing (AS) phenomena has been previously documented, with estimates suggesting its occurrence in approximately 42% of genes housing intronic sequences, a subset constituting 11.6% of the entire genome [32,33]. However, recent investigations have unveiled a significant augmentation in AS prevalence, surpassing the 60% threshold within intron-containing genes [34]. Based on the results of the present study, we can claim that the AS apparatus can extend its reach beyond coding sequences, occasionally targeting non-coding regions across diverse isoforms while maintaining similarity in their encoded sequences (Figures S6, S12, S14, S19-S21, S25, S27, S31, S36, S37, and S40). Hence, it is imperative to authenticate a novel isoform before drawing definitive conclusions regarding the splicing patterns of a particular DAS gene. Exemplifying this phenomenon within our current study, we observe instances such as the isoforms of the SBT23 gene (AT1G63010) localized within the locus XLOC_005837 (Figure S14).
Cluster Selection and Concordant Expression of DAS/SF Gene Pairs

Cluster analysis of RNA-Seq datasets yielded 250 clusters with consistent expression patterns out of a total of 1974 (Figure S2 and Table S1). Predominant expression patterns within this selection encompassed upregulation under heat, intensive light, and combined heat/intensive light stress (H/L/HL↑), upregulation under combined heat/intensive light stress (HL↑), and no regulation under control and salt stress (C/S↓) (Figure 3). The subsequent tier of cluster curation encompasses those manifesting one among the eight expression patterns elucidated in Figure 3. Globally, it is evident that intensive light stress imposes the most pronounced perturbation in Arabidopsis seedlings, followed by thermal stress. Notably, salinity stress at the prescribed concentration (50 mM NaCl) appears to exert minimal impact, akin to the baseline non-stress condition (Figure S2).

Light stress, particularly the perception of light by red/far-red-absorbing phytochrome photoreceptors, exerts profound effects on plant growth and development [35]. In Arabidopsis, the phytochrome family comprising phyA-phyE plays crucial roles, with phyA prominently involved in seedling de-etiolation and sensing continuous far-red light (cFR) [36-39]. Light modulates the transcription kinetics of numerous genes and influences AS incidence by favoring specific gene isoforms conducive to optimal stress responses [40,41]. Consistent with this, our RNA-Seq data revealed the induction of four gene isoforms encoding phytochrome A under intensive light and related conditions (Figure 2).
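The cluster selection outlined above can be approximated from a per-isoform table of log2 fold changes against the control. The short sketch below is only an illustration of this step, not the exact pipeline used here: the input file name, the column labels, the choice of 250 clusters, and the 1.0 log2 fold-change cut-off are all assumptions made for the example.

```python
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical input: rows = isoforms, columns = log2 fold change vs. control
lfc = pd.read_csv("isoform_log2fc.csv", index_col=0)
conditions = ["S", "H", "L", "HL"]          # assumed contrast columns

# Hierarchical clustering of expression profiles (Ward linkage on Euclidean distance)
tree = linkage(lfc[conditions].values, method="ward")
lfc["cluster"] = fcluster(tree, t=250, criterion="maxclust")   # cut the tree into 250 clusters

def pattern(row, up=1.0):
    """Crude per-isoform label, e.g. 'H/L/HL-up' when those contrasts exceed the cut-off."""
    hits = [c for c in conditions if row[c] >= up]
    return "/".join(hits) + "-up" if hits else "unregulated"

lfc["pattern"] = lfc[conditions].apply(pattern, axis=1)

# Keep only clusters whose members agree on a single predominant pattern
consistent = lfc.groupby("cluster")["pattern"].agg(lambda s: s.nunique() == 1)
print(f"{int(consistent.sum())} of {consistent.size} clusters show one consistent pattern")
```

A real implementation would also score downregulation and mixed patterns, which the toy label function above ignores.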
Further refinement of clusters focused on identifying concordant expression of differentially alternatively spliced (DAS) genes and splicing factor (SF) genes. Six clusters, namely 101, 102, 223, 279, 569, and 929, encompassing 71 DAS genes, exhibited such concordant expression (Figure 4, Figure S2 and Table S2). Notably, these clusters predominantly showcased positive transcript responses to intensive light stress, except for cluster 102, which exhibited a negative response to combined intensive light and heat stresses. Subsequent analysis highlighted DAS genes with isoforms distributed across various clusters under distinct stress conditions, totaling 35 genes for further scrutiny (Figure 4 and Table S2). Within the spectrum of the six discerned clusters, the DAS/SF gene isoforms within Cluster 101 exhibited an SL/L↑ expression pattern, while those in Clusters 102, 223, and 929, respectively, showcased HL↓, C/S↓, and H/L/HL↑ profiles, with Clusters 279 and 569 evincing HL↑ dynamics (Figure 4). Predominantly, DAS isoforms across other clusters displayed analogous expression patterns, except for a few isoforms (locus XLOC_008527 as an example), which exhibited variable expression profiles. However, noteworthy are the loci housing some unregulated isoforms under diverse stress conditions (locus XLOC_003623 as an example) (Figure S3 and Table S3). Additionally, three of the six Arabidopsis SFs were observed to possess isoforms (as depicted in Figure S4). Regarding the loci housing these SF isoforms, our findings revealed differential expression patterns within XLOC_001540 and XLOC_012492, whereas the singular isoform within locus XLOC_007097 exhibited no discernible consistent expression patterns under stress conditions (Figure S4 and Table S4).

Fidelity of New DAS Gene Isoforms under Stress

Previously annotated isoforms typically depict only the coding sequences, further complicating the detection of new isoforms (Figures S5-S42). Moreover, a single locus often houses multiple previously annotated genes, necessitating careful analysis to differentiate genuine new isoforms from artifacts (Figure S6 as an illustrative example). An extra stringent layer of DAS selection mandates that new isoforms of a given gene must co-occur with previously annotated isoforms within the locus to bolster their authenticity. Additionally, in cases where a locus encompasses multiple genes, individual gene isoforms within the locus necessitate separate scrutiny as an extra layer of DAS selection. These stringent selection criteria led to the exclusion of isoform investigations for loci XLOC_001851 (Figure S7) and XLOC_019807 (Figure S34).
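As a rough illustration of this locus-level screen, the sketch below uses a small hypothetical isoform table (the locus and isoform identifiers are taken from the examples discussed in this section) and keeps a novel StringTie isoform only when at least one previously annotated isoform shares its locus, while flagging multi-gene loci for separate, per-gene inspection.

```python
import pandas as pd

# Hypothetical isoform table: locus, isoform ID, and (where known) the reference gene
iso = pd.DataFrame({
    "locus":    ["XLOC_008527"] * 5 + ["XLOC_001851"] * 2,
    "isoform":  ["AT2G43500.9", "AT2G43500.10", "AT2G43500.11",
                 "STRG.10463.9", "STRG.10463.14", "STRG.3781.1", "STRG.3781.2"],
    "ref_gene": ["AT2G43500"] * 5 + [None, None],
})
iso["is_novel"] = iso["isoform"].str.startswith("STRG.")

kept, excluded = [], []
for locus, grp in iso.groupby("locus"):
    has_annotated = (~grp["is_novel"]).any()               # novel isoforms need an annotated neighbour
    multi_gene = grp["ref_gene"].dropna().nunique() > 1     # multi-gene loci get per-gene scrutiny
    (kept if has_annotated else excluded).append((locus, "multi-gene" if multi_gene else "single-gene"))

print("kept for per-gene analysis:", kept)
print("excluded (no annotated isoform at the locus):", excluded)
```

With this toy input, XLOC_001851 is excluded because its only isoforms are unannotated, which mirrors the exclusion described above.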
Prior studies have underscored intron retention (IR) as the predominant form of AS in Arabidopsis during development and under stress conditions [32,34,42,43]. Across the 35 DAS and three SF genes examined in our study, exon skipping and alternate 3′ splice site usage were the favored splicing events under multifactorial stress conditions (Figures S5-S42). Notably, our investigation of isoforms of the gene encoding ABC transporter B family member 11 (AT2G43500) at locus XLOC_008527 revealed a prevalence of alternative acceptor (AA) and alternative donor (AD) splice sites under various multifactorial stress conditions (Figure S20). IR events often result in isoforms containing premature termination codons and truncated proteins, while AA and AD splice sites predominantly lead to downstream frameshifts and proteins with altered functions [44,45]. However, our examination of the speculated intronic presence within exon 5 of the novel gene isoform (i.e., STRG.10463.14) situated within locus XLOC_008527, encoding the ABC transporter B family member 11, yielded unexpected outcomes. Contrary to expectations, the retention of this purported intron in the aforementioned isoform did not engender either premature stop codons or downstream frameshifts across the four alternative isoforms (Figures S20 and S47). Upon careful examination of the speculated intron's splicing within isoform STRG.10463.14, a consequential frameshift manifested immediately downstream of the splicing site, with stop codons emerging a mere 20 amino acids post-splice site (Figure S47). Consequently, the hypothesis regarding intronic presence within this exon is invalidated, thereby classifying this gene isoform as an artifact. To substantiate this assertion, we surveyed the active, conserved domains of the resultant protein (Figure S48); the structure and function of these motifs were previously described [46-49], and active conserved domains in the generated protein were detected based on recent information [48]. Notably, isoforms AT2G43500.11 and STRG.10463.9 displayed an expression pattern of upregulation under combined heat and intensive light stress (HL↑), whereas isoform STRG.10463.14 exhibited a distinctive expression profile across all stress combinations (all stress combinations↑).
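The reading-frame logic behind this conclusion can be illustrated with a short Biopython sketch. The coding sequence and intron coordinates below are toy placeholders rather than the real AT2G43500/STRG.10463.14 sequence, but the procedure is the same: translate the isoform with and without the putative retained intron, then test for premature stop codons and for a downstream frameshift.

```python
from Bio.Seq import Seq

# Toy coding sequence (in frame, no internal stop codon) and hypothetical 0-based
# coordinates of a putative retained intron whose length (10 nt) is not a multiple of 3
cds_with_intron = Seq("ATGGCAGGTAAACCCGGGTTTAAACCCGGGTTTTGA")
intron = (12, 22)

def has_premature_stop(seq):
    """Translate complete codons only and test for a stop codon before the 3' end."""
    trimmed = seq[: len(seq) // 3 * 3]
    protein = str(trimmed.translate())
    return "*" in protein.rstrip("*")

# Remove the putative intron to obtain the alternatively spliced form
spliced = cds_with_intron[: intron[0]] + cds_with_intron[intron[1]:]

print("intron retained -> premature stop:", has_premature_stop(cds_with_intron))
print("intron spliced  -> premature stop:", has_premature_stop(spliced))
print("intron spliced  -> frameshift:", len(spliced) % 3 != 0)
```

In this toy case, retaining the segment preserves the reading frame while excising it shifts the frame, mirroring the behaviour described for STRG.10463.14.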
Prior studies have suggested that isoforms of a given gene predominantly share identical active domains [17]. Upon surveying the extant active domains within the ABC transporter B family protein, we observed the presence of two such domains localized within exons 7 and 8 across the remaining four isoforms, characterized by the conserved motifs RWP-RK and PB1-NLP, respectively (Figure 5). The structural and functional attributes of these two motifs have been previously elucidated [46-49]. The apparent absence of these conserved motifs within isoform STRG.10463.14 (Figure S47) underscores its incapacity to fulfill the anticipated functional role of the gene. Conversely, our investigation into the speculated exon 3 skipping within the three previously annotated isoforms (AT2G43500.9, AT2G43500.10, and AT2G43500.11) revealed no occurrence of premature stop codons or frameshift mutations. This observation holds true for both the aforementioned isoforms lacking exon 3 and the novel isoform STRG.10463.9, which retains it. Hence, it is deduced that exon 3 indeed encodes 15 in-frame amino acids (as depicted in Figure S46), substantiating the authenticity of the latter novel isoform. Intriguingly, the disparate functionalities exhibited by the five distinct isoforms within locus XLOC_008527 align with findings from prior investigations [16-18]. This overarching observation has relevance across the myriad DAS genes analyzed within the present study, as elucidated in Tables S3 and S4. Therefore, careful attention is warranted when surveying novel isoforms of any given gene.

Functional Analysis of Concordantly Expressed DAS/SF Genes under Stress

A crucial aspect of this study is ensuring the conformity between the expression patterns and the documented functions of concordantly expressed DAS/SF gene pairs. Consequently, selection was carried out for the concordantly expressed DAS/SF gene pairs across the six clusters previously established as experimentally stress-related. DAS genes lacking prior information on their response to any stress combinations in this study were excluded from further analysis (Table S2). Notably, the DAS/SF pair in cluster 929 was not analyzed due to the absence of available information on the putative splicing factor (Table S2). The findings of this study revealed that SFs concordantly expressed with DAS genes predominantly belong to the highly conserved, multi-domain, non-snRNP spliceosome-related large family of RNA-binding proteins known as serine/arginine-rich (SR) splicing factors [50] (Table S2). Members of the SR protein family typically feature two RNA binding domains (RBDs), an arginine/serine-rich (RS) domain, and multiple RS dipeptide repeats at the C terminus [51].

In this study, six SR splicing factors across five clusters (i.e., 101, 102, 223, 279, and 569) were implicated in the alternative splicing of 34 out of the 35 DAS genes under different intensive light stress combinations (Figure 4). The SF in the sixth cluster (i.e., 929) is putative, thus its respective DAS gene was not investigated further. Among the other identified SR proteins, CACTIN, SR1-like, and SR30 were observed to undergo alternative splicing of their own pre-mRNAs under stress conditions, while SC35-like exhibited no such tendency, as evidenced by the presence of only one isoform encoding this protein in our RNA-Seq datasets (Table S1). Furthermore, the two SR45a genes residing in loci XLOC_002473 and XLOC_007097 were not found to generate isoforms of their own pre-mRNA (Table S1).
In Cluster 101, the concordant expression of SR-like cactin and a DAS gene encoding a 26.5 kDa heat shock protein was observed, exhibiting upregulation under intensive light stress and its combination with salt stress. Although the function of cactin remains elusive, it is speculated to play a role in alternative splicing due to the presence of a serine/arginine-rich (SR) domain at the N terminus [52,53]. The 26.5 kDa small heat shock protein (sHSP), acting as a molecular chaperone, aids in protecting proteins from stress-induced damage [54]. Despite its documented involvement in abiotic stress responses, its specific association with light stress remains unexplored (https://www.uniprot.org/uniprotkb/Q9SSQ8/entry?version=*, accessed on 1 March 2024). The observed splicing types for this DAS/SF pair involve an alternate 3′ acceptor site for the isoforms of the sHSP gene (AT1G52560, locus XLOC_005378) and an alternate 5′ donor site for those of the cactin gene (AT1G32870, locus XLOC_001540) (Figures S13 and S6, respectively). Given the limited information on the cactin gene, its potential response to intensive light stress and its involvement in alternative splicing of the DAS gene remain speculative.

Cluster 102 denotes the concordant expression of two genetic loci, namely SR45a and a DAS gene isoform encoding heat shock 70 kDa protein 16, as delineated in Figure 4 and Table S2. The expression profile of this cluster showcases a diminution under the combined influence of intensive light and heat stresses or an increase under salt stress conditions. The SR45a protein, an integral constituent of the spliceosome machinery, was previously implicated in modulating responses to salt stress. Furthermore, it serves as a pivotal mediator in salt stress signal transduction pathways, functioning as a splicing factor for genes associated with salt stress in Arabidopsis. Its role encompasses facilitating the bridging between the 5′ and 3′ splice sites during spliceosome assembly [55-59].

Recent literature highlights the induction of the two co-expressed genes of SR45a under the influence of salt stress [58,59]. Notably, gene ontology (GO) annotation results suggest its responsiveness to light signaling (https://www.uniprot.org/uniprotkb/Q84TH4/entry#Q84TH4-2, accessed on 1 March 2024). The DAS gene encoding the heat shock 70 kDa protein 16, a cytosolic chaperone, aids in protein folding, degradation, and translocation, conferring tolerance against heat and osmotic stresses [60,61]. GO annotation further indicates its positive response to light signaling (https://www.uniprot.org/uniprotkb/Q9SAB1/entry, accessed on 1 March 2024). However, the documented functions of this DAS/SF pair do not align with those observed under the multifactorial stress combinations.
Cluster 223 features the concordant expression of an isoform of SR-like 1 and isoforms of two DAS genes encoding the chaperone protein DnaJ (or HSP40) and the shaker-type potassium channel GORK. This cluster exhibits downregulation under salt stress and upregulation under all other stress combinations. The SRL1 gene appears to exhibit no alternative splicing under the stress combinations in this study (Table S1). It was reported to participate in heat stress tolerance [62], with GO annotation suggesting its response to light stimulus (https://www.uniprot.org/uniprotkb/Q94L34/entry, accessed on 1 March 2024). The DAS gene encoding the chaperone protein DnaJ promotes protein homeostasis and positively responds to heat shock [63,64]. Although GORK's documented function differs substantially from that of the concordantly expressed SF, its expression pattern aligns with this cluster's stress response profile. GORK enhances sensitivity to ABA and negatively responds to salt and osmotic stresses via phosphatase 2A- or PP2CA-mediated signals [65]. Regarding the splicing modalities exhibited by the gene encoding DnaJ (STRG.871.4), it is noteworthy that two previously documented isoforms of an alternate stress-responsive gene coexist within the same locus (i.e., XLOC_003799), as illustrated in Figure S9. Consequently, our analysis is confined solely to the two novel isoforms of the DAS gene encoding DnaJ, which demonstrate concordant expression with the SR-like 1 gene. Notably, the splice type observed in these two novel isoforms (i.e., STRG.871.1/STRG.871.4) manifests as alternate 3′ acceptor site utilization. Turning to the splicing patterns exhibited by isoforms encoding GORK, our findings unveil instances of exon skipping and intron retention within the stress-responsive novel isoforms (i.e., STRG.14621, XLOC_015088), as delineated in Figure S29. Note that the analysis did not encompass exons/introns in the new isoforms of the GORK gene, as they appear to belong to another gene.
Cluster 279 epitomizes the coherent co-expression of an isoform pertaining to the gene encoding SR30 alongside isoforms of three DAS genes encoding mitogen-activated protein kinase kinase kinase 5 (MAPKKK5), a calcium-binding EF-hand family protein, and a DEAD-box RNA helicase, as depicted in Figure 4 and Table S2. The expression profile characteristic of this cluster is upregulation in response to the combined stresses of intensive light and heat. SR30's established involvement in spliceosome assembly and the modulation of specific plant gene splicing further underscores its functional significance within this context [66]. Alternative splicing mediated by this splicing factor exhibits tissue- and developmental stage-specificity, primarily exerting its functional influence during early seedling development and root differentiation. Recent investigations have shed light on the observation that the protein product encoded by this splicing factor accumulates in response to both cold and heat stresses [58,59], with GO annotation suggesting its responsiveness to light stimulation and participation in stress tolerance mechanisms (https://www.uniprot.org/uniprotkb/Q9XFR5/entry#Q9XFR5-2, accessed on 1 March 2024). In an intricate cascade of phosphorylation and signal transduction processes, MAPKKK initiates the activation of MAP kinase kinase (MAPKK), subsequently facilitating the activation of MAP kinase (MAPK) [67]. The gene encoding MAPKKK5 (MAP3K5), a serine/threonine kinase protein, is implicated in a myriad of cellular processes triggered by oxidative stresses, cellular differentiation, and survival mechanisms, as well as in orchestrating the mitochondria-dependent apoptosis signal transduction cascade [67-70]. Notably, in rice, MAP3K5 has been implicated in the regulation of cell size through modulation of endogenous gibberellin levels [71]. However, extant literature lacks a precedent for elucidating the response of the MAP3K5 gene to heat or light stress. Hence, we must regard its reaction to light and heat stresses as an indeterminate phenomenon.

The DAS gene encoding the calcium-binding EF-hand family protein plays a pivotal role in enhancing plant resilience to abiotic stresses. Upon exposure to external stimuli, plant cells undergo a differential response, culminating in an elevation of cytoplasmic calcium levels. This surge in calcium concentration is perceived by specific cellular proteins, such as Ca2+-binding proteins or Ca2+ sensors, which undergo conformational changes to facilitate interactions with the signal transduction molecules necessary for their activation. Among these sensors, a subclass known as Ca2+-dependent protein kinases (CDPKs) assumes a crucial role in modulating the expression of light- and heat-stress-responsive genes. Notably, the EF-hand motif within calcium-binding proteins comprises a structural arrangement of two α-helices, thereby fortifying the plant's response to abiotic stresses [72]. In the context of splicing modalities pertaining to the DAS gene (AT3G10300) harbored within locus XLOC_011288, the data presented in Figure S24 elucidate the presence of three instances of exon skipping, alongside a solitary event involving alternate 3′ acceptor site utilization across the isoforms of this gene.
The DAS gene encoding the DEAD-box RNA helicase has been documented to play a pivotal role in facilitating plant adaptation to intensive light conditions [73]. This function is intricately mediated through the induction of ribosome biogenesis, a process crucial for increasing gene transcription and translation [74]. Consequently, exposure to intensive light serves as a mechanism to enhance the photosynthetic capacity of plant cells by supporting plastid ribosome abundance, thereby enabling the overexpression of a suite of light-responsive genes [73,74]. Photosynthetic organisms intricately orchestrate a complex regulatory network to finely modulate the capture and conversion of light energy, aiming to mitigate the risk of photodamage arising from imbalances between light energy conversion and utilization processes [75-77]. Alterations in light intensity precipitate shifts in energy conversion rates, serving as a mechanism to optimize cellular metabolic demands amidst fluctuating environmental conditions [73]. The regulatory influence of the DAS gene extends to overseeing light-dependent ribosomal RNA precursor maturation in accordance with the exigencies of plant cellular physiology. This regulatory framework necessitates diverse forms of the DAS gene to dynamically respond to variances in light intensity. The examination of splicing modalities pertaining to the DAS gene (AT1G20920) housed within locus XLOC_000999, as depicted in Figure S5, reveals the occurrence of alternate 5′ donor site utilization within the solitary exon of this gene.

Cluster 569 delineates the coordinated expression of the gene encoding SC35-like splicing factor 33 (SCL33) alongside the DAS gene encoding 1-aminocyclopropane-1-carboxylate synthase 6 (ACC synthase 6 or ACS6), as depicted in Figure 4 and Table S2. The expression profile of this cluster demonstrates upregulation under conditions of combined intensive light and heat stresses. The GO annotation highlights the light-responsive nature of the splicing factor encoding gene (https://www.uniprot.org/uniprotkb/Q9SEU4/entry, accessed on 1 March 2024). Notably, a motif termed GAA, present in numerous proteins, is implicated in bolstering exonic splicing enhancer activity by facilitating the recruitment of appropriate splicing factors for alternative splicing events [78]. The identified motif has been validated to govern intron splicing in red light-responsive genes through the recruitment of the SR protein SCL33 [79]. While the original splicing pattern facilitated by this splicing factor entails intron retention, our investigation did not detect this particular splicing event in the light/heat-responsive isoforms of the DAS gene (AT3G53940), as illustrated in Figure S30. Instead, our findings revealed the occurrence of two alternative splicing modalities, namely exon skipping and alternative 5′ donor site utilization. Gene ontology annotation results further corroborated the regulatory influence of light stress on the gene encoding this enzyme (https://www.uniprot.org/uniprotkb/Q9XFI3/entry, accessed on 1 March 2024). The ACC generated by the ACS6 enzyme, encoded by this DAS gene, has recently been noted to serve as an intermediary metabolite in ethylene biosynthesis [80]. Ethylene, a stress-responsive phytohormone [81], plays a pivotal role in modulating plant growth in adverse environmental conditions [82,83]. The orchestrated activity of the regulated ACS6 and ACC oxidase 10 (ACO10, encoded by AT5G40910) enzymes constitutes the primary biosynthetic pathway responsible for
ethylene synthesis [83]. In the present study, two alternatively spliced isoforms of the gene encoding ACO10 were found to be under regulatory influence during stress conditions (Figure S49). Within plants, the biosynthesis of ethylene necessitates the utilization of the sulfur-containing amino acid methionine as the principal substrate for the enzymatic reaction (Figure 6) [84]. Initially, the enzyme S-adenosyl-methionine (SAM) synthetase 2, encoded by AT4G01850, catalyzes the conversion of methionine to SAM within the Yang cycle [84-86]. One isoform of this gene undergoes regulation in response to stress conditions, as depicted in Figure S50. Subsequently, SAM undergoes conversion into 1-aminocyclopropane-1-carboxylate (ACC), catalyzed by the enzyme 1-aminocyclopropane-1-carboxylate synthase 6 (ACS6) [87]. Concurrently, the production of 5′-methylthioadenosine from SAM occurs, serving as a precursor for methionine regeneration via the methionine cycle, thereby replenishing the methyl group for subsequent rounds of ethylene biosynthesis [88]. ACC, serving as the direct precursor to ethylene, is enzymatically converted by 1-aminocyclopropane-1-carboxylate oxidase 10 (ACO10) to generate ethylene (Figure 6). The latter compound is synthesized in response to abiotic stressors such as drought, salt, and heat stresses, thereby instigating a series of adaptive responses [80]. These responses include the maintenance of the photosynthetic rate, the production of the osmolyte glycine betaine (GB), and other antioxidant compounds (Figure 6), collectively empowering plants with the capacity to withstand challenging environmental conditions [89-91].

Conclusions

In conclusion, the corroborated data pertaining to the six DAS genes within clusters 101, 223, 279, and 569, exhibiting concordant expression with four SFs, substantiate the findings obtained in this study. However, careful attention is warranted when examining new isoforms of any gene prior to exploring alternative splicing events. Consequently, we posit that the associations observed among these DAS/SF genes across distinct clusters warrant further experimental elucidation. Systematically cataloging and leveraging such associations holds promise for unveiling novel genetic-based avenues toward bolstering climate resilience, enhancing plant productivity, and augmenting the nutritional profile of cultivated crops.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/genes15060675/s1. Table S1. Cluster analysis of transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings exposed to different multifactorial stress combinations. C = control (0 mM NaCl, 21 °C, 50 µmol m−2 s−1), S = salt stress (50 mM NaCl, 21 °C, 50 µmol m−2 s−1), H = heat stress (0 mM NaCl, 33 °C, 50 µmol m−2 s−1), L = high light stress (0 mM NaCl, 21 °C, 700 µmol m−2 s−1). Further growth and abiotic stress conditions were recently reported [26]. Red text refers to the most consistent expression patterns analyzed further. Yellow box refers to splicing factors (SFs), while bright green box refers to concordantly expressed genes with 2 or more regulated isoforms and bright blue box refers to concordantly expressed genes with no regulated isoforms; Table S2. Detailed description of DAS gene isoforms concordantly expressed with one or more splicing factors (SFs) within the transcriptome datasets of 10-d-old A.
thaliana (wild-type Col-0) seedlings exposed to different multifactorial stress combinations. C = control (0 mM NaCl, 21 °C, 50 µmol m−2 s−1), S = salt stress (50 mM NaCl, 21 °C, 50 µmol m−2 s−1), H = heat stress (0 mM NaCl, 33 °C, 50 µmol m−2 s−1), L = high light stress (0 mM NaCl, 21 °C, 700 µmol m−2 s−1). Red text refers to gene isoforms that are not stress-related, thus not analyzed further. Blue text refers to splicing factors. Yellow box refers to splicing factors (SFs), while bright green box refers to concordantly expressed genes with 2 or more regulated isoforms; Table S3. Detailed description of DAS gene isoforms concordantly expressed with one or more splicing factors (SFs) within the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings exposed to different multifactorial stress combinations. C = control (0 mM NaCl, 21 °C, 50 µmol m−2 s−1), S = salt stress (50 mM NaCl, 21 °C, 50 µmol m−2 s−1), H = heat stress (0 mM NaCl, 33 °C, 50 µmol m−2 s−1), L = high light stress (0 mM NaCl, 21 °C, 700 µmol m−2 s−1). Red text refers to gene isoforms that are not stress-related, thus not analyzed further. Blue text refers to a stress-related gene with no distinctive isoform. Bright green box refers to concordantly expressed genes with 2 or more regulated isoforms; Table S4. Detailed description of splicing factors with isoforms among those concordantly expressed with one or more DAS gene isoforms within the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings exposed to different multifactorial stress combinations. C = control (0 mM NaCl, 21 °C, 50 µmol m−2 s−1), S = salt stress (50 mM NaCl, 21 °C, 50 µmol m−2 s−1), H = heat stress (0 mM NaCl, 33 °C, 50 µmol m−2 s−1), L = high light stress (0 mM NaCl, 21 °C, 700 µmol m−2 s−1). Red text refers to gene isoforms that are stress-related, while blue text refers to the original SF isoforms that are concordantly expressed with stress-related gene isoform(s); Table S5. Description of loci with regulated DAS or SF gene isoforms in A. thaliana (Col-0) in terms of chromosome number and locus location as well as annotated and new isoforms; Table S6. Description of all loci in A. thaliana (Col-0) in terms of chromosome number and locus location as well as annotated and new isoforms; C, 700 µmol m−2 s−1). Further growth and abiotic stress conditions were recently reported [26]. Sequences can be found in Bioproject PRJNA622644. Detailed information of all gene clusters are shown in Table S1; Figure S5. Structure of previously annotated and new DAS isoforms on XLOC_000999 (Gene 1, AT1G20920) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings. Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene. Other isoforms are not consistently regulated under the stress. Further information is available in Tables S1, S3 and S6; Figure S6. Structure of previously annotated and new SF isoforms on XLOC_001540 (SF1, AT1G32870) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A.
thaliana (wild-type Col-0) seedlings.Red arrow refers to the SF isoform concordantly expressed with a given gene, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S4 and S6; Figure S7.Structure of previously annotated and new DAS isoforms on XLOC_001851 (Gene 2, STRG.3781.2) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S3 and S6.Regulated new isoforms of this DAS gene have no annotated gene to compare with, thus, was not analyzed further; Figure S8.Structure of previously annotated and new DAS isoforms on XLOC_003623 (Gene 3, AT1G05850) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.One of the three isoforms (e.g., AT1G05870.12)belongs to another gene in this locus.The other two isoforms of the gene AT1G05850 show no clear case of alternative splicing, thus isoforms of this locus were not considered for further analysis.Further information is available in Tables S1, S3 and S6; Figure S9.Structure of previously annotated and new DAS isoforms on XLOC_003799 (Gene 4, STRG.871) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S3 and S6; Figure S10.Structure of previously annotated and new DAS isoforms on XLOC_003907 (Gene 5, STRG.1063) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S3 and S6; Figure S11.Structure of previously annotated and new DAS isoforms on XLOC_005026 (Gene 6, AT1G35730) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S3 and S6; Figure S12.Structure of previously annotated and new DAS isoforms on XLOC_005205 (Gene 7, AT1G48700) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. 
thaliana (wild-type Col-0) seedlings.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S3 and S6; Figure S13.Structure of previously annotated and new DAS isoforms on XLOC_005378 (Gene 8, AT1G52560) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S3 and S6; Figure S14.Structure of previously annotated and new DAS isoforms on XLOC_005837 (Gene 9, AT1G63010) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S3 and S6; Figure S15.Structure of previously annotated and new DAS isoforms on XLOC_006004 (Gene 10, AT1G66410) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S3 and S6; Figure S16.Structure of previously annotated and new DAS isoforms on XLOC_006475 (Gene 11, STRG.6385) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S3 and S6; Figure S17.Structure of previously annotated and new SF isoforms on XLOC_007097 (SF3, AT2G14080) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings.Red arrow refers to the SF isoform concordantly expressed with a given gene, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S4 and S6; Figure S18.Structure of previously annotated and new DAS isoforms on XLOC_007392 (Gene 12, AT2G21070) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. 
thaliana (wild-type Col-0) seedlings.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S3 and S6; Figure S19.Structure of previously annotated and new DAS isoforms on XLOC_008122 (Gene 13, AT2G35840) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S3 and S6; Figure S20.Structure of previously annotated and new DAS isoforms on XLOC_008527 (Gene 14, AT2G43500) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S3 and S6.The two events of alternative splicing, e.g., exon skipping and intron retention, were deeply investigated.Active conserved domains in the generated protein were detected based on recent information [48]; Figure S21.Structure of previously annotated and new DAS isoforms on XLOC_009386 (Gene 15, STRG.8147)generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S3 and S6; Figure S22.Structure of previously annotated and new DAS isoforms on XLOC_009607 (Gene 16, AT2G24680) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S3 and S6; Figure S23.Structure of previously annotated and new DAS isoforms on XLOC_010435 (Gene 17, AT2G40340) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S3 and S6; Figure S24.Structure of previously annotated and new DAS isoforms on XLOC_011288 (Gene 18, AT3G10300) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. 
thaliana (wild-type Col-0) seedlings.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S3 and S6; Figure S25.Structure of previously annotated and new DAS isoforms on XLOC_012141 (Gene 19, AT3G26700) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S3 and S6; Figure S26.Structure of previously annotated and new SF isoforms on XLOC_012492 (SF5, AT3G46490) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings.Red arrow refers to the SF isoform concordantly expressed with a given gene, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S4 and S6; Figure S27.Structure of previously annotated and new DAS isoforms on XLOC_014308 (Gene 20, AT3G19830) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S3 and S6; Figure S28.Structure of previously annotated and new DAS isoforms on XLOC_014905 (Gene 21, STRG.14271) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S3 and S6; Figure S29.Structure of previously annotated and new DAS isoforms on XLOC_015088 (Gene 22, STRG.14621) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S3 and S6; Figure S30.Structure of previously annotated and new DAS isoforms on XLOC_015398 (Gene 23, AT3G53940) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. 
thaliana (wild-type Col-0) seedlings.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S3 and S6; Figure S31.Structure of previously annotated and new DAS isoforms on XLOC_015679 (Gene 24, AT3G59430) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S3 and S6; Figure S32.Structure of previously annotated and new DAS isoforms on XLOC_017325 (Gene 25, AT4G29340/ STRG.19084) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S3 and S6; Figure S33.Structure of previously annotated and new DAS isoforms on XLOC_018080 (Gene 26, STRG.16675) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S3 and S6; Figure S34.Structure of previously annotated and new DAS isoforms on XLOC_019807 (Gene 27, AT5G01490/AT5G01500/AT5G01520) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S3 and S6.Note that the three isoforms belong to different genes, thus, the concordantly expressed isoform (e.g., AT5G01490.2) with SF5 (e.g., AT3G46490.2) was not analyzed further; Figure S35.Structure of previously annotated and new DAS isoforms on XLOC_019990 (Gene 28, STRG.20609) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings.Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene.Other isoforms are not consistently regulated under the stress.Further information is available in Tables S1, S3 and S6; Figure S36.Structure of previously annotated and new DAS isoforms on XLOC_020544 (Gene 29, STRG.21767) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. 
thaliana (wild-type Col-0) seedlings. Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene. Other isoforms are not consistently regulated under the stress. Further information is available in Tables S1, S3 and S6; Figure S37. Structure of previously annotated and new DAS isoforms on XLOC_021961 (Gene 30, AT5G53120) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings. Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene. Other isoforms are not consistently regulated under the stress. Further information is available in Tables S1, S3 and S6; Figure S38. Structure of previously annotated and new DAS isoforms on XLOC_022121 (Gene 31, STRG.24929) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings. Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene. Other isoforms are not consistently regulated under the stress. Further information is available in Tables S1, S3 and S6; Figure S39. Structure of previously annotated and new DAS isoforms on XLOC_022555 (Gene 32, STRG.25829) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings. Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene. Other isoforms are not consistently regulated under the stress. Further information is available in Tables S1, S3 and S6; Figure S40. Structure of previously annotated and new DAS isoforms on XLOC_023309 (Gene 33, AT5G14020) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings. Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene. Other isoforms are not consistently regulated under the stress. Further information is available in Tables S1, S3 and S6; Figure S41. Structure of previously annotated and new DAS isoforms on XLOC_025093 (Gene 34, STRG.25124) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings. Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene. Other isoforms are not consistently regulated under the stress. Further information is available in Tables S1, S3 and S6; Figure S42. Structure of previously annotated and new DAS isoforms on XLOC_022555 (Gene 35, STRG.26050) generated due to different multifactorial stress combinations in the transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings. Red arrow refers to the gene isoform concordantly expressed with a given splicing factor, while green arrow(s) refer to other regulated isoforms of this gene. Other isoforms are not consistently regulated under the stress.

Figure 1. Heatmap referring to hierarchical clusters of gene expression generated from transcriptome datasets of 10-day-old A.
thaliana (wild-type Col-0) seedlings exposed to different multifactorial stress combinations. CT = control (0 mM NaCl, 21 °C, 50 µmol m−2 s−1), S = salt stress (50 mM NaCl, 21 °C, 50 µmol m−2 s−1), H = heat stress (0 mM NaCl, 33 °C, 50 µmol m−2 s−1), L = intensive light stress (0 mM NaCl, 21 °C, 700 µmol m−2 s−1). Further information is available in Table S1. Further growth and abiotic stress conditions were recently reported [26]. The red box refers to the expression pattern H/L/HL↑, the bright blue box refers to the expression pattern C/S↑, the bright green box refers to HL↑, and the orange box refers to L↑. The log2 fold change was computed based on the delta Ct value in comparison to the control samples, where the yellow color in the legend indicates heightened expression, whereas the blue color signifies diminished expression. ↑ = upregulation.
Figure S3. Expression profiling of loci involving DAS genes concordantly expressed with splicing factors (SFs) within the most consistent expression patterns generated from transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings exposed to different multifactorial stress combinations. C = control (0 mM NaCl, 21 °C, 50 µmol m−2 s−1), S = salt stress (50 mM NaCl, 21 °C, 50 µmol m−2 s−1), H = heat stress (0 mM NaCl, 33 °C, 50 µmol m−2 s−1), L = high light stress (0 mM NaCl, 21 °C, 700 µmol m−2 s−1). Selected genes should have one or more annotated or new stress-regulated isoforms existing on the same chromosome locus. Detailed information of gene-SF concordant expression is available in Table S3, while that of gene loci is shown in Table S6. Isoforms in light gray are not stress-regulated; Figure S4. Expression profiling of loci with selected splicing factors that are concordantly expressed with DAS genes within the most consistent expression patterns generated from transcriptome datasets of 10-d-old A. thaliana (wild-type Col-0) seedlings exposed to different multifactorial stress combinations. C = control (0 mM NaCl, 21 °C, 50 µmol m−2 s−1), S = salt stress (50 mM NaCl, 21 °C, 50 µmol m−2 s−1), H = heat stress (0 mM NaCl, 33 °C, 50 µmol m−2 s−1), L = high light stress (0 mM NaCl, 21 °C, 700 µmol m−2 s−1). Selected splicing factors should have one or more annotated or new stress-regulated isoforms existing on the same chromosome locus. Concordantly expressed splicing factors and their isoforms are shown in Table S4, while that of gene loci is shown in Table S6. Isoforms in light gray are not stress-regulated.
v3-fos-license
2016-11-08T18:56:27.780Z
2016-09-21T00:00:00.000
2673715
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00383-016-3971-5.pdf", "pdf_hash": "3c7bc431297162c8ec6a35508c0185b5ba9ac19b", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43965", "s2fieldsofstudy": [ "Medicine" ], "sha1": "3c7bc431297162c8ec6a35508c0185b5ba9ac19b", "year": 2016 }
pes2o/s2orc
Post-natal erythromycin exposure and risk of infantile hypertrophic pyloric stenosis: a systematic review and meta-analysis Purpose Macrolide antibiotics, erythromycin in particular, have been linked to the development of infantile hypertrophic pyloric stenosis (IHPS). Our aim was to conduct a systematic review of the evidence of whether post-natal erythromycin exposure is associated with subsequent development of IHPS. Methods A systematic review of postnatal erythromycin administration and IHPS was performed. Papers were included if data were available on development (yes/no) of IHPS in infants exposed/unexposed to erythromycin. Data were meta-analysed using Review Manager 5.3. A random effects model was decided on a priori due to heterogeneity of study design; data are odds ratio (OR) with 95% CI. Results Nine papers reported data suitable for analysis; two randomised controlled trials and seven retrospective studies. Overall, erythromycin exposure was significantly associated with development of IHPS [OR 2.45 (1.12-5.35), p = 0.02]. However, significant heterogeneity existed between the studies (I² = 84%, p < 0.0001). Data on erythromycin exposure in the first 14 days of life were extracted from 4/9 studies and identified a strong association between erythromycin exposure and subsequent development of IHPS [OR 12.89 (7.67-21.67), p < 0.00001]. Conclusion This study demonstrates a significant association between post-natal erythromycin exposure and development of IHPS, which seems stronger when exposure occurs in the first 2 weeks of life.

Introduction Infantile hypertrophic pyloric stenosis (IHPS) affects 1.9 of every 1000 live births [1], making the condition the most common cause of surgical intervention in the first 6 months of life [2]. IHPS is characterised by hypertrophy of the pylorus resulting in gastric outlet obstruction, leading to the infant presenting with projectile vomiting and severe dehydration. Although genetics [3] and male sex [4] have been identified as risk factors, the aetiology of IHPS is largely unknown. Furthermore, changes in the incidence rates of IHPS have led to the hypothesis that environmental factors may have a role in the development of the condition [5]. Several studies have identified a strong relationship between exposure to erythromycin and development of IHPS [6], with some studies identifying an eight- to tenfold increase in risk of developing IHPS when erythromycin was administered in the first 2 weeks of life [7]. One theory is that erythromycin interacts with the receptors of motilin, an intestinal peptide that stimulates contraction of gut smooth muscle. This interaction could therefore produce contraction of the gastric and pyloric bulb, resulting in hypertrophy of the pylorus [8]. However, other studies refute the association between erythromycin treatment in infants and the development of IHPS entirely, identifying no association [9]. The aim of this study was to perform a systematic review and meta-analysis of published studies to clarify and quantify the relationship between any post-natal exposure to erythromycin and the development of pyloric stenosis. A second aim was to determine whether treatment with erythromycin within the first 2 weeks of life increased the magnitude of this association.
Methods A systematic literature search was performed of all studies published from 1 January 1970 and 1 July 2016, using PubMed, Ovid Medline, Embase and the Cochrane Library with the medical subject heading (MeSH) terms and text words: (infantile hypertrophic pyloric stenosis OR pyloric stenosis) AND (macrolide OR erythromycin) and similar variants. Search criteria were limited to studies published in the English language, and by age of subject (age less than 6 months) to ensure that only infantile cases of pyloric stenosis were included for analysis. Reference lists of included articles and abstract lists of relevant national and international meetings were also searched to identify other studies which could be included for analysis. Studies were then assessed for inclusion by two authors independently (LM, SE). Our aim was to ensure that all robust studies which reported an association between erythromycin exposure and subsequent development of IHPS were included for analysis. Studies were excluded for several reasons; insufficient data available for analysis, unable to extract suitable data to allow meta-analysis, type of macrolide not explicitly stated, route of administration was only to the mother (either ante-natal or post-natal transfer in breast milk) or if route of administration of erythromycin was ambiguous. When more than one publication from an overlapping cohort was identified, the largest study with the most rigorous methodology was selected. Duplicate data, already available as a published paper, which had been published in the form of letters to the editor of journals was also excluded. The selection process is illustrated in Fig. 1. Data was independently extracted by the authors. The meta-analysis was performed using Mantel-Haenszel random effects model using the Cochrane Collaboration's Review Manager (RevMan 5.3, the Nordic Cochrane Centre, the Cochrane Collaboration, Copenhagen) to calculate the overall odds ratio (OR), 95 % confidence interval (CI) and I 2 test statistic for heterogeneity of studies. Publication bias was assessed using the funnel plot method. Results Literature search identified 115 papers for potential inclusion; 104 did not meet the criteria for inclusion and were excluded from the meta-analysis (Fig. 1). The remaining nine studies comprised two randomised control trials (one study on erythromycin used for improving enteral feeding tolerance and a second study on oral erythromycin for treatment of gastrointestinal dysmotility in preterm infants), and seven retrospective cohort studies. The characteristics of eligible studies are shown in Table 1. Selected cohort studies were published between 1999 and 2016. Cases were defined as infants who developed pyloric stenosis in infancy (age less than 6 months), whilst controls were patients who did not develop pyloric stenosis during the study period. National birth registries, hospital and community health records were the main data sources for both groups. Diagnosis of pyloric stenosis was confirmed from clinical diagnosis recorded in health records. The total number of infants included was 3,008,453, of whom 16,431 had received erythromycin. Sixty-three infants developed IHPS after receiving erythromycin, whereas 4632 infants developed IHPS without having received erythromycin. In the two randomised studies, the total number of infants included was small, and there were no cases of pyloric stenosis in either the exposed or the unexposed groups, so that they could not contribute to the odds ratio. 
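Before turning to the pooled results, the following minimal sketch illustrates the kind of inverse-variance random-effects pooling of study-level odds ratios, with a DerSimonian-Laird between-study variance and the I² statistic, that underlies the analysis described above. The counts are placeholders rather than the extracted study data, and RevMan's Mantel-Haenszel random-effects procedure differs in detail from this simplified calculation.

```python
import math

# Hypothetical per-study 2x2 counts: (IHPS & exposed, no IHPS & exposed,
# IHPS & unexposed, no IHPS & unexposed). Placeholder values only.
studies = [
    (5, 120, 40, 10000),
    (3, 60, 25, 8000),
    (8, 200, 90, 25000),
]

# Per-study log odds ratios and variances (a 0.5 continuity correction
# would be needed for zero cells; omitted here for brevity).
log_or = [math.log((a * d) / (b * c)) for a, b, c, d in studies]
var = [1 / a + 1 / b + 1 / c + 1 / d for a, b, c, d in studies]

# Fixed-effect (inverse-variance) weights and Cochran's Q.
w = [1 / v for v in var]
pooled_fixed = sum(wi * yi for wi, yi in zip(w, log_or)) / sum(w)
q = sum(wi * (yi - pooled_fixed) ** 2 for wi, yi in zip(w, log_or))
df = len(studies) - 1

# DerSimonian-Laird between-study variance tau^2 and the I^2 statistic.
c_factor = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c_factor)
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Random-effects weights, pooled log OR and 95% confidence interval.
w_re = [1 / (v + tau2) for v in var]
pooled_re = sum(wi * yi for wi, yi in zip(w_re, log_or)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))
ci_low = math.exp(pooled_re - 1.96 * se_re)
ci_high = math.exp(pooled_re + 1.96 * se_re)

print(f"Pooled OR = {math.exp(pooled_re):.2f} ({ci_low:.2f}-{ci_high:.2f}), I^2 = {i2:.0f}%")
```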
Overall, there was a significant association between erythromycin exposure and subsequent development of pyloric stenosis [OR 2.45 (1.12-5.35), p = 0.02, Fig. 2]. However, there was significant heterogeneity between the studies (I² = 84%, p < 0.0001). A funnel plot of published studies demonstrated possible asymmetry indicating potential publication bias, although asymmetry is difficult to determine with only seven studies contributing to the funnel plot (Fig. 3).

A further analysis was performed to identify the relationship between exposure to erythromycin in the first 14 days of life and development of IHPS. Only four of the selected nine studies documented whether exposure had occurred within this period. In these studies, the association between erythromycin exposure and subsequent development of pyloric stenosis was even stronger [OR 12.89 (7.67-21.67), p < 0.00001] (Fig. 4).

Fig. 1 Diagram of workflow in the systematic review and meta-analysis.
Table 1 A summary of the studies included detailing country of origin, study type, data source, total number of infants studied, number of infants within the study group who were exposed to erythromycin and subsequently developed IHPS, and the weight of the study in the meta-analysis.
Fig. 2 Forest plot comparing the incidence of IHPS between infants with exposure to erythromycin at any time and infants who had never been exposed to erythromycin.

Discussion This study is the only published meta-analysis which reviews the association between erythromycin use in infants and subsequent development of IHPS, and it provides a comprehensive estimate of this risk. The key finding of the meta-analysis is that the OR of developing IHPS after any erythromycin in the post-natal period is two and a half (OR = 2.45) times greater than in those infants not exposed to the drug. Furthermore, subgroup analysis of included studies identified a 12-fold increase in the development of IHPS when erythromycin was administered in the first 14 days of life; a value significantly higher than previously reported.

Literature search did not identify any published meta-analyses and only one systematic review. Maheshwa et al. [7] investigated the relationship between young infants treated with erythromycin and risk of developing hypertrophic pyloric stenosis by analysis of six papers published between 1976 and 2005. Their review concludes that while more evidence is required regarding the relationship between erythromycin use and IHPS, young infants exposed to erythromycin in the first few weeks of life are at a greater risk of IHPS. Their analysis is also in agreement with this study in stating that the risk appears to be highest in the first 2 weeks of life, but stipulates that this occurs in term or near-term infants or when antibiotics are administered for more than 14 days. It should be noted that two papers included in our analysis, Ng et al. [10] and Mohammadizadeh et al. [17], study populations of preterm infants alone, whilst Ericson et al. [15] analysed only infants within a neonatal intensive care (NICU) environment. Therefore, variability of the calculated OR may occur due to the inclusion of these groups of infants within the analysis. This could also explain the high I² value representing heterogeneity. In addition, significant geographical bias exists, with five of the nine studies selected for analysis focusing on populations from the United States.
Such bias may partly result from the literature search criteria, which only include studies published in the English language. A further source of bias occurs due to the greater proportion of cohort studies included for analysis in comparison to other study types. There were no published case-control studies which reviewed this relationship. However, this may result from the ethical feasibility of designing a study which may prevent an infant from receiving erythromycin to treat infection in cases where alternative antibiotics are contraindicated or insensitive.

Bias In accounting for the variability of the calculated OR and the significant heterogeneity present between the nine included studies, several factors must be considered. An important factor is that the incidence of IHPS is heterogeneous, varying significantly according to ethnicity, sex and time. There is also a significant genetic component to development of IHPS, so that any conclusive study should also include analysis of confounders, such as gender, ethnicity, and genetic status.

Risk/benefit Erythromycin is commonly indicated within the neonatal population for prophylaxis following Chlamydia trachomatis infection [18] in preventing conjunctivitis or pneumonia, and in the treatment of pertussis [14]. In addition, erythromycin has also been utilised in the treatment of gastrointestinal dysmotility within this population [10]. Although this study concludes that the OR for developing IHPS following erythromycin exposure is high, particularly in the first 14 days of life, physicians must evaluate the risk-benefit ratio in making an informed decision as to whether the potential morbidity or mortality of an infection such as pertussis is outweighed by the risk of developing IHPS. It should also be noted that the absolute risk of developing IHPS following erythromycin exposure is not high [0.4% (95% CI 0.3-0.5%) in those receiving erythromycin at any time, and 2.6% (95% CI 1.5-4.2%) in those receiving erythromycin in the first 14 days]. However, consideration should be made to the fact that, despite the indications, macrolides (including erythromycin) remain unlicensed by the US Food and Drug Administration for use in infants less than 6 months.

Fig. 4 Forest plot comparing the incidence of IHPS between infants with exposure to erythromycin within the first 2 weeks of life and infants who have never been exposed to erythromycin.

Limitations The main limitation of this study is the lack of published studies investigating the relationship between erythromycin use and development of IHPS. Furthermore, differences existed between study designs which may have led to further variability in the calculated ORs. In particular, studies often categorised cases into time periods which varied between studies, resulting in their exclusion despite rigorous methodology. Studies which did not explicitly state that the macrolide administered was erythromycin were also excluded. In addition, all cohort studies included were performed retrospectively, thus having a negative effect on the quality of the data. Our study aimed to exclusively review the effect of neonatal administration of erythromycin on the risk of subsequently developing IHPS. However, the question remains as to whether other methods of exposure (such as maternal administration in utero or postnatally from absorption via breast milk) may be associated with similar levels of risk.
With regard to exposure via breastfeeding, Sorensen [19] concludes that an increased risk of developing IHPS exists following maternal macrolide administration postnatally [OR 2.7 (95% CI 0.7-11.1)]. However, this is contrasted by two papers, by Goldstein et al. [20] and Salman et al. [21], which found no correlation between breast milk exposure and IHPS. The data on exposure via breast milk were too sparse to meta-analyse. There is also some evidence in the literature that administration of erythromycin to pregnant women may result in the fetus developing IHPS as an infant. Kallen [22] reports a risk ratio of 2.51 (95% CI 0.92-5.46) of infants developing IHPS in cases where their mother had received erythromycin after the first antenatal visit. However, studies by Lin [23] and Louik [24] found no relationship between prenatal exposure to macrolides and pyloric stenosis. Furthermore, from the papers analysed there is no report regarding family history, and therefore it remains unclear if a genetic predisposition is required to increase the risk of acquiring IHPS following administration of erythromycin. With such significant variability in the available literature in reporting the exact nature and magnitude of risk of erythromycin administration (during both fetal and neonatal development), further study is warranted.

Conclusion This study provides clinicians with the first comprehensive estimate for the OR of infants developing IHPS when exposed to erythromycin. Physicians should utilise this study as a tool in evaluating the risk-benefit ratio of administering erythromycin for treatment and prophylaxis of infections in neonates versus the risk of developing IHPS. However, in determining whether erythromycin is a suitable treatment for infections within this group, the limitations of this study should be noted. In particular, publication bias and the lack of high-quality studies with significant patient numbers should be considered.
v3-fos-license
2021-03-07T06:16:21.975Z
2021-02-25T00:00:00.000
232130449
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2073-4425/12/3/328/pdf", "pdf_hash": "c699484ec61f681e118535e9ffb07e100d33a840", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43966", "s2fieldsofstudy": [ "Medicine", "Biology" ], "sha1": "ec7121a5f4c9bf3c9596a785eea3b00e40ff640a", "year": 2021 }
pes2o/s2orc
A Workflow for Selection of Single Nucleotide Polymorphic Markers for Studying of Genetics of Ischemic Stroke Outcomes In this paper we propose a workflow for studying the genetic architecture of ischemic stroke outcomes. It develops further the candidate gene approach. The workflow is based on the animal model of brain ischemia, comparative genomics, human genomic variations, and algorithms of selection of tagging single nucleotide polymorphisms (tagSNPs) in genes whose expression was changed after ischemic stroke. The workflow starts from a set of rat genes that changed their expression in response to brain ischemia and results in a set of tagSNPs, which represent other SNPs in the human genes analyzed and influence their expression as well.

Introduction The ischemic stroke (IS) is a multifactorial disease, where the genetic factors contribute substantially [1]. The same seems to be true for outcomes after IS. However, their associations with the particular genetic factors are poorly known and require further investigation [2,3]. There are two main approaches to identify the genes involved in the development of complex traits: the candidate gene approach and the genome-wide association (GWA) study (GWAS) [4]. Both were extensively applied to study the genetic bases of IS and resulted in revealing several tens of genes involved in stroke development and risk [5]. In contrast, only a few GWA studies have been published on outcomes after IS [6,7]. Therefore, the real genetic control of them remains a black box and the full list of the risk (prognostic) loci is yet to be identified. In this paper we describe an approach to explore the genetic bases of variability in IS outcomes.

GWAS does not require prior knowledge on the importance of the specific functional features of the trait under consideration. At the same time, it is less precise in revealing causal loci (genes), generally located in particular chromosomal regions that can contain no genes or alternatively be abundant with them [8]. The usability of a gene-based approach was mainly restricted by the incompleteness of knowledge about the biology of the phenotypes studied. To break the information bottleneck, several strategies extending the candidate gene approach were proposed [4]. They were based on linkage information in a chromosomal segment, methods of comparative genomics, and gene expression at different stages. There were also approaches that combine two or more strategies together. One such method is the digital candidate gene approach (DigiCGA), which extracts, filters, and analyzes the resources publicly available on the web [9]. The method we propose incorporates the best strategies of the above-mentioned approaches and puts them in the form of a workflow.

The idea of this research originates from the models of brain ischemia in laboratory animals that were developed to understand the biological processes underlying cerebral ischemic injury [10]. Studies of rat and mouse genomes showed that most human disease genes (99.5%) had orthologues in rodents [11]. Furthermore, comparison of conservation rates of rodent orthologues associated with different types of diseases demonstrated that the gene set related to neurological conditions evolved slowly. Together, that suggested the rodent models of human neurological diseases to be appropriate representations of the disease processes in humans.
Many of the results obtained in model experiments were subsequently confirmed (correlated) in corresponding GWA studies in humans, including those assessed with outcomes after IS [6]. Although there is no animal model that could cover all aspects of human ischemic stroke [12], one such model, the transient middle cerebral artery occlusion (tMCAO), is quite promising and actively tested for the development of neuroprotective therapeutic approaches. It is based on temporary artery occlusion and subsequent restoration of blood flow. According to Howells, such a model was used in 42.2% of 2582 neuroprotection experiments. The occlusion with subsequent restoration of blood flow can influence the functioning of different genes. Recently, Dergunova et al. identified a list of rat genes that substantially changed their expression in brain in response to tMCAO [13]. We propose to explore the genomic variations in human orthologues of these genes for searching the genomic markers of IS outcome. Below, we describe in detail the workflow that starts from the list of the rat genes and leads to a set of tagging SNPs (tagSNPs) that can be used in case-control studies with the conventional TaqMan real-time PCR assays.

Materials and Methods The main steps of the workflow proposed are shown in Figure 1. In the beginning, there are rat genes with expression level evaluated at 24 h after tMCAO [13]. Twenty-four of them demonstrated the most significant changes in expression level (change in expression >6-fold and p-value < 0.01) and were chosen for further analysis.
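To make this selection step concrete, a minimal sketch of the fold-change and p-value filtering is given below. The file name and column labels are assumptions for illustration only and do not reproduce the actual expression table of Dergunova et al. [13].

```python
import pandas as pd

# Hypothetical differential-expression table for rat brain 24 h after tMCAO;
# the file name and column labels are assumptions made for this example.
de = pd.read_csv("tmcao_expression_24h.csv")  # columns: gene, fold_change, p_value

# "fold_change" is assumed to be the expression ratio (ischemia / control),
# so a >6-fold change in either direction means ratio > 6 or ratio < 1/6.
ratio = de["fold_change"]
selected = de[((ratio > 6) | (ratio < 1 / 6)) & (de["p_value"] < 0.01)]

print(f"{len(selected)} candidate genes pass the >6-fold, p < 0.01 filter")
selected.to_csv("candidate_genes.csv", index=False)
```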
The human orthologues of the rat genes were comparatively identified by querying several resources: Ensembl [14], PANTHER 8.0 [15], PhylomeDB 4 [16], and MetaPhOrs [17]. The data from the database Ensembl Genes 97 were retrieved with BioMart by accessing it with the web-based interface [18].

The next step was the identification of SNPs within the human genes, including their 5' and 3' flanking regions of 5000 bp length. To be relevant to the SNP frequencies in the potential case-control study, the genotypic data should be taken from an appropriate population [19]. To choose such a population, the collection of population samples of the 1000 Genomes Project was used. The project comprises one of the most comprehensively characterized sets of populations, with a detailed history of each of them [20]. For our purposes we selected the CEU population because its genotype data had been shown to be appropriate for selection of loci to assess genetic variability in most European populations, including those living in Russia [21-24]. We extracted the required set of SNPs from the bulk of CEU genotype data using VCFtools (0.1.15) [25]. To capture the most common genetic variants, the SNPs with minor allele frequency (MAF) higher than 10% were considered.

Then, we explored the associations between the alleles of selected loci using the correlation coefficient r2 and revealed patterns of linkage disequilibrium (LD) in each of the regions considered. To do this, we applied the CLUSTAG tool [26], the Tagger instrument [27] implemented in the Haploview 4.2 tool [28], and the gpart R package (version 1.2.0) [29] using default parameters. The input files were generated from the vcf files obtained in the previous step with custom scripts. All of the tools were able to reveal patterns of LD (LD blocks) using distinct algorithms, but only CLUSTAG and Haploview allowed computation of tagSNPs, which represented the groups of highly correlated SNPs in a chromosomal region. Thus, they were used for revealing tagSNPs in the gene regions studied (the threshold of squared correlation between SNPs was r2 ≥ 0.8). For both tools, we estimated the tagging effectiveness (TE) as the ratio of the number of tagSNPs to the number of SNPs they tagged.

Because of the large number of potential tagSNPs, and taking into account that not all of them could mark functionally important SNPs, the subsequent step was to annotate all the possible tagSNPs from high-LD regions with expression quantitative trait loci (eQTLs). For each gene, we downloaded the Significant Single-Tissue eQTLs using the web interface of the Genotype-Tissue Expression (GTEx) project (Release V8) [30]. The eQTLs were further intersected with the tagSNPs determined with the Tagger algorithm and filtered by tissue defined as Brain, Artery, Nerve, Blood, and Heart.
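The following sketch illustrates, in simplified form, the idea behind tagSNP selection and the tagging effectiveness (TE) measure described above: a greedy grouping over a pairwise r2 matrix with the same r2 ≥ 0.8 threshold. It is only a conceptual approximation with a toy input; the actual Tagger and CLUSTAG algorithms use more elaborate multi-marker and clustering strategies.

```python
import numpy as np

def greedy_tag_snps(snp_ids, r2, threshold=0.8):
    """Greedily pick tagSNPs: at each step choose the SNP that covers
    (r2 >= threshold) the largest number of still-uncovered SNPs."""
    n = len(snp_ids)
    covered = np.zeros(n, dtype=bool)
    tags = []
    while not covered.all():
        capture = ((r2 >= threshold) & ~covered).sum(axis=1)
        capture[covered] = -1  # already-covered SNPs are not picked as new tags
        best = int(capture.argmax())
        tags.append(snp_ids[best])
        covered |= r2[best] >= threshold
        covered[best] = True
    return tags

# Toy symmetric pairwise r2 matrix for five SNPs (values are made up).
snps = ["rs1", "rs2", "rs3", "rs4", "rs5"]
r2 = np.array([
    [1.00, 0.90, 0.85, 0.10, 0.20],
    [0.90, 1.00, 0.80, 0.20, 0.10],
    [0.85, 0.80, 1.00, 0.30, 0.20],
    [0.10, 0.20, 0.30, 1.00, 0.95],
    [0.20, 0.10, 0.20, 0.95, 1.00],
])

tags = greedy_tag_snps(snps, r2)
# Tagging effectiveness: number of tagSNPs divided by the number of SNPs
# they tag (all five SNPs end up tagged in this toy example).
te = len(tags) / len(snps)
print(tags, f"TE = {te:.2f}")  # -> ['rs1', 'rs4'] TE = 0.40
```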
At the final step, the tagSNPs from the Haploview Tagger runs with the maximal capture efficiency (maximal mean r2) and qualifying as eQTLs were selected to form a list of markers for studying in case-control associations using an appropriate genotyping approach (e.g., TaqMan real-time PCR assay). The scripts used in this research are freely available at the repository https://github.com/inzilico/tagSNP (accessed on 9 August 2020).

Results We extracted human orthologues for 23 of the 24 rat genes using such projects as Ensembl, PANTHER, PhylomeDB, and MetaPhOrs. Different repositories resulted in the same list of orthologues, which showed a one-to-one relationship between human and rat genes. The exception was the Glycam1 gene, whose orthologue was not identified; the human GLYCAM1 is a pseudogene. The genes extracted from Ensembl are presented in Table 1. The numbers of SNPs identified in each gene, including flanking regions, are given in Supplementary Table S1.

The high-LD regions revealed with the three approaches were in good agreement. The TE values for CLUSTAG and Tagger are presented in Figure 2. In general, Tagger demonstrated higher values of TE than CLUSTAG. Therefore, the tagSNPs revealed by Tagger were used for further analyses, particularly searching for eQTLs. The selected tagSNPs are listed in Supplementary Table S2. Only part of them was found to be eQTLs. Some of such tagSNPs were eQTLs for several tissues. On the other hand, no eQTLs were identified among tagSNPs located in the BCL3, CCL22, FOSL1, GLYCAM1, GPR6, HMOX1, IL6, and LCN2 genes. After checking the identified sets of eQTLs, nine tagSNPs were determined as potential candidates for further analysis in a case-control study using real-time PCR with TaqMan probes. Eight of them were associated with changes of expression in brain tissues and are thus the first-priority markers. The ninth locus, the SNP in the CCR1 gene, had the greatest absolute values of eQTL-related statistics, particularly p-value and normalized effect size (10^-47 and -0.40, respectively).
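To illustrate the annotation step in which tagSNPs are intersected with significant single-tissue eQTLs and filtered by tissue, a minimal sketch is given below. The file names and column labels (rsid, gene, tissue, pval) are assumptions for the example and do not reproduce the exact layout of the GTEx release V8 files.

```python
import pandas as pd

TISSUES_OF_INTEREST = ("Brain", "Artery", "Nerve", "Blood", "Heart")

# Hypothetical inputs; file names and column labels are assumed for the example.
tags = pd.read_csv("tag_snps.csv")        # columns: rsid, gene
eqtls = pd.read_csv("signif_eqtls.csv")   # columns: rsid, gene, tissue, pval

# Keep significant eQTLs from the tissue groups of interest only.
tissue_mask = eqtls["tissue"].str.contains("|".join(TISSUES_OF_INTEREST), case=False)
eqtls = eqtls[tissue_mask]

# tagSNPs that are also significant eQTLs for the same gene in those tissues.
candidates = tags.merge(eqtls, on=["rsid", "gene"], how="inner")
print(candidates.sort_values("pval").head(10))
```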
In the line of workflow, we additionally compared four different sources of human orthologues in rat and three different methods for identification of high-LD regions and selection of tagSNPs. Ensembl, PANTHER, PhylomeDB, and MetaPhOrs were chosen because of the best accuracy and call rate of orthologues inference [31]. They all revealed the same list of human orthologues in rat and thus anyone can be used for searching of orthologs. Nevertheless, human orthologues in rat was identified for each gene of interest and confirmed by four different resources. To explore patterns of LD and identify tagSNPs we used CLUSTAG, Tagger, and gpart tools. These methods were chosen because they represent three different approaches to the problem of identifying groups of highly correlated SNPs. Although they all exploit the LD-based approach and MAF to split the list of SNPs into high-LD regions (blocks), their algorithms differ. Tagger is based on the analysis of single markers and multi-marker haplotypes, CLUSTAG-on the analysis of clusters, while gpart-on graph analysis. gpart Discussion In this paper we proposed a workflow to identify the genetic markers associated with the outcomes of ischemic stroke. It is based on candidate gene approach that requires a prior knowledge about the system under consideration. We hypothesized that such information, particularly, a list of gene-candidates, can be taken from the model studies of brain ischemia in rat. Namely, we took 24 genes exhibited substantial changes in their expression in brain rat after tMCAO and using the workflow proposed obtained a list of the SNPs (tagSNPs with eQTLs abilities) that can be potentially applied in case-control studies. In the line of workflow, we additionally compared four different sources of human orthologues in rat and three different methods for identification of high-LD regions and selection of tagSNPs. Ensembl, PANTHER, PhylomeDB, and MetaPhOrs were chosen because of the best accuracy and call rate of orthologues inference [31]. They all revealed the same list of human orthologues in rat and thus anyone can be used for searching of orthologs. Nevertheless, human orthologues in rat was identified for each gene of interest and confirmed by four different resources. To explore patterns of LD and identify tagSNPs we used CLUSTAG, Tagger, and gpart tools. These methods were chosen because they represent three different approaches to the problem of identifying groups of highly correlated SNPs. Although they all exploit the LD-based approach and MAF to split the list of SNPs into high-LD regions (blocks), their algorithms differ. Tagger is based on the analysis of single markers and multi-marker haplotypes, CLUSTAG-on the analysis of clusters, while gpart-on graph analysis. gpart can effectively identify LD blocks of different range but cannot tag SNPs. In terms of TE, Tagger outperformed CLUSTAG and thus its tagSNPs were used for further analysis. However, the number of tagSNPs computed was still high for practical usage, which is why we annotated the SNPs from high-LD regions with eQTLs and subset the appropriate tagSNPs manually. Because the expression of a particular gene can be potentially affected not only the loci located inside the gene (cis-eQTLs) but the loci lied outside the gene (trans-eQTLs) [32] the workflow may be extended with searching additional distant loci associated with the changes of expression of target genes, particularly, the genes in which no cis-eQTLs were identified. 
Like other studies aimed at establishing the genomic landscape of complex traits, our approach is also based on exploration of data of different types (mRNA transcription, population genetic variations, eQTLs) [33,34]. However, it does not rely on GWAS data, which are known to be poor at identifying real causative variants and genes [35], and thus it is initially more confident. Another characteristic of our approach is its higher genetic complexity due to the use of whole genome sequence data, allowing a higher number of real (not imputed) genetic loci to be involved in the analysis. It should also be noted that although the workflow was applied to SNPs with frequency higher than 10%, it can be used for selecting and testing SNPs with lower frequency (e.g., loci with 5% to 1% frequency). However, this will require increasing the size of the human samples analyzed (i.e., population sample, case and control samples). The data of the Genome Aggregation Database project [36], which includes sequencing data of the 1000 Genomes Project and others, can be used for creating samples of appropriate size. The limitation of the proposed approach is that it has not been experimentally validated in a cohort of patients. Nevertheless, we believe that the created workflow will help both in studying the genomics of individual variability in ischemic stroke outcomes and in looking inside the black box of polygenicity in their control.
v3-fos-license
2019-05-21T13:05:22.612Z
2019-03-06T00:00:00.000
159130255
{ "extfieldsofstudy": [ "Chemistry" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2071-1050/11/5/1406/pdf?version=1552387437", "pdf_hash": "a6bce309f7fdcff1bfc3e1632aa9c672037186b5", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43967", "s2fieldsofstudy": [ "Materials Science" ], "sha1": "09f3580a34665309b7752effec9af9fdc7b11c7f", "year": 2019 }
pes2o/s2orc
Recovery of Gold from Chloride Solution by TEMPO-Oxidized Cellulose Nanofiber Adsorbent : The goal of this study was to assess the sustainability of a modified cellulose nanofiber material for the recovery of precious gold from chloride solution, with a special focus on gold recovery from acidic solutions generated by cupric and ferric chloride leaching processes. TEMPO-oxidized cellulose nanofiber in hydrogel (TOCN), dry (H-TOCN, F-TOCN) and sheet form (S-TOCN) was examined for gold adsorptivity from chloride solution. Additionally, this work describes the optimum conditions and parameters for gold recovery. The data obtained in this investigation are also modeled using kinetic (pseudo first-order and pseudo second-order), isotherm best fit (Freundlich, Langmuir and Langmuir-Freundlich), and thermodynamic (endothermic process) parameters. Results demonstrate that high levels of gold removal can be achieved with TEMPO-oxidized cellulose nanofibers (98% by H-TOCNF) and the interaction characteristics of H-TOCN with gold suggests that other precious metals could also be efficiently recovered. Introduction At present, the gold content in specific scrap materials like electronic waste can be 10-100 times higher than that available in many naturally-occurring ores [1,2].In addition to gold, mobile phones, for example, contain ca.40 other elements [3].Consequently, the ability to recover gold, along with other rare and valuable metals, in a sustainable manner will play a central role in the development of the metals circular economy. The dominating state-of-the-art technology for gold containing metal rich waste relies on pyrometallurgical treatment as part of primary copper production or via the secondary raw material smeltery through copper electorefining/electrowinning into a precious metal plant [4].This process route can recover the main base metals, such as Cu and Ni, and precious metals, such as Ag, Au, Pt, and Pd efficiently, however, the loss of several critical and rare earth metals present in the secondary raw materials is evident [2].By contrast, a hydrometallurgical approach is a promising scalable alternative that can potentially provide the opportunity to recover a wider variety of metals, including gold.In addition, hydrometallurgy offers the possibility for pre-leaching of gold before material is fed into the copper smelter.Hydrometallurgical recovery begins with the transfer of gold into a water soluble ionized form in a leaching process, utilising a lixiviant.Cyanide is the lixiviant most applied in gold extraction from ores, but interest has increased towards less toxic, cyanide-free processes like the ones utilising concentrated sodium or calcium chloride media with cupric or ferric ions as oxidant [5,6].Several precious metal plants are known to operate in hydrochloric acid media at very high acidity [7] and, furthermore, gold chloride is purported to undergo faster dissolution during leaching than the corresponding cyanide salt [4]. 
The main challenge after leaching is to recover the ionized gold, which can be performed by, e.g., solvent extraction, ion exchange, reduction, or adsorption.Of these, adsorption offers a method that is low cost, scalable, robust, efficient, and environmentally friendly.Generally, the low concentration of gold (1-100 mg/L) in the lixiviant means that an adsorbent, particularly nanomaterials at the atomic level with higher surface area, need to be further developed in order to enhance the recovery of chemicals or contaminants in aqueous phases by adsorption processes [8,9].Moreover, the utilization of sustainable/biodegradable substances derived from nature have emerged as novel materials for the adsorption and concentration of metallic ions from aqueous solution [10][11][12][13][14][15][16][17][18][19][20][21][22][23].In this paper, we introduce a highly-charged nanocellulose [15] template as an outstanding, biologically-derived adsorbent for gold recovery from aqueous solutions.Particularly suitable for this purpose are cellulose nanofibers extracted with oxidation catalysed by the 2,2,6,6-tetramethylpiperidinyl-1-oxyl radical (TEMPO) [16,17].This oxidation process is performed in aqueous solution at room temperature and the resulting TEMPO-oxidized cellulose nanofibers (TOCNs) are slender threads of high aspect ratio (nm scale width, µm scale length) and exceptionally high mechanical properties [14,24].Due to TEMPO-oxidation in their isolation process, the nanofibers are highly charged and their recovery from aqueous environment can be difficult because of their small size and intrinsic gelling properties.To incorporate TOCNs in a macroscopic matrix, we have utilized a simple procedure: The initial oxidation was left incomplete, leaving aldehyde moieties in addition to carboxylates on the TOCN surface.Here, a new concept of utilizing entirely bio-based, cross-linked TEMPO-oxidized cellulose nanofibers (TOCNs) for selective gold recovery is presented.Specifically, we demonstrate how the selective recovery occurs in dilute, mildly acidic solutions also containing copper ions, thereby reflecting conditions in a genuine environment typical for a development-stage cyanide-free process, namely cupric chloride leaching of gold [25].The fundamental effect of time and temperature were investigated to optimize the conditions and parameters for gold recovery in this system.The technique with TOCNs combines two important aspects of green and sustainable engineering: (i) supporting the development of a new cyanide-free alternative for water-based, hydrometallurgical gold processing from, e.g., electronic waste and (ii) utilisation of widely available, renewable TOCNs to realise that alternative. The aim of the present work is to investigate the ability of bio-based material adsorbent, TOCNs, with a special focus on gold recovery in acidic solutions-as is usual in the case of cupric and ferric chloride leaching processes. 
Materials All chemicals, purchased from Sigma-Aldrich (Espoo, Uusimaa, Finland), were of analytical grade and used without the need for any further purification. Gold stock solution (1000 mg/L in 5% HCl w/w%), copper(II) chloride dihydrate and sodium chloride were used for the measurement solutions. Never-dried bleached softwood pulp sourced from a Finnish pulp mill was used in the preparation of the nanocellulosic materials. TEMPO (2,2,6,6-tetramethylpiperidine-1-oxyl radical), sodium bromide, sodium hypochlorite solution (10% w/v) and poly(vinyl alcohol) (PVA) (Mowiol 56-98, Mw 195,000 g/mol, DP 4300) were used to prepare the adsorbents. Solution pH was adjusted by the addition of 3 M NaOH solution with stirring, and samples were centrifuged in order to separate the suspension from the aqueous solution. Finally, the concentration of metals in the solutions was determined using inductively-coupled plasma optical atomic emission spectrometry, ICP-OES (model 7100DV, Perkin Elmer).

Adsorbent Preparation TEMPO-oxidized cellulose nanofiber (TOCN) in hydrogel and film form were used. For preparation of the TEMPO-oxidized cellulose nanofiber (TOCN), the first step was to synthesize cellulose nanofiber (CNF). Afterward, the CNF was converted to TOCN hydrogel before being modified to the different forms of dry and sheet film.

TEMPO-Oxidized Cellulose Nanofibers (TOCN) A birch wood pulp obtained from a Finnish pulp mill was subjected to oxidation with 2,2,6,6-tetramethylpiperidine-1-oxyl (TEMPO) and sodium hypochlorite in order to produce TOCN, following the procedure described by Saito et al. [22]. In brief, a high-pressure fluidizer (Microfluidics M110Y, Microfluidics Corporation, MA, USA) that featured two Z-type chambers (400 × 100 µm diameter) was applied to fibrillate the TEMPO-oxidized pulp. The pulp was passed through the fluidizer twice at 1800 bar operating pressure. The charge of the oxidized pulp was determined using a standard conductometric titration method (SCAN-CM 65:02, 2002) and was found to be ca. 1.39 mmol/g. The aldehyde groups formed during the TEMPO oxidation were oxidized further into carboxylic groups in order to produce a viscous and transparent gel that had a dry matter content of 1.1 wt% (Figure 1a). TOCN hydrogels (soggy adsorbent, as gained from preparation) were stored at +4-5 °C until further use.

Oven-Dried TEMPO-Oxidized Cellulose Nanofibers (H-TOCN) The resulting TOCN gel was subsequently cured at 90 °C for 12 h and then allowed to slowly cool, leading to cross-linking of the TOCNs due to the reactivity of the aldehydes (Figure 1a-c). The resulting material was film-like (Figure 1d-f) and easy to dip into a solution.

Batch Adsorption Study The solution simulates a gold chloride leaching solution with both gold (the valuable metal) and copper (as a typical impurity) present in the solution. The adsorption of gold in cupric chloride solution on the TOCN material was studied in batch mode, and the effects of different parameters, including adsorbent dose (25-250 mg/10 mL), initial Au concentration (10-100 mg/L), temperature (25-90 °C), and contact time, were assessed. In addition, the pH and ionic strength of the solution were adjusted by NaOH and NaCl additions. It should be noted that all the adsorption experiments were carried out at pH = 2. In the work outlined here, the simulated gold mining solution was varied between 10-100 mg/L in a 0.02 M copper chloride solution at pH = 2, with the optimized quantity of adsorbents (500 mg and 150 mg for TOCN and H-TOCN, respectively), in order to investigate recovery of the gold in binary solution. Equilibrium adsorption experiments were carried out to gauge the efficiency of TOCN in removing gold from chloride solution. The solution was agitated in a bath shaker (Stuart SBS40) at 130 rpm for 48 h to ensure that the adsorption process reached equilibrium. The subsequent removal of the adsorbent TOCN film, i.e., solid/liquid separation of the adsorbent, was straightforward.
In order to investigate the adsorption, isotherm experiments were performed at varying solution temperatures with an initial gold concentration (100 mg/L) in chloride solution (0.02 M). In contrast, the kinetic experiments were conducted with optimized adsorbent quantities: 10.0 g gel form (TOCN) and 0.5 g dry form (H-TOCN). The initial solution was 0.2 L of 100 mg/L Au and 0.02-1 M CuCl2 at pH = 2 and a temperature of 25 °C. The adsorption percentage (R%) and the adsorption capacity values at equilibrium (qe) and time t (qt) were calculated using the following equations:

R% = (C0 − Ce)/C0 × 100 (1)
qe = (C0 − Ce) × V/W (2)
qt = (C0 − Ct) × V/W (3)

where C0 (initial), Ce (equilibrium) and Ct (time) are the Au(III) concentrations (mg/L) in solution, respectively, V is the aqueous solution volume (L) and W represents the adsorbent sample weight added to the solution (g).

Effect of Ionic Strength As the primary aim of this work was the recovery of gold from chloride solutions, the influence of ionic strength (IS) was also investigated by adjustment of the concentration of Cl− ions in the solution. This was achieved using NaCl additions to produce solutions with [Cl−] of 0.02, 0.1, 0.5, and 1 M, taking into account the amount of Cl− already present due to CuCl2 [26].

Characterization of Materials TOCN samples were characterized by a zetasizer (Nano-ZS90, Malvern, UK). N2 adsorption-desorption isotherms and pore sizes of both the H-TOCN and F-TOCN samples were measured at 77.35 K using a BELsorp-mini II instrument (BEL, Japan). All samples were outgassed at a temperature of 70 °C for 20 hours prior to the adsorption-desorption experiments. For all measurements, accuracy at each pressure step was improved by the use of a dead volume reference cell. The Brunauer, Emmett, and Teller (BET) method was used to calculate the sample surface area over a relative pressure range of 0.05 to 0.45 on the adsorption isotherm [27]. In contrast, pore size distribution was determined using the Barrett-Joyner-Halenda (BJH) method based on the isotherm desorption branch [28]. All calculations were performed using the proprietary BELMaster (BEL, Japan) analysis software (Version 6.4.1.0). Characterization of the TOCN and S-TOCNF was also carried out using atomic force microscopy (AFM) from Anasys Instruments Inc. (Santa Barbara, CA, USA) in tapping mode with MicroMasch HQ: NSC15/Al BS probes. The typical cantilever resonance frequency was 325 kHz and the radius of curvature of the cantilever 8 nm according to the manufacturer. No other image processing was performed except flattening, and at least three images per sample were taken. AFM images were analysed using the Analysis Studio software (version 3.11). The self-standing TEMPO CNF-PVA film was imaged as such in the dry state. Samples from the TEMPO CNF fibrils were prepared on Au surfaces by spin coating as described by Ahola et al. [29]. Root mean square roughness values (Rq) for the TEMPO CNF-PVA film in the dry state were extracted from three topographic 3 µm × 3 µm AFM images and the average roughness value is reported.
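As a simple illustration of the uptake equations (1)-(3) above, the following sketch evaluates the removal percentage and adsorption capacity from measured concentrations; the numerical values are placeholders rather than data from this study.

```python
def removal_percent(c0, ce):
    """Adsorption (removal) percentage, Eq. (1): R% = (C0 - Ce) / C0 * 100."""
    return (c0 - ce) / c0 * 100

def capacity(c0, c, volume_l, mass_g):
    """Adsorption capacity q (mg/g), Eqs. (2)-(3): q = (C0 - C) * V / W."""
    return (c0 - c) * volume_l / mass_g

# Placeholder example: 10 mL of 100 mg/L Au solution and 150 mg adsorbent.
c0, ce = 100.0, 2.0   # mg/L, initial and equilibrium Au concentration
v, w = 0.010, 0.150   # solution volume (L) and adsorbent mass (g)

print(f"R% = {removal_percent(c0, ce):.1f} %")
print(f"qe = {capacity(c0, ce, v, w):.2f} mg/g")
```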
Adsorbent Characterization Nanoscale surface topography of the adsorbent was investigated by AFM. This technique provides three-dimensional images that allow spatial information related to changes in adsorbent surface roughness to be determined. Figure 2 shows the AFM images for both the TOCN and the film form of the TOCN. In Figure 2a, the topographic profiles of the height images are emphasized with a white line.

Nitrogen adsorption-desorption isotherms at 77.35 K of the investigated dried TOCNs (H-TOCN and F-TOCN) are presented in Figure 3. The shape of the isotherm displays an intermediate between types II and IV based on the IUPAC classification [30], which is indicative of an initial monomolecular adsorbate layer (i.e., type II assumption) followed by multi-layer adsorption. Moreover, as the isotherm features a slight hysteresis loop, usually generated by the capillary condensation of the adsorbate in the mesoporous structure, type IV behavior is also observed. Furthermore, the BET analysis results also allow the total surface area (SBET (m²/g)) and pore radius of the samples to be determined. The SBET values of the H-TOCN and F-TOCN samples are approximately 0.5 and 6.3 m²/g, whereas the average pore radii are 1.21 and 1.85 nm, respectively.

In addition, the surface charge of the TOCN hydrogel was also measured by a zetasizer (Malvern Zetasizer Nano-ZS90), and the results show that the average ζ potential of the solution is >65 mV. This result indicates that solutions of TOCN are highly charged and should have excellent colloidal stability.
Effect of Adsorbent Dose

At first, the amount of adsorbent was optimized through various doses of the adsorbent; 25, 50, 75, 100, 150, 200, and 250 mg were added into 10 mL of gold solution (C_0 = 100 mg/L) containing 0.02 M CuCl2 for either 24 or 48 h. The solution simulates a gold chloride leaching solution with both gold (the valuable metal) and copper (a typical impurity) present. Figure 4 shows that the recovery of Au increases with increasing adsorbent dose. Furthermore, an increase in dosage up to 250 mg (dry weight of TOCN) was also shown to increase Au recovery, probably as a result of increased adsorbent active site accessibility at higher adsorbent concentrations. Nevertheless, it was also observed that the enhancement of gold recovery at the highest adsorbent dose (250 mg) is negligible compared to the 200 mg dose; hence, the optimum removal of gold (∼98.0%) can be obtained by using 150 mg/10 mL of the TOCNs at equilibrium after 48 h. In addition, the results in Figure 4 illustrate that high gold recoveries of >95% can be readily achieved by H-TOCNs under ambient conditions after 48 h and mild agitation (130 rpm).

Adsorption of gold in chloride solution by H-TOCN was also performed at lower rotation speeds (data not presented here), though the results show that the best gold recovery efficiencies were obtained with an agitation of 130 rpm. Moreover, an increase in the level of agitation results in an increase in the gold recovery efficiency for all adsorbent weights used, which is in line with previous observations for other modified cellulose adsorbents for Au(III) adsorption available in the literature [10].
Effect of Temperature

In the work outlined here, the simulated gold mining solution was varied between 10 and 100 mg/L in 0.02 M copper chloride solution at pH = 2 in order to investigate the recovery of gold from a binary solution. In addition, the effect of temperature on gold recovery was studied over a temperature range of 25-90 °C at the same initial pH (2.0) and cupric chloride concentration (0.02 M) (Figure 5a). As can be observed from Figure 5a, the recovery of gold was enhanced up to 99% with increasing temperature. Most likely, the enhancement of gold recovery at higher temperature may be accounted for by the change in chemical morphology of the chloro-gold complexes with temperature [31]. The results in Figure 5a were further analyzed to determine the total energy changes occurring during adsorption, in order to ascertain the gold recovery mechanism. As the total adsorption efficiency of gold is large at high temperature, these results were used to establish the gold adsorption efficiency onto TOCN as a function of temperature versus time (see Supplementary Table S1 and Figure S2).

As previously outlined, all experiments were performed at pH = 2. In this regard, it is worth taking into account the calculated Pourbaix diagram to estimate the Au species present at pH = 2. As can be seen from the calculation, at pH 2 the main components in the Pourbaix diagram comprise Au(s), AuCu(s), and Au(s) associated with Cu2+ in solution (see Supplementary Figure S1). Accordingly, it can be derived that Cu ions are in competition with Au species in the solution, and selectivity toward gold ions is the main concern in the copper(II) chloride solution. Figure 5b reveals that TOCN has a good ability to adsorb Au in the binary solution in contact with Cu. In other words, TOCNs can selectively adsorb gold ions from the copper(II) chloride solution. Indeed, the sustainable material TOCN is a promising alternative for industrial applications to overcome the limitations in gold recovery from chloride solutions when in competition with Cu2+ from the oxidant in cyanide-free leaching processes. Nevertheless, the transformation from laboratory scale to industrial application still needs to be studied in more detail.

Effect of Ionic Strength

Another important factor that affects the adsorption behavior of gold is the chemical morphology of the chloro-gold complexes. The ratio of each chloro-gold complex concentration relates to both the chloride and hydrogen ion concentration, as well as temperature, as they can exist as different species depending on pH (Figure 6) [11]. The equilibrium constants of the gold-chloro complexes are as follows [35]:

AuCl2(OH)2- + H+ + Cl- ⇌ AuCl3(OH)- + H2O,  K3 = 10^7.00  (5)

AuCl3(OH)- + H+ + Cl- ⇌ AuCl4- + H2O,  K4 = 10^6.07  (6)

As can be seen from the above equations and Figure 6, the predominant complex of gold at pH < 3 is mainly AuCl4-. Indeed, at pH higher than 3, AuCl4- is likely to undergo hydrolysis, which causes the complex to change to AuCl3(OH)- in the aqueous chloride solution.
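As a rough illustration of why AuCl4- dominates below pH 3, the sketch below evaluates only the last hydrolysis step, Equation (6); the other equilibria (including Equation (5)) are neglected, and the chloride level of 0.02 M mirrors the baseline CuCl2 solution, so the numbers are indicative only.

```python
# Equilibrium (6): AuCl3(OH)- + H+ + Cl-  <=>  AuCl4- + H2O,  K4 = 10**6.07
K4 = 10 ** 6.07

def aucl4_fraction(pH, cl_molar):
    """Fraction of AuCl4- relative to (AuCl4- + AuCl3(OH)-), from Equation (6) alone."""
    ratio = K4 * (10.0 ** -pH) * cl_molar   # [AuCl4-]/[AuCl3(OH)-] = K4 [H+][Cl-]
    return ratio / (1.0 + ratio)

for pH in (2.0, 3.0, 4.0, 5.0):
    print(f"pH {pH}: AuCl4- fraction = {aucl4_fraction(pH, cl_molar=0.02):.3f}")
# At pH 2 the fraction is close to 1, consistent with AuCl4- being the dominant species.
```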
Moreover, Ogata et al. [12] reported that the gold recovery from aqueous chloride solution by tannin gel particles was almost independent of the initial pH in the range between 2.0 and 3.8. In this regard, all the experiments were conducted at the same initial pH of 2 and a constant temperature (25 °C) in order to investigate the effect of the chloride ion concentration, in the range of 0.02-1 M, on the gold adsorption behavior. The results obtained from these experiments are shown in Figure 7, and it can be seen that the recovery of gold decreases with higher ionic strength values.

It is reported that increasing inter-fibrillar electrostatic repulsion and a reduction of the adhesion between the fibrils lead to the nanofibrillation of cellulose [22]. Therefore, the background electrolyte concentration reduces the electrostatic repulsion between the fibers and, in doing so, the specific surface area [32,33]. This is observed in Figure 7, as an increase in the background electrolyte concentration (higher IS values) leads to a drop in the gold recovery due to a decreased nanofibrillation tendency. It is also possible that low IS values may enhance electrostatic repulsion and consequently cause the TEMPO cellulose nanofibers to adopt a more open configuration that provides greater access to the adsorbent pores for gold removal from the solution.

The influence of the presence of chloride ions in the aqueous solution was also studied at a lower concentration of gold (10 mg/L) under otherwise identical conditions. In this case, the increase of IS in the aqueous phase increases the adsorption efficiency, which can be attributed to the higher ionic concentration. In addition, as can be seen from Equation (7), a shift to the left side, which occurs when gold is present in the aqueous phase, results in a non-adsorbable HAuCl4 species (Figure 8). Consequently, by increasing the IS from 0.02 to 1 M, the gold adsorption efficiency increased, a finding that correlates with the previous observations of Alguacil et al.,
who studied gold(III) adsorption using HCl concentrations ranging between 0.03 and 0.5 M [35].

Kinetic Study

In order to both investigate the mechanism of adsorption and control the adsorption rate, it is important to also study the kinetics of gold adsorption onto the adsorbents. Plots of the adsorption capacity (q_t) vs. time (t) are shown in Figure 9 for an initial concentration of 100 mg/L gold in 0.02 M chloride solution, and the kinetic models were examined over a 48 h period.

To describe the adsorption kinetic data, two of the most widely utilized kinetic models, pseudo-first order and pseudo-second order, were investigated [36]. The linear forms of these models are:

ln((q_e − q_t)/q_e) = −k_1 t  (8)

t/q_t = 1/(k_2 q_e²) + t/q_e  (9)

where q_e is the equilibrium value of q_t, and k_1 and k_2 are the rate coefficients for the pseudo-first-order (PFO) and pseudo-second-order (PSO) models, respectively.
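Both linear forms lend themselves to simple least-squares fitting. The sketch below fits k_1 (PFO) and k_2, q_e (PSO) to hypothetical kinetic data; the arrays are illustrative placeholders, not the measured points of Figure 9.

```python
import numpy as np

# Hypothetical kinetic data (t in h, qt in mg/g), for illustration only
t  = np.array([1, 2, 4, 8, 16, 24, 48], dtype=float)
qt = np.array([2.0, 3.5, 6.0, 9.5, 12.5, 14.0, 15.2])
qe_exp = 15.4                                  # experimental plateau value

# Pseudo-first order: ln((qe - qt)/qe) = -k1 * t  ->  slope of the fit = -k1
mask = qt < qe_exp
k1 = -np.polyfit(t[mask], np.log((qe_exp - qt[mask]) / qe_exp), 1)[0]

# Pseudo-second order: t/qt = 1/(k2*qe^2) + t/qe  ->  slope = 1/qe, intercept = 1/(k2*qe^2)
slope, intercept = np.polyfit(t, t / qt, 1)
qe_pso = 1.0 / slope
k2 = 1.0 / (intercept * qe_pso ** 2)

print(f"PFO: k1 = {k1:.3f} 1/h")
print(f"PSO: qe = {qe_pso:.2f} mg/g, k2 = {k2:.4f} g/(mg*h)")
```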
In this case, the adsorption rate is related to the process driving force or kinetics, which here corresponds to the gold solution concentration (pseudo-first order). In addition, the adsorption capacity can also be related to the number of occupied adsorbent active sites and the solute concentration in the aqueous solution (pseudo-second-order kinetics) [36]. The comparison of the fitting results produced by Equations (8) and (9) showed that the kinetic data have a higher correlation with the pseudo-second-order model than with the pseudo-first-order model (Table 1). In addition, another factor (βθ), related to the initial concentration (C_0) and the concentration at any time (C_t) [36], allows a comparison between C_0 and βθ to be made at the initial gold concentration of 100 mg/L at any time (Table 2). It was found that the βθ values were inconsequential when compared with the initial concentration of gold used; hence, it was determined that the pseudo-second-order model best describes the adsorption kinetics of gold onto TEMPO-oxidized nanofiber cellulose. Indeed, the rate-limiting step may be related to chemisorption, i.e., chemical adsorption between the adsorbate and the adsorbent.

Additionally, the Weber-Morris intraparticle diffusion (IPD) model was used to identify the adsorption mechanism of gold onto these two different types of TOCN in terms of the mass transfer of Au within the TEMPO-oxidized cellulose nanofiber [33]. As mentioned earlier, TOCN has a porous structure through which gold molecules may accordingly diffuse. A plot of q_t versus t^0.5 (Figure 10)
demonstrates that the whole adsorption process comprises three distinct steps. The observed initial rapid increase in the slope relates to fast external surface adsorption, whereas the second section shows gradual adsorption, which is the rate-limiting step during intraparticle diffusion. Finally, the third portion of the plots outlines the equilibrium phase, due to the reduced active site accessibility on the adsorbents and the low residual value of the remaining gold in the solution.

Equilibrium Study

The basic characteristics of an adsorption process can be determined with an equilibrium study, whereas kinetic data are critical for adsorbent use optimization. Taken together, they outline the relationship between the adsorbent amount and the dissolved adsorbate concentration in the liquid at equilibrium. In the work outlined here, the gold concentration in the equilibrium study was varied between 10 and 100 mg/L in 0.02 M chloride solution in order to ascertain the relationship between the adsorbed amount (q_e) and the solution concentration (C_e) of gold under equilibrium conditions.

The Langmuir, Freundlich and Langmuir-Freundlich isotherms (Equations (10)-(12)), which are commonly applied to many adsorbate/adsorbent systems, were employed to analyze the experimental equilibrium data and to obtain the maximum adsorption capacity of each of the adsorbents investigated for gold removal:

q_e = q_m K_L C_e / (1 + K_L C_e)  (10)

q_e = (K_F C_e)^(1/n)  (11)

q_e/q_m = (K C_e)^(1/n) / (1 + (K C_e)^(1/n))  (12)

where q_e is the amount of adsorbate per unit mass of adsorbent at equilibrium, q_m is the maximum adsorption capacity corresponding to complete monolayer coverage of the surface at a high equilibrium adsorbate concentration C_e (mg/L), and K_L is a model parameter that accounts for the degree of affinity between the adsorbate and adsorbent. K_F, K and n are constants. The experimental data were modeled using the three equilibrium isotherm equations outlined above, and Table 3 summarizes the correlation coefficients (r² values) and isotherm model constants obtained for gold adsorption onto the different types of TEMPO-oxidized nanofiber cellulose.

The Langmuir isotherm model gave rise to the highest r² values for each of the TEMPO-oxidized CNFs, with maximum gold adsorption capacities of 15.44 and 0.48 mg/g for H-TOCNs and TOCNs, respectively. Overall, the equilibrium adsorption data demonstrate that heat treatment has a significant effect on the properties of the adsorbent when the maximum adsorption capacity of H-TOCN for the recovery of gold from the chloride solution is considered. Generally, the adsorption efficiency was increased by drying TOCN, which might be attributed to strong electrostatic attraction between the Au complex and H-TOCN. Heat treatment of the hydrogel TOCNF may result in more accessible carbonyl groups on neighboring fibrils proximal to the aldehyde group; subsequently, these may react via an aldol mechanism in acidic media to a protonated form (Figure 11) [37,38].

Figure 6. Fraction diagrams of chloro-gold complexes calculated with equilibrium constants at pCl 2.0 and 293 K [after 12].
Table 1. Constants obtained from the PFO and PSO models for gold adsorption onto different types of TOCN adsorbents (TOCN, H-TOCN, F-CNF, and S-TOCN).

Table 2. βθ values obtained from the adsorption of gold onto the different types of TOCN (C_0 = 100 mg/L).

Table 3. Parameters of the adsorption isotherms obtained for the adsorption of gold onto the different types of TOCN adsorbents.
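For completeness, the equilibrium analysis of the preceding section can be reproduced with a nonlinear least-squares fit of Equations (10) and (11); the (C_e, q_e) points below are hypothetical placeholders, not the data behind Table 3, and SciPy's curve_fit is assumed to be available.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qm, KL):          # Equation (10)
    return qm * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):         # Equation (11), in the form written above
    return (KF * Ce) ** (1.0 / n)

# Hypothetical equilibrium points (Ce in mg/L, qe in mg/g)
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 60.0])
qe = np.array([4.1, 7.8, 10.9, 13.2, 14.6, 15.1])

(qm, KL), _ = curve_fit(langmuir, Ce, qe, p0=[15.0, 0.1], bounds=(0, np.inf))
(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=[1.0, 2.0], bounds=(0, np.inf))

r2 = lambda y, yhat: 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"Langmuir:   qm = {qm:.2f} mg/g, KL = {KL:.3f} L/mg, r2 = {r2(qe, langmuir(Ce, qm, KL)):.3f}")
print(f"Freundlich: KF = {KF:.3f}, n = {n:.2f}, r2 = {r2(qe, freundlich(Ce, KF, n)):.3f}")
```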
v3-fos-license
2020-08-06T09:07:32.537Z
2020-08-01T00:00:00.000
221078192
{ "extfieldsofstudy": [ "Medicine", "Mathematics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2304-8158/9/8/1064/pdf", "pdf_hash": "b074a1dc8544cd36fe50c9752b13678e135b809f", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43969", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "sha1": "67c64480ba45bdbda0069b3c571099edb7ea1f63", "year": 2020 }
pes2o/s2orc
Lupin Seed Protein Extract Can Efficiently Enrich the Physical Properties of Cookies Prepared with Alternative Flours Legume proteins can be successfully used in bakery foods, like cookies, to obtain a protein-enriched product. A lupin extract (10 g/100 g) was added to gluten and gluten-free flours from different sources: rice, buckwheat, oat, kamut and spelt. The impact on the physical properties of the dough and cookies was evaluated for the different systems. Rice and buckwheat doughs were 20% firmer and 40% less cohesive than the others. The incorporation of lupin extract had a reduced impact on the shape parameters of the cookies, namely in terms of area and thickness. The texture differed over time and after eight weeks, the oat and buckwheat cookies enriched with lupin extract were significantly firmer than the cookies without lupin. The incorporation of lupin extract induced a certain golden-brown coloring on the cookies, making them more appealing: lightness (L*) values decreased, generally, for the cookies with lupin extract when compared to the controls. The aw and moisture content values were very low for all samples, suggesting a high stability food product. Hence, the addition of lupin extract brought some technological changes in the dough and cookies in all the flours tested but improved the final product quality which aligns with the trends in the food industry. Introduction Current trends in the development of new food products identified by companies in consumer studies, such as Innova Market Insights, are gluten-free products, alternative vegetable proteins and snacks. In this context, the snack market is very prominent, with the demand for healthy snacks becoming increasingly relevant [1]. Cookies and crackers have become one of the most popularly consumed snacks due to their low manufacturing cost, availability, high nutrient density, long shelf-life and potential to be supplemented with a wide variety of nutraceuticals [2,3]. It is widely known that wheat cookies, commonly available in the market, lack good quality protein because of their deficiency in lysine. For this reason, the production of wheat cookies with various legume seeds has been proposed [4], to increase the protein content and improve an amino acid balance of the final product, due to the contribution of lysine by legumes and the contribution of methionine by cereals [5]. However, wheat gluten consisting of glutenins and gliadins cause severe intestinal inflammation in individuals suffering with celiac disease or other forms of gluten intolerance [6]. Hence, several alternative flours have an increase in demand, such as spelt and kamut, as is the case for species of the Triticum genus, but with a healthier nutritional profile than modern wheats, as they provide more nutraceutical compounds, vitamins and minerals [7,8]. Cookies Preparation Cookies were prepared according to an optimized formulation [28,30], using the following ingredients (as g/100 g): flour (54), sugar (15), margarine (18), water (12) and baking powder (1). For all the samples, the same quantities of the ingredients were used, except for the flour, which was replaced by 10% (w/w) LE in the case of lupin cookies. The procedures were similar for the different flours used and the sample without LE incorporation was considered as a control sample for each corresponding flour. The amount of LE to be incorporated in cookies was based on the previous studies [31,32]. 
Batches of 100 g were prepared, and the ingredients were mixed for 15 s at a speed of 4 in a food processor (Bimby, Vorwerk, Wuppertal, Germany). The sweet cookies were molded in a square mold and baked at 110 • C for 40 min in a forced-air convection oven (Unox, Italy). After cooling for 30 min at room temperature, the cookies were stored in hermetic containers, at room temperature and protected from the light. Dough Rheology Rheological measurements were conducted using a controlled strain rheometer (Haake, Mars III, Thermo Fisher Scientific, Karlsruhe, Germany) at a constant temperature (25.0 • C ± 0.1 • C), controlled by a Peltier system. The rheometer was equipped with serrated parallel-plate geometry (20 mm diameter) to overcome the slip effect. The dough pieces were compressed with a 1.5 mm gap. Following the preparation, the dough was allowed to rest for 5 min before measuring. The stress and frequency sweeps were carried out at 25 • C. The stress sweep, with a constant frequency (1 Hz), was performed to identify the linear viscoelastic region. Frequency sweep tests were performed with a constant stress within the linear viscoelastic region and in a frequency range from 0.01 to 100 Hz to obtain the values of elastic modulus (G' (Pa)) and viscous modulus (G" (Pa)). Dimensions The dimensions of the cookies were evaluated using a digital caliper (Powerfix, Germany). The width and thickness of the ten cookies from each formulation were measured after 24 h of cookie preparation. Color Analysis The color of the cookie samples was measured using a Minolta CR-400 (Japan) colorimeter. The results were expressed in terms of L*, lightness (values increasing from 0 to 100); a*, redness to greenness (60 to −60 positive to negative values, respectively); and b*, yellowness to blueness (60 to −60 positive to negative values, respectively) according to the CIELab system. The total color difference between the sample cookies during the storage time (up to eight weeks) was determined using average L*, a* and b* values. The measurements were performed under similar light conditions using a white standard (L* = 94.61, a* = −0.53, and b* = 3.62), at room temperature, replicated eight times for each cookie sample (control and lupin-enriched cookies) and for week 0 (24 h after baking) and week 8. The total colour difference between the control and the lupin-enriched cookies was obtained by Equation (1): Texture Analysis Instrumental texture analysis was conducted in a TA.XTplus (StableMicro Systems, Godalming, UK) texturometer. Texture measurements were performed at 20 • C ± 1 • C in a temperature-controlled room. Dough Texture Dough samples were submitted to texture profile analyses (TPAs), simulating the action of a double chewing. The dough was contained in a cylindrical flask of 2.5 cm in diameter and 4.5 cm in height. The TPAs were performed in a penetration mode using an acrilic cylindrical probe of 4 mm in diameter, 15 mm of penetration and 1 mm/s of crosshead speed. Firmness and cohesiveness were the two primary texture properties used to compare the doughs, as they were the ones with the greatest capacity to discriminate between the different samples. The firmness of the dough was considered to be the maximum force in the first cycle [33]. The cohesiveness describes how well a food retains its form between the first and second chew and it is a ratio between the work performed in the second and the first cycle [33]. These analyses were repeated eight times for each dough sample. 
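Referring back to the Color Analysis subsection, and assuming Equation (1) denotes the standard CIELab (CIE76) total colour difference, ΔE* = [(ΔL*)² + (Δa*)² + (Δb*)²]^(1/2), the small sketch below shows the calculation for one pair of hypothetical colorimeter readings.

```python
import numpy as np

def delta_E(lab_control, lab_sample):
    """Total colour difference between two CIELab readings given as (L*, a*, b*)."""
    dL, da, db = np.asarray(lab_sample, float) - np.asarray(lab_control, float)
    return np.sqrt(dL ** 2 + da ** 2 + db ** 2)

# Hypothetical readings for a control cookie and a lupin-enriched cookie
print(delta_E((72.0, 3.5, 24.0), (63.0, 6.0, 27.0)))   # ~9.8, i.e., visually distinguishable
```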
Cookie Texture

Cookie texture was evaluated with a penetration test, using a cylindrical probe of 2 mm in diameter, plunged 8 mm at 1 mm/s. Resistance to penetration was evaluated by the maximum peak shown on the texturogram, which corresponds to the N value. These determinations were replicated at least eight times for each cookie sample (control and lupin-enriched) at week 0 (24 h after baking) and week 8. The tests were performed during storage (24 h and 8 weeks after baking) by crushing the samples into little pieces. The cookies (control and with lupin) were assayed in triplicate.

Moisture Content

The moisture was determined gravimetrically following ISTISAN protocols (ISTISAN Report 1996/34, method B, page 7), using an incubator (Binder GmbH, Germany) at 105 °C until a constant weight was achieved.

Statistical Analyses

Experimental data were obtained at least in triplicate and were statistically analyzed using SigmaPlot (version 12.5). An analysis of variance (one-way ANOVA) was applied to evaluate the differences between samples at a significance level of 95% (p < 0.05). Tukey's test was used to compare the differences between groups. All the results are presented as the mean ± standard deviation (SD).

Physical Properties of the Dough

It should be noted that the control gluten-free doughs without LE (rice and buckwheat) are 20% firmer and 40% less cohesive than the others. This behavior should result from the different composition of these two flours (Table 1).
In these cases, the structuring of the system is essentially achieved by the starch present, although the different types of proteins present can also contribute to the reinforcement of this structure. Thus, the doughs obtained from these two flours have a greater resistance to penetration (high firmness), which is related to more compact doughs. The absence of the gluten matrix decreases the air retention capacity of the system [34], contributing to firmer doughs. At the same time, a reduction in cohesiveness associated with a greater disaggregation is observed [35]. These characteristics are less positive in terms of the technological handling of these doughs.

In the case of rice, the high starch content is relevant, compared to the other flours, which has an important impact on structure creation. Regarding the buckwheat, the type of proteins involved could also explain the increase in firmness and decrease in cohesiveness, since its proteins are rich in lysine and arginine, unlike the other flours studied [36]. Complementary studies can be developed in the future in order to support this statement.

When 10% (w/w) of the flours under study is replaced by LE, a relevant impact on the texture characteristics of the dough is observed. In general, the incorporation of proteins contributes to an increase in dough firmness (Figure 1a) and a significant (p < 0.05) reduction of at least 50% in cohesiveness (Figure 1b). A similar behavior was observed by other researchers upon the addition of potato peel to cakes [37], whey protein to cheese [38] and lupin flour to biscuits [26]. It is important to highlight the strong impact of LE addition on the two gluten-rich flours: spelt and kamut doughs are about four times firmer (from 2.66 N to 12.19 N in the case of spelt and from 2.05 N to 9.95 N in the case of kamut) than the corresponding controls. This should result from a strong interaction between the main macromolecules present in the system: (i) lupin proteins-flour starch; and (ii) lupin proteins-flour gluten proteins. This type of interaction is strongly dependent on the protein composition of the added protein fraction, as well as on the starch conformation [39]. More important than the total amount of macromolecules present in the dough, which is similar in all cases, is the biochemical composition and conformation of these proteins and polysaccharides. A firmer dough should reflect a more effective entangled network developed among these macromolecules [39], which may be important in terms of dough stability, but which translates to a less cohesive dough. The relevant reinforcement of the structure observed for the kamut and spelt doughs, due to the incorporation of LE, allows us to predict that there was a reinforcement of the gluten structure already present in the control doughs, resulting from a synergy between the gluten and the lupin proteins. The firmness increase and cohesiveness decrease resulting from the LE incorporation have a relevant impact in technological terms: the doughs become more difficult to mold, meaning it may be necessary to optimize the cookie production process, for example through the optimization of the water absorption (e.g., the MicrodoughLab procedure), which corresponds to the quantity of water needed to reach the optimal dough consistency [40].

The impact of LE addition on the linear viscoelastic behavior of the cookie doughs prepared with the five different flours can be observed in Figure 2.
These results were obtained from small-amplitude dynamic rheological measurements (small-amplitude oscillatory shear, SAOS) and are related to the degree of dough structuring, reflecting the level of molecular interactions that are established, especially among the macromolecules present.

The evolution of G' (storage modulus) and G'' (loss modulus) over the frequency range tested reveals that both moduli slightly increased with increasing frequency. This weak gel-like rheological behavior is typical of cookie doughs [41] and other cereal dough products such as bread [42] and pasta [43]. The addition of LE causes a reinforcement of the dough structure for all the flours studied, except for the buckwheat flour. This is evidenced by the higher values of G' and G'' for the formulations enriched with LE, compared to the standard flours.
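A minimal sketch of how G' and G'' can be read at 1 Hz from a frequency sweep is given below; the synthetic power-law moduli only mimic the weak gel-like behaviour described above and are not the measured data of Figure 2 or Table 2.

```python
import numpy as np

# Hypothetical frequency sweep within the linear viscoelastic region
f = np.logspace(-2, 2, 9)            # frequency (Hz)
G_storage = 2.0e5 * f ** 0.12        # G'  (Pa): weak power-law rise with frequency
G_loss    = 4.0e4 * f ** 0.18        # G'' (Pa)

# Read both moduli at 1 Hz by log-log interpolation
G1_1Hz = np.exp(np.interp(0.0, np.log(f), np.log(G_storage)))
G2_1Hz = np.exp(np.interp(0.0, np.log(f), np.log(G_loss)))
print(f"G'(1 Hz) = {G1_1Hz:.3g} Pa, G''(1 Hz) = {G2_1Hz:.3g} Pa, tan(delta) = {G2_1Hz / G1_1Hz:.2f}")
# G' > G'' together with a weak frequency dependence is the signature of weak gel-like behaviour
```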
These results are in agreement with the texture results: also in terms of firmness, the buckwheat flour formulation was the only one without significant differences (p > 0.05) due to the addition of LE. To obtain a more detailed comparison among the linear viscoelastic behaviors of the different formulations, Table 2 shows the G' values obtained at 1 Hz (G'_1 Hz) from the three replicates of each test. It turns out that the G'_1 Hz values were significantly higher for the rice, spelt and kamut flours when the lupin-incorporated dough was compared with the control without LE. The maximum value of G' was 8.3 × 10^5 Pa for the lupin-incorporated rice flour. However, the greatest increment in G'_1 Hz due to the addition of LE was achieved for the kamut flour. In these cases, lupin incorporation increased the degree of dough structuring, which results from the formation of more complex three-dimensional structures among the macromolecules present in the systems, as previously discussed for the dough texture results.

Table 2. Values of G' at a frequency of 1 Hz. Values are the means of at least three experiments ± SD. * represents p < 0.05 when compared with the corresponding control cookie.

Physical Properties of Cookies

Characteristic dimensions of the cookies prepared with LE incorporation in the five different flours are presented in Table 3. In general, the incorporation of LE led to significant differences (p < 0.001) in all the flours tested, except for oat flour. For the spelt, kamut and buckwheat flours, the addition of LE increased the area in relation to the control. However, the rice flour cookies were the only ones with a significant (p < 0.001) reduction in the cookie area. Therefore, the presence of gluten does not seem to affect the cookie area, and no direct relationship can be established with the expansion of the structure. In relation to thickness, the two gluten-free flours (rice and buckwheat) showed a significant increase (p < 0.05) in the LE-containing cookies, unlike the gluten flours, which showed a generalized decrease. Similar studies were performed with wheat cookies: Jayasena and Nasar-Abbas [26] reported no effect on the cookie diameter and an increase in the cookie thickness with the presence of 10% (w/w) lupin flour. Nevertheless, Bilgiçli and Levent [44] demonstrated no effect on the thickness of cookies containing lupin flour, whereas Tsen et al. [45] showed a reduction in the diameter of cookies prepared with soy protein isolates.

Table 3. The dimensions of each cookie formulation with 10% (w/w) of lupin extract (LE). Values are the averages of ten cookies ± SD. * represents p < 0.05 and ** represents p < 0.001 when compared with the corresponding control cookie.

In summary, even in cases where statistically significant results were obtained, all the structural alterations resulting from the addition of LE could be neglected as far as their magnitude was concerned (a maximum 20% variation for the area and thickness of buckwheat cookies). This conclusion can be important in terms of technological performance and consumer acceptance.

The texture properties of foods are an important requirement for their acceptance by consumers, especially in what concerns crispy products, such as cookies [46]. In this sense, the impact of LE addition to different types of cookies was evaluated in both the presence and absence of gluten. Firmness values (N) obtained in week 0 and eight weeks later are presented in Figure 3.
It is evident that the cookies prepared with the ancient grains and without LE (spelt and kamut) were firmer than the other control cookies, and this observation remained valid after storage (eight weeks). The changes induced by the LE in the cookie structure differed over time: at week 0, only the spelt flour showed no significant difference (p > 0.05) when compared to the control; however, after eight weeks, the spelt, kamut and rice flours showed no significant differences (p > 0.05) when the cookie with LE and the control were compared. Additionally, the oat and buckwheat flours were statistically different over time (eight weeks), meaning that the incorporation of lupin clearly modified the texture of the cookies, making them firmer. Hence, it cannot be stated that the differences between the five different flours on one hand and LE addition on the other occurred due to the presence of gluten. Jayasena and Nasar-Abbas [26], Obeidat, Abdul-Hussain and Al Omari [47] and Bilgiçli and Levent [44] reported that cookie hardness increased with the addition of lupin flour to the cookie formulation. This can also be stated for other types of legume seeds, such as chickpeas [48], green lentils and navy beans [4].

The different behavior observed between the doughs and the respective cookies is corroborated by other studies [41]. Indeed, the macromolecular structures present in each flour undergo dramatic changes during heat treatment. In spelt and kamut flours, gluten is the main element that accounts for the structure; in oat, the main structural role is played by β-glucans; and in the gluten-free flours (rice and buckwheat), the structure is mainly accounted for by starch. When the LE (protein) is added, there is an overall structural rearrangement leading to distinct interactions among these macromolecules, as supported by our results. The interactions among macromolecules and the type of structures which arise are differentially affected by the heat treatment which takes place during cooking.

The impact of LE addition on the color parameters of the cookies is summarized in Table 4. The ΔE* values were calculated to compare the color variation in relation to the cookies without LE. In the same table, the water activity (a_w) and moisture content (H) of the ten formulations studied are also indicated.
Table 4. Values of ∆E*, L*, a_w and the moisture content (H, % w/w) of the control and lupin-enriched cookies. Values are the means of at least three experiments ± SD, except ∆E*, which is the difference between the control and lupin-enriched cookie colors. * represents p < 0.05 and ** represents p < 0.001 when compared with the corresponding control cookie.

The ∆E* values obtained were always higher than 5 for both time periods studied (week 0 and week 8), which means that the color difference between the lupin-enriched cookies and the control is visually distinguishable by the human eye. These differences result mainly from a general decrease in the lightness parameter (L*) in all lupin-containing cookie samples, resulting in a golden-brown color. These results agree with other studies showing a decrease in cookie lightness with lupin flour at the same concentration level [44]. The results can be explained by the Maillard reaction, as proteins and sugars initiate a complex cascade of reactions during heating (above 100 °C), producing the darker color [49]. This darkening did not have a negative impact on the characteristics of the final product; on the contrary, the LE cookies presented very appealing colors, as supported by other studies [26,28]. Cookies are a relatively dry product with low moisture content and water activity values.
These parameters are crucial to predict both the stability and safety of the product, with great impact in conservation, particularly for the maintenance of a crispy texture [50]. Moisture content values of cookies with and without LE are low (ranging from 1.04 to 5.61%), comparing favorably with other studies on similar cookies and indicating a positive impact in terms of conservation [30]. The a w values for lupin-enriched cookies at week 0 are significantly higher (p < 0.05 or p < 0.001) than those of the control cookies. After 8 weeks of storage, all the LE cookies had similar (except for rice flour) a w values, but significantly higher (p < 0.001) than the controls. Furthermore, all the samples were shown to have an a w value of less than 0.5 (except lupin-enriched cookies with rice flour at week 0), which means that all cookie formulations (with and without LE) had a low percentage of free water for microbial proliferation, leading to a high stability product [50]. Such low a w values are essential to prevent microbial growth on the cookies. Uysal et al. [51] found an increase in a w values with the incorporation of apple and lemon fiber in cookies. Batista et al. [28] also found an increase in a w values, resulting from the incorporation of microalgae biomass with a high protein content. However, Fradinho et al. [30] found an opposite effect when Psyllium fiber was added to the cookies similar to those prepared in the present work, resulting from the high-water holding capacity of Psyllium. The differential capacity to retain the water of the molecules present in the formulation had a direct impact on the water activity of the final product. For the LE cookies, the water holding capacity of the protein should be lower than that of the respective flour, justifying the increase in water activity. Lupin is considered a potential functional food because of its protein content, dietary fiber and more recently discovered bioactivities [20,21] that need to be explored in food products in the near future. Conclusions Consumers are currently more cognizant about the environmental effects and nutritional benefits of foods. In this sense, lupin can be considered a suitable raw material for food production due to its nutritional and health-promoting properties. Lupin protein extract (LE) addition to gluten and gluten-free flours showed a high impact in dough structure, increasing the degree of structuring. This impact on dough texture had technological implications, resulting in a greater difficulty in the molding process, which can be optimized in terms of industrial processing. The lupin-enriched dough based on buckwheat flour was unique because it did not show significant differences (p > 0.05) when compared to the control dough, being technologically more stable and easier to work with. Regarding the physical properties of the final products, the cookies based on buckwheat and oat flours were always firmer than the corresponding control cookies. Rice and buckwheat flours supplemented with LE produced cookies with a significant increase in thickness (p < 0.05), unlike in gluten flours (oat, spelt and kamut). These parameters are very important, since less thickness suggests more crispness, a highly desirable property appreciated by consumers. Supplementing flours with LE improves color and decreases lightness, making cookies more pleasant to consumers. 
After eight weeks, the a w values of all LE-containing cookies were significantly higher (p < 0.001) than the controls, a characteristic which has a positive impact in conservation. Overall, our results show that the cookies prepared with flours with or without gluten can be produced successfully by replacing 10% of the flour with LE. Therefore, the inclusion of 10% (w/w) sweet lupin protein extract in formulations improves the nutritional value and quality of cookies. Author Contributions: A.R. and J.M. were responsible for the concept and design of the study. J.M. was responsible for the lab work and data analyses. A.R. and J.M. were responsible for the interpretation of data. R.B.F, A.L., A.R. and J.M. were responsible for drafting the article. A.L., A.R. and R.B.F. were responsible for revising the article critically for intellectual content. All authors have read and agreed to the published version of the manuscript. Funding: This work was funded by national funds from FCT-Portuguese Foundation for Science and Technology, through the research unit LEAF (UID/AGR/04129/2020).
v3-fos-license
2023-06-04T15:13:33.261Z
2023-06-01T00:00:00.000
259063283
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.3390/sym15061182", "pdf_hash": "203d7ab866c7231d4743967503ecd2f0ff05d5b7", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43971", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "f1a56052c427ae024292a15774286472f389dd5b", "year": 2023 }
pes2o/s2orc
Temporal Network Link Prediction Based on the Optimized Exponential Smoothing Model and Node Interaction Entropy : Link prediction accuracy in temporal networks is easily affected by the time granularity of network snapshots. This is due to the insufficient information conveyed by snapshots and the lack of temporal continuity between snapshots. We propose a temporal network link prediction method based on the optimized exponential smoothing model and node interaction entropy (OESMNIE). This method utilizes fine-grained interaction information between nodes within snapshot periods and incorporates the information entropy theory to improve the construction of node similarity in the gravity model as well as the prediction process of node similarity. Experiment results on several real-world datasets demonstrate the superiority and reliability of this proposed method in adapting to link prediction requirements over other methods across different time granularities of snapshots, which is essential for studying the evolution of temporal networks. Introduction The complex network model is an abstract approach to analyzing real-world interactions between objects in the form of points and lines [1]. Any individual in nature can be connected with another in a specific relation. With different observation scales, the expression range of a complex network can be applied to different fields. Complex networks can express any chemical structure [2] or microscopic biological structure [3] by taking molecules and cells as individuals. By shrinking the scale to neurons, the complex network can also express the structure of the human brain [4]. By abstracting the sensors used in autonomous driving as nodes, the complex network can be represented as a road topology within a specific area [5]. Thus, a series of complex network techniques can be further used to analyze the target system features, such as node importance evaluation [6], fractal research of complex networks [7], and community detection [8]. As one of the most representative problems in complex networks, link prediction aims to estimate the unknown or missing links possibility between nodes by using the current node connection information of the target network. Link prediction can more accurately represent interaction principle and help people to efficiently understand evolutionary trends and mechanisms of the target network [9]. Additionally, link prediction results have many applications in current society. Link prediction matching between user demand and hotel conditions can improve hotel revenue [10]. The link prediction in wireless sensor networks (WSNs) can improve the information transmission efficiency while maintaining relatively low re-transmission rates for energy-saving purposes [11]. The link prediction analysis in the disease-gene interaction network can help the pharmaceutical industry develop targeted drugs [12]. The processing objects of link prediction have been diversified, including static networks, dynamic networks, temporal networks, heterogeneous networks, heterogeneous temporal networks, and hypergraphs. There are five primary methods for temporal network link prediction: matrix factorization, probability, spectral clustering, machine learning, and time series [13]. In matrix factorization, most methods are based on nonnegative matrix factorization (NMF). Sheng et al. [14] propose a temporal link prediction method based on NMF. 
This method considers three aspects: the global structure of nodes, the local information of nodes, and the attributes of nodes. By leveraging multiple sources of information, this method predicts the probability of link occurrence. In probability, the uncertainty and variability of links among nodes are typically quantified using the maximum likelihood approaches or probability distributions. Based on the concept of Markov chains, the extended temporal exponential random graph model (etERGM) [15] can predict the future attributes and connections of nodes based on historical data. In spectral clustering, Fang et al. proposed a time regression model for temporal link prediction. The idea is to integrate the spectral graph theory and low-rank approximation into a time series model, allowing the model to capture more graph information and improve the accuracy of temporal link prediction. In machine learning, network representation learning (NRL) utilizes various graph embedding algorithms to represent all properties of temporal network in a low-dimensional vector space. This approach effectively eliminates the challenges associated with extracting features of snapshots. By leveraging these low-dimension feature vectors, temporal link prediction can be performed well. Zhou et al. [16] proposed a NRL method called DynamicTriad, which simulates the occurrence of triadic closure processes among nodes. This enables the model to capture temporal features and obtain vector representation for each node across different periods. In time series-based temporal link prediction, researchers have proposed various methods to enhance the accuracy of temporal link prediction by obtaining the local similarity of nodes in different ways. Huang et al. [17] used specific time granularity to treat temporal network data sets into snapshots. They proposed an improved gravity model with second-order neighbors, denoted by gravity (GR), to compute the score matrix in each static network snapshot. All score matrices will be combined with a time attenuation factor to accumulate the next period connection probability. Yang et al. [18] proposed a temporal network link prediction method, tensor-based node similarity (TBNS). The TBNS treats the collection of snapshots with time dimension as a three-dimensional tensor and employs the exponential smoothing model to compress a three-dimensional tensor into a two-dimensional node similarity matrix, which serves as the link prediction scores for the next period. This approach addresses the problem that snapshots cannot capture node connectivity strength due to the large time granularity sizes. However, capturing any microscopic connections in this approach will likely to lead to many sparse matrices, which wastes storage space. Güneş et al. [19] first calculated the nodes similarity indexes in different periods by common neighbors, preferential attachment, Adamic-Adar, and the Jaccard indicator. Then, they used the autoregressive integrated moving average (ARIMA) to predict the similarity between nodes for the future. All of the above time series-based temporal link predictions [17][18][19] follow the framework processing rules shown in Figure 1. First, the temporal network represents different snapshots at different periods. Second, the network snapshot in each layer can construct nodes' similarity matrix in different ways. Finally, various numerical prediction models are used to predict the snapshot similarity matrix at a future time. 
Therefore, constructing reasonable node similarity is crucial in improving the accuracy of time series-based network temporal link prediction methods. Table 1 briefly highlights some existing node similarity-based link prediction methods.
For example, Table 1 includes semi-local indexes such as S xy (t) = q x ·π xy (t) + q y ·π yx (t) with complexity O(nk⟨t⟩) and RWR [28] with S xy (t) = q xy + q yx and complexity O(n 3 ), as well as the global Cos+ index [29]. Note: 1 Γ(x) and Γ(y) represent the neighbors of node x and node y; k x and k y are the degrees of node x and node y; A is the adjacency matrix of the network; S is a matrix whose elements represent the similarity between nodes in the network; l + xy denotes the element in row x and column y of the pseudo-inverse matrix L + ; α denotes a decay factor, which allows controlling the contribution of third-order neighbors to the similarity of nodes; π xy (t) denotes the random walk probability from node x to node y at time t; G i,Z is the gravitational force between node x and node y; 2 n denotes the number of nodes in the network; k is the average degree of nodes; and t is the number of random walk steps. However, the idea of using network snapshots as a time series for prediction needs to be improved. A network snapshot is stored in an adjacency matrix format, so multiple interactions within a time granularity L are recorded in binary form. In temporal networks, the interaction information is represented using the triplet format (i, j, t). The time granularity L divides the time stamps t into n slices of snapshots. Each network snapshot contains the interaction events within the corresponding period. In the temporal prediction process, there may be misjudgments in the reference scores of the similarity matrix among nodes within each snapshot. For example, during a snapshot period, some nodes may have lower interaction strength, and as a result, their similarity should not contribute as much weight to the time series forecasting model. A large time granularity L can provide more edge information for snapshots, but it can also degrade the accuracy of the reference scores for similarity among nodes during the prediction process. The time series-based temporal link prediction methods [17][18][19] mentioned above enhance the expressive capacity of network snapshots and improve the computational speed of the algorithm by considering the local information of nodes. Nevertheless, a large time granularity does not guarantee the algorithm's accuracy [31]. To address the above problems, we propose a temporal network link prediction method based on an optimized exponential smoothing model and node interaction entropy (OESMNIE). The OESMNIE method considers the fine-grained interaction information among nodes within a snapshot and the impact of interaction intensity on node similarity in the time series prediction process.
The OESMNIE method leverages the characteristics of wide-ranging and low-frequency node interactions within a snapshot, combined with information entropy, to further differentiate the popularity of nodes within the snapshot structure, thereby enhancing the accuracy of constructing node similarity based on the gravity model. Furthermore, the OESMNIE method normalizes the sum of the interaction entropy of the nodes within each snapshot and incorporates the smoothing coefficient from the exponential smoothing model. This allows the prediction of node similarity to be dynamically weighted according to the overall trend of network interactions, and the three-dimensional snapshot similarity tensor is eventually compressed into a two-dimensional node similarity matrix (i.e., the future link prediction scores). This provides a new idea for modeling time series-based temporal link prediction. The major contributions are summarized as follows: 1. We record the fine-grained interaction information among nodes within the snapshot period and incorporate the concepts of information entropy and weak ties to construct the node interaction entropy. This value differentiates the popularity of nodes within the snapshot structure from a more nuanced perspective. 2. We combine node interaction entropy and eigenvector centrality to construct an enhanced node similarity that considers the distance between nodes in the network structure, weak-tie characteristics, and centrality. 3. We normalize the sum of node interaction entropy, and the normalized result reflects the ratio of the current snapshot's weak ties over the entire period. With a higher ratio, the node similarity matrices can provide more weight for time series prediction. We combine the smoothing coefficient and this ratio in the exponential smoothing model. This improves on the shortcoming of the single reference score in the prediction process. The remaining content of this paper is divided into several parts. In Section 2, we highlight the related works and introduce the concepts of temporal networks, network snapshots, and the multi-layer network model, as well as explain weak ties theory, the gravity model, information entropy, the eigenvector centrality of nodes, temporal network link prediction methods, and the role of the exponential smoothing model in link prediction. Section 3 provides a more detailed description of the OESMNIE method proposed in this paper. Section 4 includes specific comparative experimental analysis and discussion. Finally, Section 5 concludes this paper. Temporal Network The interactions between nodes in a temporal network are continuously changing. We make a simplifying assumption: the addition and removal of nodes are not considered for the analyzed network. Therefore, a temporal network is generally defined as G = (V, E t ), where V is the node set {v 1 , v 2 , v 3 , . . . , v n } of the temporal network and E t represents the connections of the temporal network over the whole recorded period. E t contains all nodes' interaction information in the recorded period and can be represented as a set of triples with time stamps, E t = {(v 1 , v 2 , t 0 ), . . . , (v i , v j , t n )}, where t n indicates that there is a connection between the two nodes at instant t n .
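The triplet record format and the granularity-based slicing described above can be sketched in a few lines of Python. The file format, function names, and the choice of measuring windows from the first timestamp are illustrative assumptions, not part of the original method.

```python
from collections import defaultdict

def load_triplets(path):
    """Read interaction records stored one per line as 'i j t' (t in seconds)."""
    triplets = []
    with open(path) as f:
        for line in f:
            i, j, t = line.split()
            triplets.append((int(i), int(j), int(t)))
    return triplets

def slice_by_granularity(triplets, L):
    """Group triplets into consecutive windows [n*L, (n+1)*L) from the first timestamp."""
    t0 = min(t for _, _, t in triplets)
    windows = defaultdict(list)
    for i, j, t in triplets:
        windows[(t - t0) // L].append((i, j, t))
    return [windows[n] for n in sorted(windows)]
```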
Construction of Network Snapshots and Multi-Layer Network Model In order to apply a time series-based link prediction model to the transformed temporal network, a time granularity with an appropriate span L is generally adopted to record and divide the entire period of the temporal network. L can sequentially divide the interaction information by hours, minutes, seconds, days, months, or years. Consequently, the adjacency matrix A n records whether there is any interaction between nodes within time granularity L. Network snapshots A n at different times will contain different edge subsets E n . The temporal network snapshot is constructed as in Equation (1), where E n (i, j, t) ∈ E[(n)L, (n + 1)L] represents that node i and node j are connected in the [(n)L, (n + 1)L] period. We thus obtain a symmetric adjacency matrix containing the connection information for a period. The multi-layer network model is constructed from multiple network snapshots taken at different periods, forming a tensor with a temporal dimension. The multi-layer network model is also widely used for representing heterogeneous networks. The specific format is shown in Figure 2 using the supra-adjacency matrix (SAM) model proposed by Taylor [32]. The SAM model arranges the adjacency matrices (network snapshots) of different periods in chronological order along the diagonal. We have removed the interlayer relationship matrix from the SAM model, using it solely as a visualization tool for network snapshots. By partitioning the temporal network using network snapshots, we can analyze the evolution process of the temporal network. Weak Ties Theory Weak ties theory [33,34] primarily focuses on the connection frequency characteristics of nodes and divides nodes' interaction characteristics into strong and weak ties. Strong ties indicate that nodes interact frequently within a relatively stable range, whereas weak ties indicate that nodes interact less frequently but over a relatively wider range. The theory states that weak ties traverse different social groups more easily than strong ties. In other words, nodes with weak ties can transfer information from one group to another, breaking down information silos. Additionally, these nodes are typically connected to multiple communities, and their actions and statements are more likely to influence others. Therefore, they have a more significant impact and information dissemination effect. Lü et al. [35] introduced a free parameter α to control the relative contribution of weak ties in the node similarity measure. The result shows that link prediction accuracy can be effectively improved by introducing weak ties.
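To make the strong-tie/weak-tie distinction concrete, the short sketch below counts, for a single node, how many distinct partners it contacts versus how often it contacts them. The contact lists and numbers are invented purely for illustration.

```python
from collections import Counter

# Hypothetical contact lists for two nodes observed over the same period.
contacts_strong = ["b", "b", "b", "c", "b", "c"]  # few partners, many repeats
contacts_weak = ["b", "c", "d", "e", "f"]         # many partners, few repeats

def tie_profile(contacts):
    """Return (number of distinct partners, total interactions) for one node."""
    freq = Counter(contacts)
    return len(freq), sum(freq.values())

print(tie_profile(contacts_strong))  # (2, 6)  -> strong-tie pattern
print(tie_profile(contacts_weak))    # (5, 5)  -> weak-tie pattern
```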
The Gravity Model Levy et al. [36] performed an analysis on four networks with the small-world property. They confirmed that there is a dependency between connection probability and distance. Moreover, all four networks show the same empirical law: the probability of connection between nodes in social networks is inversely proportional to the square of the distance, which is similar to the dependence of Newton's law of universal gravitation on distance. Wahid-Ul-Ashraf et al. [37] used network science's measurement methods to obtain the shortest path, the inverse Katz score between nodes, and different node centrality measures, which are introduced into Newton's law of universal gravitation as distance and mass. They constructed new node similarity measures and proved the feasibility of physical models in link prediction. The gravity model is shown in Equation (2), where F(v i ) can be any of several indicators used to evaluate node influence, including but not limited to degree centrality (DC), closeness centrality (CC), betweenness centrality (BC), eigenvector centrality (EC), and others, and D(v i , v j ) is the distance between node v i and node v j . Information Entropy Information entropy is a vital concept in information theory used to measure the uncertainty or information content of a random variable. Therefore, when we use information entropy to measure a system, we focus on statistically analyzing all the states that occur in the system and converting the occurrence frequencies of events into probabilities. A high information entropy of a system indicates a greater variety of event states occurring and that the probabilities of different events are relatively dispersed. Conversely, a low information entropy of a system implies a more limited range of event states occurring and that certain events have higher probabilities of occurrence. The information entropy method can be used to quantify link prediction problems based on a probability description. Several entropy-based methods have been proposed for link prediction research, such as the node similarity index of path entropy [38], the structural entropy model [39], the link prediction method based on relative entropy [40], and the maximum entropy model [41]. The calculation process of information entropy is expressed as Equation (3), where X denotes the entire target system, which includes all the events covered within the system, denoted as X = {x 1 , x 2 , . . . , x n }, and P(x i ) is the probability of the occurrence of the event x i .
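A compact sketch of the two ingredients just described: Shannon entropy of a discrete distribution (Equation (3)) and a gravity-style similarity that divides the product of two influence scores by the squared distance (Equation (2)). The logarithm base is not stated in the text, so base 2 is an assumption here, and the function names are my own.

```python
import math

def shannon_entropy(probabilities):
    """H(X) = -sum p(x) * log2 p(x), skipping zero-probability events."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

def gravity_similarity(influence_i, influence_j, distance):
    """Gravity-style similarity: product of influence scores over squared distance."""
    return influence_i * influence_j / distance ** 2

# A node whose events are spread over four outcomes vs. concentrated on one.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
print(shannon_entropy([1.0]))                     # 0.0
print(gravity_similarity(2.0, 3.0, 2))            # 1.5
```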
The Eigenvector Centrality of Nodes The eigenvector centrality was proposed by Bonacich [42]. A node's eigenvector centrality depends on both the number of the node's neighbors and the centrality of those neighbors. The specific calculation process is shown in Equation (4). The eigenvector centrality can be succinctly expressed as a one-dimensional vector of length N (equal to the number of nodes in the network), where each element represents the centrality score of a node. According to the calculation approach defined by Equation (4), the centrality of node i is determined by the sum of the centrality scores of its neighboring nodes. The final node centrality is obtained by iteratively computing this process until the eigenvector centrality converges. This process transfers and accumulates the centrality information of nodes within the network to reflect their importance and influence. Temporal Network Link Prediction With the diversification of modeling methods, link prediction can be divided into two categories: static network link prediction and dynamic network link prediction. Static network link prediction involves analyzing and supplementing unknown or missing edge information using the existing edge set E in a given analysis network G(V, E). On the other hand, dynamic network link prediction aims to predict the connection status at time T + 1 based on the snapshots E n from 0 to T. Therefore, historical edge information plays an essential role in predicting future connections. The Exponential Smoothing Model in Link Prediction The exponential smoothing model, derived from the moving average model, is a numerical prediction model. The temporal network link prediction method based on the exponential smoothing model takes each snapshot adjacency matrix as a dataset and introduces a smoothing coefficient α to compress the whole temporal snapshot tensor, which is three-dimensional, into a two-dimensional matrix. The score of the last compressed matrix is used as the link prediction score for the next period. The specific exponential smoothing model applied to link prediction is given in Equation (5), where A T is the network snapshot (adjacency matrix) at period T, S T+1 is the node similarity matrix of the snapshot A T+1 , and the range of the smoothing coefficient α is [0, 1]. The model provides a reference score α for the snapshot in the most recent period, and the remaining (1 − α) score serves as the reference score for the historical connection information. By compressing the historical connection information of each node, the observation starts from the currently existing links. The current snapshot's similarity matrix receives the score α, while the remaining (1 − α) score gradually diminishes the contribution of the historical connection information to the prediction. Finally, the connection probability matrix Z T+1 at the final time T + 1 depends on the final compressed node similarity matrix.
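The smoothing recursion described above can be sketched as follows. Based on the description, the recursion is assumed to take the form S_{T+1} = α·A_T + (1 − α)·S_T, initialized with the earliest snapshot; this is an assumption consistent with the text, since Equation (5) itself is not reproduced here.

```python
import numpy as np

def exponential_smoothing_scores(snapshots, alpha=0.5):
    """Compress a list of snapshot adjacency matrices into one score matrix.

    Assumes S_{T+1} = alpha * A_T + (1 - alpha) * S_T, initialized with A_0.
    """
    S = snapshots[0].astype(float)
    for A in snapshots[1:]:
        S = alpha * A.astype(float) + (1 - alpha) * S
    return S  # used as link prediction scores for the next period

# Example with two tiny 3-node snapshots.
A0 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
A1 = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])
print(exponential_smoothing_scores([A0, A1], alpha=0.5))
```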
Establishment of the Node Interaction Entropy Link prediction methods based on network snapshots typically focus on the topology structure within the snapshots while overlooking the potential correlation between node influence intensity and connection frequency [43]. Given this problem, it is feasible to consider fine-grained interaction information between nodes and utilize a gravity model to construct node similarity for link prediction under network snapshots. Therefore, we conduct a statistical analysis of the frequency and range of interactions between nodes in each snapshot period and analyze the role of weak ties in characterizing node influence. In our daily lives, nodes with weak ties are widespread, such as supermarket salespeople. Although we may not interact with them frequently, they can act as bridge nodes, connecting multiple communities with low interaction frequency but a broad interaction range (e.g., interacting with lawyers, workers, police officers, and other diverse groups). The message will spread across multiple communities if these nodes are involved in information dissemination or viral propagation. Conversely, suppose the message is disseminated to nodes with high interaction frequency and a narrow interaction range (e.g., nodes with strong ties). In that case, these nodes are more inclined to transmit information within their community. Therefore, nodes with weak ties generally have greater influence. To measure the weak ties of nodes, we naturally consider the application of information entropy to assess system uncertainty. The interaction range of a node determines the number of events occurring in the system observed from that node, while the ratio of each interaction frequency to the total number of interactions can be seen as the probability of each connection event. Therefore, while constructing network snapshots, we also construct a symmetric snapshot interaction frequency matrix C n and combine it with the concept of information entropy to create the node interaction entropy for different periods. This allows us to analyze the influence of nodes within each period based on fine-grained interaction behaviors, which can be further utilized for link prediction. The snapshot interaction frequency matrix C n is calculated using Equation (6). Once the node interaction frequency matrix C n corresponding to each snapshot is obtained, we can probabilistically transform each node's interaction events. The specific process is shown in Equation (7), where P n (i, j) is the ratio of the number of connections between node i and node j to node i's total number of connections in the snapshot A n ; τ(i) is node i's neighbor node set in the [(n)L, (n + 1)L] period of snapshot A n ; C n (i, j) is the element of the connection frequency matrix within the snapshot A n , representing the number of connections between node i and node j; and ∑ j∈τ(i) C n (i, j) is the sum of the connections between node i and its neighbors in the snapshot A n . According to Equation (7), the sum of the node connection probabilities P n (i, j) for each node in a snapshot is 1. The probabilities we constructed therefore satisfy the conditions for applying information entropy. Accordingly, we can create the node interaction entropy using information entropy theory to measure nodes' weak-tie characteristics in each snapshot. The node interaction entropy is defined as Equation (8). The node interaction entropy in the current snapshot is positively correlated with the weak ties of nodes. Nodes with a broader range of interactions and lower connection frequency exhibit higher node interaction entropy, indicating a greater influence. Thus, by considering fine-grained connection information, the node interaction entropy is a complementary measure of node influence within the snapshot period.
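Following the description of Equations (6)–(8), the per-snapshot computation can be sketched as follows: the frequency matrix counts interactions, each node's row is normalized into probabilities over its neighbors, and the entropy of that distribution is the node interaction entropy. The base-2 logarithm, helper names, and toy event list are my own choices for illustration.

```python
import math

def interaction_frequency_matrix(events, num_nodes):
    """C_n[i][j] = number of recorded interactions between i and j in this snapshot."""
    C = [[0] * num_nodes for _ in range(num_nodes)]
    for i, j, _t in events:
        C[i][j] += 1
        C[j][i] += 1  # interactions are treated as undirected
    return C

def node_interaction_entropy(C, i):
    """Entropy of node i's interaction distribution over its neighbors."""
    total = sum(C[i])
    if total == 0:
        return 0.0
    probs = [c / total for c in C[i] if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# Toy snapshot: node 0 spreads its contacts, node 1 repeats the same contact.
events = [(0, 1, 10), (0, 2, 20), (0, 3, 30), (1, 2, 15), (1, 2, 25)]
C = interaction_frequency_matrix(events, 4)
print(node_interaction_entropy(C, 0))  # higher: wide, low-frequency interactions
print(node_interaction_entropy(C, 1))  # lower: narrow, repeated interactions
```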
Establishment of the Improved Node Centrality in Each Snapshot The eigenvector centrality is calculated based on the snapshot's network structure. It does not consider the differences in link strength between nodes, but it can reflect the nodes' positions and influence within the network structure. Therefore, it can be used as a fundamental measure of node influence. The specific calculation process is shown in Equation (9), where τ(i) is node i's neighbor node set in snapshot A n , EC n (i) is the eigenvector centrality of the nodes in the current snapshot, and c is a constant, which we take to be 1. The initial input value of D n (j) is the degree feature value of each node in the snapshot. Through iteration and accumulation, D n (i) reaches a steady state, and the length of the resulting vector is equal to the number of nodes in snapshot A n . By establishing a correspondence between the elements of D n (i) and node i, we can obtain the eigenvector centrality of each node. We combine the eigenvector centrality of the node and the node interaction entropy to construct an improved node eigenvector centrality, which expresses the popularity of a node in the current snapshot period. The improved eigenvector centrality considers both the current topology of the nodes and their fine-grained behavior. The procedure, as mentioned above, allows for differentiated treatment of the popularity of each node within the current snapshot. The specific process is shown in Equation (10). We denote the improved node centrality by M n i , and we can introduce the gravity model to construct more accurate node similarity matrices for each snapshot. Establishment of Node Similarity Matrix by Gravity Model We construct an improved node similarity matrix C n by treating the improved node centrality as the mass of the node in the gravity model and taking the shortest path between nodes within the snapshot as the distance input of the model. The process is shown in Equation (11), where d n ij is the shortest path between node i and node j in the snapshot A n , and M n i and M n j are the masses of nodes i and j in snapshot A n . As the number of snapshots increases, the similarity matrices from different snapshots ultimately form a three-dimensional tensor with a time dimension C = {C T 1 , C T 2 , · · · , C T n }.
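The construction just described can be sketched with networkx for the graph utilities. Equation (10)'s exact combination of eigenvector centrality and node interaction entropy is not reproduced in the text, so the product used below is an assumption; the gravity step follows Equation (11) with shortest-path distances.

```python
import networkx as nx

def improved_similarity_matrix(G, interaction_entropy):
    """Gravity-model similarity per Equation (11): M_i * M_j / d_ij^2.

    interaction_entropy: dict node -> I_n(node) for the current snapshot.
    The improved mass M_i is assumed here to be EC_n(i) * I_n(i); the exact
    combination defined in Equation (10) may differ.
    """
    ec = nx.eigenvector_centrality_numpy(G)
    mass = {v: ec[v] * interaction_entropy.get(v, 0.0) for v in G}
    dist = dict(nx.all_pairs_shortest_path_length(G))
    S = {}
    for i in G:
        for j in G:
            if i != j and j in dist[i] and dist[i][j] > 0:
                S[(i, j)] = mass[i] * mass[j] / dist[i][j] ** 2
    return S
```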
Optimization of the Exponential Smoothing Model The exponential smoothing model is commonly used for time series prediction. It predicts future data by assigning reference scores to recent data and gradually incorporating historical information through iteration. This model considers trends, cycles, and historical patterns to make predictions. Despite its simplicity and efficient time complexity, updating the reference scores in the exponential smoothing model can be challenging. The initial smoothing coefficient α is often subjective, requiring dynamic adjustment to adapt to fluctuations in the snapshot reference weights. Therefore, we compare the sum of the interaction entropy in each snapshot and combine the resulting ratio with the smoothing coefficient α. In this way, we can determine whether the node similarity matrix of each snapshot should receive a larger or smaller score than those of other periods. The specific process is shown in Equation (12), W n = ( ∑ i∈V(A n ) I n (i) / max k ∑ i∈V(A k ) I k (i) ) · α, where A n is the nth network snapshot, V(A n ) is the set of nodes in the snapshot A n , and ∑ i∈V(A n ) I n (i) is the sum of the interaction entropy of the nodes within snapshot A n . We hypothesize that each snapshot's normalized total node interaction entropy reflects the contribution ratio of the current snapshot's node similarity during the time series prediction process. Interactions in a high-incidence stage can typically generate more valuable scores for the prediction process. Once we obtain the score ratios corresponding to the similarity matrices of each snapshot, we can improve the traditional exponential smoothing model, as shown in Equation (13), where C 0 is the first snapshot similarity matrix. The process expressed in Equation (13) shows that the link structure within the future snapshot A n+1 is obtained by iteration and compression with the improved smoothing coefficients. According to Equation (13), the historical node similarity tensor with its time dimension is compressed into a two-dimensional matrix with different weights, which is advantageous in terms of storage space.
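Putting Equations (12) and (13) together: each snapshot's similarity matrix is weighted by W_n, the α-scaled ratio of its total interaction entropy to the largest such total, before the smoothing compression. The recursion form below (W_n·C_n plus (1 − W_n) times the running score) is an assumption consistent with the description, not a verbatim copy of Equation (13).

```python
import numpy as np

def optimized_smoothing(similarity_matrices, entropy_sums, alpha=0.5):
    """Compress per-snapshot similarity matrices with entropy-adjusted weights.

    similarity_matrices: list of (N, N) arrays C_1..C_T (Equation (11) output).
    entropy_sums: total node interaction entropy per snapshot.
    W_n = alpha * entropy_sums[n] / max(entropy_sums)      (Equation (12))
    S <- W_n * C_n + (1 - W_n) * S                          (assumed form of Equation (13))
    """
    max_entropy = max(entropy_sums)
    S = similarity_matrices[0].astype(float)
    for C_n, e_n in zip(similarity_matrices[1:], entropy_sums[1:]):
        W_n = alpha * e_n / max_entropy
        S = W_n * C_n.astype(float) + (1 - W_n) * S
    return S  # prediction scores for the next period
```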
Detailed Explanation of the OESMNIE Method The detailed computation process of the OESMNIE method is illustrated in Figure 3. In order to better describe the details of the OESMNIE method, we construct a small temporal network for demonstration analysis and calculate the similarity (S n+1 13 ) for nodes 1 and 3 at the next period. The data preprocessing for the temporal network data is illustrated in Figure 4. To analyze such datasets, we employ a time granularity (or time window) of length L to partition the temporal network record based on the interaction timestamps (t). Additionally, we utilize Equations (1) and (5) to record and calculate the interaction events and fine-grained interactions of nodes in different periods. This allows us to construct the collection of snapshot matrices A n and interaction frequency matrices C n . Step 1: We utilize unweighted and weighted adjacency matrices to represent the snapshots A n and frequency matrices C n , respectively. The specific expression results are shown in Figure 5. Figure 5. Matrix representation of C n and A n . The edge weight of the snapshot interaction frequency matrix C n is the total number of connections generated by the nodes in the corresponding snapshot period. The network snapshot A n indicates that each node has a connection event in the corresponding period. Step 2: We use Equations (7) and (8) to calculate the probability distribution of node 1 and node 3 in each snapshot interaction frequency matrix (C n ). The specific calculation process is shown in Equation (14). Node interaction entropy reflects the weak ties of nodes, which play an important role in improving the accuracy of the algorithm. Step 3: Node interaction entropy is a supplementary factor to a node's influence within the snapshot. Therefore, in addition to incorporating the basic structural feature of the node, the eigenvector centrality based on the snapshot, we also consider the calculation and combination process shown in Equation (15). The first and second steps of Equation (15) involve calculating the eigenvector centrality of node 1 and node 3. Afterward, we combine it with the interaction entropy to obtain an improved node centrality. This centrality considers the node's topology, the influence of its neighbors, and weak ties within the current snapshot. We introduce the gravity model as a tool to transform the improved node influence into node similarity. Therefore, we use the improved centrality as the mass of the nodes, take the shortest path between node 1 and node 3 as the distance, and apply the gravity model to calculate the similarity. The specific calculation is shown in Equation (16). Step 4: We start by calculating the sum of node interaction entropy for each snapshot. Among these values, we select the snapshot with the maximum sum of node interaction entropy as the denominator, while using the sum of node interaction entropy of each snapshot as the numerator.
Next, we combine this ratio with the smoothing coefficient α to dynamically control its impact on the prediction process, ensuring its practical influence on the predicted results. The smoothing coefficient α is a variable parameter, and we set it to 0.5 in this case. For a traditional exponential smoothing model, this implies that the recent node similarity information is assigned a reference weight of 0.5, while the remaining 0.5 serves as the reference weight for the historical node similarity after compression iteration. The actual reference proportion of the node similarity matrices in the exponential smoothing model can be calculated using the method described in Equation (17). Step 5: After determining the reference ratio of the node similarity values in different periods, we can substitute the parameters into the exponential smoothing model to predict the similarity between node 1 and node 3 in the T 3 period. The specific prediction process is shown in Equation (18), where S 3 13 is the node similarity between node 1 and node 3 in the future network snapshot A T 3 . We consider it as the prediction score for the link. Experimental Environment The experimental environment for this study is as follows: Processor: 12th Gen Intel Data Selection We selected three real temporal network datasets, including Emaildept3 [44], Workspace [45], and Email-EU-core [46], to verify the efficiency and accuracy of the OESMNIE method and compare it with TBNS, GR, and traditional node similarity indicators. Table 2 shows the statistical characteristics of the selected temporal network datasets. Figure 6 illustrates the evolution of snapshot interactions in each temporal network dataset. Note: 1 The selected temporal network datasets possess the following characteristics: Firstly, they are medium-sized or small-sized networks, providing practical storage conditions for the matrix-based representation of snapshots in the proposed method. Secondly, they encompass both short-term and long-term temporal interaction information, which helps demonstrate the compatibility of the OESMNIE method in the temporal dimension. Lastly, they exhibit significant fluctuations in interactions (including periods of no interactions), which accurately reflect the dynamic nature of real-world networks. This contributes to validating the rationality of the dynamic weighting approach in the prediction process and the reliability of the OESMNIE method; N is the number of nodes in the temporal network, and C is the actual number of interactions that occurred during the observation period; In the above temporal networks, the connections are recorded in the form of triplets (i, j, t), where t is in seconds.
To validate the effectiveness of the OESMNIE method under different time granularities, we utilize different time granularities to obtain snapshots. The number of network snapshots generated based on the time granularity L is represented in Table 3. Table 3. Number of temporal network snapshots. Note: L is the time granularity used to partition the temporal network, which can also be understood as the span of a single network snapshot; T is the number of network snapshots. Evaluation Method We validate the algorithm from two perspectives. First, we predict individual snapshots in the temporal network (which serves to verify the effectiveness of fine-grained interaction behavior among nodes in improving node similarity). Second, we predict the future links of the temporal network (which serves to verify the effectiveness of dynamically adjusting the smoothing coefficient based on the temporal network's interaction changes in link prediction). We treat an individual snapshot A n (randomly selected from the snapshot set of the temporal network) as a static network. We divide the edges within snapshot A n into training data E tr and testing data E te , with E te comprising 20% of the edges. The set of non-existent edges then consists of the edges that do not actually exist in the snapshot (constructed as U n − E tr − E te ). For future prediction (based on all network snapshots), we divide the snapshot set into training and test datasets. Assuming that the graph data represented by the snapshot set of the temporal network is G = (V, E), since G is composed of snapshots from different periods, the snapshot subgraphs contained in G can be denoted as G = {G 0 , G 1 , · · · , G n }, and the corresponding edge and node datasets can be represented by G T = (V T , E T ). Therefore, we take the subset {G 0 , . . . , G n−1 } of G as the training data E tr , including {E 0 , E 1 , · · · , E n−1 }. In order to predict the link status of the next snapshot, we set the edge set E n of the G n subgraph (i.e., the latest snapshot) as the test dataset E te . We use the AUC to evaluate the prediction performance for individual snapshots and for the entire temporal network in the future period. The definition of the AUC is given in Equation (19), i.e., AUC = (n′ + 0.5n″)/n. The AUC randomly selects pairs of edges, one from the dataset of actually existing edges E te and another from the dataset of actually non-existing edges, and compares their predicted scores multiple times. Here, n′ is the number of times that the predicted score of an actually existing edge is greater than that of a non-existing edge, n″ is the number of times that the predicted score of an actually existing edge is equal to that of a non-existing edge, and n is the total number of selections in the validation experiments. The value of the AUC indicator ranges between 0.5 and 1, and the closer it is to 1, the more accurate the method's prediction.
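The AUC sampling procedure just described can be sketched directly. Score lookup is abstracted as a dictionary keyed by edge, and the number of samples is an arbitrary choice.

```python
import random

def auc_score(scores, existing_edges, non_existing_edges, samples=10000):
    """AUC = (n1 + 0.5 * n2) / n over random pairs of (existing, non-existing) edges."""
    n1 = n2 = 0
    for _ in range(samples):
        e_pos = random.choice(existing_edges)
        e_neg = random.choice(non_existing_edges)
        s_pos = scores.get(e_pos, 0.0)
        s_neg = scores.get(e_neg, 0.0)
        if s_pos > s_neg:
            n1 += 1
        elif s_pos == s_neg:
            n2 += 1
    return (n1 + 0.5 * n2) / samples
```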
Performance Comparison Firstly, we compared the AUC of our proposed method with the existing GR, CN, AA, JC, PA, and RA indicators on single snapshots with different time granularities. These selected indicators use local information to construct the similarity between nodes and have been widely applied in link prediction. Therefore, they are similar to the OESMNIE method in terms of the topological structure and are representative of the field. The results are presented in Table 4. In Table 4, our proposed OESMNIE method demonstrates competitive performance across single-layer snapshots of temporal networks at varying time granularities. As the temporal granularity L increases, all link prediction methods based on local information show improved accuracy. This can be attributed to the fact that a larger time granularity L allows more interaction information to be captured within the snapshot, thereby enhancing the information richness of the snapshot. However, for temporal network link prediction, increasing the temporal granularity disrupts the temporal attributes of node interaction information, thus hindering future link prediction. In contrast, the OESMNIE method leverages the fine-grained interactions of the current snapshot and exploits the weak ties between nodes, resulting in a significant improvement in accuracy under the same snapshot conditions. This approach compromises neither the snapshots' temporal features nor their ability to accurately characterize node similarity across snapshots. Secondly, we conducted 100 experiments and averaged the results to compare the predictive performance of the OESMNIE method with selected time series-based temporal link prediction methods. Figure 7 illustrates the AUC results of these eight methods on the temporal networks.
In the temporal networks with weekly and monthly time granularities, the OESMNIE method outperforms the other methods, achieving the highest scores of 0.9181 and 0.9384, respectively. The other methods show a significant decrease in accuracy under different time granularities. Two factors cause the decreased accuracy of these methods. First, the node similarity matrix is constructed based on the limited adjacency matrix information from the network snapshots, thus limiting the accuracy of node similarity within a single snapshot. Second, relying on the attenuation constant of the GR or using the exponential smoothing model can cause the algorithm to deviate from the actual network interactions during the prediction process, leading the other methods to provide incorrect reference scores at the wrong snapshot period. Parameters Analysis The time granularity L used to construct network snapshots is a critical factor influencing the performance of temporal network link prediction methods. In addition, we also consider that the smoothing coefficient α in the improved exponential smoothing model plays an important role in the algorithm's accuracy. Therefore, we explored the impact of the smoothing coefficient on the algorithm's effectiveness by incrementally varying α. Based on the network snapshots with different time granularities for the different networks, we can observe the relationship between the change in the number of interactions within each snapshot and the reference ratio of the smoothing coefficient α (α = 0.8), as shown in Figures 8-10. From Figures 8-10, we can observe that the smoothing coefficient in the exponential smoothing model varies and adjusts with the interaction patterns within the snapshots. This provides more moderate weights for the predicted node similarity values. The experimental results are consistent with our initial hypothesis. To further determine the impact of the smoothing coefficient α on the prediction accuracy of the OESMNIE method, we conduct a comparative analysis with similar methods for varying α values. Figures 11-13 show the detailed comparison results. Figure 10. (a) The graph describes the relationship between the sum of interactions of weekly snapshots generated in the Email-EU-core network and the reference score. (b) The graph depicts the relationship between the sum of interactions of snapshots generated in the Email-EU-core network with a time granularity of month and the reference score. (c) The graph depicts the relationship between the sum of interactions of snapshots generated in the Email-EU-core network with a time granularity of quarter and the reference score.
As shown in Figures 11 and 12, prediction accuracy fluctuations are observed for all algorithms with the increase in the smoothing coefficient α and the snapshot time granularity L. However, the OESMNIE method demonstrates overall stability, indicating its superior stability compared with the other algorithms. This further validates the effectiveness of the improved prediction weights in adapting to network structure changes. As shown in Figure 13a, the accuracy of the OESMNIE method is weaker than that of the other indicators, but this is limited to the case of a small snapshot period. From the overall experimental results, the method proposed in this paper shows better robustness and accuracy than the other methods under different time granularities and smoothing coefficients. Discussion Although our proposed method has achieved certain results, it is essential to acknowledge its limitations. Firstly, our approach focuses on extracting more link information from each snapshot to establish the similarity between nodes within each period, which is used for link prediction. As a result, the structure of each snapshot is represented as a matrix. However, for large networks, the data processing approach based on the adjacency matrix of network snapshots can result in large sparse matrices, thereby increasing the storage burden. Hence, the method is more suitable for small-sized and medium-sized networks. Secondly, in constructing node similarity, we use the shortest path length between nodes as the distance in the gravity model. Therefore, the node similarity matrices are symmetric, indicating that the current method only applies to unweighted and undirected temporal networks. Finally, the OESMNIE algorithm requires snapshots to contain fine-grained interaction information among nodes to distinguish the similarity between nodes. Therefore, when dealing with real-time link prediction, the OESMNIE method can only record the real-time data and transform the task into a near real-time approach for prediction. In future research, we can explore graph embedding techniques to alleviate the sparsity issue in large networks and improve the measurement of node similarity to make the method applicable to a wider range of network types and application scenarios.
Conclusions In this paper, we propose a temporal network link prediction method based on network snapshots, which addresses the problems of insufficient information in snapshot representations and of the temporal continuity of connection information being easily destroyed by the multi-layer network model. We conduct experiments on single and overall network snapshots with different time granularities. The experimental results verify that the OESMNIE method outperforms its counterparts in time series-based temporal link prediction based on the local similarity of nodes. Subsequently, we analyzed the effects of the smoothing coefficient and the time granularity on the prediction, validating the effectiveness of the weights in changing with the variations in the snapshot structure. Finally, we comprehensively compared the AUC metric, snapshot time granularity, and exponential smoothing coefficient. This comparison confirmed the stability and robustness of our method. In conclusion, our approach can effectively predict future linkages within the given periods. Author Contributions: S.T. designed and conceived the experiments; S.T. and X.X. performed the experiments; X.X. constructed the snapshot data; S.T. wrote the paper; S.Z., H.M. and R.L. reviewed the paper and gave some suggestions for improvement. All authors have read and agreed to the published version of the manuscript. Data Availability Statement: No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Acknowledgments: The numerical calculations in this paper were performed on the computing server of the Information Engineering College of Nanchang Hangkong University. Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. Abbreviations The following abbreviations are used in this manuscript: Variables The following variables are used in this manuscript:
L: The temporal granularity (or time window) for generating snapshots.
(i, j, t): The format of a node interaction record in the temporal network (source node, target node, timestamp of the interaction occurrence).
n: The nth network snapshot in chronological order.
α: Smoothing coefficient.
A n: The adjacency matrix (or network snapshot) in [(n)L, (n + 1)L].
U n: The set of fully connected edges among nodes within A n.
C n: The node similarity matrix of snapshot A n.
C n ij: The similarity score between node i and node j in A n.
C: The node similarity tensor.
C n: The snapshot interaction frequency matrix in [(n)L, (n + 1)L].
τ(i): The neighbor nodes of node i in the current snapshot.
V(A n): The set of nodes in snapshot A n.
F(v i): The influence indicator of node v i.
C n (i, j): The sum of interactions between node i and node j in [(n)L, (n + 1)L].
A n (i, j): The adjacency status between node i and node j in [(n)L, (n + 1)L].
EC n (i): The eigenvector centrality of node i in [(n)L, (n + 1)L].
P n (i, j): The probabilistic value of the interaction frequency between node i and node j in [(n)L, (n + 1)L].
I n (i): The node interaction entropy of node i in [(n)L, (n + 1)L].
I n: The sum of node interaction entropy in snapshot A n.
D n (j): The degree feature of node j in [(n)L, (n + 1)L].
M n i: The mass of node i in the gravity model.
d n ij: The shortest distance between node i and node j on A n.
W n: The reference ratio of node similarity in the exponential smoothing model for snapshot A n.
S n ij: Similarity between node i and node j in [(n)L, (n + 1)L].
S n+1: The predicted score (or similarity) for link prediction in the future snapshot A n+1.
v3-fos-license
2022-11-12T06:18:15.675Z
2022-11-01T00:00:00.000
263953250
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2227-9059/10/11/2888/pdf?version=1669274318", "pdf_hash": "81d44420faa9b5e7b33155f762266dc766c5ea40", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43973", "s2fieldsofstudy": [ "Medicine" ], "sha1": "3dbdc188ab2193f8c25ee668808c1cc5bfa9edb8", "year": 2022 }
pes2o/s2orc
Arrhythmic Burden in Cardiac Amyloidosis: What We Know and What We Do Not Cardiac amyloidosis (CA), caused by the deposition of insoluble amyloid fibrils, impairs different cardiac structures, altering not only left ventricle (LV) systo-diastolic function but also atrial function and the conduction system. The consequences of the involvement of the cardiac electrical system deserve more attention, as well as the study of the underlying molecular mechanisms. This is an issue of considerable interest, given the conflicting data on the effectiveness of conventional antiarrhythmic strategies. Therefore, this review aims at summarizing the arrhythmic burden related to CA and the available evidence on antiarrhythmic treatment in this population. Introduction Cardiac amyloidosis (CA) is an infiltrative disease characterized by the extracellular deposition of insoluble amyloid fibrils in the heart that lead to increased left ventricular (LV) wall thickness, impaired LV relaxation and a reduction in LV systolic function. Recent studies have clearly shown that CA, particularly transthyretin-related amyloidosis (ATTR), is a leading cause of heart failure (HF), affecting approximately 15% of subjects with HF with preserved ejection fraction (HFpEF) [1]. While several studies have clarified the morphological and functional consequences of amyloid deposition on cardiac structures [2,3], the impact of amyloid infiltration on the electrical conduction system of the heart and the arrhythmic profile of patients with CA has often been overlooked. In CA patients, various arrhythmias can be detected ( Figure 1) that are caused by several mechanisms, including inflammatory cell damage, cellular degradation and the separation of myocytes by amyloid fibrils [4]. Specifically, arrhythmias are the result of a combination of amyloid accumulation and the involvement of closed structures that might influence the cardiac complex balance. According to the different altered proteins, the type and the prevalence of a single arrhythmia can change. This is the case for atrial fibrillation (AF), which appears to be more frequent in wild-type ATTR (wtATTR) [5], while ventricular arrhythmias (VAs) have been mostly described in association with light-chain amyloidosis (AL) [6]. The presence of arrhythmias in CA patients is associated with a poorer prognosis, reflecting a higher risk of HF progression and mortality [7]. The aim of our review is to summarize the state of the art on the arrhythmic burden in CA and, most importantly, to highlight the controversies related to antiarrhythmic treatment in this population. Atrial Arrhythmias Atrial function is heavily influenced by the direct toxic effect of amyloid accumulation, even in the early stages of the disease [8]. Histological findings of extreme amyloid infiltration in the atria support the "adverse remodelling hypothesis" determining the loss of atrial architecture, the remodeling of the vessels, capillary disruption and an upregulation of collagen synthesis at the level of the atria [9]. Amyloid infiltration is associated with the deterioration of the three atrial phases: reservoir, conduit and contraction.
Specifically, in CA, the atrial chamber behaves as a non-compliant reservoir during ventricular systole and acts as a poorly efficient contractile chamber during late ventricular diastole. Prevalence Atrial involvement in AL and ATTR is associated with a high burden of cardiac arrhythmias. The prevalence of AF in patients with CA is variable among studies, with the most recent reports assessing a very high prevalence, reaching almost two-thirds of the population [6,7]. Some differences among studies might be attributed to the presence of implanted cardiac devices determining a higher success in detecting subclinical AF [10]. However, nearly half of patients with AL or ATTR show concomitant AF at the time of diagnosis [11], and during follow-up, wtATTR was revealed to be a stronger predictor of AF over hereditary ATTR (hATTR), likely due to the lower age and earlier disease detection and treatment in hATTR [12]. Pathogenesis Because of the increasingly high prevalence of AF in CA patients, it is reasonable to assume that AF and CA have a causal relationship. There are several mechanisms for the development of AF in CA. Firstly, amyloid deposition within atrial tissue electroanatomically disrupts homogeneous electrical conduction, causing large areas of voltage attenuation [13]. Secondly, the direct toxic effect of amyloid fibrils on cardiomyocytes results in fibrosis and oxidative stress, which are powerful substrates for AF [14,15]. Thirdly, small vessel disease due to perivascular amyloid infiltration represents a likely substrate for myocardial ischemia [16]. Finally, it seems that AF itself contributes to progressive amyloid deposition, promoting LA myopathy [17]. Arrhythmia Detection A proposed scheme for the follow-up of patients with CA suggests performing a yearly 24 h Holter electrocardiogram (ECG) [18]; this is based mainly on clinical practice, and an optimal follow-up scheme has yet to be defined. Almost half of patients manifest AF before the diagnosis of CA [11], while others might have later thromboembolic events without showing any previous symptoms [19]. In this context, patient monitoring with a prolonged Holter ECG or implantable loop recorders (ILRs) could be useful in the early identification of AF and, subsequently, in the prescription of anticoagulation. Prognostic Implications Whether or not AF impacts overall or cardiovascular mortality in CA is still unsolved [5,10,17].
The maintenance of sinus rhythm, obtained by either cardioversion or ablation performed in the early stages of the disease, seems to be more effective in improving symptoms and reducing hospitalization among CA patients [17]. However, the presence of normal sinus rhythm does not guarantee preserved atrial function. In this context, the detection of myocardial deformation by speckle tracking at echocardiography allows the identification of one-fifth of patients who show a severe impairment of contractility despite remaining in sinus rhythm on the electrocardiogram [20]. This condition is also known as atrial electromechanical dissociation, proven to be associated with a poor prognosis [9,21]. Moreover, speckle tracking allows the detection of differences not only in overt disease manifestation but also in the subclinical setting. Specifically, if, on one side, left atrial dysfunction proves to be more pronounced in ATTR (compared to AL) patients, despite left atrial volumes being comparable [22], on the other side, the detection of left atrial abnormalities in carriers with a transthyretin valine-to-isoleucine substitution underlines subtle left ventricular remodeling [23]. Evaluating atrial function before the possible detection of atrial arrhythmias might also be helpful in detecting an increased risk of thromboembolic events, which may happen even in sinus rhythm [24]. All of this evidence underlines the distinctive nature of AF in CA and the importance of a comprehensive and multiparametric evaluation of atrial function. Stroke Risk and Anticoagulation Patients with CA have been found to have an increased risk of developing intra-cardiac thrombi, even in sinus rhythm, likely due to atrial mechanical dysfunction, endothelial dysfunction and relative hypercoagulability [8,25]. Feng et al., in a large study carried out at the Mayo Clinic involving 116 autopsies of patients with CA (AL, ATTR and serum amyloid A), observed a prevalence of intracardiac thrombi of 33%, compared to none in the control group. The combination of AL and AF was associated with the highest risk of thrombus detection [26]. In addition, in a cohort of 156 patients with CA who underwent transesophageal echocardiography (TEE), intracardiac thrombi were detected in 27% of patients. In this population, AL patients more frequently showed intracardiac thrombi compared to ATTR patients (35% vs. 18%; p = 0.02) despite being younger and having less AF [27]. The prevalence of intracardiac thrombi assessed by cardiac magnetic resonance (CMR) was 6.2% in a study by Martinez-Naharro et al. [28] including 324 amyloidosis patients, both ATTR and AL. Favoring factors were biventricular systolic dysfunction, atrial dilation, AF, higher extracellular volume and AL subtype. Several other studies have demonstrated an increased incidence of intracardiac thrombi in patients with CA despite the absence of AF/flutter [25,29]. The incidence of arterial thromboembolic events in CA was described by Cappelli et al. [24] in a cohort of 406 patients, both AL and ATTR. Thirty-one patients (7.6%) suffered from thromboembolism, mainly cerebrovascular, of whom ten (32.2%) were in sinus rhythm and had no history of AF. In a larger, international, multicentric study, Vilches et al. [30] confirmed the high prevalence of embolic events in patients with ATTR, either with or without AF. In their cohort, CHA2DS2-VASc did not predict embolic events, suggesting its limited role in estimating the risk of thromboembolism in CA. 
Additionally, they did not find meaningful differences in the rate of embolism between patients with AF treated with a vitamin K antagonist (VKA) and those treated with novel oral anticoagulants (NOACs) [30]. There are limited data on the optimal anticoagulant strategy; specifically, little is known about the safety of NOACs in this population and whether differences exist in the occurrence of embolic events. Considering the advanced age of the subjects, both bleeding and thrombotic risks are generally perceived as high. Moreover, the VKA response is limited by inter/intra-patient variability and compliance with a complex medical regimen and diet, making treatment difficult in this population. In a recent study, Mitrani et al. [31] found no difference in the combined outcome of stroke, transient ischemic attacks (TIA), major bleeding or death in patients with ATTR and AF treated with either VKA or NOACs. Maintaining the international normalized ratio (INR) in the normal range appears to be crucial, since all patients on VKA with a stroke or TIA showed a labile INR. Additionally, the higher bleeding risk was confined to the same subset of patients with a labile INR. Cariou et al. [32] compared 147 (54%) vs. 126 (46%) patients receiving VKA and NOACs, respectively. In the wtATTR subgroup, patients receiving VKA had a higher bleeding risk compared to patients on NOACs (major bleeding events in 14% vs. 2%, respectively; p < 0.001), but there was no significant difference in ischemic events. In the AL subgroup, the bleeding risk was similar between groups, and not a single stroke was registered. In contrast to the study by Mitrani et al. [31], Cariou et al.'s cohort [32] showed a higher bleeding risk in patients on VKA, which may have been driven by their more impaired renal function; however, both these retrospective studies showed that NOACs can be used safely in CA. In conclusion, there is a high prevalence of atrial thrombosis in both AL and ATTR. CA has been shown to expose patients to an increased risk of embolic events, and this risk is not limited to patients with clinical AF [25]. At present, there are no strong data to recommend which oral anticoagulant to prefer, although NOACs have proven to be safe. Our Point of View Patients with CA show a high propensity to develop intracardiac thrombi and embolic events, even without evidence of AF. However, no clear guidelines have been provided so far that will help the clinician in the management of anticoagulation therapy in patients with CA and no AF. Patients with CA are often old and frail, predisposing them to an increased risk of bleeding; however, an embolic event may dramatically reduce their clinical and performance status and their quality of life. In many situations, clinicians will face decisions on anticoagulation therapy, and thus, we would like to provide a little insight into our clinical practice that may provide some suggestions to other physicians (Table 1). Firstly, routine Holter monitoring is fundamental to screen patients for concealed episodes of AF. As per standard practice, we perform a Holter ECG on a 6-month basis to detect asymptomatic AF, allowing us to introduce anticoagulation irrespective of the CHA2DS2-VASc score. In addition, at diagnosis, we usually perform cardiac magnetic resonance, which has been revealed to be a useful tool in both providing tissue characterization and identifying possible intracardiac thrombi.
During follow-up evaluation, it is not infrequent to identify patients presenting severe atrial enlargement or "atrial standstill" or the presence of spontaneous echo contrast within the atria at echocardiography. Atrial standstill might be defined by the absence of mechanical activity in the atria, as assessed visually at echocardiography, or by using atrial strain, and it might be associated with the presence of a low mitral inflow A-wave amplitude. In a few patients, these features are associated with the presence of a clear P wave on the electrocardiogram (in the absence of a clinical history of AF). These conditions raise many concerns about the risk of thrombus formation. In these cases, we actively try to identify evidence for anticoagulation treatment, i.e., reducing the interval between Holter ECG evaluations, implanting a loop recorder or providing a home monitoring device for patients with a pacemaker or ICD, and, when renal function allows us, we repeat cardiac magnetic resonance with the aim of atrial thrombus identification. In conclusion, we think that in the absence of clear guidelines, it is still controversial to initiate anticoagulation therapy without evidence of AF or without identifying a thrombus. However, in the presence of echocardiographic signs of an increased risk of thromboembolism, a more aggressive and proactive approach in order to identify asymptomatic AF or signs of atrial thrombosis could be reasonable. Rate Control Rate control is particularly challenging in CA, mostly due to the coexistence of autonomic dysfunction and restrictive cardiac physiology with a low and relatively fixed stroke volume. In this scenario, a higher heart rate is often necessary to maintain an adequate cardiac output [33]. Non-dihydropyridine calcium channel blockers are contraindicated in CA for their negative inotropic/chronotropic effect and the high risk of hypotension [14]. Beta-blockers may also be poorly tolerated; however, low doses of beta-blockers may be an option to achieve rate control in AF with a rapid ventricular response [34]. The role of digoxin in CA remains controversial. Historically, Rubinow et al. [35] showed that digoxin binds avidly to amyloid fibrils in vitro, suggesting a higher risk of digoxin toxicity. A more recent study re-evaluated digoxin's utility in the rate control strategy in 69 patients with CA. Although suspected digoxin-related arrhythmias and toxic events occurred in 12% of patients, no deaths were attributed to digoxin toxicity [36]. Thus, low-dose digoxin with close monitoring is a possible alternative for rate control in selected patients, especially when other therapeutic strategies are limited by hypotension. Finally, in the case of failure to obtain rate control with medical treatment, atrioventricular nodal ablation and a permanent pacemaker implant may be considered [37]. Rhythm Control The loss of the atrial contribution to ventricular filling in AF often leads to the patient's clinical deterioration. Rhythm control by means of direct current cardioversion (DCCV) has been recently described with variable success and recurrence rates [5,11]. In the study by El-Am et al. [43], patients with scheduled DCCV and CA suffered from a significantly higher DCCV cancellation rate compared to patients with AF without CA, mostly due to the identification of intracardiac thrombi. Thus, TEE should be performed before DCCV in all patients with CA, regardless of the duration of AF or anticoagulation status [25][26][27]43]. 
The rate of success of DCCV was high (90%) and similar between patients with and without CA. Furthermore, the incidence of arrhythmia recurrence during a 1-year follow-up was also high but similar between the two groups (48% vs. 55%; p = 0.75). However, the procedural complication rate was significantly higher in the CA group (14% vs. 2%, respectively, p = 0.007), reflecting the underlying advanced myopathic and electrical disturbances in CA [43]. Similarly, Donnellan et al. [17] reported a retrospective analysis on 256 patients with ATTR and AF: 119 (45%) patients underwent DCCV, and sinus rhythm was initially restored in 113 (95%) of them and appeared more effective when performed earlier in the disease course. One year after DCCV, 49 (42%) patients remained in sinus rhythm, and, interestingly, the maintenance of sinus rhythm was significantly associated with lower mortality (43% vs. 69%, p = 0.003). In summary, although DCCV is very effective in restoring sinus rhythm, the recurrence rate of atrial arrhythmias is high. However, DCCV appears to be an appealing approach in the early stages of the disease. Given the limitations of rate control strategies, a rhythm control strategy may be considered for the management of AF, particularly for earlier disease stages. In a retrospec-tive analysis of wtATTR, Mints et al. described 33 patients who received antiarrhythmic treatment, mainly amiodarone, for AF and found no survival benefit from rhythm control compared to the rate control strategy [5]. Little is known about the safety and efficacy of AF or flutter ablation in patients with ATTR. Only a few small retrospective studies have examined the role of catheter ablation of atrial arrhythmias in patients with CA, with inconsistent results [12,44,45]. Tan et al. [44] reported results on a retrospective cohort including 13 patients, both AL and ATTR, who underwent atrial arrhythmia ablation, of whom 5 had AF; the 3-year recurrence-free rate was 60% for all atrial arrhythmias and 40% for AF, and ablation was associated with a reduction in the New York Heart Association functional class (NYHA) in 70% of cases. More recently, Donnellan et al. [45] reported the largest cohort of radiofrequency ablation in 24 patients with ATTR. During a mean follow-up of 39 months, the overall recurrence rate of AF was 58%, and among patients who developed recurrent arrhythmias, the AF-free mean time from ablation was 23 months. Ablation appeared less effective in those with a higher ATTR stage, older age and higher NYHA class. However, the rate of hospitalization for AF or HF was markedly lower in patients who underwent atrial arrhythmia ablation, and after a follow-up of more than 3 years, ablation was associated with improved survival. After catheter ablation, the long-term maintenance of sinus rhythm appears to be frequently difficult, especially in more advanced stages of the disease. However, in a selected group of patients with an early stage of ATTR, AF ablation might be reasonable to reduce recurrent hospitalization for symptomatic AF or HF. Moreover, it appears reasonable to associate CA target therapy with catheter ablation. From a sub-analysis of data from the Transthyretin Amyloidosis Cardiomyopathy Clinical Trial (ATTR-ACT), tafamidis-a medication used to delay disease progression in adults with ATTR-CA-contributed to reducing hospitalization due to arrhythmias [46]. Specifically, the percentage of patients in SR was higher in those who took tafamidis after ablation therapy than those who did not [16]. 
All of the above studies encountered many limitations, such as the small number of patients and the few data comparing patients with CA and those without. Therefore, currently, we do not have therapeutic approaches designed selectively for the patient with CA. Atrial Fibrillation in Heart Failure with Preserved Ejection Fraction and Cardiac Amyloidosis Atrial fibrillation and HFpEF are strictly related and considered "vicious twins" [47]. Indeed, patients with AF present a 4.8 times higher risk of developing HFpEF compared to patients in sinus rhythm [48], and the prevalence of AF in HFpEF is high, ranging between 15% and 41% [47]. Furthermore, as we have seen in wtATTR, two-thirds of patients with solely HFpEF experience AF over time [49]. As in cardiac amyloidosis, AF and HFpEF are manifestations of a common atrial and ventricular myopathy. While in CA, the deposition of amyloid fibrils plays a pivotal role, in patients with solely HFpEF, systemic inflammation and metabolic disorders may lead to microvascular dysfunction and fibrosis of both atria and ventricles, which in turn trigger diastolic dysfunction and AF [47]. Furthermore, HFpEF and AF feed off each other. As happens in CA, the presence of diastolic dysfunction and elevated LV filling pressure contributes to LA enlargement and electrical remodeling and eventually leads to AF [50]. Moreover, the presence of AF in both solely HFpEF and CA may worsen HF symptoms, probably due to the loss of LV filling mediated by the atrial kick [51,52]. Atrioventricular Conduction Diseases Conduction system diseases are frequent in CA, and the prevalence of pacemaker implantation at diagnosis ranges between 8.9% and 10% [53,54]. From an electrophysiological standpoint, patients with CA present a prolonged Hissventricle (HV) conduction interval compared to patients without CA [12], and the HV conduction delay is more profound in ATTR than in AL. On a surface electrocardiogram at diagnosis, first-degree atrioventricular (AV) block was more frequent in wtATTR compared to AL, while the presence of an intraventricular conduction delay was more common in both wtATTR and hATTR compared to AL [54]. The frequency of right bundle branch block is similar between ATTR and AL, whereas left bundle branch block seems to be uncommon in AL [7]. Indeed, the long slender right bundle branch may be more vulnerable to amyloid deposition compared to the left, and therefore, even a low amount of amyloid deposition, characteristic of AL-CA, may impact its electric impulse conduction [55,56]. Pacemaker and Loop Recorder Fifteen percent of patients with hATTR and 30% of patients with wtATTR already had a pacemaker implanted at diagnosis compared to only 1% of patients with AL [54]. However, in a multicenter retrospective study including 405 patients, during a median follow-up of 33 months, the incidence of pacemaker implantation was similar among amyloidosis subtypes (8.9% during a median follow-up of 33 months), raising the suspicion that the pathophysiology underlying conduction disturbances may be different between AL and ATTR. In ATTR, the main mechanism leading to conduction abnormalities seems to be progressive amyloid deposition that alters the myocardial structure and undermines electrical conduction [57]. In a CMR study from the National Amyloidosis Centre in London, patients with ATTR presented greater LV mass and amyloid deposits, as expressed by the extracellular volume, compared to AL [56]. 
Yet, patients with AL showed higher native T1 mapping compared to ATTR due to a greater amount of myocardial edema [56]. These findings were confirmed by a following study in which patients with untreated AL showed the greatest increase in myocardial T2 (a CMR biomarker of myocardial edema) compared to treated AL and ATTR [58]. Therefore, the cytotoxicity of free light chains [59] may lead to conduction disturbances, following the model of myocarditis in which edema plays an important role in arrhythmogenicity [60]. Independent predictors of PM implantation include a history of AF, PR interval > 200 ms and QRS > 120 ms. The highest risk of PM implantation emerged with the coexistence of all three parameters in both AL and ATTR (hazard ratio 6.26, CI 1.9-20.6). ATTR patients presenting these electrocardiographic predisposing factors showed signs of more advanced disease, such as a greater LV thickness and worse biventricular systolic function, whereas no differences emerged in the distribution of the Mayo score for AL patients [53]. Data from the longitudinal pacemaker interrogation in patients with CA showed a progressive increase in the mean ventricular pacing, and, while the pacing burden was 56% at 1 year post-implantation, most patients at 5 years showed near 100% ventricular pacing [61]. Furthermore, over time, the right ventricular sensing amplitudes decreased, but lead impedances and capture thresholds were stable in the absence of device malfunction [61]. The role of internal loop recorders (ILRs) for the early detection of bradyarrhythmias has still to be clearly defined. Sayed et al. implanted ILRs in 20 consecutive patients with symptoms of syncope or presyncope and advanced AL. Interestingly, death was preceded by bradycardia, complete atrioventricular block and the development of pulseless electrical activity (PEA). A pacemaker was implanted in four patients due to AV block, yet three of them, who were previously resuscitated from PEA, died anyway [62]. They hypothesized that severe bradycardia and AV block may further reduce an already impaired cardiac output, resulting in ischemic damage that may lead to further decompensation and PEA. Presumably, a narrow time window for intervention exists; indeed, the only patient who received a pacemaker before a significant reduction in cardiac output survived. Interestingly, in a recent study, the role of a prophylactic pacemaker was tested in patients with hATTR and slowed AV conduction, and the pacemaker prevented major cardiac events in 25% of them during a follow-up of 45 months [63]. However, high-grade AV block was not independently associated with mortality after adjusting for the disease stage and the presence of coronary artery disease [64]. In conclusion, patients with CA often require pacemaker implantation due to progressive amyloid deposition that alters the electrical conduction system, and advanced conduction system disease may represent a relevant competing cause of death in CA. Resynchronization Therapy In ATTR, right ventricular pacing > 40% has been shown to be associated with worsening mitral regurgitation, reduced LV ejection fraction and worsening HF symptoms compared to patients with biventricular pacing [65]. Furthermore, cardiac resynchronization therapy has been associated with reduced all-cause mortality and cardiovascular hospitalization. However, more data are needed to confirm these preliminary findings and to define the subgroup of patients that benefit more from this treatment [18]. 
Ventricular Arrhythmias Data regarding VAs in CA are scarce if compared to the other electrophysiological manifestations of the disease and are mainly derived from small retrospective studies. Prevalence In ATTR, the estimated prevalence of non-sustained ventricular tachycardias (NSVT) is between 17% and 20% on Holter monitoring [66,67], while in AL-CA, the prevalence ranges from 5 to 29% [34,66,68]. However, in AL-CA, NSVTs might be more frequent, especially during the stem-cell transplantation period, as demonstrated by a small study conducted on 24 patients with telemetry monitoring during autologous stem-cell transplantation: NSVT was recorded in all patients and was the most common arrhythmia, and one patient experienced sustained VAs that required direct current defibrillation [69]. The prevalence of NSVT, in both ATTR and AL, increases up to 74% when analyzing long-term monitoring devices, such as a pacemaker or ICD, while approximately 20% of patients with a pacemaker or ICD experienced sustained VAs [70]. Pathogenesis Patchy amyloid fibril deposition in the myocardium leading to an inflammatory response and oxidative stress results in a separation of myocytes, resulting in LV fibrosis, which progressively develops arrhythmogenic potential. In combination with this, amyloid fibril deposition at the conduction system level could potentiate arrhythmias, favoring the development of re-entrant circuits [71,72]. Additionally, microvascular ischemia (due to amyloid perivascular infiltration) and the direct cytotoxic effect of amyloid fibrils are held responsible for the genesis of VAs in CA [71,72]. The potential synergistic effect of the AL toxic effect along with drug-induced cardiac toxicity occurring during chemotherapy could further contribute to the genesis of VAs in patients with AL [6]. Myocardial amyloid infiltration can be easily identified with MRI imaging with increased T1 mapping, ECV and areas of LGE (subendocardial or transmural). All of these parameters have been demonstrated to have both diagnostic and prognostic implications, with transmural LGE and higher ECV linked to a greater risk of all-cause mortality. Unlike other forms of cardiomyopathy, though, evidence of a correlation between LGE or ECV and arrhythmic risk in CA is lacking [73][74][75]. Prognostic Implications Although VAs are common in CA, their effect on cardiovascular mortality is still a matter of debate. Patients with CA mostly die of worsening HF, and the mechanism of sudden cardiac death has traditionally been attributed to electromechanical dissociation rather than VAs [38,62,71], questioning the benefits of ICDs. The prognostic role of NSVT in CA is controversial. Some studies suggested their association with sudden cardiac death [68,70], while others hypothesized that NSVT may represent a marker of disease severity rather than a predictor of sudden arrhythmic death [62,66]. Nonetheless, recent evidence suggests that the impact of VAs on CA patients may have been undervalued, and they may represent a frequent competing cause of death in both AL and ATTR. In a cohort of 5585 patients hospitalized for CA, 2020 (36%) had concurrent arrhythmias, and ventricular tachycardia was the second most common arrhythmia identified (14.9%), after AF (72.2%). All-cause mortality and HF were significantly higher in patients with CA hospitalized with concurrent arrhythmias compared to those without [76]. 
Regarding the specific subset of AL, in a retrospective cohort of 56 patients, 8 experienced sudden cardiac death (interestingly, almost all episodes occurred during chemotherapy), with VAs being the presenting rhythm in 4 cases; PEA was observed in just 1 patient, while the presenting rhythm of the remaining 3 patients was unknown [77]. Sudden Cardiac Death, Pharmacological Treatment and ICD The pharmacological management of VAs in CA is essentially limited to amiodarone and, if tolerated, to small doses of beta-blockers [40]. Non-pharmacological therapy is mainly based on ICDs; however, the indications and timing for a primary prevention ICD are still a matter of debate. Patients with CA (both AL and ATTR) tend to have a worse prognosis than other forms of HF, and traditional thresholds for a primary prevention ICD, such as ejection fraction <35%, are scarcely adequate in the context of CA, where systolic dysfunction is a hallmark of very advanced disease with often limited life expectancy [41,75]. Furthermore, the most recent European Guidelines on the management of ventricular arrhythmias refer specifically to CA only for ICD implantation in patients with hemodynamically non-tolerated VT and stress the importance of careful discussion with patients about other possible causes of cardiac and non-cardiac death [78]. A registry study of 472 patients with CA and an ICD found a mortality rate of 26.9% at 1 year after ICD implantation compared with 11.3% among a propensity-matched cohort of patients with other non-ischemic cardiomyopathies, and CA was also associated with a significantly higher risk of all-cause mortality. A history of syncope, VAs, diabetes mellitus and cerebrovascular disease were factors associated with a higher risk of death within 1 year from ICD implantation [42]. A case-control study comparing 23 patients with CA and a primary prevention ICD to patients with CA without an ICD and patients with a primary prevention ICD for ischemic or non-ischemic cardiomyopathies showed comparable rates of appropriate ICD therapies between amyloid and non-amyloid patients. However, the presence of an ICD was not associated with longer survival when compared to CA patients without an ICD. Furthermore, patients with CA and an ICD had a significantly higher mortality rate than the non-amyloid ICD recipients [39]. Similarly, in a cohort of 130 patients with mainly hATTR (67%) and a high rate of systolic HF (62%), the incidence of VAs was high (53%, mostly NSVT). In the 32 patients with an ICD implanted for primary prevention, the rate of appropriate ICD therapy was 25%. However, no significant survival benefit was found upon comparison with similar ATTR groups without ICDs [79]. In conclusion, an ICD does not seem to have a pivotal role in extending life expectancy in CA; however, new and effective therapies are becoming progressively available for the treatment of both AL and ATTR. Life expectancy in CA patients will hopefully increase in the near future, likely making ICDs more impactful on survival. For now, careful patient selection and shared decision making are of the outmost importance when deciding on ICD implantation in a patient with CA. Conclusions The presented data underline a unique phenotype of cardiac remodeling associated with CA, with the progressive involvement of conduction tissue and corresponding arrhythmic expression. 
With the advancement of the CA stage, AF, conduction disorders and ventricular arrhythmias become more pronounced and are associated with worse survival. In this regard, it is important to detect any predictable electrical disorders early and also define a treatment based on comorbidities and symptoms. Whereas clinicians rely on device therapy for bradyarrhythmia, unfortunately, the optimal treatment strategy for AF, stroke or the ventricular arrhythmic burden remains an issue of high clinical relevance that needs to be addressed. Amyloid-specific and disease-modifying therapies could potentially play a key role in this context, possibly changing the electrical phenotype associated with CA and improving outcomes. Abbreviations AF = atrial fibrillation; AL = light-chain amyloidosis; ATTR = transthyretin-related amyloidosis; AVN = atrioventricular node; CA = cardiac amyloidosis; CMR = cardiac magnetic resonance; DCCV = direct current cardioversion; HV = Hiss-ventricle; HF = heart failure; HFpEF = heart failure with preserved ejection fraction; ICD = implantable cardioverter defibrillator; INR = international normalized ratio; LA = left atrium; LV = left ventricle; NOACs = novel oral anticoagulants; NSVT = nonsustained ventricular tachycardias; NYHA = New York Heart Association; PEA = pulseless electrical activity; PM = pacemaker; SR = sinus rhythm; TEE = transesophageal echocardiography; TIA = transient ischemic attacks; VAs = ventricular arrhythmias; VKA = vitamin K antagonist; VT = ventricular tachycardia; wtATTR = wild-type transthyretin-related amyloidosis.
v3-fos-license
2022-04-23T15:10:57.547Z
2022-04-21T00:00:00.000
248341709
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2227-9067/9/5/587/pdf?version=1650510926", "pdf_hash": "0e16792864768f23d78c500c330f3ed9f1e65239", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43974", "s2fieldsofstudy": [ "Medicine" ], "sha1": "1aa987ce9ba0b5f7516f3b5254167ebf20a0f724", "year": 2022 }
pes2o/s2orc
Percutaneous Anorectoplasty (PARP)—An Adaptable, Minimal-Invasive Technique for Anorectal Malformation Repair Background: Anorectal malformations comprise a broad spectrum of disease. We developed a percutaneous anorectoplasty (PARP) technique as a minimal-invasive option for repair of amenable types of lesions. Methods: Patients who underwent PARP at five institutions from 2008 through 2021 were retrospectively analyzed. Demographic information, details of the operative procedure, and perioperative complications and outcomes were collected. Results: A total of 10 patients underwent the PARP procedure during the study interval. Patients either had low perineal malformations or no appreciable fistula. Most procedures were guided by ultrasound, fluoroscopy, or endoscopy. Median age at PARP was 3 days (range 1 to 311) days; eight patients were male. Only one intraoperative complication occurred, prompting conversion to posterior sagittal anorectoplasty. Functional outcomes in most children were highly satisfactory in terms of continence and functionality. Conclusions: The PARP technique is an excellent minimal-invasive alternative for boys born with perineal fistulae, as well as patients of both sexes without fistulae. The optimal type of guidance (ultrasound, fluoroscopy, or endoscopy) depends on the anatomy of the lesion and the presence of a colostomy at the time of repair. Introduction Anorectal malformation affects around 1:5000 liveborn infants and comprises a wide spectrum of conditions concerning the distal anus, rectum, as well as urogenital tract. Half of all anorectal malformations are considered low anorectal malformations [1][2][3]. In 1982, Peña and de Vries introduced the posterior sagittal anorectoplasty (PSARP), which has become the standard of open repair [1,4]. The procedure, however, is associated with a relatively high risk of dehiscence, wound infection, stricture, and long-term continence issues [3,5,6]. Some forms of anorectal malformations may be amenable to a less invasive repair in which the neorectum is reconstructed through the intact sphincter. For high lesions, including rectoprostatic and rectovestibular fistulas, the laparoscopyassisted anorectoplasty (LAARP) procedure, first introduced by Georgeson et al. in 2000, has become routine in many centers around the world [7]. Lower lesions, however, such as rectoperineal fistulas in boys, and imperforate anus without a fistula, do not require or benefit from laparoscopy [8]. We therefore devised a minimal-invasive percutaneous procedure that avoids opening of the pelvic floor or splitting the sphincter, the so-called PARP. Over the course of more than a decade, this procedure has undergone several important modifications. The aim of this study is to retrospectively describe and evaluate all patients who underwent a PARP procedure in terms of feasibility, effectiveness, complications, and outcome. Study Design This is a retrospective analysis of all patients who underwent the PARP procedure at 5 different institutions from 2008 through 2021. In order to be eligible for inclusion, the patients had to be either male with a typical perineal fistula or of either sex without a fistula. All other forms of anorectal malformation were excluded. Ethics The study was approved by the ethics board of the Ludwig Maximilian University Faculty of Medicine (registration number . The parents or caregivers gave their explicit and written consent on having their child operated on using this novel method. 
Alternatives were described and offered. The potential risks and benefits were discussed in detail. Operative Technique 2.3.1. PARP without Image Guidance (nPARP) The percutaneous anorectoplasty (PARP, Figure 1, Video S1: Description of the PARP procedure without image guidance in a newborn male with anorectal malformation and a rectoperineal fistula) is applicable in patients born with pure imperforated anus covered by a skin tag in the form of a bucket handle. A Wangensteen-Rice invertogram radiograph confirms the presence of a low imperforated anus. The bucket handle is dissected off the underlying tissue in a modified lithotomy position. Subsequently, the bucket handle is divided and resected. The posterior skin is excised in a wedge-type fashion. Then, the muscle complex is located at the center of the neo-anus using an electronic muscle stimulator. Once the sphincter complex has been detected, a needle or an eight French dilator is passed through the sphincter complex into the rectal pouch. A guidewire is passed through the rectal pouch to secure the tract. Once secured, the tract is dilated sequentially from eight to twenty French. At that point, the guidewire and dilator are removed. Following the evacuation of meconium, Hegar dilators are used to dilate the tract further to approximately eleven millimeters or an age-appropriate size. Finally, retractors are inserted. The rectal mucosa is retracted down to the level of the skin and is then sewn to the skin using interrupted polyglactin sutures resulting in a cosmetically pleasing, inverted, and orthotopic anus with an intact sphincter complex. Figure 1. Screenshot of Video S1. Typical perineal fistula with bucket-handle in a boy. Ultrasound-Guided PARP (uPARP) When performing an ultrasound-guided percutaneous anorectoplasty (uPARP), the previously described PARP operative technique is complemented by real-time ultrasound imaging of the perineum in the operating room to localize the rectum and muscle complex ( Figure 2). Fluoroscopy-Guided (Interventional) PARP (iPARP) During a fluoroscopy-guided (interventional) percutaneous anorectoplasty (iPARP), the patient is placed in a prone position with the buttocks elevated, much like during a conventional posterior sagittal anorectoplasty (PSARP). The fluoroscopy unit is positioned in a cross-table lateral configuration. The center of the muscle complex is identified using an electronic stimulator. A needle is advanced through the center of the sphincter into the air-filled rectal pouch under fluoroscopic guidance and the guidewire is advanced through the needle (Figure 3a,b). A 12 mm balloon dilator is advanced over the needle and the tract is dilated (Figure 3c,d). Thereafter, the mucosa is retracted down to the skin using hooks and sutured circumferentially as described for the PARP above. Endoscopically-Guided PARP (ePARP) An endoscopically-guided percutaneous anorectoplasty (ePARP, Figure 4, Video S2: Description of the ePARP procedure in a 6 month old girl with Down syndrome who had a transverse colostomy in an outside hospital) requires a previous colostomy and is thus performed in children without a fistula, usually patients with Down syndrome. The patient is placed supine in a way that allows a distal colonoscopy from the mucous fistula. A fluoroscopy unit is placed to allow cross-table lateral imaging. At the blind end of the colon, a typical star-shaped scar is always detected and marks the center of the future tract ( Figure 5). The center of the muscle complex is identified from the outside with a stimulator and a needle is advanced through the sphincter complex into the rectum under X-ray and endoscopic guidance. A guidewire is placed. Then, a twelve-millimeter balloon dilator is inserted over the guidewire and inflated to dilate the tract. After the balloon is deflated, the tissue tract can be inspected endoscopically to the outside. Subsequently, the endoscope is retracted back inside. The next step involves bringing the rectal mucosa down to the skin. This is accomplished by introducing two sharp hooks, one anteriorly and one posteriorly, which gently retract the mucosa. From the outside, circular stay sutures are placed on the mucosal sleeve. The exact placement of the sutures can be verified endoscopically. Thereafter, a colocutaneous anastomosis is performed using circular braided absorbable sutures. Correct placement of the sutures can be verified endoscopically to confirm that the mucosa circularly anastomoses with the skin. This is important to prevent stricture. Finally, the stay sutures are cut leaving a watertight anastomosis. At the end of the procedure, the neo-anus is calibrated using a ten-millimeter Hegar dilator. Data Acquisition The data were retrospectively collected from operative reports and hospital records. Pertinent demographic information, comorbidities, operative time, type of PARP, perioperative and postoperative complications, as well as short-and long-term outcome were extracted into a database. Patients During the study interval, a total of 10 patients were included. Eight of those patients were male. Half of the patients presented with anorectal malformation with a perineal fistula; the other half did not exhibit an appreciable fistula. Only three patients did not show comorbidities. Three patients were diagnosed with Down syndrome, one patient suffered from VACTERL, and one patient presented with Currarino triad, Spina bifida, as well as congenital heart disease. Furthermore, two patients were born prematurely, one of whom experienced a pneumothorax and underwent chest tube placement preoperatively. This patient was later diagnosed with Duchenne muscular distrophy. Moreover, four patients had received a colostomy prior to the PARP procedure. The median age for colostomy placement was 1.5 days (range 0 to 2) ( Table 1). Operations Overall, ten percutaneous anorectoplasties were performed between 2008 and 2021. The median age at the PARP was three days (range 1 to 311 days). The median operative time amounts to approximately 60 min (range 25 to 183). The OP times of the final two PARP procedures were not included in this calculation as patients underwent multiple concomitant surgical procedures. Apart from the initial two percutaneous anorectoplasties without image guidance, the procedures were generally guided: one uPARP, three iPARPs, and four ePARPs were performed (Table 2). Complications There was one complication in the second child who was operated on without image guidance.
The procedure was initiated in the supine position with the legs raised and hips flexed. Preoperatively, a Foley bladder catheter was placed, but it could not be advanced all the way and no urine was obtained. It was left in place without inflating the balloon. After puncture of the rectum and dilation, the Foley catheter was visible through the rectum, prompting us to abort the procedure. The patient was turned prone, prepped, and draped. Then, a posterior sagittal anorectoplasty (PSARP) was performed. Subsequently, the Foley catheter was removed, a new catheter was placed through the urethra under vision into the bladder, and the bulbar urethral opening, where the first catheter had passed into the rectum, was repaired using interrupted resorbable sutures. No other peri- or postoperative complications were noted in this series ( Table 2). There were no wound infections. Outcomes The median follow-up lasted approximately 16 months (range 0 to 43; Table 3). Two out of ten patients dealt with constipation postoperatively, one of whom required oral macrogol (polyethylene-glycol) treatment. None of the patients suffered from incontinence following the PARP procedure. Four patients required further dilations. Overall, outcomes were highly satisfactory in most patients in terms of functionality and continence. Discussion This is the largest case series on percutaneous anorectoplasty to date. Over the course of the last decade, the technique has evolved through the addition of image guidance, to the point where it can be safely performed and recommended for certain anorectal malformations. Despite the heterogeneous pattern of anorectal malformations (ARMs) [1], posterior sagittal anorectoplasty (PSARP) has been the main approach for repair across the board. The drawback of PSARP is the division of the sphincter into two halves through the midline, with later reconstruction [1,9]. Despite the argument that this allows accurate visualization of anatomical structures, and thus precise surgical correction with preservation of blood vessels and nerve structures, current studies in the literature increasingly suggest that the invasiveness of this method may not be necessary in certain cases [2,10,11]. Laparoscopy can assist with repair of high forms of ARMs while leaving the sphincter intact [7,10], although the intuitive hypothesis that this could improve the functional prognosis by decreasing fecal incontinence and constipation has not yet been conclusively confirmed [9,10,12,13]. Nevertheless, the laparoscopic, sphincter-sparing approach has indeed been shown to significantly reduce postoperative wound complications and hospital stay [9,10,12,14]. Since wound complications have a negative impact on functional prognosis, the advantage of minimally invasive techniques is increasingly evident [1,3,4,9,10,12,14,15]. Other approaches to reduce wound complications, such as preoperative bowel preparation, prolonged postoperative fasting and antibiotics, as well as application of a vacuum-assisted pump, have also been described with varying degrees of success [4,16,17,18]. While laparoscopy is useful for high forms of anorectal malformations, it is not as helpful for low lesions. To date, there are only a few reports describing minimally invasive techniques for low lesions. Pakarinen et al. described the "Transanal Endoscopic-Assisted Proctoplasty (TEAPP)" [2,11].
They performed a sigmoidostomy in seven patients with ARM without a fistula in term of a staged surgical approach. Via colostomy, the absence of a fistula was confirmed (high-pressure colostogram) before implementing the TEAPP procedure. A retrograde endoscopy through the sigmoid mucous fistula was performed to visualize the termination of the rectum. In case a low malformation was confirmed by using translumination of the endoscope light from the rectum to the anal dimple within the external sphincter, correction via TEAPP was performed (successful in four of the seven patients). The rectum was incised from below and the neoanus was created under endoscopic visual control, similar to the ePARP procedure described in our report. They suggested that this technique allows anatomical reconstruction of the anorectum, by placing the anorectum within the sphincter complex under endoscopic control [11]. In this study, the TEAPP procedure was aborted and converted to a PSARP in three of the seven recruited patients, mainly because transillumination could not be positively confirmed. The question of transillumination raises the question of the maximal distance between skin and pouch in those without a fistula that is repairable by ePARP. In our series, the maximal distance was 3 cm. Using the hooks, it was still feasible to bring the mucosa down to the anus without difficulties for anastomosis. Nevertheless, the distance between the pouch and the skin may be a limitation of the PARP technique, making it applicable only to low-type lesions where the mucosa can be retracted downward and anastomosed to the skin. This approximation, however, results in a nicely inverted skin rosette and may prevent prolapse, which we have not seen as a complication in our series. While another option may be to perform a limited perineal skin incision to access the distal rectal pouch under direct vision, we believe that using ultrasound, radiography, or endoscopy allows us to penetrate through the center of the sphincter complex with a needle, limiting dissection and associated damage, much like during the laparoscopic approach for higher lesions. In contrast to the generally accepted concept that even in low forms of ARMs without fistula there is intimate contact between the rectal blind sac and the posterior urethra [1], Pakarinen et al. describe the midpoint of the distal rectal termination to be right above the anal site within the sphincter muscle complex and not intimately related to the urethra. This finding may disprove the argument that the close relationship between the rectum and the urethra justifies the need for PSARP in low forms of ARMs [2,11]. The results regarding the percutaneous anorectoplasty procedure (PARP) described in this article show comparable advantages to the TEAPP procedure. The minimally invasive approach may help avoid potential complications associated with PSARP in select, eligible patients. Furthermore, the high success rate in our study (90%; only one patient was converted to a PSAPR procedure) indicates that suitable cases can be reliably identified preoperatively. In contrast to TEAPP, the PARP procedure allows, in addition to minimal invasive correction of patients with low ARMs, the correction of male patients with a perineal fistula (anocutaneous, rectoperineal outside the sphincter complex). These types of malformations are currently still recommended to be reconstructed by posterior sagittal anoplasty [19]. 
However, there is evidence suggesting that overall functional outcome is comparable after minimally invasive anoplasty and PSARP for perineal fistula in boys [20]. Additionally, in contrast to the TEAPP, the ePARP in our series is performed not only under endoscopic guidance but also under concomitant fluoroscopic control. In our opinion, this is essential for a safe, precise reconstruction of the anorectum. Obviously, the ePARP procedure requires a prior colostomy for antegrade endoscopy, but also for the preoperative exclusion of an occult rectourethral or rectovesical fistula by high-pressure distal colostogram [19]. However, the iPARP procedure does not require a colostomy and therefore may be an option when the invertogram clearly shows the blind-ending rectum and there are no signs of a fistula. Relevant complications of colostomies in newborns include wound complications, prolapse, leakages, parastomal hernias, or bowel obstruction [21]. Therefore, colostomies should be avoided if possible, particularly in males with perineal fistulas. Another argument in favor of a one-stage procedure is the so-called "brain-defecation reflex" that may remain intact following the "use it or lose it" principle [22,23]. Finally, there is evidence of one-stage procedures affording outcomes similar to those of multi-stage procedures. This raises the question of whether liberal placement of a colostomy is generally warranted [9,16,24]. In our series, only one perioperative complication occurred during the PARP procedure, namely, the presence of the Foley catheter in the rectum upon visualizing the rectum from the perineum. The unanswered question remains whether the posterior urethra was injured during the procedure or whether the patient had a low rectourethral fistula in addition to the perineal fistula in the first place. According to the literature, such H-type anorectal malformations have an incidence of around 3 percent [25], ranging from 0.1 to 16 percent [26]. Therefore, pediatric surgeons should have a high index of suspicion when performing any of these procedures. Conversion to a PSARP in our case 2 afforded the patient a good outcome. Surgeons attempting a PARP procedure should remain alert to unrecognized rectourethral fistulae and should convert to PSARP if there is any indication that the anatomy is not as suspected preoperatively. In our case, the patient did not have a micturating cystourethrogram, which would have been helpful. To ensure patient safety, accurate preoperative evaluation of the underlying anatomy and, accordingly, the selection of the appropriate surgical technique is crucial. This refers to the level of the ARM, the relation of the rectal pouch to the muscle complex, as well as the evaluation of a rectogenitourinary communication [1,2,19]. These aspects may be estimated by a lateral pelvic radiograph, ultrasound, cystoscopy, or micturating cystourethrogram (MCUG), even though the results of these examinations may be inaccurate in some cases [2,10,27]. There were no complications during the ePARP procedures throughout our study. In our opinion, the ePARP procedure, including intraoperative fluoroscopy, offers the safest technique, especially in cases where preoperative diagnostics have not provided complete clarity regarding the exact type of ARM. The relevance of accurate preoperative diagnostics also applies to perioperative guidance.
Using a percutaneous technique without some kind of image guidance (nPARP) carries a high potential risk of creating false tracks and causing complications in neighboring structures such as the urethra, as seen with patient number 2 in this series. We therefore do not recommend performing the nPARP procedure. Functional outcomes in most children were highly satisfactory in terms of continence and functionality, with only two cases of constipation and four patients requiring anal dilations. We are aware that the follow-up in this study was too short to draw any conclusion concerning long-term functional outcomes. However, there is evidence showing that long-term results in low malformations are good in most patients if perioperative complications are prevented [1,2,19]. Furthermore, long-term functional follow-up of these patients remains controversial and is generally influenced by confounding factors, including a high incidence of associated anomalies [20,28]. The study is limited by its relatively small sample size, the retrospective design, and the heterogeneous population. Furthermore, the technique was implemented by multiple surgeons. However, all surgeons had comparable experience in the field of pediatric surgery and had discussed the exact technique prior to the interventions. These factors mean that our data can only demonstrate a trend and that, so far, no precise statement can be made about certain secondary endpoints, such as the operating room time. This is the first study investigating the clinical outcome after the PARP procedure as well as describing the different options for visual guidance. The PARP procedure seems to offer a safe and individually tailored minimally invasive surgical approach to avoid unnecessarily invasive surgery in eligible patients. Prospective studies with larger populations are needed to confirm these findings.

Supplementary Materials: The following videos can be downloaded at: https://www.mdpi.com/article/10.3390/children9050587/s1, Video S1: Description of the PARP procedure without image guidance in a newborn male with anorectal malformation and a rectoperineal fistula, Video S2: Description of the ePARP procedure in a 6-month-old girl with Down syndrome who had a transverse colostomy in an outside hospital.

Informed Consent Statement: Informed written consent was obtained from all parents to have their children operated on using the respective new techniques.

Data Availability Statement: All data on which this publication is based are available from the corresponding author (OM) upon reasonable request.
Longitudinal Neuroimaging in Pediatric Traumatic Brain Injury: Current State and Consideration of Factors That Influence Recovery

Traumatic brain injury (TBI) is a leading cause of death and disability for children and adolescents in the U.S. and other developed and developing countries. Injury to the immature brain varies greatly from that of the mature, adult brain due to numerous developmental, pre-injury, and injury-related factors that work together to influence the trajectory of recovery during the course of typical brain development. Substantial damage to brain structure often underlies subsequent functional limitations that persist for years following pediatric TBI. Advances in neuroimaging have established an important role in the acute management of pediatric TBI, and magnetic resonance imaging (MRI) techniques have a particular relevance for the sequential assessment of long-term consequences from injuries sustained to the developing brain. The present paper will discuss the various factors that influence recovery and review the findings from the present neuroimaging literature to assess altered development and long-term outcome following pediatric TBI. Four MR-based neuroimaging modalities have been used to examine recovery from pediatric TBI longitudinally: (1) T1-weighted structural MRI is sensitive to morphological changes in gray matter volume and cortical thickness, (2) diffusion-weighted MRI is sensitive to changes in the microstructural integrity of white matter, (3) MR spectroscopy provides a sensitive assessment of metabolic and neurochemical alterations in the brain, and (4) functional MRI provides insight into the functional changes that occur as a result of structural damage and typical developmental processes. As reviewed in this paper, 13 cohorts have contributed to only 20 studies published to date using neuroimaging to examine longitudinal changes after TBI in pediatric patients. The results of these studies demonstrate considerable heterogeneity in post-injury outcome; however, the existing literature consistently shows that alterations in brain structure, function, and metabolism can persist for an extended period of time post-injury. With larger sample sizes and multi-site cooperation, future studies will be able to further examine potential moderators of outcome, such as the developmental, pre-injury, and injury-related factors discussed in the present review.
INTRODUCTION
Recent estimates suggest that a child under the age of 14 sustains a traumatic brain injury (TBI) every 60 s in the United States (1). While TBI-related deaths in children have substantially decreased over the past decade, TBI remains the leading cause of death among children and adolescents (2). Of the survivors, ∼62% of children who sustained moderate-to-severe injuries, and 14% of those with milder injuries, suffer from long-term disability (3). The trajectory of recovery from an injury sustained to the developing brain differs greatly from that of the mature, adult brain (4,5). Thus, an understanding of the long-term effects of early brain injury on subsequent neurodevelopment is vital for the accurate understanding and prediction of a child's outcome and recovery. Currently, falls, sports- and recreation-related blunt force trauma, and motor vehicle accidents are the leading causes of pediatric TBI (1,3). These mechanisms commonly give rise to acceleration-deceleration injuries that result in diffuse axonal injury (DAI), which refers to the extensive structural damage that occurs to otherwise highly organized neural tissue due to the abrupt stretching, twisting, and shearing of axons in the event of a mechanical blow. DAI is critically related to functional outcomes following early brain injury, as it leads to reductions in white matter integrity, disrupting the connectivity of the neural networks that give rise to behavioral and cognitive function (6). Plasticity moves anteriorly during typical neural development, where the frontal and temporal regions of the brain are among the last to develop (7,8). Due to the close proximity of these brain regions to the bony structure of the anterior and middle fossa of the skull (9), they are the most vulnerable in acceleration-deceleration injuries. For this reason, early brain insult likely affects the maturation of the frontal and temporal cortices, as well as the white matter pathways connecting them to other areas of the brain.
Such disruption is known to have detrimental and long-term consequences on the development of critical neurobehavioral functions localized within these regions, such as executive function (10,11), learning and memory (12), emotional control (13), behavioral self-regulation (14), and social adaptive behavior (15). Despite the fact that injury to the developing brain has potentially more devastating long-term consequences than injury to the adult brain (16), there is substantially less literature on the long-term consequences of pediatric TBI compared to adults with TBI. In particular, relatively few studies have utilized these MRI methods for the longitudinal assessment of recovery from early brain injury. In addition to the great number of developmental, pre-injury, and injury-related factors that influence outcome after pediatric TBI (17), a major reason for the lack of longitudinal research lies in the substantial cost associated with clinical imaging. Additionally, sensitive measurement tools that are suitable for the prediction of outcome in a pediatric population have only recently become available.

Abbreviations: AD, axial diffusivity; ADC, apparent diffusion coefficient; Cho, choline; Cr, creatine; CVR, cerebrovascular responsiveness; DAI, diffuse axonal injury; dMRI, diffusion-weighted magnetic resonance imaging; DTI, diffusion tensor imaging; FA, fractional anisotropy; fMRI, functional magnetic resonance imaging; IHTT, inter-hemispheric transfer time; MD, mean diffusivity; MRI, magnetic resonance imaging; MRS, magnetic resonance spectroscopy; msTBI, moderate-to-severe traumatic brain injury; mTBI, mild traumatic brain injury; NAA, N-acetyl aspartate; RD, radial diffusivity; ROI, region-of-interest; SES, socioeconomic status; sMRI, structural magnetic resonance imaging; SRC, sport-related concussion; TBI, traumatic brain injury; TBSS, tract-based spatial statistics.

The literature on early brain development has increased 10-fold over the last 25 years as technological advances have been made in neuroimaging techniques, specifically those involving the use of magnetic resonance imaging (MRI; see Figure 1). Likewise, advances in the field of neuroimaging have established an important role in identifying the sequelae and determining the acute management of pediatric TBI over the last two decades (18). Although computed tomography (CT) is necessary for the rapid evaluation of primary head trauma complications that require immediate intervention (e.g., extra-axial hemorrhage, skull fracture, etc.), its clinical utility beyond this is limited, as the extent of axonal damage due to DAI is commonly underestimated with CT. Structural MRI is not only more sensitive to identifying DAI than CT, but it also has a particular relevance for the sequential assessment of the longitudinal consequences of brain injury (19). While structural MRI (sMRI) has greater prognostic utility in TBI than CT, it is unable to fully account for the complexity of pediatric TBI neuropathology when only used in the first weeks following the injury (20). Serial volumetric analysis, however, can provide insight into long-term neurodegeneration that occurs as a result of ongoing secondary injury pathology.
More advanced techniques, such as diffusion-weighted MRI (dMRI), magnetic resonance spectroscopy (MRS), and functional MRI (fMRI), have greater sensitivity to the primary and secondary injuries after TBI, therefore establishing increased value for predicting long-term outcome after injuries are sustained to the developing brain (21,22). Diffusion tensor imaging (DTI) is a dMRI technique that is sensitive to the long-term pathological effects of DAI on the microstructural integrity of white matter (23,24); however, DTI is unable to reveal the underlying processes for such effects. MRS allows for the non-invasive measurement of metabolites in the brain, which vary by anatomic region and change rapidly as the brain develops through the adolescent years. Through the examination of intracellular metabolic status, MRS is able to detect several metabolites that are sensitive to the pathology associated with secondary brain injury cascades and is therefore capable of providing direct evidence of microscopic neuronal injury. Substantial damage to brain structure can occur in pediatric TBI, and such damage often underlies subsequent functional limitations in physical, emotional, cognitive, behavioral, adaptive, and academic abilities that persist for years following the injury (25). Adults who suffered from childhood TBI 15 years prior report significantly poorer perceptions of their health-related quality of life due to ongoing functional limitations, regardless of injury severity (26). There is an intimate relationship between brain structure and function, and this complementary relationship extends toward the use of functional and structural neuroimaging modalities in the evaluation of long-term outcome following brain injury.

FIGURE 1 | Individual points indicate the number of papers published in a given year, and the solid line indicates the total number of publications over all time. Publications were found using the search terms (brain AND development OR neurodevelopment) AND (childhood OR adolescent OR pediatric) AND (structure OR function) in PubMed. dMRI, diffusion magnetic resonance imaging; fMRI, functional magnetic resonance imaging; MRI, magnetic resonance imaging; MRS, magnetic resonance spectroscopy; NMR, nuclear magnetic resonance; T, tesla.

Prior to our review of the present longitudinal neuroimaging literature, we will discuss several developmental, injury-related, and pre-injury factors that are known to influence development and recovery from damage to an immature brain. We believe that such a discussion is vital, as the field currently lacks a complete understanding of the way in which these factors interact with each other to further complicate the trajectory of recovery in the presence of injury-induced alterations to brain development. The complex interaction that occurs among such influences may have unpredictable negative consequences for otherwise adaptive structural and functional neuroplasticity that occurs in response to tissue damage (5). While there is an appreciation in the field for the importance of considering such factors in relation to planning rehabilitation and estimating the overall recovery period, these factors are less often included in neuroimaging analyses. There is considerable heterogeneity post-injury, which can lead to inconsistent results in the neuroimaging literature. Understanding the ways in which developmental, injury-related, and pre-injury factors may impact neuroplasticity in the developing brain may be key to explaining more of this variance.
Following this discussion, we will briefly review the findings from longitudinal studies employing MR-based neuroimaging modalities to increase our current understanding of altered development and long-term outcome following pediatric TBI. Recent reviews of studies utilizing various neuroimaging modalities in children with moderate-to-severe TBI [msTBI; (27)] and of studies utilizing sMRI (28) or dMRI (29) to evaluate outcome from mild-to-severe pediatric TBI have been published. However, no reviews of the current literature have focused solely on longitudinal neuroimaging studies (i.e., imaging at least two points in time) using various MR-based modalities to characterize outcome following mild-to-severe pediatric TBI; henceforth, such studies will be the focus of the present review. Furthermore, we will discuss whether the developmental, injury-related, and pre-injury factors known to influence plasticity and recovery from early TBI are considered in analyses conducted by the longitudinal studies reviewed here. Finally, we will conclude with a discussion of the current gaps in the literature and provide suggestions for future directions that should be taken in the field.

FACTORS THAT INFLUENCE PLASTICITY AND RECOVERY
An important distinction between pediatric and adult TBI is that the primary causes of pediatric injuries vary significantly by age group and can present in various ways (1,30,31). Various types of head injuries (e.g., blunt, penetrating, acceleration/deceleration) are closely related to the circumstantial mechanisms that caused them (32), yet great heterogeneity exists in the clinical presentation manifested by TBI, suggesting that similar heterogeneity exists in the underlying pathological features of the damaged brain. No two brain injuries are equal due to the complex interaction of inter-individual differences in the timing and circumstances in which the injury occurred, the severity, biomechanics, and nature of the injury itself, and intra-individual factors such as age, sex, and quality of the pre-injury environment (33-36). For this reason, it is essential that such factors are considered in the assessment of outcome and prediction of recovery following pediatric brain injury.

Developmental Factors
The central nervous system is inherently plastic in its capacity to respond to the environment and experience in a dynamic manner through modification of its neural circuitry (37). The phenomenological nature of neuroplasticity is linked to the development of the brain and function across the lifespan, and it is a beneficial property in the context of healthy development (17,38). In the context of early brain injury, however, the influence of plasticity on brain development may be detrimental, as the interrupted developmental processes can be altered permanently or cease entirely (39,40).

Age at Injury
A major difference between a mature and a developing brain is the presumed capacity for heightened plasticity in the latter (40,41). Historically, research on the effects of age at the time of injury on outcome suggested that worse cognitive outcomes are associated with a younger age when injury occurred, and injuries sustained before age eight were believed to have the worst prognosis (33,42,43). More recently, a complex, non-linear relationship between age at injury and cognitive outcome has been demonstrated in the literature (17,44,45). Developmental research suggests that there are critical periods for the acquisition of specific skill sets (46).
A critical period in development designates a maximally sensitive developmental phase of enhanced experience-expectant plasticity when the brain is heavily influenced by environmental demands (37). During critical periods, the brain undergoes significant structural and functional growth as it learns, adapts, and makes connections with other parts of the brain. This heightened sensitivity, however, simultaneously increases the brain's vulnerability to disruption in the environment, therefore heightening its susceptibility to insult. Brain damage that occurs during a critical period can have a more profound effect on skill acquisition relative to the effects of injuries that occur during non-critical periods (42,47), as predetermined developmental processes are derailed, natural resources are depleted, and the typical developmental course that guides recovery is no longer available (48,49). The timing of a brain injury is therefore important to consider, because injuries that occur during critical brain development periods tend to result in more extensive damage to whatever region is currently undergoing accelerated maturation, and subsequently, greater deficits can occur in whatever functions are to be localized in that region (50). An in-depth examination of the impact of age-at-injury on brain maturation is needed, however, as "critical periods" for injury likely depend on the outcome measure used (e.g., language function vs. executive functions). Longitudinal studies suggest that the cognitive skills of children with early brain insults (before age 10) tend to develop more slowly than those of non-brain-injured children (17,50), due to a heightened vulnerability in skill acquisition during critical periods (51,52).

Time Since Injury
Time since injury is also an important factor to consider, as atypical timing of neural development may result in progressive functional deterioration that occurs due to a child's inability to effectively interact with the environment (42). Alternatively, the child may grow into the deficit in later years, where certain functional deficits do not emerge until the child reaches the appropriate stage of development for some skill to develop (17,53-55). For example, higher-order executive functions develop later in adolescence, and deficits may not be evident until the child reaches the appropriate age at which those functions typically emerge. Over time, dysfunction can become more apparent as the child grows into the deficit and fails to acquire the same skills his or her peers are developing. The result is arrested functional maturation over time, in which deficits become more apparent as the child ages (17,53,56).

Severity of the Injury
More severe injuries are associated with worse physical and cognitive performance in the subacute (≤7 days post-injury), acute (≤90 days post-injury), and chronic (>90 days post-injury) periods of recovery from pediatric TBI (57). TBI severity is typically categorized as mild, moderate, or severe based on the patient's initial clinical presentation, and the primary measures used to classify injury severity in children include the Glasgow Coma Scale [GCS; (58)] and the Pediatric Coma Scale (59), where scores of 13-15, 9-12, and 3-8 indicate mild, moderate, and severe injuries, respectively. Complicated mild TBI is sometimes used to designate the severity of an injury when GCS is between 13 and 15 but abnormal day-of-injury imaging results are present [e.g., skull fracture, intracranial lesion; (60,61)].
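As a minimal illustration of the severity bands just described (GCS 13-15 mild, 9-12 moderate, 3-8 severe, with "complicated mild" reserved for GCS 13-15 accompanied by abnormal day-of-injury imaging), the rule could be written as a small sketch; the function and argument names below are illustrative and are not taken from the reviewed studies:

```python
def classify_tbi_severity(gcs: int, abnormal_imaging: bool = False) -> str:
    """Map a Glasgow Coma Scale score to the severity bands used in this review.

    `abnormal_imaging` flags abnormal day-of-injury findings (e.g., skull
    fracture, intracranial lesion); both names are hypothetical placeholders.
    """
    if not 3 <= gcs <= 15:
        raise ValueError("GCS scores range from 3 to 15")
    if gcs >= 13:
        # GCS 13-15 with abnormal day-of-injury imaging is sometimes
        # designated "complicated mild" TBI.
        return "complicated mild" if abnormal_imaging else "mild"
    if gcs >= 9:
        return "moderate"
    return "severe"


# Example: a child presenting with GCS 14 and a skull fracture on CT
print(classify_tbi_severity(14, abnormal_imaging=True))  # -> "complicated mild"
```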
Other common severity indices include the duration of posttraumatic amnesia, the duration of loss of consciousness or coma, and the length of hospital stay. In a large study of 2,940 children who sought medical treatment for TBI, 84.5% suffered from mild injuries, 13.2% suffered from moderate injuries, and 2.3% suffered from severe injuries (62). The results of a meta-analysis (63) demonstrated no statistically significant effects on overall neurocognitive outcome after mild TBI (mTBI), whereas children with complicated mTBI and msTBI have been shown to recover at slower rates and have poor cognitive outcomes up to several years post-injury (44,60,64).

Lesion Characteristics
The heterogeneity of various biomechanical, pathological, and physiological mechanisms of primary and secondary injuries is reflected by structural imaging abnormalities observed in chronic TBI [for a review, see (19,65)]. Size, laterality, location, and extent of the lesion can all impact post-injury outcome. Poorer outcomes after pediatric TBI have consistently been seen in patients with larger, more diffuse, and bilateral injuries (66). Smaller, unilateral lesions tend to demonstrate the greatest plasticity, resulting in relatively good recovery (67,68), and it is suggested that such focal damage forces the interhemispheric transfer of function, resulting in minimal impact on functional abilities (69). Other studies investigating laterality have reported that right vs. left hemisphere lesions to the frontal lobe lead to somewhat different impairment profiles across cognitive domains (53,56,70,71). In contrast to adult TBI, the relationship between lesion location and outcome following pediatric TBI is not consistently documented (72). Intact (i.e., undamaged) frontal and parietal cortices are important for functional reorganization or the recruitment of additional cerebral regions to compensate for the functions localized in the damaged region (73); however, functional outcome has not been linked to functional reorganization, per se (19,74,75). It is possible that incomplete functional localization of the immature brain during early childhood leads to a less severe impact of lesion location or laterality on recovery. Rather, the extent of damaged tissue is the strongest predictor of outcome, suggesting that the integrity of the whole brain network may be necessary for efficient functioning in children (17).

Mechanism of Injury
Unintentional blunt force trauma to the head involves contact forces that produce focal lacerations, fractures, and contusions to the brain, scalp, and skull, and often results in epidural hemorrhages. Acceleration-deceleration injuries involve inertial forces, which cause excessive movement of the brain, yielding more diffuse injuries such as concussion, subdural hematoma, and diffuse vascular damage (76,77). Anthropometric development and age-specific biomechanical properties of a growing child's skull, face, brain, and neck muscles make children more or less susceptible to specific injuries that are less often or not seen in adults [see Figure 2; (78)]. Children have a greater head-to-body ratio, which increases the probability that damage will occur to the head in the event of trauma. Furthermore, a greater head-to-body ratio contributes to the relative weight of the head compared to the rest of the body, which results in different dynamics of head acceleration between children and adults.
Relative to adults, children also have a greater relative proportion of cerebral blood volume and water content due to the degree of myelination that has occurred, making the brain softer and more susceptible to acceleration-deceleration injury. Because a child's brain tissue, skull, and neck musculature are not fully developed, children are more susceptible to posttraumatic edema, ischemic insult, and DAI when exposed to the inertial forces and direct blows associated with falls, sports-related injuries, and motor vehicle accidents, which are the most common mechanisms of injury in children (22,35).

Sex
Biological sex is likely to affect neurodevelopment after early brain injury, and MRI research in typical human development demonstrated differential rates of cortical development in males and females, where gray matter density peaks around age 10 in females but not until age 12 in males (7,79,80). There is also evidence for greater dendritic volume in the left hemisphere (81) and increased bilateral cortical activation (82) in the young female brain. The animal literature suggests that sex-related differences in the development of gray and white matter are influenced by endogenous hormones, specifically the increased progesterone levels in females. Research using rodent TBI models reports that females recover better than males [for a review, see (83)], and many studies have provided evidence for the neuroprotective effects of progesterone against secondary mechanisms of brain injury, although results have been somewhat conflicting. Increased levels of progesterone have been shown to reduce brain edema, increase neuronal survival, and impact the expression of genes involved in the regulation of inflammatory responses and apoptosis in brain-injured rats (83,84). Progesterone has also been implicated as a promoter of axon regeneration and remyelination (85,86), and this is supported by research demonstrating sex differences in neuroplasticity following early brain injury to rats (87). In human research, biological sex plays an important role in psychosocial development, where females have an increased risk for developing emotional and psychiatric disorders, and males have an increased risk for social and behavioral problems within the first 6-12 months following childhood TBI (88-90). In a longitudinal study of quality of life following mild-to-severe TBI sustained during childhood (91), female sex significantly predicted poorer outcomes across the majority of health-related quality of life measures as well as overall satisfaction with perceived quality of life. This conflicts with the findings above regarding hormones, suggesting that this issue may require a more nuanced approach. In particular, the impact of puberty on outcome is relatively understudied even though it plays a major role in brain development. These findings underscore the importance of considering demographic characteristics, including intersections between these variables, when assessing outcome and predicting recovery from early brain injury.

Socioeconomic Status
There is substantial evidence supporting the beneficial effects of an enriched environment on brain development (92-95), and more recent research has further demonstrated the effect on outcome following childhood brain injury. In a longitudinal study of functional outcome for children with TBI vs. orthopedic injury, home environment was found to moderate the effects of TBI on 7-year outcome (96).
In particular, the results of this study demonstrated significantly poorer outcomes in those with TBI vs. orthopedic injury when the home environment had low enrichment (e.g., less access to educational resources and familial support), whereas both patient groups from more facilitative and enriching home environments recovered similarly well. Similar findings have been shown in other longitudinal studies, where children from higher-functioning families with greater resources and more enriching home environments have better psychosocial, behavioral, and overall functional outcomes years after suffering from early brain injury (88,97,98). Socioeconomic status (SES) is a major determinant of how enriching one's environment is, and recent research provides evidence for the direct impact of SES on typical brain development (99,100). Significant associations between low SES and long-term psychosocial and behavioral outcomes after pediatric TBI have also been demonstrated in the literature (101,102).

FIGURE 2 | Anthropometric differences between children and adults. The image on the left demonstrates the decreasing ratio of head-to-body size from birth to adulthood, which increases the likelihood of traumatic brain injury in children relative to adults. The image in the middle reflects the increasing ratio between facial and cranial size between ages 2, 6, and 25, demonstrating the greater risk for skull trauma in children relative to adults. The image on the right reflects the difference in T2-weighted contrast hyperintensity (indicated by white arrows) due to less myelination and a greater concentration of water in a 2-year-old brain relative to a fully developed 25-year-old brain. The immaturity of the white matter in the newborn makes the brain "softer" and more prone to acceleration-deceleration injury. Adapted from Pinto et al. (78).

LONGITUDINAL NEUROIMAGING OF PEDIATRIC TRAUMATIC BRAIN INJURY
The PubMed database was searched for English-language articles focusing on longitudinal MRI studies in young patients with a history of TBI using the following search criteria: [In Title: (pediatric OR adolescent OR child OR children OR youth)] AND [In Title: (traumatic brain injury OR brain injury OR TBI OR concussion)] AND [All Fields: (longitudinal OR chronic OR long-term OR outcome)] AND [All Fields: (MRI OR neuroimaging OR imaging)]. No time period restrictions were applied, and the latest search was undertaken on July 21, 2019. Additional searches in the references of previously published studies were conducted in an attempt to identify further articles. We excluded published study protocols, conference abstracts, articles not available in English, and experiments involving nonhuman subjects. The title and abstract of the retrieved articles were examined against all inclusion criteria, and the full-text article was retrieved if all criteria were met. The assessment of eligibility was performed by two investigators (HML and EAW) independently with the requirement of consensus. In case of disagreement, a third expert was consulted (ELD or KC). Four longitudinal studies of sport-related concussion (SRC) in children or adolescents were excluded due to the failure to provide details or adequate definitions of concussion.
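A search strategy of this kind can also be scripted; the sketch below issues a single Boolean query to NCBI's public E-utilities esearch endpoint and is illustrative only: the field tags, retmax value, and query phrasing are assumptions that approximate, rather than reproduce, the authors' exact procedure.

```python
import requests

# Approximation of the Boolean search described above; field tags are illustrative.
query = (
    '(pediatric[Title] OR adolescent[Title] OR child[Title] OR children[Title] OR youth[Title]) '
    'AND ("traumatic brain injury"[Title] OR "brain injury"[Title] OR TBI[Title] OR concussion[Title]) '
    'AND (longitudinal OR chronic OR long-term OR outcome) '
    'AND (MRI OR neuroimaging OR imaging)'
)

# Query PubMed via the E-utilities esearch endpoint; retmax caps returned PMIDs.
resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "pubmed", "term": query, "retmax": 200, "retmode": "json"},
    timeout=30,
)
resp.raise_for_status()
pmids = resp.json()["esearchresult"]["idlist"]
print(f"{len(pmids)} candidate records retrieved for title/abstract screening")
```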
In total, we identified 19 research articles that met the following four inclusion criteria: (a) the studies involved children or adolescents who sustained a TBI prior to the age of 19; (b) MRI-based methods were employed to measure brain structure and/or function; (c) changes in brain structure and/or function were assessed over at least two separate points in time (i.e., longitudinal studies). One additional study (103) was published after the initial search date and was considered for inclusion at that time, bringing the total included studies in this review to 20. Of these studies, 6 collected sMRI, 12 collected dMRI, 4 collected MRS, and 2 collected fMRI data for their longitudinal analyses. In the following sections, we briefly describe the longitudinal studies from the current literature that evaluate change in brain structure and/or function over time using sMRI, dMRI, MRS, and/or fMRI. We will begin with a summary of the characteristics of the included studies and provide a basic description of the methods used for analysis and the outcome measures of interest. We will then summarize the overall findings of the studies for each respective imaging modality. A summary of the imaging modalities, their clinical utility in TBI populations, common outcome measures used, and the included studies that utilized them can be found in Table 1. The following data were extracted from each article, and details are summarized in Tables 2-5: patient and control group demographic characteristics (age and sex distribution), age and developmental stage at injury, post-injury time interval, injury severity, MR-based outcome measure(s) assessed, and analysis method(s) used. We extracted additional information regarding the racial/ethnic distribution and SES of patient and control groups, mechanism of injury, primary injuries (determined by day-of-injury CT scan), MR image acquisition details (including field strength, scanner model), and functional/behavioral domains assessed, and these details are summarized in Supplementary Tables S1-S4.

Structural Magnetic Resonance Imaging
There is a relatively limited number of longitudinal studies using structural, T1-weighted MRI to evaluate outcome following pediatric TBI. Six published studies were found (see Table 2), and these studies evaluated samples who were injured between early childhood and adolescence (ages 5-18). All samples were first evaluated within the subacute or acute post-injury periods, and follow-up time points occurred between 4 and 36 months post-injury. Three analysis methods were used across the six studies: four studies utilized semi- or fully automatic region-of-interest (ROI) approaches to measure longitudinal changes in gray matter density (105-108), one study measured volumetric change longitudinally using tensor-based morphometry [TBM; (104)], and one study used surface-based morphometry (SBM) to measure changes in cortical thickness (109).

Methodology and Outcome Measurement
ROI approaches are typically used to address a specific anatomical hypothesis and involve manual or automatic segmentation of the specific region(s) to be compared between subjects.
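To make the ROI approach concrete, the following sketch extracts the mean value of a morphometric map within one atlas-defined region. The file names and label value are hypothetical, the images are assumed to be co-registered, and this is only one of several ways such measures are computed in practice.

```python
import nibabel as nib
import numpy as np

# Hypothetical inputs: a per-subject gray matter density (or volume) map and an
# atlas parcellation, both assumed to be registered to the same space.
density_img = nib.load("subject01_gm_density.nii.gz")
atlas_img = nib.load("atlas_labels.nii.gz")

density = density_img.get_fdata()
labels = atlas_img.get_fdata()

ROI_LABEL = 17  # illustrative label value for one parcel of interest

roi_mask = labels == ROI_LABEL
mean_density = float(np.mean(density[roi_mask]))
print(f"Mean value within ROI {ROI_LABEL}: {mean_density:.4f}")

# In a longitudinal design, the same extraction would be repeated at each time
# point, and per-ROI means compared across groups, with correction for multiple
# comparisons whenever more than one ROI is tested.
```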
TBM (123) is an advanced whole-brain approach that involves the non-linear registration of individual subject data to a template brain space using deformation tensor fields, in which differences in the anatomical structure of each individual brain are preserved and quantified from the properties of the deformation fields via Jacobian determinants. Concerns associated with multiple comparisons exist for both whole-brain and ROI approaches (when more than one ROI is analyzed), and it is necessary to correct for this by including the appropriate statistical correction procedures. SBM is used for cortical thickness analyses and involves the initial nonlinear alignment of cortical curvature across subjects, followed by spatial normalization in which data from each subject are registered to template space. Cortical thickness comparisons can then be made between subjects at homologous locations along the cortex.

Summary of Longitudinal Findings
Overall, the results of morphometric studies evaluating longitudinal change after pediatric TBI demonstrated widespread volumetric differences and cortical thinning over time, when compared to the rates of change in typically developing or orthopedically injured children of the same age. Volumetric differences, indicative of greater atrophy or cortical thinning between acute and chronic periods after pediatric TBI, were consistently shown in the corpus callosum (104, 105, 108), superior and middle frontal gyri, middle temporal gyri, postcentral gyri, and lateral or middle occipital gyri (104, 106). One study, however, found no differences in morphometry between TBI and control groups across time (107). Another study divided patients with TBI into subgroups with slow vs. normal inter-hemispheric transfer time [IHTT; see (124,125)]. Over the first year post-injury, relative volume increases were seen in several gray matter regions in the TBI-Slow group, including the superior frontal gyrus, cingulate cortex, superior parietal lobe, parietal operculum, precuneus, cuneus, and inferior occipital gyrus. Decreased volume in several white matter regions was also seen in the TBI-Slow group, including the internal capsule (extending into the right thalamic region) and superior corona radiata. Volumetric changes in the TBI-Normal group, however, were similar to those seen in the healthy control group over the same period of time (see Figure 4). After controlling for SES, supplementary subgroup analyses revealed further volumetric decreases in the anterior corona radiata, posterior thalamic radiation, superior temporal gyrus, and precentral gyrus, and further volumetric increases in the inferior frontal and supramarginal gyri of the TBI-Slow group, relative to the healthy controls. In light of these findings, the authors suggest that trajectories of outcome might be divergent, where a subset of patients experienced relatively good recovery, characterized by developmentally expected decreases in gray matter volume and IHTT rates that are comparable to typically developing children of the same age, whereas the other subset of patients experienced relatively poor recovery, marked by decreased white matter volume, which is reflected in slow IHTT. Although no evidence was found in support of a specific moderator for good or poor recovery trajectories, these findings clearly support a relationship between structural change and functional outcome following pediatric TBI.

Diffusion-Weighted Magnetic Resonance Imaging
Twelve studies assessing longitudinal changes in white matter after pediatric TBI were found (see Table 3), and all of them utilized diffusion tensor imaging (DTI) to do so.
All studies evaluated children who were injured between the ages of 5 and 18; two of these studies focused on injuries that occurred during early childhood through pre-adolescence (112,113), while the remainder focused on children who were injured later. Apart from two studies, where children were evaluated before and after the implementation of an intervention one or more years post-injury (103,117), all studies enrolled children during the acute or subacute phase of injury and evaluated them again between 3 and 24 months post-injury. Several analytical approaches were used to assess longitudinal change in white matter integrity across these studies: two studies used ROI analysis (106,114), and whole-brain approaches were used by four studies, including tract-based spatial statistics [TBSS; (112,113,116)] or fixel-based analysis [FBA; (103)], which was supplemented by ROI analysis and probabilistic tractography. The remaining seven studies used deterministic tractography (107,108,110,111,115,117), and one of these studies (117) also implemented graph theoretical analysis to investigate longitudinal differences in structural connectivity between children with TBI and healthy controls following 10 weeks of cognitive training.

Methodology and Outcome Measurement
The most basic of the approaches used is ROI analysis, which involves the quantification of diffusion metrics within a specific area by extracting the mean parameter of interest from the voxels that fall within that region. As in volumetric analyses, ROI analyses of diffusion data are often used to address an a priori hypothesis but can also be used in a whole-brain approach. ROI analysis can be used to measure diffusion properties of both gray and white matter, and it can be sensitive to small changes, particularly if analyses are focused on a specific region that is prone to pathology. TBSS (126) is a whole-brain approach that involves the initial registration of subject data to template space, but this is followed by an additional step where averaged FA values from the major white matter tracts of all subjects are projected onto an alignment-invariant tract representation, called the FA skeleton. FBA is a whole-brain approach used to evaluate the organization of multiple fiber populations, or fixels, within a single voxel (127). Fixel-based measures, such as fiber cross-section (FC), a measurement of fixel diameter, can identify tracts that are affected by regions with crossing fibers, overcoming this inherent limitation of the diffusion tensor model (128). A more recent approach toward analyzing white matter microstructure is tractography, which is used to reconstruct individual white matter pathways from tensor field data embedded within the underlying voxels. Measures of anisotropy and diffusivity can be sampled from various regions or across the entire reconstructed tract. Tractography has advantages over the other approaches described, largely because it does not necessarily rely on the registration of subject data to template space. Rather, subject data can be analyzed individually, and this allows for the assessment of inter-individual differences in white matter pathology that cannot be obtained through whole-brain, voxel-wise approaches toward diffusion data analysis. Finally, tractography can also be used in combination with cortical parcellation maps obtained through morphometric analyses to create structural connectivity maps using graph theory (129).
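For reference, the tensor-derived scalars reported throughout the remainder of this section are conventionally computed from the eigenvalues $\lambda_1 \geq \lambda_2 \geq \lambda_3$ of the fitted diffusion tensor. These are the standard textbook definitions rather than formulas reproduced from the reviewed studies:

$$\mathrm{MD} = \mathrm{ADC} = \frac{\lambda_1 + \lambda_2 + \lambda_3}{3}, \qquad \mathrm{AD} = \lambda_1, \qquad \mathrm{RD} = \frac{\lambda_2 + \lambda_3}{2},$$

$$\mathrm{FA} = \sqrt{\frac{3}{2}}\,\sqrt{\frac{(\lambda_1 - \bar{\lambda})^2 + (\lambda_2 - \bar{\lambda})^2 + (\lambda_3 - \bar{\lambda})^2}{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}}, \qquad \text{with } \bar{\lambda} = \mathrm{MD}.$$

FA is dimensionless and bounded between 0 (isotropic diffusion) and 1 (diffusion along a single axis), whereas MD, AD, and RD carry units of mm^2/s.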
Whole-brain structural connectivity can be modeled as a complex structural network and depicted as graphs composed of nodes and edges, where the nodes represent anatomical regions or voxels, and the edges are reflected in white matter fiber bundles representing the structural connectivity between nodes. Graph theory allows for the analysis of complex networks in which multimodal neuroimaging can be used to characterize the topological properties of brain connectivity through commonly used measures of local and global network connectivity, which are described in Table 1 [for a review, see (130)]. Common diffusion metrics used in tensor-based approaches (i.e., ROI analysis, TBSS, tractography) include fractional anisotropy (FA), apparent diffusion coefficient (ADC, also called mean diffusivity, MD), and axial and radial diffusivity (AD and RD, respectively). FA is highly sensitive to the presence of disorganized white matter; however, it cannot identify specific changes in shape or distribution of the diffusion tensor ellipsoid and should therefore not be considered a biomarker of white matter integrity when interpreted alone (131). ADC/MD reflect the degree of overall diffusion magnitude, and changes in ADC/MD reflect variations in the ratio of intra- to extracellular water concentrations, whereas AD and RD more precisely describe the directional magnitude of diffusion. Variations in ADC/MD in white matter are suggestive of changes in fiber density, axonal diameter, myelination, and neuronal or glial loss, whereas decreased ADC/MD in the gray matter has been attributed to cytotoxic edema. Increases in FA that result from decreases in both AD and RD are suggestive of axonal degeneration (132,133), whereas decreases in FA resulting from increased RD without change in AD suggest demyelination or Wallerian degeneration (134,135). When considering brain maturation or recovery, increases in AD accompanied by decreases in RD are suggestive of axonal restoration that is preceded by remyelination, and such processes are often shown to occur with a gradual increase in FA (136,137), though the specificity of these metrics, and their relation to specific forms of pathology, requires additional investigation.

Summary of Longitudinal Findings
Overall, mixed results were seen in terms of longitudinal changes in white matter integrity following pediatric TBI. Mayer et al. (106) conducted a vertex-wise ROI analysis to investigate microstructural changes in gray matter regions. Despite long-term changes in gray matter density (see the summary of sMRI findings), no changes in FA from 3 weeks to 4 months post-injury were seen in the thalamus or hippocampi of the mTBI group, relative to the healthy controls. The authors suggest that these results, albeit inconclusive, might indicate differential time courses of recovery for FA and gray matter density following pediatric mTBI. In an earlier study conducted on the same sample, Mayer et al. (114) used ROI analysis to investigate diffusion abnormalities in white matter following pediatric mTBI and found significant FA increases between 3 weeks and 4 months post-injury for the mTBI patients, relative to healthy controls, in the genu, body, and splenium of the corpus callosum, the right anterior thalamic radiation, and bilaterally in the superior corona radiata, internal capsules, cingulum bundles, and cerebral peduncles.
Ewing-Cobbs et al. (112) used TBSS to evaluate change in FA, AD, and RD between 3 and 24 months post-injury in children who were younger (∼8 years), middle-aged (∼10 years), or older (∼13.5 years) at the time of injury. In the sample of children who completed the scans at both time points, a significant increase in FA was seen over time in the left corticospinal tract, where FA was consistently lower in children who sustained a TBI at a younger age, but FA increased at a lesser rate over time in the children who sustained a TBI at an older age. These results suggest that children who are injured at an earlier age recover more quickly than those injured at later ages. Genc et al. (113) used TBSS to address the impact of injury severity on changes in FA and diffusivity over the first 2 years post-injury. Their results demonstrate that injury severity predicts increases in MD of the genu of the corpus callosum, right superior longitudinal fasciculus, retrolenticular internal capsule, and anterior and posterior corona radiata over time. Injury severity also predicted increased AD in the genu of the corpus callosum and left anterior corona radiata, as well as increased RD in the right posterior corona radiata, although no associations were seen between injury severity and changes in FA of any pathway. In contrast to the results of Ewing-Cobbs et al. (112), longitudinal changes in diffusivity were not moderated by age at injury; however, a positive relationship between age at evaluation and rate of increase in MD, AD, and RD over time was seen. Wilde et al. (116) used TBSS to evaluate changes in FA and ADC of white matter and subcortical structures in pediatric msTBI vs. patients with orthopedic injury over a period of 3 to 18 months post-injury, and their results suggest different rates of change over time across several structures. In the msTBI group, decreased FA occurred along with increased ADC in the anterior temporal white matter, genu of the corpus callosum, and parietal white matter, which suggests continued degeneration in these regions. Decreases in both FA and ADC were seen over time in the frontal and parietal white matter, splenium of the corpus callosum, brainstem, and cerebellum of those with msTBI, which may be attributed to ongoing changes that result from secondary brain injury mechanisms. These findings are compared to those seen in the orthopedic injury group over the same period of time, where general increases in FA and decreases in ADC were seen across the majority of regions, presumably reflecting developmental myelination, which is typical of healthy individuals within this age group. Using tractography, Van Beek et al. (115) found decreased FA and increased RD in the genu and splenium of the corpus callosum of children with mTBI, relative to controls, over the first 8 months post-injury. These changes occurred along with relatively poorer verbal working memory abilities in the mTBI group, which suggests a deficit in the development of these skills over time, possibly due to the slow maturation of commissural white matter fibers relative to healthy children of the same age. Similar results were seen in a study by Wu et al. (107), where decreases in FA and increases in ADC were seen over the first 3 months post-injury in the splenium and total corpus callosum of adolescents with sports-related concussion compared to those with orthopedic injury and healthy adolescents.
A similar examination in a group of children or adolescents with complicated mild-to-severe TBI by Wu et al. (108) revealed increases in ADC of the splenium of the corpus callosum in those with TBI over a period between 3 and 18 months post-injury; however, FA increased at a similar rate across both TBI and orthopedic injury groups during this same period. These findings suggest that while similar rates of maturation occurred over time in the corpus callosum for all participants, some level of atrophy also occurred in this region for those with pediatric TBI relative to controls (see Figure 5). Although no significant differences were observed in processing speed abilities of these two groups, a negative relationship between ADC in the splenium and processing speed abilities was evident in the TBI group. The relationship between white matter integrity and processing speed over time is supported by the results of subgroup analyses based on IHTT differences in adolescents with TBI [see (124,125)]. For example, Dennis et al. (111) found a decline in white matter integrity, marked by increased MD, RD, and AD, in the anterior midbody, posterior midbody, isthmus, and splenium of the corpus callosum, fornix, left cingulum, left arcuate, and bilateral anterior thalamic radiations, inferior fronto-occipital fasciculi, and inferior longitudinal fasciculi in the TBI-Slow group over the first year post-injury; these changes were not seen in the TBI-Normal or healthy control groups (see Figure 6). In a different experiment using a subset of the same sample, Dennis et al. (110) used DTI tractography along with MRS to demonstrate that decreases in FA, due to increases in MD and RD, were only present in tracts of the TBI-Slow group. The TBI-Slow group also showed longitudinal abnormalities in metabolic levels of N-acetyl aspartate, which is indicative of poorer neuronal health (as discussed in the next section). Two studies used dMRI to evaluate changes in microstructural integrity in pediatric TBI that occur following a cognitive intervention. Verhelst et al. (103) used a whole-brain FBA approach to assess the effects of restorative cognitive training on white matter integrity in children and adolescents who sustained a TBI at least 12 months prior. Their results indicated no significant differences in FA, MD, or FC in any white matter tracts of interest in the patient group following 8 weeks of participation in an intervention designed to improve attention, working memory, and executive function. In terms of the relationship between structural and functional change following training, however, improvement on a task of verbal working memory was significantly associated with reduced MD in the left superior longitudinal fasciculus, and improvement on a task of visual processing speed was significantly associated with increased FA in a cluster of fixels in the right precentral gyrus. Based on their overall findings, the authors suggest that functional recovery may precede structural recovery, and longer periods of cognitive training may be necessary for underlying structural changes to occur. The results of a similar investigation of network changes in structural connectivity following 10 weeks of attention and executive function training do not fully support this idea, however.
(117) found that initially elevated small-worldness and normalized clustering coefficient were significantly reduced following training in children with TBI, such that small-worldness more closely approximated that of the healthy controls, and these structural changes occurred along with improved performance on measures of attention and executive function. Considering the training-induced reductions in normalized clustering coefficient that were also seen in the TBI group, the authors argue that the network response to the intervention was likely driven by small, local (rather than long-distance) changes in structural connectivity that occurred throughout the network. The resulting reduction in small-worldness suggests that a partial normalization of the balance between segregation and integration throughout the structural network, which is crucial for efficient communication between brain regions, may be triggered by cognitive training several years after pediatric TBI. While the results of these two intervention studies are not consistent in terms of the extent to which structural changes may occur following a short-term intervention, important clinical implications for the effectiveness of cognitive rehabilitation long after pediatric TBI are nonetheless demonstrated. The results of these studies shed light on the potential benefit of restorative cognitive training for improving long-term outcome and recovery. However, future research is required to determine whether such effects reliably extend beyond functional restoration and contribute to the reorganization of underlying brain structure following pediatric TBI. Magnetic Resonance Spectroscopy Four longitudinal studies (see Table 4) used MRS to evaluate changes in metabolic levels following pediatric TBI. Participants were enrolled during the post-acute phase following injury and follow-up visits took place at 4, 12, or 18 months after TBIs sustained during childhood or adolescence (age at injury ranging from 4 to 18 years). Several analysis methods exist for MRS; however, only two are used in the studies covered in the present review. Two of the four studies (119, 120) implemented multi-voxel (MV) single slice approaches through an automated spectra quantification method, whereas the other two studies utilized a whole-brain approach (110,118). Methodology and Outcome Measurement MV single slice approaches involve the simultaneous acquisition of multiple spectroscopic voxels arranged in a grid across a predetermined volume of interest (VOI), from which the spectrum of metabolites can be mapped. The MV single slice approach has the advantage of simultaneous assessment of multiple tissues or multiple lesions present in a specified VOI. Furthermore, this approach is capable of showing changes in the composition of metabolites across the included voxels, which allows for good predictive value in determining the margins surrounding a lesion. Due to difficulties that arise in the shimming procedure that is required for the acquisition of a robust metabolic spectrum, however, the precision of voxels is degraded, and partial volume errors commonly occur. Such disadvantages have led researchers to use other approaches toward analyzing MRS data, and whole-brain approaches have recently been implemented as an alternative. Whole-brain approaches use similar processes as those used in TBM for structural MR data.
The Metabolite Imaging and Data Analysis System [MIDAS; (138)] pipeline allows users to generate robust, spectrally fit data from Fourier transform reconstruction and automated spectral fitting. A water-reference MRS dataset is then used to calibrate the spectrally fit data before it is normalized and registered to a common template space. This procedure is capable of maintaining the accuracy of the acquired neurochemical concentrations across the entire brain and includes a quality assurance check to correct for CSF partial-volume signal loss, which gives the whole-brain approach an advantage over MV single slice methods. Key metabolites measured by MRS include N-acetyl aspartate (NAA), choline (Cho), creatine (Cr), and lactate. NAA is an amino acid produced by neuronal mitochondria that is believed to be an indicator of neuronal metabolism and integrity. In the developing brain, NAA is involved in myelin synthesis (139). In adults, NAA is involved in axonal repair, thus it is a good marker of axonal or neuronal integrity (140). Decreases in NAA are generally suggestive of neuronal death (141) and have been used as an indicator of disrupted myelin in damaged, developing brains (142). Cho levels are elevated postnatally, but decrease rapidly as the brain matures, and increased levels after birth are suggestive of inflammation, demyelination, or membrane synthesis/repair (140,142). Both Cr and lactate are markers of energy metabolism. Imbalances in Cr concentration have been seen in mTBI (143,144) and msTBI (145), although the directional nature is inconsistent in the literature. Furthermore, the causality of this imbalance has not been determined, though it has been suggested that changes in Cr concentrations are related to maintaining various equilibriums in the brain (146). Elevations in lactate, however, have been shown to indicate tissue damage due to ischemia, hypoxia, or inflammation (147). Increased Cho in white matter can result from cellular breakdown from shearing injuries or astrocytosis, suggesting DAI, and decreased NAA is typically the result of axonal damage. Further, an increased ratio of Cho/Cr commonly accompanies subarachnoid hemorrhage (148) and is related to poor long-term outcome after pediatric msTBI (149,150). Summary of Longitudinal Findings Overall, the results of the four MRS studies reviewed here consistently demonstrate subacute decreases in NAA or NAA/Cr with simultaneous increases in Cho or Cho/Cr across white matter, gray matter, and subcortical regions, which likely reflects the primary injury-induced metabolic cascade that reduces the integrity of affected neurons and axons, leading to inflammation or alterations in membrane metabolism. Likewise, studies consistently demonstrate that the initial metabolic changes generally return to normal levels during the chronic phase of recovery (between 6 and 12 months). Yeo et al. (120) evaluated recovery at 5, 13, and 24 weeks post-injury and found that this trajectory of metabolic recovery was only present in the subset of patients who followed up at 24 weeks, and no significant changes had yet occurred in those who followed up at earlier time points. Similar results were reported by Holshouser et al. (119), where acutely altered metabolic levels returned to normal after 1 year across all gray and white matter regions in patients who had sustained early complicated mTBI or moderate TBI.
In the severe TBI group, however, metabolic levels only returned to normal in cortical gray matter regions, whereas NAA/Cr and NAA/Cho ratios remained significantly lower in hemispheric white matter and, to a somewhat lesser extent, in subcortical regions. The authors suggest that these findings may be a reflection of neuroinflammation or an indication of recovery with cellular proliferation. Further investigation revealed that, when considered together, acute subcortical NAA/Cr ratios and length of hospital stay are accurate predictors of long-term neurological and neuropsychological recovery from early TBI (R² = 0.476) and can be used for the successful classification of TBI with 71.4% sensitivity and 96% specificity. In addition to supporting the overall recovery of metabolic activity over the first year using a whole-brain approach, Babikian et al. (118) further employed a subgroup analysis in their TBI sample based on IHTT differences [see (124,125)]. While the TBI-Normal group's metabolic levels of Cho returned to normal levels chronically, NAA levels in the corpus callosum were increased above those of the healthy control group, supporting a relationship between the recovery of metabolic activity in the commissural white matter and faster IHTT. In contrast, metabolic levels in the TBI-Slow group, who suffer from significantly slower IHTT, did not recover over a period of 3 to 18 months post-injury. Rather, the TBI-Slow group was shown to have lower levels of Cho globally and lower levels of NAA in the corpus callosum, relative to the TBI-Normal group. These findings suggest that the acute metabolic abnormalities, reflective of initial neuronal loss and impaired oligodendrocyte/myelin function, do not recover over time in those with functional impairments evidenced by slower IHTT; furthermore, the lower levels of Cho longitudinally in this group suggest a lack of ongoing membrane repair. These results are extended by Dennis et al. (110), who used multimodal MR imaging to investigate long-term metabolic differences in relation to white matter microstructure between the same IHTT subgroups of this pediatric TBI sample. Using MRS in combination with DTI tractography, the authors replicated the previous findings of Babikian et al. (118), but extended them by demonstrating that the specific white matter pathways with lower NAA in the TBI-Slow group also showed lower FA resulting from higher MD and/or RD, which is indicative of demyelination (134,135). Such findings highlight the utility of multi-modal investigations of recovery from early TBI. Functional Magnetic Resonance Imaging Currently, only two longitudinal fMRI studies have been published in the pediatric TBI literature (see Table 5). Both studies investigated adolescents who were injured around four months prior to the initial visit, and follow-up visits occurred around 8 or 16 months post-injury. Cazalis et al. (121) implemented a spatial working memory paradigm in their task-based fMRI analysis of 6 adolescents with complicated mild-to-severe injuries, whereas Mutch et al. (122) used CO2 stress testing and fMRI to assess whole-brain CVR in 6 adolescents with mild SRC relative to 24 healthy individuals between the ages of 13 and 25. Methodology and Outcome Measurement Functional MRI measures signal variations in the blood-oxygen-level-dependent (BOLD) hemodynamic response, which indicates active regions of the brain during task-based fMRI paradigms.
Basic task-based fMRI designs include block and event-related designs. Block designs involve the constant presentation of some stimulus or task during a specific block of time, followed by a period of rest; this pattern is repeated several times in an alternating fashion. Event-related designs are similar, but the stimulus or task occurs at random intervals and varies in the duration of presentation time. While block designs are more powerful, event-related designs are more flexible and more sensitive to the shape of the hemodynamic response; for this reason, event-related designs are more commonly used in the present literature. Impairments in the system involved in the control and regulation of cerebral blood flow have been noted in TBI (151), and any change in cerebral blood flow in response to a vasodilatory stimulus, or cerebrovascular responsiveness (CVR), can be used to measure the functional status of this system (152,153). Recent work by Mutch et al. (154) has led to the development of MR-based CO2 stress testing, in which CO2, a quantifiable and reliable vasoactive stimulus (155), is administered during BOLD fMRI, allowing for the standardized measurement of CVR longitudinally. Summary of Longitudinal Findings In line with the results of the majority of the studies using other imaging modalities that have been covered in this review, studies using fMRI have found general patterns of normalization in brain function over the course of time following pediatric TBI, although several factors appear to be involved in the degree and extent to which recovery occurs. Using task-based fMRI, Cazalis et al. (121) found that as patients with complicated mild-to-severe TBI progressed into the chronic phase of recovery, a partial normalization of acutely increased anterior cingulate cortex activity occurred along with a simultaneous increase in left sensorimotor cortex activity during participation in a difficult working memory task, which better represented the activation patterns seen in the healthy adolescents at the initial visit. These longitudinal changes in brain activity in the patients with TBI were accompanied by improvements in processing speed, although no improvements were seen in working memory ability. It is important to note that covarying for task performance is recommended if it differs between groups. Following an in-depth discussion of conflicting models in the literature for the role of the anterior cingulate cortex after pediatric msTBI, the authors suggest that, based on the results of their study, the anterior cingulate cortex may play a compensatory role in recovery from pediatric TBI, where it is recruited when the executive system is overloaded during participation in a difficult task or when structural disconnection has occurred. In their longitudinal investigation of whole-brain CVR following SRC, Mutch et al. (122) found predominantly increased patterns of CVR in the subacute phase; however, during the chronic phase, significantly decreased levels of CVR were seen in all adolescents with SRC, relative to healthy individuals. Interestingly, a stable pattern of decreased CVR was seen in two patients with chronic vestibulo-ocular and psychiatric symptoms, whereas slight improvements, which nevertheless remained persistently abnormal relative to healthy individuals, were seen in the remaining four patients, who either fully recovered or demonstrated relatively mild post-concussive symptomology.
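As a concrete illustration of the CVR measurement described above, the following is a minimal sketch, not drawn from Mutch et al. (122, 154), of how CVR is often summarized as the slope of the BOLD signal regressed against the end-tidal CO2 trace. All signal values and variable names here are hypothetical.

```python
import numpy as np

def cvr_slope(bold_percent_change, petco2_mmhg):
    """Estimate cerebrovascular responsiveness (CVR) for one voxel or region.

    CVR is summarized here as the ordinary least-squares slope of the BOLD
    signal (percent change from baseline) against the end-tidal CO2 trace
    (mmHg), i.e., percent BOLD change per mmHg CO2.
    """
    x = np.asarray(petco2_mmhg, dtype=float)
    y = np.asarray(bold_percent_change, dtype=float)
    slope, _intercept = np.polyfit(x, y, 1)  # first-order fit: y = slope * x + intercept
    return slope

# Hypothetical traces: BOLD rises roughly 0.2% per mmHg increase in end-tidal CO2.
petco2 = np.array([38.0, 40.0, 45.0, 50.0, 45.0, 40.0, 38.0])
bold = 0.2 * (petco2 - petco2[0]) + np.random.default_rng(0).normal(0.0, 0.05, petco2.size)
print(f"estimated CVR: {cvr_slope(bold, petco2):.2f} %BOLD per mmHg CO2")
```

In a whole-brain analysis this regression would be run voxel-wise, and the resulting CVR maps compared between patient and control groups across time points.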
The findings of this pilot study highlight the potential utility of CVR as a marker of long-term recovery from SRC, in which the stability of CVR patterns during the chronic phase may be indicative of the degree of recovery that has occurred. Methodological Considerations A major challenge in studying the changes in brain structure following injury is characterizing how damage to pathway microstructure evolves over time and interacts with ongoing developmental changes. Unmyelinated axons are highly vulnerable to injury, and the rapid, ongoing myelination of most pathways may confer particular vulnerability when injury is sustained during the early stages of development (112). Age at the time of the injury and the amount of time that has elapsed post-injury interact, complicating the changing trajectories of anisotropy and diffusion. The trajectory of change over time must be compared to what is expected at different developmental stages, thus longitudinal studies are necessary to emphasize the dynamic and disruptive interplay of early brain injury and the subsequent development of neuronal processes, such as axonal thinning and increased myelination (19). Due to the initial increase and subsequent decrease in gray matter volume and the steady increase in white matter maturation that occurs during typical brain development across childhood and adolescence, the interpretation of longitudinal structural and functional changes after pediatric TBI is inherently more complex than that of recovery from adult TBI (156,157). Care must be taken to ensure that appropriate factors, such as age at injury, age at enrollment, sex, time-since injury, and scan interval are considered in the longitudinal analysis of structural brain changes; differences in intracranial volume (ICV) are also necessary to control for when assessing morphometric changes. While all sMRI studies included ICV as a covariate, none of the 20 studies presently reviewed controlled for the effects of time-since-injury in their analyses. The effects of age at the time of injury were controlled for in one study (112), the effects of age at the time of enrollment were controlled for in six studies (103,104,110,111,113,119), the effects of sex were controlled for in four studies (104,110,111,113), and the effects of scan interval were controlled for in two studies (104,111). In addition to the necessity of controlling for the factors specified above, it is necessary that data is collected regarding other factors known to influence recovery and quantitative neuroimaging, per se; in particular, detailed documentation of SES, injury severity classification, mechanism of injury, and lesion characteristics for primary injuries found on initial neuroimaging should be obtained and reported when publishing pediatric TBI research. Although racial or ethnic background was not discussed as a factor influencing outcome, epidemiological studies suggest that disparities exist in the prevalence, severity, and mechanism of injuries sustained by children from different racial or ethnic groups. According to these studies, African American, Hispanic, and Native American children are more likely to be hit by motorized vehicle as a pedestrian or cyclist, experience msTBI, and have higher rates of mortality than Caucasian children, regardless of SES (158)(159)(160). 
For reasons such as these, it is important to report the racial or ethnic distribution of pediatric TBI samples in research (refer to the Supplementary Material for details regarding the reporting of such information in the included studies). While eleven studies included a measure of SES [e.g., parental education or SES composite indices; (104-109, 112-114, 116, 118)], two of these studies did not include SES results in their sample description (104,107); however, both studies reported no differences between SES of their TBI and control samples. The distribution of injury mechanisms was reported for the pediatric TBI sample and orthopedic injury samples in all but four studies (117,118,120,121), and specific information regarding the abnormal results found on day-of-injury neuroimaging was reported by all but eight studies (107-109, 115, 117-119). Additionally, one study reported complications seen on susceptibility weighted imaging obtained at the initial evaluation, which occurred at least 12 months post-injury (103). While all studies provided the criteria used to classify injury severity, six studies did not comprehensively report the results of the measures (i.e., descriptive statistics) used to determine the injury severity in their pediatric TBI or SRC samples (103,106,107,114,115,122). Finally, of the twenty longitudinal studies presently reviewed, only five provided information regarding the racial or ethnic distribution of their samples (108,109,112,116,117). Several other methodological considerations must be addressed among the studies included in the present review. In a field that often publishes findings from studies with small sample sizes, it is important that sample characteristics are reported in adequate detail so that meta-analytic studies can be performed, and meaningful results can be derived from the published data. Detailed descriptive statistics for all demographic characteristics across all samples, and for the injury characteristics of the injured samples, must be provided; this is especially true of longitudinal studies in children, in which attrition often leads to non-random differences in sample characteristics between initial and follow-up evaluations and samples continue to develop over time (known as "attrition bias"). For example, SES (including both educational and occupational attainment, level of income, and social class) has been cited as a contributing factor for continued participation in long-term studies generally (161,162) and in studies of pediatric TBI in particular (163). In studies of children, complex health, motivational, and lifestyle factors for both parent and child may also affect continued participation, and these factors may or may not change over long follow-up periods (162). Estimates of attrition are variably reported, but non-imaging studies of pediatric TBI have reported attrition estimates that range from 20% to over 60% (164)(165)(166). Small sample sizes, the failure to report sufficient descriptive statistics, and attrition rates are sources of potential bias that may threaten internal and external validity, and necessary steps to avoid them must be undertaken during study design and data collection in future longitudinal research. Additionally, it is recommended that effect sizes and confidence intervals are reported along with p-values, as statistically significant differences are often misleading when reported alone (167), especially in underpowered studies, such as those with small sample sizes.
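To make the recommendation about effect sizes concrete, here is a minimal, hypothetical sketch (not taken from any of the reviewed studies) of reporting a standardized effect size and an approximate confidence interval alongside a p-value for a simple two-group comparison; the FA values below are invented for illustration.

```python
import numpy as np
from scipy import stats

def cohens_d(group1, group2):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
    n1, n2 = g1.size, g2.size
    pooled_sd = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2))
    return (g1.mean() - g2.mean()) / pooled_sd

# Hypothetical FA values for a small TBI group and a control group.
tbi = np.array([0.41, 0.39, 0.44, 0.40, 0.38, 0.42])
ctl = np.array([0.46, 0.44, 0.47, 0.43, 0.45, 0.48])

t, p = stats.ttest_ind(tbi, ctl)
d = cohens_d(tbi, ctl)
# Approximate standard error of d (large-sample normal approximation).
se_d = np.sqrt((tbi.size + ctl.size) / (tbi.size * ctl.size) + d**2 / (2 * (tbi.size + ctl.size)))
ci_low, ci_high = d - 1.96 * se_d, d + 1.96 * se_d
print(f"t = {t:.2f}, p = {p:.4f}, d = {d:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```

Reporting the interval makes clear how imprecisely the effect is estimated in a small sample, even when the p-value falls below 0.05.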
Of the twenty longitudinal studies reviewed here, only eight provided detailed sample information, including sample size, sex distribution, and age at follow-up visits (104, 109-111, 114-116, 121). While one study provided great detail on the sample characteristics of their longitudinal sample in their Supplementary Material, age at evaluation was not provided at the initial or follow-up visits (112). While all studies provided an approximate time interval between injury and MRI for the initial and follow-up visits, five studies did not specify details regarding average time-since-injury intervals for their pediatric TBI samples at the initial and/or follow-up visits (103-105, 111, 117). Finally, half of the studies reviewed here included some measure of effect size with their results (103, 104, 106-108, 112-114, 117, 119). It is important to note that, while these methodological considerations are meant to address areas that could be improved in future research, the studies included in this review are the first and only to address longitudinal outcome after pediatric TBI from a neuroimaging perspective and must be applauded for doing so. Furthermore, all of the studies reviewed here that included clinical assessments of neuropsychological or functional outcome included one or more measures recommended as basic or supplemental Common Data Elements (CDEs) by the Pediatric TBI Outcomes Working Group [see (168)]. The working groups within the TBI CDE project also suggest standardized reporting of MR image acquisition parameters (169), and all but four studies provided sufficient detail in this regard (refer to Supplementary Material for details). GAPS IN THE LITERATURE As reviewed in this paper, there have been a small number of studies published to date using neuroimaging to examine longitudinal changes after TBI in pediatric patients. Existing studies generally reveal dynamic changes in the months and years post-injury, but additional studies are needed to more comprehensively examine factors that may influence outcome. Severity plays an important role in outcome prediction but does not fully explain outcome heterogeneity. Additionally, the longitudinal neuroimaging studies that have been published to date are limited to investigations of pediatric TBI populations with injuries sustained no earlier than the early childhood stage of development (ages 4-6), which is likely due to difficulties associated with scanning infants and toddlers. The literature would greatly benefit from investigations of children who sustained accidental injuries at younger ages, however, and current efforts to develop multi-step procedures that ensure the comfort of young children in the MRI environment may make such studies more feasible [see (170)]. The body of literature in this area is small but becomes even smaller when we consider how many individual cohorts have been examined: the tables included in this review indicate that 20 articles on 13 longitudinal cohorts have been published to date. There are a number of gaps in the literature that we hope will be addressed in the coming years. Small sample size is the primary limitation of most neuroimaging studies of pediatric TBI. This substantially limits the ability of researchers to identify potential moderators of outcome.
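As a rough illustration of why small samples limit the detection of group differences and moderators, the following hedged sketch (not taken from any cited study) uses a standard normal approximation to estimate how many participants per group a simple two-group contrast needs at 80% power; detecting an interaction or moderator effect typically requires considerably more.

```python
from scipy.stats import norm

def n_per_group(effect_size_d, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided, two-sample comparison.

    Standard normal approximation: n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
    """
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * ((z_alpha + z_beta) / effect_size_d) ** 2

for d in (0.2, 0.5, 0.8):  # small, medium, large standardized group differences
    print(f"d = {d:.1f}: ~{n_per_group(d):.0f} participants per group")
```

Even a medium-sized effect calls for roughly 60 to 65 participants per group, well above the sample sizes of most of the studies reviewed here.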
While the literature reviewed here suggests an effect of factors such as age, sex, and SES on outcome, these need to be examined in larger cohorts for reliability, and sources of attrition bias need to be carefully examined, disclosed and corrected for, particularly where there is a loss-to-follow-up rate >20% (163,171) or where the follow-up period is particularly long, as a relation between attrition and length of the study follow-up has been demonstrated (163). Large samples will allow for machine learning approaches to cluster demographic, clinical, and imaging variables and may reveal subpopulations within the larger patient population. There may be patterns of brain structural and functional disruption that are associated with particular cognitive, psychological, or somatic complaints. This has important implications for treatment and may help identify patients in need of more targeted treatment. The large amount of unexplained heterogeneity in post-injury outcome is a key gap in the field. There are also important outcomes that are relatively unexplored. Secondary psychiatric disorders are common post-injury, but there have been very few investigations linking these to altered brain structure and function (172). It is important to note that what constitutes an optimal comparison group for children with TBI remains an area of controversy within the field. While samples of healthy, typically-developing children are the most frequently utilized comparison group, some have argued that the use of such a comparison group fails to account for TBI-related risk factors, including predisposing neurobehavioral characteristics, such as attention-deficit/hyperactivity disorder (173) and associated impulsivity, risk-taking behavior, and substance use (174), or non-specific effects of traumatic injury, like posttraumatic stress (175,176). Additionally, factors that may influence cognitive and functional assessment, including stress, pain, and medication effects, as well as prolonged absences from school are also not well-accounted for in TBI-related studies that use healthy comparison groups. Alternatively, other pediatric TBI studies have included children with extra-cranial or orthopedic injuries as a comparison group to account for some of the factors associated with the use of healthy children.
FIGURE 7 | Summary of longitudinal changes in pediatric traumatic brain injury (TBI) across magnetic resonance-based neuroimaging modalities. Results are organized according to general brain region or white matter pathway. Arrows reflect changes in the TBI group over time (increase, decrease, or no change). Mixed changes across the TBI group, or mixed results reported across studies, are reflected in crossed arrows. Dashes indicate no reported differences. ACC, anterior cingulate cortex; AD, axial diffusivity; ATR, anterior thalamic radiation; CC, corpus callosum; CG, cingulum; Cho, choline; CR, corona radiata; Cr, creatine; CST, corticospinal tract; CT, cortical thickness; FA, fractional anisotropy; IC, internal capsule; MD, mean diffusivity (includes ADC results); NAA, N-acetyl aspartate; RD, radial diffusivity; SLF, superior longitudinal fasciculus; SMC, sensorimotor cortex; TBI, traumatic brain injury; UF, uncinate fasciculus; Vol, volume; WM, white matter.
In a recent DTI study of adolescents and young adults with mTBI (177), both typically-developing and orthopedically injured persons were included in comparison groups.
Interestingly, the results of this study revealed that, relative to the typically-developing comparison group, both of the traumatically injured patient groups demonstrated similar patterns of altered white matter integrity at subacute and chronic post-injury periods, regardless of whether the injuries sustained occurred to the head. While acknowledging the strengths and limitations of each group, the authors conclude that the selection of a single comparison group may contribute to the inconsistency in dMRI findings reported in the literature. Wilde et al. (177) suggest that conclusions drawn from studies utilizing a typically-developing comparison group might have differed if the studies had instead included an orthopedically-injured comparison group and therefore recommend the use of both comparison groups, if possible; however, further investigation of this issue is clearly warranted. Advanced imaging methods and multi-modal approaches have the potential to yield important new information. Diffusion MRI is one of the most commonly used modalities in TBI neuroimaging studies, but DTI has a number of limitations. Crossing fibers can lead to inaccurate diffusion calculations when a single tensor model is used. Higher angular resolution partially addresses this, but more advanced modeling allowing for multiple fiber orientations within a voxel is also necessary. Multi-shell diffusion MRI sequences permit researchers to model both intracellular and extracellular diffusion, leading to more accurate modeling and allowing for measurements of neurite density and orientation dispersion (178). CONCLUSIONS Here we review longitudinal neuroimaging studies of pediatric traumatic brain injury. See Figure 7 for a summary of the results of all longitudinal neuroimaging studies. While there is considerable heterogeneity in post-injury outcome, the literature consistently shows that alterations in brain structure, function, and metabolism can persist for an extended period of time post-injury. Longitudinal studies are particularly important for assessing changes in a developing sample, but small sample sizes have limited most studies to date. With larger sample sizes and multi-site cooperation, future studies will be able to examine potential moderators of outcome, such as the quality of the pre-injury environment, and may identify clinically meaningful patient subtypes. AUTHOR CONTRIBUTIONS HL, EW, KC, and ED contributed to the conception and design of the review. HL reviewed the literature for relevant articles to include and wrote the first draft of the manuscript. EW, ED, and KC wrote sections of the manuscript. All authors contributed to the revision of the manuscript, and all authors read and approved the final submitted version.
The CYP51F1 Gene of Leptographium qinlingensis: Sequence Characteristic, Phylogeny and Transcript Levels
Leptographium qinlingensis is a fungal associate of the Chinese white pine beetle (Dendroctonus armandi) and a pathogen of the Chinese white pine (Pinus armandi) that must overcome the terpenoid oleoresin defenses of host trees. L. qinlingensis responds to monoterpene flow with abundant mechanisms that include export and the use of these compounds as a carbon source. As one of the fungal cytochrome P450 proteins (CYPs), which play important roles in general metabolism, CYP51 (lanosterol 14-α demethylase) can catalyze the biosynthesis of ergosterol and is a target for antifungal drugs. We have identified an L. qinlingensis CYP51F1 gene, and the phylogenetic analysis shows the highest homology with the 14-α-demethylase sequence from Grosmannia clavigera (a fungal associate of Dendroctonus ponderosae). The transcription level of CYP51F1 following treatment with terpenes and pine phloem extracts was upregulated, while using monoterpenes as the only carbon source led to the downregulation of CYP51F1 expression. The homology modeling structure of CYP51F1 is similar to the structure of the lanosterol 14-α demethylase protein of Saccharomyces cerevisiae YJM789, which has an N-terminal membrane helix 1 (MH1) and transmembrane helix 1 (TMH1). The minimal inhibitory concentrations (MIC) of terpenoid and azole fungicides (itraconazole (ITC)) and the docking of terpenoid molecules, lanosterol and ITC in the protein structure suggested that CYP51F1 may be inhibited by terpenoid molecules by competitive binding with azole fungicides. Introduction "Bark beetles that colonize living conifers are frequently associated with specific fungi that are carried in specialized structures or on the body surface" [1]. The conclusion repeated in the literature, that pathogenic blue-stain fungi are primarily responsible for or required for the mortality of trees attacked by bark beetles, followed a logical thread beginning with the observation that the sapwood of beetle-killed trees is stained [1][2][3]. Leptographium sp. associated with the Chinese white pine beetle (Dendroctonus armandi) helps the bark beetles to overcome the resistance system of the host trees [4]. Leptographium qinlingensis is an active participant in the death of the Chinese white pine (Pinus armandi) on Qinling Mountain of China [5,6]. Symbiotic fungi destroyed the bleeding cells, blocked up resin canals of the host trees and killed epithelial cells, resulting in disorders of the nutrient and water metabolisms of the host and the death of host trees [7,8]. The three toxins (6-methoxymethyleugenin, maculosin and cerevisterol) synthesized by L. qinlingensis are phytotoxic to P. armandi seedlings [9]. In addition, inoculation with L. qinlingensis increases the concentrations of monoterpenes and sesquiterpenes in the phloem and xylem of the P. armandi seedlings [10]. Grosmannia clavigera, vectored by Dendroctonus ponderosae, has caused a rapid, large-scale decline of Pinus contorta in western North America [11]. Lodgepole pine also dies when inoculated at high density with a pathogenic fungus, such as Leptographium longiclavatum, without the beetles [12]. The genome and transcriptome of G. clavigera induced by exposure to lodgepole pine phloem extract (LPPE) or oleoresin terpenoids were reported by DiGuistini [13]. The comparative analyses of the expression profiles of G.
clavigera grown on monoterpenes, triglycerides or oleic acid showed that cytochrome P450 (CYP) may be involved in the utilization of triglycerides, oleic acid and monoterpenes as a carbon source and that the CYPs may detoxify pine defense compounds [14]. There are many similarities between L. qinlingensis and G. clavigera, as they are associated fungi of Dendroctonus beetles. Here, we identified a CYP51 gene from L. qinlingensis and compared its homology with other fungal CYP51s, including the lanosterol 14-α demethylase sequence from G. clavigera. The CYP expression profiles of G. clavigera grown on monoterpenes, triglycerides or oleic acid were identified, but did not include CYP51 [14]. The expression profiles of CYP51F1 in L. qinlingensis mycelia grown on monoterpenes, oleic acid, oleoresin terpenoids and Chinese white pine phloem methanol extract (CWPPE) were significantly influenced by terpenoids. A cytochrome P450 homology model for CYP51F1 has been constructed for molecule docking to understand the interaction between the CYP51F1 protein and its ligands (terpenoid, lanosterol and itraconazole). Identification of the Cytochrome P450 Gene The CYP gene set of the CYP51 family, which had bootstrap values >99%, was found by Maximum Likelihood phylogenetic (ML-phylogenetic) analysis of the putative full-length amino acid sequences (Figure 1). BLAST searches indicated that CYP genes expressed in L. qinlingensis were similar to members of the gene family CYP51 reported in other species (Table 1). The full-length sequence from the CYP51 gene shared the highest level of amino acid sequence identity with variants from the fungal species G. clavigera kw1407, N. crassa OR74A, N. tetrasperma FGSC 2509, M. thermophila ATCC 42464, T. terrestris NRRL 8126 and O. piceae UAMH 11346 (Table 1). Amino acid sequence identity between partial-length sequences within each gene ranged from 86.3%-96.9% with respect to matched GenBank sequences (inter-variant). The sequence identity between the full-length sequence of each gene and GenBank reference sequences was 77.5%-91.1%. The lanosterol 14-α demethylase sequence from G. clavigera kw1407 had the highest sequence identity. Figure 1. Maximum likelihood tree of the cytochrome P450 gene from L. qinlingensis, built from partial sequences using the WAG amino acid substitution model with a gamma distribution (−lnL = 2883.91). CYP51F1 from L. qinlingensis is underlined. Bootstrap values after 500 pseudoreplicates are shown at the nodes. Physicochemical Properties and Bioinformatics Analysis The full-length sequence of the CYP51F1 gene (named by the P450 nomenclature committee) was 1996 bp with a 1590-bp open reading frame (ORF), which encoded 529 amino acids (Table 2). The CYP51F1 gene was flanked by 5ʹ and 3ʹ untranslated regions (UTRs) of 206 and 200 bp, respectively. The predicted molecular mass was 59.31 kDa with an isoelectric point of 6.82 (Table 2). The predicted subcellular localization of the P450 protein indicated a typical membrane protein containing approximately 20 hydrophobic residues that likely anchor it in the endoplasmic reticulum (Table 2). The alignment and comparison of the deduced amino acid sequence of CYP51F1 from L. qinlingensis with the S. cerevisiae YJM789 lanosterol 14-α demethylase protein sequence allowed for the identification of the substrate recognition sites (Figure 2). RT-qPCR To determine if L.
qinlingensis CYP51F1 had a possible role in the detoxification of pine defense chemicals, we analyzed the expression profiles of CYP51F1 from mycelia grown on complete medium treated with a terpenoid blend (CM + T) or with Chinese white pine phloem methanol extract (CWPPE; CM + CW) for 12 and 36 h. Statistically-significant differences were found among treatments and time (one-way ANOVA, treatments: F = 54.536, df = 2, p < 0.001; time: F = 25.399, df = 2, p < 0.001). CYP51F1 was upregulated after being exposed to the terpene blend for 12 and 36 h, and at 36 h, the transcription level was lower than at 12 h (Figure 3). At 12 h following CWPPE treatment (CM + CW, 12 h), the expression of CYP51F1 was significantly affected compared to the methanol treatments; however, one day later (CM + CW, 36 h), the expression of the CYP51F1 gene was significantly downregulated (Figure 3). To determine whether CYP51F1 was involved in the utilization of different carbon sources, we analyzed CYP gene expression profiles of L. qinlingensis grown on minimal medium with a single carbon source: a monoterpene blend for 10 days (yeast nitrogen base (YNB) + MT) and long-chain fatty acids (oleic acid; YNB + OA) for five days. A statistically-significant difference was found only between YNB + MT (10 days) and YNB + Ma (mannose) (3 days) (one-way ANOVA, F = 41.181, df = 1, p = 0.003). In mycelia grown on monoterpenes as the sole carbon source (YNB + MT), CYP51F1 was significantly downregulated (Figure 4). The expression of CYP51F1 displayed almost no change between YNB + OA (five days) and YNB + Ma (five days) (one-way ANOVA, F = 0.249, df = 1, p = 0.644) ( Figure 4). Modeling Structure of CYP51F1 and Molecule Docking There were 50 protein sequences used as CYP51F1's templates, from which we selected four proteins as representative sequences, based on the highest identity and sequence coverage with the available template structures. The protein sequence of CYP51F1 shared 44.29% and 39.77% identity with the four template structures ( Table 3). The selected structures of CYP51F1 were then verified using various scoring methods. The QMEAN4, VERIFY_3D score, ERRAT score and the Ramachandran plot score of the best protein model were −4.67, 86.02%, 78.125% and 87.0%, respectively (Table 3). We determined the fold conservation of our generated models. The superposed structures of CYP51F1 upon 4k0f.1 chain A resulted in a Root-mean-square deviation (RMSD) value of 10.805 Å ( Figure 5). The 50 N-terminal amino acids revealed two helices oriented at approximately 60° to each other ( Figure 6). The N-terminal helix (membrane helix 1 (MH1), residues 8-14) is amphipathic and has extensive crystal contacts with symmetry-related molecules ( Figure 5). MH1 connects to TMH1 via a short turn (residues 15-23), a slightly kinked helix 36.99 Å long (residues 24-47) that is of sufficient length to traverse the lipid bilayer ( Figure 5). The cavity volume at the binding sites was calculated, using DS (Binding Site module) for the CYP51F1 protein structure. A binding site with a maximum volume (x: 22.488, y: 8.09, z: 13.729; volume: 836.5) was selected manually, as it should have the appropriate volume for a molecule to adopt minimal energy. For monoterpenes (limonene, 3-carene and pinene) and sesquiterpenes (β-caryophyllene and longifolene), one pose of each was generated using "LibDock." 
Seven DS scoring functions (Ligscore1, Ligscore2, -PLP1, -PLP2, Jain, -PMF and -PMF04) and consensus scoring functions were used to re-evaluate the position of docked molecules (Table 4). The positions of the terpenoid molecules in the binding pocket of CYP51F1 are shown in green (Figure 6). The interactions between CYP51F1 and the terpenoid molecules are shown in a 2D diagram (Figure S1). For lanosterol and ITC, the most suitable docking mode for each molecule received a consensus score of seven and six, respectively. More than twelve amino acid residues participated in the interactions between CYP51F1 and the ligands lanosterol and ITC (Figure 7). Three (Ala 302, 306, Leu 303) and six (Ala 306, Met 70, His 373, Gly 69, Ser 374, Hem 601) amino acid residues formed hydrogen bond, electrostatic and polar interactions between CYP51F1 and the ligands lanosterol and ITC, respectively. Discussion We report one fungal CYP51 gene of L. qinlingensis and its expression profile in mycelia treated with a terpenoid blend and pine phloem extracts. This gene is similar to variants of 14-α demethylase reported in G. clavigera kw1407 using genome and transcriptome analyses. Under experimental conditions, L. qinlingensis is similar to some other species, as assessed by gene cloning and real-time fluorescent quantitative PCR [13]. Meanwhile, we also performed homology modeling of the structure of the CYP51F1 gene and molecule docking of some terpenoids, lanosterol and ITC. In this study, we found that CYP51F1 partial and full-length amino acid sequences of L. qinlingensis have sequence identity >75% with those of G. clavigera, N. crassa, N. tetrasperma, M. thermophila, T. terrestris and O. piceae. The lanosterol 14-α demethylase sequence from G. clavigera has the highest identity, at 91.1%, for the full-length sequence (Table 1). The ML-phylogenetic analysis of the putative full-length amino acid sequences shows that CYP51F1 has the highest homology with the lanosterol 14-α demethylase sequence from G. clavigera, as well (Figure 1). CYP51 is considered one of the most ancient families, which presumably evolved before the divergence of the major eukaryotic groups, and it exists in all biological kingdoms, but was lost in certain lineages, including insects and nematodes, that are heterotrophs with respect to sterols [27,28]. As G. clavigera is one of the major fungal associates of D. ponderosae [29], L. qinlingensis has some degree of homology with G. clavigera, as they are associates of two bark beetles in the Dendroctonus genus, even though they are in different genera. Apart from the two insect-dispersed ophiostomatoid species (G. clavigera and O. piceae), the CYP genes of many other ophiostomatoid pathogen species, such as the Dutch elm disease pathogens Ophiostoma ulmi [30] and O. novo-ulmi [31], and Ceratocystis moniliformis and C. manginecans [32], have been identified from genome sequences. Multiple sequence alignments of these CYP51 genes to the S. cerevisiae lanosterol 14-α-demethylase protein sequence show that they have similar regions, such as the heme-binding region (FXXGXRXCXG), PERF domain (PXRX) and K-helix (EXXR) (Figure 2) [33]. The likely substrate-binding sites have been identified in CYP51F1 based on the analysis of the multiple sequence alignment, some of which interact with azole inhibitors [34]. The theoretical analysis to infer the cellular localization of the deduced cytochrome P450 enzyme in L. qinlingensis indicates that it is most likely anchored to the outer face of the endoplasmic reticulum (Table 2).
This inference is similar to that obtained from the homology modeling structure based on the crystal structure of the S. cerevisiae lanosterol 14-α-demethylase protein (Figure 5). Previous studies have suggested that CYP51 proteins have a preserved narrow function of removing the 14-methyl group of sterol precursors and have retained high substrate specificity throughout evolution [17]. For fungi, two aspects of CYP51 genes are usually of interest: the biosynthesis of ergosterol, a sterol specifically found in fungal membranes that mediates their permeability and fluidity [35], and azoles that interfere with fungal lanosterol 14-α-demethylase to affect the function of essential membrane-bound enzymes [36][37][38][39]. To colonize pine trees, L. qinlingensis must cope with host defense chemicals, including terpenoids and phenolics, that are toxic to many fungal species. The pathogenic fungus of pine trees must retrieve nutrients, primarily carbon, from its host by accessing sugars, triglycerides and organic nitrogen, to develop its mycelia and reproductive structures [14]. Pine defense chemicals induced abundant transcription in G. clavigera; the expression of many genes, including CYP genes, changed significantly following exposure to either a complex terpenoid blend or lodgepole pine phloem extract (LPPE) containing phenolics and other metabolites [13,26]. CYP51F1 transcripts were overexpressed at 12 and 36 h in mycelia treated with a terpenoid blend and CWPPE, relative to transcript levels in untreated mycelia (Figure 3). The terpenoid blend was more influential than CWPPE. The significant downregulation of CYP51F1 due to the monoterpene blend as the single carbon source also suggests that terpenoid metabolism may be connected with CYP51F1. Molecule docking of monoterpenes (limonene, carene and pinene) and sesquiterpenes (β-caryophyllene and longifolene) shows that small terpenoid molecules can occupy the binding pocket of CYP51F1 (Figure 6). Almost every amino acid residue involved in the van der Waals (VDW) interactions between CYP51F1 and the terpenoid molecules (Figure S1) is also involved in the interactions between CYP51F1 and lanosterol. The antifungal triazole drug ITC extends from the active site to just beyond the mouth of the entry channel, similar to ITC in the S. cerevisiae structure and to posaconazole in the T. brucei CYP51 structure [40,41]. The space occupied by ITC (Figure 7B) fits closely with that occupied by lanosterol and O2 (Figure 7A), with the triazole head group displacing the O2 and the di-halogenated headgroup replacing the first sterol ring. The MIC determination of the azole fungicides and monoterpenes suggests that they can inhibit the reproduction of L. qinlingensis. In brief, the results of this research provide important information suggesting that terpenoids from the host tree P. armandi interfere with the role of the CYP51F1 enzyme in lanosterol oxidation. The action of these terpenoids in inhibiting L. qinlingensis is similar to that of azole fungicides. Fungal Media and Growth Conditions L. qinlingensis was grown on medium containing 0.83% Oxoid malt extract agar and 0.75% technical agar (Oxoid Ltd., Basingstoke, Hampshire, UK) overlaid with cellophane, and the pH was adjusted to 5-6. Mycelia used for extracting RNA were collected from solid media inoculated with a suspension containing 5 × 10^5 spores and were incubated for 5-7 days (depending on the library) at 28 °C in the dark.
RNA Isolation and cDNA Synthesis Total RNA was isolated from mycelia according to the protocol supplied with the E.Z.N.A™ Fungal RNA Kit (Omega Bio-Tek, Norcross, GA, USA). Its integrity was assessed on 1% agarose gels, and quantification was performed by spectrophotometry with a NanoDrop 2000 (Thermo Scientific, Pittsburgh, PA, USA). The purity was estimated by the means of the A260/A280 equation (μg/mL = A260 × dilution factor × 40). The cDNA was synthesized using the EasyScript™ First-Stand cDNA Synthesis SuperMix (TransGen Biotech, Beijing, China) according to the manufacturer's instructions. Amplification of Genes, Cloning and Sequence Analyses The synthesized cDNA obtained from the sample was used as a template in PCR reactions. A pair of degenerate primers was designed to screen the putative P450 cDNA from the CYP51 family (Table S1). PCR amplifications were performed in a C1000 thermocycler (Bio-Rad, Hercules, CA, USA). The PCR products were visualized on 1% agarose gels stained with 1× DuRed and compared with a 2K plus DNA marker (TransGen Biotech, Beijing, China). Amplicons were purified using the Gel Purification Kit (Spin-column) (Bio Teke, Beijing, China), and the reaction product was cloned using the pMD™ 18-T Vector (TaKaRa, Dalian, China). Cloning reactions were transformed into DH5α chemically-competent cells of Escherichia coli, and the transformants (blue-white colonies) were selected on Amp/LB/X-gal/IPTG plates. A total of 10 clones with inserts were sequenced directly by GenScript USA Inc. (Nanjing, China). The sequences were manually edited with DNAMAN to obtain the insert sequences. Blastx searches of partial-length sequences (approximately 500 bp) were made against the NCBI database. The sequences were translated into amino acid sequences with the ExPASy Translate Tool (http://www.expasy.org/tools/dna.html) and subjected to a BlastP search against the GenBank database [23]. A multiple sequence alignment of the P450 proteins was performed with ClustalX v2.0.10 using default parameters [42]. End Sequence Determination and Cloning of Full-Length cDNAs The complete sequence of the CYP51 gene identified above was achieved using the SMARTer™ RACE cDNA Amplification Kit (Clontech Laboratories Inc., Mountain, CA, USA). The total RNA of mycelia was obtained, following the protocol described in the E.Z.N.A™ Fungal RNA Kit (Omega Bio-Tek, Norcross, GA, USA); its integrity was assessed on 1% agarose gels. Partial sequences were used in the primer design, and PCR was performed following the protocol described in the SMARTer™ RACE cDNA Amplification Kit (Clontech Laboratories Inc., Mountain, CA, USA). The amplicons were purified, cloned and sequenced as previously described. The complete sequences were compared using a BlastP search with those deposited in GenBank [23]. To avoid chimera sequences, we designed specific primers (Table S1) based on the complete sequence obtained for the CYP51 gene with RACE; the specific primers were used to amplify the complete DNA for each gene. Amplification reactions were carried out in 20-μL volumes containing: 1 μL cDNA from a 1:5 dilution, 0.25 μM of each primer and 1× EcoTaq PCR SuperMix (Beijing TransGen Biotech Co., Ltd., Beijing, China). The PCR reactions were performed as follows: 94 °C for 5 min, 30 cycles of 94 °C for 30 s, 68 °C for 30 s and 72 °C for 2 min, with a final extension for 10 min at 72 °C. 
PCR products of approximately 2000 bp were visualized on 1% agarose gels, purified and cloned, and both strands were sequenced as previously described. The deduced amino acid sequences were submitted to the P450 nomenclature committee, and a name was assigned based on their criteria for the classification of CYP51 genes (David Nelson Department of Molecular Sciences, University of Tennessee, personal communication). The sequence was deposited in GenBank (Accession Number KJ569144). Analysis of the Full-Length Cytochrome P450 Sequence To identify the different CYP variants expressed in fungus, a phylogenetic inference analysis by maximum likelihood of the full-length CYP sequence was performed with MEGA5 [43]. CYP topology was used to identify groups, but not to establish a phylogenetic relationship. The WAG model was supported by the test (−lnL = −2883.910) with a gamma parameter value of G = 0.39. Finally, MATGAT v2.01 software was used to determine the identity percentages among partial-length amino acid sequences, full-length sequences and GenBank sequences from other fungi (interspecific identity) [22]. The molecular mass (kDa) and isoelectric point (pI) of the sequence were determined using the ProtParam program [24]. All putatively functional L. qinlingensis P450 proteins were examined for likely sub-cellular localization using the TargetP program (http://www.cbs.dtu.dk/services/TargetP/) with the default parameters [25]. Treatments for RT-qPCR We generated and analyzed transcript level data from two sets of growth conditions. For the first set of conditions, mycelia were generated from a suspension of 5 × 10 5 spores spread on cellophane on the surface of complete media (CM: 0.17% yeast nitrogen base without amino acids (YNB; BD Difco, Sparks, MD, USA), 1.5% agar, 1% maltose, 0.1% phthalate, 0.3% asparagine). The spores were grown for 3 days at room temperature before being treated with either a crude Chinese white pine phloem methanol extract (CWPPE; CM + CW) or with a terpenoid blend (CM + T) and further incubated for 12 and 36 h. The CWPPE was prepared using DiGuistini's method for preparing the lodgepole pine phloem methanol extract (LPPE) [13]. The CWPPE contained methanol-soluble phenolic chemicals, sugars and possibly other metabolites, while the complex terpenoid blend included monoterpenes ((+)-limonene, (±)-α-pinene, (−)-β-pinene, (+)-3-carene) and turpentine (mainly consists of terpenes). The relevant controls for CM + CW and CM + T treatments were mycelia grown on CM with methanol and CM, respectively. In the second set of conditions, we tested the utilization of different carbon sources by L. qinlingensis. Again, the mycelia were generated from spores grown on 1% MEA (0.83% malt extract agar and 0.75% technical agar (BD Difco, Sparks, MD, USA )) overlaid with cellophane for 3 days. The young germinating mycelia were transferred to minimal media (YNB: 1.5% agar with 0.67% yeast nitrogen base without amino acids; BD Difco, Sparks, MD, USA) with either a mixture of monoterpenes (YNB + MT; (+)-limonene (95%), (+)-3-carene (90%), (±)-α-pinene (98%) and (−)-β-pinene (99%) at a ratio of 5:3:1:1), 0.5% oleic acid (YNB + OA) or 1% mannose (YNB + Ma). While the oleic acid and mannose were incorporated into the media, the monoterpene mixture (200 μL) was sprayed onto the surface of the media. Mycelia on oleic acid were incubated for 5 days, while mycelia with monoterpenes were incubated for 10 days. 
We used mycelia grown on YNB + Ma for 3 days as a control for YNB + MT and mycelia grown on YNB + Ma for 5 days as a control for YNB + OA. These time points were chosen when the mycelial growth approximately reached confluence for the treatment and control conditions. Monoterpenes were from Sigma Aldrich (St. Louis, MO, USA), and other chemicals were analytically pure and made in China. RNA Isolation and cDNA Synthesis for Expression Analyses The total RNA isolation of the fungi was performed following the protocol described in the E.Z.N.A™ Fungal RNA Kit (Omega Bio-Tek, Norcross, GA, USA), and its integrity was verified in 1% agarose gels. The cDNA synthesis was performed using the protocol described in the FastQuant RT Kit (with gDNase) (Tiangen Biotech Co., Beijing, China) using 2 μg total RNA in a 20-μL final reaction volume. The cDNA synthesis program was as follows: 42 °C for 15 min and 95 °C for 3 min. A no-reverse-transcription assay was performed to confirm the absence of genomic DNA in the RNA extraction. The cDNA was stored at −20 °C. RT-qPCR For each target gene and reference gene, specific primers were designed using Primer Premier 5.0 (Table S1). The reaction was carried out under the following conditions: each PCR reaction contained 0.4 μM of each primer, 12.5 μL FastStart Essential DNA Green Master (Roche Diagnostics GmbH, Mannheim, Germany) and 2 μL of the diluted cDNA sample in a final volume of 25 μL. All of the samples were placed in the CFX96™ Real-Time PCR Detection System (Bio-Rad, Hercules, CA, USA). A 3-step amplification condition with a hot-start step was used: 95 °C for 10 min, 95 °C for 30 s and 40 cycles at 95 °C for 5 s, 60 °C for 30 s and 72 °C for 30 s. PCR contaminations were not detected in the no template control (NTC). The experiment was replicated three times (biological replicates), and each of the replicates was performed with three technical replications. To estimate the qPCR efficiency and validation for each gene, a linear regression analysis was performed between the mean values of the quantification cycles (Cq) of different dilutions (1.0, 10^−1, 10^−2, 10^−3, 10^−4) of cDNAs and the initial concentration. These dilutions were made from a cDNA pool, and 2 μL of each dilution were used as a qPCR template. The PCR was performed three times for each gene, and its efficiency was estimated with the equation: efficiency = (10^(−1/slope) − 1) × 100, where the E value was 100% ± 5%. The PCR validation was estimated directly from the R² values, which were >0.90. Moreover, a melting curve reaction was performed to evaluate their specificity. Reference Gene Validation Experiment We used the same method to generate three gene partial sequences (28S rRNA, EF1 and calmodulin) as reference genes. The obtained three gene sequences were deposited in GenBank (Accession Numbers KJ541045-KJ541047). Calculations to estimate the expression stability were performed with the geNorm program [44]. EF1 was the most stable gene, so the expression levels of the target gene were normalized to those of EF1. Statistical Analysis Relative expression values for all of the genes were determined using the comparative Ct (ΔΔCt) method and analyzed with Microsoft Excel 2003 (v.11.0.5612) [45]. Outlier values identified by a PCR machine were excluded from our analysis. To evaluate significant differences in the expression for each gene, 2^−ΔΔCt values transformed at log2 were subjected to one-way ANOVA to determine if the gene expression was different among the treatments.
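A minimal sketch of the two calculations described in this section, the amplification efficiency derived from a standard-curve slope and the comparative Ct (2^−ΔΔCt) relative expression value, is given below. The Cq values, dilutions and gene labels are hypothetical and are not taken from the study.

```python
import numpy as np

def efficiency_from_slope(slope):
    """Amplification efficiency (%) from the slope of Cq vs. log10(dilution)."""
    return (10 ** (-1.0 / slope) - 1.0) * 100.0

def relative_expression(cq_target_treat, cq_ref_treat, cq_target_ctrl, cq_ref_ctrl):
    """Relative expression by the comparative Ct (2^-ddCt) method."""
    d_ct_treat = cq_target_treat - cq_ref_treat   # normalize target to reference gene
    d_ct_ctrl = cq_target_ctrl - cq_ref_ctrl
    dd_ct = d_ct_treat - d_ct_ctrl                # treated sample relative to control
    return 2.0 ** (-dd_ct)

# Standard curve: Cq measured for 10-fold dilutions (hypothetical values).
log_dilution = np.array([0.0, -1.0, -2.0, -3.0, -4.0])
cq = np.array([18.1, 21.5, 24.8, 28.2, 31.6])
slope, intercept = np.polyfit(log_dilution, cq, 1)
print(f"slope = {slope:.2f}, efficiency = {efficiency_from_slope(slope):.1f}%")

# Hypothetical Cq values: CYP51F1 (target) vs. EF1 (reference), treated vs. control.
fold_change = relative_expression(22.0, 19.5, 24.0, 19.6)
print(f"2^-ddCt = {fold_change:.2f} (log2 = {np.log2(fold_change):.2f})")
```

With these invented numbers the slope is about -3.4, giving an efficiency near 98%, within the 100% ± 5% window required for a valid assay.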
The 2 −ΔΔCt values and standard error (SE) were transformed at log2 to generate graphs. All of the statistical analyses were performed with SPSS 18.0 (IBM SPSS Statistics, Chicago, IL, USA) and plotted with SigmaPlot 12.0 software (Systat Software Inc., San Jose, CA, USA). Homology Modeling Four proteins (S. cerevisiae YJM789 lanosterol 14-α demethylase PDB code: 4k0f.1 chain A, 4lxj.1 chain A, Homo sapiens lanosterol 14-α demethylase PDB code: 3ld6.1 chain B, 3juv.1 chain A) (identity > 35%) were selected from the SWISS-MODEL Repository (http://swissmodel.expasy.org/ repository/) as suitable templates. Homology modeling was performed with the SWISS-MODEL program for the CYP51F1 protein [46]. The four modeled protein structures were verified using the Structural Analysis and Verification Server (SAVES) (http://nihserver.mbi.ucla.edu/SAVES/), which uses different programs, such as ERRAT to evaluate the statistics of non-bonded interactions between different atom types [47], VERIFY 3D to determine the compatibility of the 3D atomic model with its own amino acid sequence and PROCHECK to assess the stereo chemical quality of a protein structure by analyzing residue-by-residue geometry and overall structural geometry [48,49]. Protein-Ligand Interaction Study Discovery Studio v2.5 (DS 2.5) (Accelrys, San Diego, CA, USA) was used to dock monoterpenes (limonene, 3-carene and pinene), sesquiterpenes (β-caryophyllene and longifolene), lanosterol and itraconazole (ITC) to our refined model. Two of the preferred ligands were lanosterol and ITC. The terpenoid molecules were built using "Molecular Window" and optimized using "Prepare Ligands" in the DS for docking. The lanosterol and ITC molecule structures were retrieved from DrugBank 4.0 (http://www.drugbank.ca/) with Accession Numbers DB01167 and DB03696. "LibDock" and "LigandFit" in the DS were used to dock ligand molecules into the refined model [50,51]. The terpenoid molecules were docked into the refined model using only "LibDock." For lanosterol and ITC, 10 poses of each were generated using "LigandFit" and scored using the DS scoring functions, which include Ligscore1, Ligscore2, -PLP1, -PLP2, Jain and -PMF. Among these poses, the most suitable docking mode for each molecule with a high score from the consensus scoring functions was finally selected. Furthermore, protein-ligand interactions of all of the molecules were shown in a 2D diagram. Determination of the MIC of Terpenoid and Azole Fungicides The CYP51 inhibitors ITC and epoxiconazole were selected as representative agricultural azoles for a MIC screening. ITC and epoxiconazole were dissolved in dimethyl sulfoxide (DMSO) to obtain stock solutions of 1600 μg/mL. All drugs were stored at −20 °C. A 1% malt extract microdilution susceptibility assay was performed according to the Clinical and Laboratory Standards Institute M38-A2 protocol in order to evaluate the initial MIC of ITC and epoxiconazole. The final drug concentration ranged from 0.03125-4 μg/mL for both ITC and epoxiconazole. An equal volume of 1 × 10 5 spores was mixed with the 1% malt extract microdilution susceptibility assay. The MIC of azoles was defined as the lowest concentration of the drug that produced no visible growth following 72 h of incubation at 27 °C. The MIC determination was repeated five times. 
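The amplification-efficiency check and the relative-expression calculation described in the RT-qPCR and statistical analysis sections above reduce to two short formulas: efficiency = (10^(−1/slope) − 1) × 100 from the slope of the Cq-versus-log(dilution) standard curve, and fold change = 2^(−ΔΔCt) after normalizing each target Cq to the EF1 reference gene and to the control condition. The following Python sketch illustrates both calculations; the Cq values, dilution points and sample labels are invented for illustration and are not data from this study.

```python
import numpy as np

def amplification_efficiency(log10_dilutions, mean_cq):
    """Percent PCR efficiency from a standard curve: E = (10^(-1/slope) - 1) * 100."""
    slope, intercept = np.polyfit(log10_dilutions, mean_cq, 1)
    r_squared = np.corrcoef(log10_dilutions, mean_cq)[0, 1] ** 2
    efficiency = (10 ** (-1.0 / slope) - 1.0) * 100.0
    return efficiency, r_squared

def fold_change_ddct(cq_target_treated, cq_ref_treated, cq_target_control, cq_ref_control):
    """Relative expression by the 2^-ΔΔCt method (target normalized to the reference gene)."""
    delta_ct_treated = cq_target_treated - cq_ref_treated
    delta_ct_control = cq_target_control - cq_ref_control
    ddct = delta_ct_treated - delta_ct_control
    return 2.0 ** (-ddct)

# Illustrative five-point tenfold dilution series (1.0 ... 10^-4) with hypothetical mean Cq values
dilutions = np.array([0, -1, -2, -3, -4])              # log10 of the dilution factor
cq_values = np.array([18.1, 21.4, 24.8, 28.2, 31.5])   # hypothetical mean Cq per dilution
eff, r2 = amplification_efficiency(dilutions, cq_values)
print(f"efficiency = {eff:.1f}% (accept if within 100% ± 5%), R^2 = {r2:.3f} (accept if > 0.90)")

# Hypothetical Cq values for a target gene and the EF1 reference in treated vs. control mycelia
print(f"fold change = {fold_change_ddct(22.3, 19.0, 24.1, 19.2):.2f}")
```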
Conclusions
The transcription of CYP51F1 was upregulated following treatment with terpenes and pine phloem extracts, whereas growth on monoterpenes as the sole carbon source led to downregulation of CYP51F1 expression. The homology model of CYP51F1 is similar to the structure of the lanosterol 14-α demethylase of Saccharomyces cerevisiae YJM789, which has an N-terminal membrane helix 1 (MH1) and transmembrane helix 1 (TMH1). The minimal inhibitory concentrations (MICs) of terpenoid and azole fungicides (itraconazole, ITC) and the docking of terpenoid molecules, lanosterol and ITC into the modeled protein structure suggest that CYP51F1 may be inhibited by terpenoid molecules through competitive binding, in a manner similar to azole fungicides.
Transfected Early Growth Response Gene-1 DNA Enzyme Prevents Stenosis and Occlusion of Autogenous Vein Graft In Vivo The aim of this study was to detect the inhibitory action of the early growth response gene-1 DNA enzyme (EDRz) as a carrying agent by liposomes on vascular smooth muscle cell proliferation and intimal hyperplasia. An autogenous vein graft model was established. EDRz was transfected to the graft vein. The vein graft samples were obtained on each time point after surgery. The expression of the EDRz transfected in the vein graft was detected using a fluorescent microscope. Early growth response gene-1 (Egr-1) mRNA was measured using reverse transcription-PCR and in situ hybridization. And the protein expression of Egr-1 was detected by using western blot and immunohistochemistry analyses. EDRz was located at the media of the vein graft from 2 to 24 h, 7 h after grafting. The Egr-1 protein was mainly located in the medial VSMCs, monocytes, and endothelium cells during the early phase of the vein graft. The degree of VSMC proliferation and thickness of intima were obviously relieved compared with the no-gene therapy group. EDRz can reduce Egr-1 expression in autogenous vein grafts, effectively restrain VSMC proliferation and intimal hyperplasia, and prevent vascular stenosis and occlusion after vein graft. Introduction In 1977, Paterson et al. [1] first inhibited gene transcription using a complementary combination of single-stranded DNA and RNA in a cell-free system. Later, Stephenson and Zamecnik [2] reversely inhibited the replication of the Rous sarcoma virus using a 13 oligodeoxynucleotide and pioneered the direction of gene-based drugs by inhibiting gene expression. A variety of catalytic DNA, called DNA enzymes, was one of the important breakthroughs in life science history since the discovery of catalytic RNA (ribozyme, Rz) [3][4][5][6][7]. In 1994, Breaker and Joyce [8] found that a singlestranded DNA molecule (catalytic DNA) can catalyze the hydrolysis of RNA phosphodiester bonds. This singlestranded DNA molecule was also called DNA enzyme (DRz). The enzyme activity center was the "10-23 motif " [9][10][11][12][13][14][15] composed of 15 deoxyribonucleotides (5 -GGC TAG CTA CA A CGA-3 ). Its mutation or reverse mutation variants had no activities. Both ends of the active center were substratebinding regions that can specifically combine with the target RNA through the Watson-Crick base pairing. Early growth response gene-1 (Egr-1) is a Cys2-His2-type zinc-finger transcription factor. A broad range of extracellular stimuli are capable of activating Egr-1, thus mediating growth, proliferation, differentiation, or apoptosis, therefore, participating in the progression of a variety of diseases such as atherosclerosis [16][17][18][19]. Previous studies have demonstrated that Egr-1 can activate the restenosis process and intimal hyperplasia and inhibit vascular smooth muscle cell apoptosis in vein grafts [20]. The DNA enzyme is an oligonucleotide that bound to and interfered with translation of the Egr-1 mRNA and it could inhibit the expression of Egr-1. In the present study, an Egr-1 DNA enzyme (EDRz) was designed for Egr-1 mRNA, used a liposome as a carrying agent, and investigated the inhibitory action of the Egr-1 DNA enzyme on vascular smooth muscle cell (VSMC) proliferation and intimal hyperplasia. Figure 1: The construction conceptual diagram of Egr-1 DNA enzyme shear and its substrate. Construction of Early Growth Response Gene-1 DNA Enzyme. 
The primer sequence was as follows: 5′-CC GCT GCC AGG CTA GCT ACA ACG ACC CGG ACG T-3′. The 3′ end was phosphorothioate-modified, the 5′ end was labeled with carboxyfluorescein (FAM), and a total of 15 OD260 (495 µg) of the Egr-1 DNA enzyme was synthesized (Figure 1). Approximately 80 µL of DEPC solution (1:1000) was added, mixed and centrifuged, and 120 µL of the liposome Lipofectamine 2000 (Invitrogen, USA) was then added. After 10 min, 32 µL of 1 mmol/L MgCl2 and 568 µL of 30% Pluronic F-127 gel (Sigma, USA) were added to a final volume of 800 µL. The solution was agitated and homogenized at 4 °C and stored until use.
Establishment of Animal Model and Sample Collection.
This study was carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The animal use protocol was reviewed and approved by the Institutional Animal Care and Use Committee (IACUC) of the First Hospital Affiliated to Jiamusi University. Ninety Wistar rats of either sex (200 g to 250 g) were used. The rats were anesthetized with an intraperitoneal injection of 10% chloral hydrate solution (300 mg/kg) and underwent sterile microsurgery under the SXP-1B microscope (10× magnification). The procedure was as follows: about 5 mm of the rat's right jugular vein was excised, flushed with heparinized saline, and anastomosed to the infrarenal abdominal aorta in an end-to-end manner using an 11-0 vascular suture. Up to 8 µL of EDRz was applied evenly around the graft vein (including the anastomoses) once no signs of active bleeding were found. The retroperitoneum was closed, and no anticoagulants were used either before or after the surgery. The animals were randomly divided into 9 groups (10/group) corresponding to 1, 2, 6, and 24 h and 3, 7, 14, 28, and 42 days after the graft surgery. The graft vein specimens were harvested at each time point. The no-gene-therapy group served as the control group (Figure 2). The specimens were fixed with 4% paraformaldehyde in 0.1% diethylpyrocarbonate (DEPC) for 2 h, dehydrated through a sucrose gradient, frozen-embedded, and cut into 5 µm thick sections. EDRz transfection of the vein graft was observed under a fluorescence microscope, and the localization of EDRz was determined by confocal microscopy. The fluorescence gray value was measured using a fluorescence image analyzer and was replicated and verified in multiple samples.
Histomorphology Staining.
Vein grafts were fixed in 10% neutral formalin for 24 h. Conventional dehydration was then performed, and the grafts were cleared and wax-dipped. The wax block was embedded, and the middle section of the vein graft was cut into 5 µm thick sections. HE staining was performed and images were collected with a computer image analysis system. Finally, intimal hyperplasia thickness was measured. The data collection and analysis of intimal hyperplasia were performed in a blinded manner.
In Situ Hybridization.
Specimens were fixed, dehydrated through a sucrose gradient, frozen-embedded, and sliced into 5 µm thick sections with a cryostat. Digoxigenin-labeled oligonucleotides were used as probes, and in situ hybridization was performed according to the manufacturer's instructions (Wuhan Boster Corporation, Wuhan, China). The specimens were stained with DAB or AEC.
The percentage of positive cells in the total cell in eight-unit perspective was randomly counted and performed in a blinded manner. Reverse Transcription-Polymerase Chain Reaction. Total RNA was extracted from cell lines according to instructions of the kit (Wuhan Boster Corporation, Wuhan, China). Primers for Egr-1 were designed using the Jellyfish software according to the sequence in GenBank and synthesized by Shanghai Sangon. For Egr-1, the primers were 5 -CAG TCG TAG TGA CCA CCT TAC CA-3 (Fwd) and 5 -AGG TTG CTG TCA TGT CTG AAA GAC-3 (Rev), 448-bp long. For -actin, the primers were 5 -TTG TAA CCA ACT GGG ACG ATA-3 (Fwd) and 5 -GAT CTT GAT CTT CAT GGT GCT-3 (Rev), 668-bp long. The PCR program involved the following procedures: predenaturation for 2 min at 94 ∘ C; denaturation for 30 s at 94 ∘ C; annealing for 30 s at 58 ∘ C; extension for 1 min at 72 ∘ C; and final elongation at 72 ∘ C for 10 min. Thirty cycles of PCR were performed. PCR products were analyzed by electrofluorescence on 2% agarose gel in a 1x TAE buffer at a voltage of 100 V for 1 h and EB-stained for 20 min. Band intensity was photographed and analyzed on the Gel Imaging System. The gene expression value = Egr-1mRNA/ -actin mRNA. 2.6. Immunohistochemical Staining. Conventional SABC staining was performed according to the kit's instructions (Wuhan Boster Company, Wuhan, China). PBS was used in place of primary antibodies as the negative control. The nucleus or cytoplasm had positive brown-yellow (DAB) or red (AEC) particles at 400 times magnification under the light microscope; it was considered positive regardless of dyeing intensity as long as there was a color display. The percentage of positive cells in the total cell in the eight-unit perspective was randomly counted and was performed in a blinded manner. Western Blot Analysis. The specimens were lysed with a cell lysis solution. The vessel tissues were cut into pieces. The specimens were ultrasound-homogenized. Proteins (100 g/sample) were separated using 10% SDS-PAGE. The proteins were electrotransferred to nitrocellulose membranes using a semidry system. Then, the membrane was blocked in 5% skimmed milk diluted in TBST for 1 h at room temperature. Thereafter, the membranes were incubated with a primary antibody for 2 h at room temperature. Next, the membranes were further incubated with a horseradish peroxidase-labeled goat anti-mouse IgG antibody at a 1 : 500 dilution. The specimens were washed with TBST three times. Then, 12.5 mg of -Naphthyl acid phosphate and 12.5 mg of O-Dianisidine tetrazotized (Sigma Corporation) were added to color the specimens. The NC membrane was photographed and analyzed on the Gel Imaging System. Statistical Methods. Data were shown as mean ± SD ( ± ) and analyzed using the SPSS10 statistical software. The significance of the differences between the group means was determined using ANOVA and post hoc test. Egr-1 DNA Enzyme (EDRz) Transfection. The early growth response gene-1 DNA enzyme was mainly located in the tunica media, adventitia, and partial endothelial cells of the vein graft 1 h after the grafting in transfection group (fluorescence expression value of 70.3 ± 13.5) ( Table 1, Figure 3(a)). The early growth response gene-1 DNA enzyme was located in the tunica media of the vein graft from 2 h to 24 h after-grafting. There was a small amount of EDRz in the tunica media of the vein graft 3 d after the grafting. It was mainly located in the intima of the vein graft 7 d after grafting (Table 1, Figure 3(b)). 
There were no traces of the early growth response gene-1 DNA enzyme in the vein grafts at 14, 28, and 42 d and control group (Table 1, Figure 3(c)). Changes in Histomorphology. There was no expression of PCNA protein in normal vein. There was still a small amount of slightly disordered VSMCs in the media 2 h to 6 h after the vein graft compared with the control group. Slightly positive expression of PCNA at 6 h, positive cell rate of (2.5 ± 0.4)% in transfection group, (5.6 ± 0.4)% in control group. Moreover, VSMCs were also found partly in the thin layer of a thrombus formation in the cavity surface of the intima. Figure 4). Egr-1 mRNA expressions at 7, 14, 28, and 42 d after grafting ( Figure 5(a)). Egr-1 mRNA expression had biphasic changes in control group. Egr-1 mRNA rapid rise at 1 h after graft, a spontaneous decline at 6 h to 3 d, increase at 7 d after graft operation, a peak at 28 days ( Figure 5(b)). (Table 3, Figure 6(a)) in control group, and the positive expression of Egr-1 mRNA was found in the part of VSMCs of the media at 1 h after graft. A peak at 28 d, the positive rate of Egr-1 mRNA was (45.7 ± 6.4)%, Egr-1 mRNA major located in the vascular smooth muscle cells of neointimal (Table 3, Figure 6(b)). (Figure 7(a)). In control group, we found that Egr-1 protein was expressed at the early phase of 2 h, and continuing to 6 h, the expression of Egr-1 protein was decline from 24 h to 3 d, reincreased at 7 d, and reached peak at 28 d (Figure 7(b)). 3.6. Immunohistochemistry. The Egr-1 protein was mainly located in the medial VSMCs, monocytes, and endothelium cells during the early phase of the vein graft. However, there were no Egr-1 proteins in the medial and neointimal VSMCs after 7 d. The positive expression rates were as follows: positive cell rate of (15.3 ± 4.2)% at 2 h; positive cell rate of (9.7 ± 2.4)% at 6 h; positive cell rate of (6.4 ± 1.8)% at 24 h; and positive cell rate of (2.3 ± 0.2)% at 3 d (Figure 8(a)). In control group, the positive expression of Egr-1 protein reached peak at 28 days (40.7 ± 9.5)% (Figure 8(b)). Discussion AUG (816 to 818 sequence) is a selected target of the Egr-1 mRNA. The splice site was located between 816 and 817, adding T GCA GGC CC to the 3 end of DNA enzyme for the 807-815 sequence (A CGU CCG GG) of Egr-1 mRNA and ACC GTC GCC [21][22][23][24] to the 5 end of DNA enzyme for the 817-825 sequence (UGG CAG CGG). A phosphorothioate modification was made in the 3 end to resist nuclease degradation, and the 5 end was labeled with carboxy fluorescein (FAM) for detection purposes. The constructed DNA enzyme was called Egr-1 DNA enzyme (EDRz) (Figure 1). The 816 base (A) of the Egr-1 mRNA did not undergo base pairing with EDRz. Meanwhile, the rest of the EDRz sites formed the combination of base pairing with Egr-1 mRNA. Then, the latter underwent conformational changes. The 2 end at the point of the OH proton was cut with the help of divalent metal cations, such as Mg2. Moreover, a nucleophilic attack occurred on the adjacent phosphate. The Egr-1mRNA molecular structure was dissociated by two transesterification reactions [25][26][27][28][29][30]. The substrate-binding site can be applied to shear the RNA of a variety of pathogens and mRNAs of disease-related genes after changing its sequence composition in the 10-23 DNA enzyme [31,32]. In gene therapy, 10-23 DNA enzymes have the advantages of both the ribozyme (Rz) and antisense oligodeoxynucleotide (ASODN) [33,34]. 
The 10-23 DNA enzyme has the following features compared with ASODN: it not only has a substrate RNA antisense inhibitory effect by virtue of the two substrate-binding sites, but also kills virus RNA through the "shear" mechanism [35][36][37][38]. Furthermore, DNA enzyme molecules can be used repeatedly, which means that they can shear a number of RNA molecules. The 10-23 DNA enzyme has the following characteristics compared with a variety of Rz: the identified splice site of 10-23 DNA enzyme is present in a range of RNA molecules, including the RNA translation initiation codon AUG of viruses. It is a good shear target and has more shearing targets to choose from compared with Rz. Its nature is relatively stable. The stability of DNA is about 100, 000 times that of RNA in the conditions of physiological pH, temperature, ionic strength, and so on. Its resistance to hydrolysis is about 100 times or more than that of a protein enzyme [39][40][41]. The sequence of the active center is short. The molecular weight is relatively small with relatively good elasticity. Therefore, it is less affected by the secondary structure of the target sequence. The trend to the substrate is better. Thus, the specificity of the target RNA, combing stability and shear activity, is expressed better than Rz in general [42][43][44]. It is easier to dissociate the DNA-RNA hybrid molecule than the RNA-RNA hybrid molecule. Therefore, the shear rate of the shear product DRz dissociation process is relatively small [45,46]. The RNA of the DNA-RNA hybrid molecules can be degraded by the RNA enzyme H. Hence, the DNA enzyme can not only directly kill the target RNA such as Rz, but also cause the hydrolysis of the RNA enzyme H to target RNAs, such as ASODN [47,48]. The results of this experiment combined with those of previous studies [8,49,50] indicated that the early growth response gene-1 DNA enzyme was mainly located in the media and adventitia of the vein graft 1 h after grafting and then gradually shifted to the media. There was a small amount of EDRz in the media of the vein graft 3 d after grafting and was mainly located in the media. It was mainly located in the intima of the vein graft 7 d after grafting. In addition, the Egr-1 DNA enzyme can also be found in some small newborn blood vessels. However, Egr-1 mRNA and protein expressions in the vein graft were not detected 14 d after grafting. There was no EDRz in the vein grafts, suggesting that the EDRz pathway is adventitia → medial → intima and perhaps degraded by a deoxyribonuclease in the end. Egr-1 mRNA and protein expressions decreased at the same time point. Egr-1 mRNA expression decreased obviously 1 h after grafting. This finding indicated that the Egr-1 DNA enzyme rapidly transferred from the adventitia to the media to combine with the Egr-1 mRNA under a short period of time. Hence, the role of the carrier liposome Lipofectamine 2000 was confirmed. Egr-1 proteins were mainly located in the medial VSMCs, monocytes, and endothelium cells during the early phase of the vein graft. However, there were no Egr-1 proteins in medial and neointimal VSMCs 7 d after grafting, indicating that the early growth response gene-1 DNA enzyme can reduce Egr-1 expression in an autogenous vein graft. VSMC proliferation and intimal hyperplasia reached a peak 7 and 14 d after grafting. The degree of VSMC proliferation and thickness of intima were obviously relieved at the same time compared with the no-gene therapy group. 
Therefore, Egr-1 DNA enzyme transfection of vein grafts with the liposome Lipofectamine 2000 as a carrier can effectively restrain VSMC proliferation and intimal hyperplasia, thereby helping to prevent vascular stenosis and occlusion after vein grafting.
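The arm-design principle used to construct EDRz (a 10-23 catalytic core flanked by two substrate-binding arms, each reverse-complementary to the target mRNA on one side of the unpaired purine at position 816) can be sketched in a few lines of Python. This is a minimal illustration of the base-pairing arithmetic only; it reproduces the EDRz sequence reported above from the Egr-1 mRNA coordinates given in the Discussion and says nothing about cleavage efficiency, chemical modification or target accessibility.

```python
CATALYTIC_CORE = "GGCTAGCTACAACGA"  # the "10-23 motif" active center

def reverse_complement_dna(rna):
    """DNA reverse complement of an RNA stretch written 5'->3'."""
    pairs = {"A": "T", "U": "A", "G": "C", "C": "G"}
    return "".join(pairs[base] for base in reversed(rna.upper()))

def design_10_23_dnazyme(target_rna, unpaired_index, arm_length=9):
    """Build a 10-23 DNAzyme (5'->3') against target_rna.

    target_rna     : target region written 5'->3'
    unpaired_index : 0-based index of the purine that stays unpaired
                     (cleavage occurs on its 3' side)
    arm_length     : length of each substrate-binding arm
    """
    upstream = target_rna[unpaired_index - arm_length:unpaired_index]
    downstream = target_rna[unpaired_index + 1:unpaired_index + 1 + arm_length]
    five_prime_arm = reverse_complement_dna(downstream)   # pairs with the 3' side of the target
    three_prime_arm = reverse_complement_dna(upstream)    # pairs with the 5' side of the target
    return five_prime_arm + CATALYTIC_CORE + three_prime_arm

# Egr-1 mRNA positions 807-825 as given in the paper; the A at position 816 stays unpaired.
egr1_region = "ACGUCCGGG" + "A" + "UGGCAGCGG"
print(design_10_23_dnazyme(egr1_region, unpaired_index=9))
# -> CCGCTGCCAGGCTAGCTACAACGACCCGGACGT, the EDRz sequence reported above
```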
The Acinetobacter baumannii entA Gene Located Outside the Acinetobactin Cluster Is Critical for Siderophore Production, Iron Acquisition and Virulence Acinetobacter baumannii causes severe infections in compromised patients, who present an iron-limited environment that controls bacterial growth. This pathogen has responded to this restriction by expressing high-affinity iron acquisition systems including that mediated by the siderophore acinetobactin. Gene cloning, functional assays and biochemical tests showed that the A. baumannii genome contains a single functional copy of an entA ortholog. This gene, which is essential for the biosynthesis of the acinetobactin precursor 2,3-dihydroxybenzoic acid (DHBA), locates outside of the acinetobactin gene cluster, which otherwise harbors all genes needed for acinetobactin biosynthesis, export and transport. In silico analyses and genetic complementation tests showed that entA locates next to an entB ortholog, which codes for a putative protein that contains the isochorismatase lyase domain, which is needed for DHBA biosynthesis from isochorismic acid, but lacks the aryl carrier protein domain, which is needed for tethering activated DHBA and completion of siderophore biosynthesis. Thus, basF, which locates within the acinetobactin gene cluster, is the only fully functional entB ortholog present in ATCC 19606T. The differences in amino acid length and sequences between these two EntB orthologs and the differences in the genetic context within which the entA and entB genes are found in different A. baumannii isolates indicate that they were acquired from different sources by horizontal transfer. Interestingly, the AYE strain proved to be a natural entA mutant capable of acquiring iron via an uncharacterized siderophore-mediated system, an observation that underlines the ability of different A. baumannii isolates to acquire iron using different systems. Finally, experimental infections using in vivo and ex vivo models demonstrate the role of DHBA and acinetobactin intermediates in the virulence of the ATCC 19606T cells, although to a lesser extent when compared to the responses obtained with bacteria producing and using fully matured acinetobactin to acquire iron. Introduction Acinetobacter baumannii is being increasingly recognized as an important pathogen that causes severe infections in hospitalized patients as well as deadly cases of community-acquired pneumonia [1,2,3,4]. More recently, it has been described as the ethiological agent of severe wound infections in military personnel injured in the Middle East [5,6] and cases of necrotizing fasciitis [7]. A serious concern with this pathogen is its remarkable ability to acquire genes and express resistance to a wide range of antibiotics as well as to evade the human defense responses [4]. Among the latter is the capacity of A. baumannii to prosper under the ironlimited conditions imposed by the human host's high-affinity chelators lactoferrin and transferrin [8,9]. Although some progress has been made in recent years, not much is known about the pathobiology of this bacterium and the nature of its virulence factors involved in the serious diseases it causes in humans. 
Bacterial pathogens respond to iron limitation imposed by the human host by expressing different high-affinity uptake systems including siderophore-dependent and siderophore-independent systems, as well as systems that remove iron from host compounds, such as hemin, by either direct contact or by producing scavengers known as hemophores [10,11]. In the case of A. baumannii, experimental data [12,13,14,15] and in silico analyses of fully sequenced and annotated genomes [16,17] show that different A. baumannii clinical isolates could express different iron uptake systems. Currently, the best-characterized system is that expressed by the ATCC 19606T type strain, which is based on the production and utilization of acinetobactin [14,18,19]. This catechol-hydroxamate siderophore is a non-cyclic derivative of 2,3-dihydroxybenzoic acid (DHBA) linked to threonine and N-hydroxyhistamine [19]. Genetic and functional analyses indicate that the acinetobactin-mediated system is the only high-affinity iron acquisition system expressed by the ATCC 19606T type strain [14]. The bas, bau and bar genes needed for the production, transport and secretion of acinetobactin, respectively, are located in a 26.5-kb chromosomal region harboring seven operons [14,18]. However, this locus does not include an entA ortholog coding for a 2,3-dihydro-2,3-dihydroxy-benzoate dehydrogenase. This enzyme is involved in the last step of the conversion of chorismate into DHBA, which is essential for the biosynthesis of the catechol moiety of siderophores such as enterobactin [20,21]. This observation indicates that at least two chromosomal regions are involved in the biosynthesis of acinetobactin in the ATCC 19606T strain; one containing the bas, bau and bar genes and another harboring at least the entA gene. In this report, we present experimental and genomic evidence supporting this hypothesis as well as showing that there are variations not only in nucleotide sequence but also in genetic arrangements among the A. baumannii loci harboring the entA genetic determinant. In addition, we demonstrate that the expression of an active entA gene is needed for the full virulence of the ATCC 19606T strain when tested using A549 human alveolar epithelial cells and Galleria mellonella caterpillars as experimental infection models. We also report the observation that the A. baumannii AYE strain is a natural entA mutant that acquires iron through a siderophore-mediated system that remains to be characterized.
Materials and Methods
Bacterial strains, plasmids, and culture conditions
Bacterial strains and plasmids used in this work are shown in Table S1. Strains were routinely cultured in Luria Bertani (LB) broth or agar [22] at 37°C in the presence of appropriate antibiotics. Iron-rich and iron-limiting conditions were achieved by the addition of FeCl3 dissolved in 0.01 M HCl and 2,2′-dipyridyl (DIP), respectively, to liquid or solid media.
Recombinant DNA techniques
Chromosomal and plasmid DNA were isolated by ultracentrifugation in CsCl density gradients [22,23] or using commercial kits (Qiagen). DNA restriction and Southern blot analyses were conducted using standard protocols and [32P]α-dCTP-labelled probes prepared as described before [22,24].
Construction of a gene library and cloning of the entA gene
An A. baumannii ATCC 19606T genomic library was prepared using Escherichia coli LE392 and the cosmid vector pVK100 as described before [13]. Cosmid DNA was isolated en masse from E. coli LE392 clones and used to transform E.
coli AN193 by electroporation as described before [13]. Transformants harboring the ATCC 19606 T entA gene were selected on LB agar containing 20 mg/ml tetracycline (Tet) and 250 mM DIP. Cosmid DNA was isolated from one of the E. coli AN193 complemented clones, which was named 2631, digested with HindIII and subcloned into pUC118 to generate pMU748 (Fig. 1A). Plasmid DNA was isolated from E. coli DH5a recombinant subclones and sequenced with standard automated DNA sequencing methods using M13 forward and reverse [25] and custom-designed primers. Sequences were assembled using Sequencher 4.2 (Gene Codes Corp.). Nucleotide and amino acid sequences were analyzed with DNASTAR, BLAST [26], and the software available through the ExPASy Molecular Biology Server (http://www.expasy.ch). Construction of an ATCC 19606 T entA::aph isogenic derivative To generate the ATCC 19606 T entA::aph insertion mutant 3069, a 2.5-kb pMU858 fragment, which encompasses the pUC4K DNA cassette inserted into an EcoRV site located within entA, was PCR amplified with primers 3504 (59-CCAACAAGAACGT-CACTT-39) and 3505 (59-ATTCCTGTTCGGTACTGG-39) (Fig. 1A) and Phusion DNA polymerase (NEB). The amplicon was cloned into the SmaI site of the pEX100T and E. coli DH5a transformants were selected on LB agar containing 40 mg/ml kanamycin (Km) and 150 mg/ml ampicillin (Amp). Plasmid DNA (pMU902) was isolated from one of these derivatives and the appropriate cloning was confirmed by automated sequencing using primers 3187 (59-AGGCTGCGCAACTGTTGG-39) and 3188 (59-TTAGCTCACTCATTAGGC-39), which anneal close to the pEX100T SmaI site. ATCC 19606 T cells were electroporated with pMU902 as described before [27] recovering the cells in SOC medium [22] for 6 h in a shaking incubator at 37uC. Transformants were selected on LB agar containing 40 mg/ml Km. The generation of the appropriate ATCC 19606 T entA::aph derivative was confirmed by PCR using primers 3504 and 3580 (59-CCATGCTTGGATTACTTG-39) (Fig. 1A) as well as Southern blotting [22] using as a probe the amplicon obtained with primers 3504 and 3580 (Fig. 1A) and parental DNA as a template. The ATCC 19606 T 3069 derivative was genetically complemented with pMU951 (Fig. 1A), a derivative of the shuttle vector pWH1266 harboring an amplicon encompassing the parental entA allele that was obtained with Phusion DNA polymerase using primers 3631 (59-GGATCCGGGAATATTAGACTGGCG-39) and 3632 (59-GGATCCCCAACAAGAACGTCACTT-39), both of which included BamHI restriction sites. Production and utilization of catechol and siderophore compounds and expression of EntA and EntB activity by cloned genes A. baumannii cells were cultured in a chemically defined medium containing sodium succinate as a carbon source [19]. Production of extracellular compounds with siderophore activity was investigated with the Chrome Azurol S (CAS) reagent [28]. The presence of catechol compounds in cell-free succinate culture supernatants [19] was detected with the Arnow test [29]. Briefly, 1 vol of reagent A (0.5 N HCl), reagent B (10% Na nitrite, Na molybdate), and reagent C (1 N NaOH) were successively added to 1 vol of culture supernatant cleared by centrifugation at 16,0006 g. The reaction was measure by determining OD 510 after 10 min incubation at room temperature. DHBA (Sigma-Aldrich) was used as a standard in chemical and biological assays. Production of DHBA was biologically examined with cross-feeding assays using the Salmonella typhimurium enb-7 enterobactin mutant as previously described [15]. 
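Because DHBA served as the standard in the chemical assays, Arnow readings (OD510) can be converted into catechol concentrations with a simple standard curve. The sketch below shows one way to do this by linear regression; the standard-curve approach, the standard concentrations and the absorbance values are assumptions made for illustration and are not measurements from this study.

```python
import numpy as np

def fit_standard_curve(concentrations, od510):
    """Least-squares line OD510 = slope * [DHBA] + intercept for the Arnow assay."""
    slope, intercept = np.polyfit(concentrations, od510, 1)
    return slope, intercept

def catechol_concentration(sample_od510, slope, intercept):
    """Interpolate a sample's catechol concentration from its OD510 reading."""
    return (sample_od510 - intercept) / slope

# Hypothetical DHBA standards (µM) and their OD510 readings
standards_um = np.array([0, 25, 50, 100, 200])
standards_od = np.array([0.02, 0.11, 0.21, 0.40, 0.79])

slope, intercept = fit_standard_curve(standards_um, standards_od)
for label, od in [("supernatant A", 0.35), ("supernatant B", 0.03)]:
    print(f"{label}: ~{catechol_concentration(od, slope, intercept):.0f} µM catechol equivalents")
```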
Minimal inhibitory concentrations (MICs) of DIP were determined using M9 minimal medium [30] containing increasing concentrations of DIP; determinations were repeated at least three times, in duplicate each time. OD600 was used to monitor cell growth after overnight incubation at 37°C. Expression of EntA and EntB activity by cloned DNA was tested by transforming the E. coli AN193 and E. coli AN192 mutants with pMU925, which was obtained by PCR amplification and cloning of the ATCC 19606T genomic region encompassing the predicted entA and entB genes with primers 3606 (5′-GAACTGAACCATATGGCG-3′) and 3607 (5′-CGCAGTGGTTTCATCGTT-3′) (Fig. 1A). The same set of primers was used to PCR-clone the cognate chromosomal region from the AYE clinical isolate to generate the derivative pMU968. Primers 3206 (5′-CGCAGGCATCGTAAAGGG-3′) and 3211 (5′-TCTGCACAGCATCAACCG-3′) were used to PCR-amplify and clone the ATCC 17978 entA and entB orthologs (pMU804) (Fig. 1B). Production of DHBA and acinetobactin was examined by HPLC analysis with an Agilent 1100 LC instrument using succinate culture supernatants filtered through 0.45 µm cellulose acetate filter units (Spin-X centrifuge filter units, Costar, Cambridge, MA). Supernatants were fractionated on a Vydac C-8, 5 µm, 250 mm × 4.6 mm reversed-phase column (Grace Davison Discovery Sciences, Deerfield, IL). Water and acetonitrile containing 0.13% and 0.1% trifluoroacetic acid, respectively, were used as mobile phases. The gradient was as follows: 17% acetonitrile for 5 min, then 17% to 30% over 30 min, and thereafter held for 15 min. Detection was at 317 nm with a flow rate of 0.5 ml/min.
A549 infection assays
A549 human alveolar epithelial cells [32], provided by Dr. E. Lafontaine (College of Veterinary Medicine, University of Georgia, USA), were cultured and maintained in DMEM supplemented with 10% heat-inactivated fetal bovine serum at 37°C in the presence of 5% CO2 as previously described [33]. Uninfected A549 monolayers maintained in modified Hank's balanced salt solution (mHBSS, the same as HBSS but without glucose) for 24 h at 37°C in 5% CO2 remained viable, as determined by trypan blue exclusion assays. 24-well tissue culture plates were seeded with approximately 10⁴ epithelial cells per well and then incubated for 16 h. Bacterial cells were grown for 24 h in LB broth at 37°C with shaking at 200 rpm, collected by centrifugation at 15,000 rpm for 10 min, washed, resuspended, and diluted in mHBSS. The A549 monolayers were singly infected with 10³ cells of the ATCC 19606T parental strain or the s1 (basD) or 3069 (entA) isogenic derivatives. Inocula were estimated spectrophotometrically at OD600 and confirmed by plate count. Infected monolayers were incubated for 24 h in mHBSS at 37°C in 5% CO2. The tissue culture supernatants were collected, the A549 monolayers were lysed with sterile distilled H2O, and the lysates were added to the cognate tissue culture supernatants. Bacteria were collected from the resulting suspensions by centrifugation, resuspended in 1 ml of sterile distilled H2O, serially diluted and plated on nutrient agar. After overnight incubation at 37°C, colony forming units (CFUs) were counted and CFU/ml values for each sample were calculated and recorded. Counts were compared using Student's t-test; P values < 0.05 were considered significant. Experiments were done four times in triplicate, using fresh biological samples each time.
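The CFU/ml values and the between-strain comparison described above amount to back-calculating counts through the dilution series and applying a two-sample t-test. The Python sketch below illustrates that arithmetic; the plated volume, dilution factor, replicate counts and strain labels are placeholders invented for illustration, not data from these experiments.

```python
from scipy import stats

def cfu_per_ml(colonies, dilution_factor, plated_volume_ml=0.1):
    """Back-calculate CFU/ml from a colony count on one plate of a serial dilution."""
    return colonies * dilution_factor / plated_volume_ml

# Hypothetical counts from 10^-4 dilution plates (0.1 ml plated), four biological replicates each
strain_a_cfu = [cfu_per_ml(c, 1e4) for c in (183, 201, 167, 190)]
strain_b_cfu = [cfu_per_ml(c, 1e4) for c in (95, 88, 110, 102)]

t_stat, p_value = stats.ttest_ind(strain_a_cfu, strain_b_cfu)
print(f"strain A mean = {sum(strain_a_cfu) / len(strain_a_cfu):.2e} CFU/ml")
print(f"strain B mean = {sum(strain_b_cfu) / len(strain_b_cfu):.2e} CFU/ml")
print(f"Student's t-test: t = {t_stat:.2f}, P = {p_value:.4f} (significant if P < 0.05)")
```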
To determine bacterial relative fitness, the recovered CFUs were divided by the CFUs of the inoculum used to infect monolayers. G. mellonella killing assays Bacteria grown in LB broth were collected by centrifugation and suspended in phosphate-buffered saline solution (PBS). The number of bacteria was estimated spectrophotometrically at OD 600 and diluted in PBS to appropriate concentrations. All bacterial inocula were confirmed by plating serial dilutions on LB agar and determining colony counts after overnight incubation at 37uC. Ten freshly-received final-instar G. mellonella larvae (Grubco, Fairfield, OH) weighing 250-350 mg were randomly selected and used in killing assays as described previously [34]. Briefly, the hemocoel at the last left proleg was injected with 5-ml inocula containing 1610 5 bacteria 60.25 log of each tested strain using a syringe pump (New Era Pump Systems, Inc., Wantagh, NY) with a 26 GK needle. Each test series included control groups of noninjected larvae or larvae injected with sterile PBS or PBS containing 100 mM FeCl 3 . The test groups included larvae infected with the parental strain ATCC 19606 T , the s1 basD mutant or the 3069 entA insertion derivative, which were injected in the absence or presence of 100 mM FeCl 3 . Injected larvae were incubated at 37uC in darkness, assessing death at 24-h intervals over six days. Larvae were considered dead and removed if they did not respond to probing. Results were not considered if more than two deaths occurred in the control groups. Experiments were repeated three times using 10 larvae per experimental group and the survival curves were plotted using the Kaplan-Meier method [35]. P values,0.05 were considered statistically significant for the log-rank test of survival curves (SAS Institute Inc., Cary, NC). Results and Discussion The acinetobactin locus in different A. baumannii strains Previous reports described a 26.5-kb A. baumannii ATCC 19606 T chromosomal gene cluster involved in acinetobactin production and utilization [14,18]. Table 1 shows that the same bas-bau-bar 18-gene cluster, recently referred to as the acinetobactin gene cluster [17], is also found in A. baumannii AYE [36]. In contrast, the cognate clusters in strains AB0057 and ACICU include the additional putative genes AB57_2807 and AB57_2818 [37], and ACICU_02575 and ACICU_02586 [38], respectively. Furthermore, the A. baumannii AB307-294 bas-bau-bar gene cluster has three additional predicted genes; ABBFA_001054, ABBFA_001053, and ABBFA_001065 [37]. It is not clear whether these additional coding units are the result of sequencing and/or annotation artifacts and their potential role in siderophore production and utilization remains to be tested considering that most of them (AB57_2807, AB57_2818, ACICU_02575, ACICU_02586, ABBFA_001054 and ABBFA_001065) code for polypeptides containing 43 to 51 amino acid residues. Similarly, the annotation of the ATCC 17978 bas-bau-bar gene cluster [39] encompasses 21 rather than 18 predicted genes because of potential nucleotide sequencing errors ( Table 1). The products of coding regions A1S_2382 and A1S_2383 are highly related to the BasD predicted protein, while the products of the A1S_2376-A1S_2378 coding regions highly match that of the barA gene originally described in the ATCC 19606 T type strain [18]. Unfortunately, these errors were also included in a recent report analyzing potential iron acquisition functions expressed by A. baumannii using bioinformatics [17]. 
In spite of all these differences, a common feature of the bas-baubar gene cluster present in all these strains is the absence of an entA gene coding for a 2,3-dihydro-2,3-dihydroxybenzoate dehydrogenase needed for the biosynthesis of DHBA, which is a precursor needed for the production of catechol siderophores, such as enterobactin [21]. This observation indicates that a second locus must contain the entA ortholog, a possibility that is supported by our initial finding that the genome of the A. baumannii ATCC 17978 strain has an additional gene cluster (Table 2) potentially involved in siderophore biosynthesis and utilization [40]. The coding region A1S_2579 of this cluster, which was identified as cluster 2 [17], was annotated as a putative entA ortholog [39]. However, initial BLAST searches using A1S_2579 as a query did not identified entA orthologs as top matches in any of the fully sequenced and annotated A. baumannii genomes and the ATCC 19606 T partial genomic data (GenBank accession number NZ_ACQB00000000.1) deposited in GenBank by the Broad Institute as part of the Human Microbiome Project (http://www. broadinstitute.org/). Furthermore, attempts to identify the ATCC 19606 T entA gene by PCR amplification or Southern blotting, using ATCC 17978 genomic information to design appropriate primers and probes, failed to produce positive results (data not shown). Taken together, all these observations indicate that there are variations in the nucleotide sequence and chromosomal arrangements among the entA genes present in different A. baumannii strains. Cloning and testing of the ATCC 19606 T entA gene Since the in silico approach failed to locate the ATCC 19606 T entA gene, a functional complementation approach was applied using E. coli AN193, an entA mutant that does not produce enterobactin because of its inability to make the DHBA precursor. Transformation of this mutant with plasmid DNA isolated en masse from an ATCC 19606 T genomic library made in E. coli LE392, using the cloning cosmid vector pVK100, resulted in the isolation of the AN193-2631 derivative capable of growing in the presence of significantly higher DIP concentrations when compared with AN193 transformed with empty cloning vector. Transformation of E. coli AN193 with plasmid pMU711 isolated from the AN193-2631 derivative confirmed the ability of this cosmid clone to restore the iron uptake capacity of E. coli AN193 (data not shown). Restriction analysis of pMU711 digested with HindIII showed that it has an insert larger than 20 kb (data not shown). Subcloning into HindIII-digested pUC118 and nucleotide sequencing resulted in the identification of pMU748, which has a 2.7-kb HindIII insert ( Fig. 1A). This restriction fragment harbors a 771-nucleotide gene, the nucleotide sequence of which is identical to the ATCC 19606 T HMPREF0010_00620.1 locus found in the scaffold supercont 1.1 of the whole genome sequence uploaded to GenBank by the Broad Institute under accession number NZ_GG704572. This gene codes for a 28-kDa predicted protein highly related (E values lower than 16e 25 ) to the EntA protein found in a wide range of bacteria, showing the top BLASTx scores with products of the cognate A. baumannii AB0057 (AB57_1983), AB307-294 (ABBFA_001741), and ACICU (ACICU_01790) genes ( Table 3). Transformation of the E. coli AN193 entA mutant with either pMU748 (data not shown) or the PCR derivative pMU925, which includes a downstream entB ortholog (Fig. 1A), restored iron uptake proficiency to this derivative ( Fig. 2A). 
The function of the ATCC 19606 T entA ortholog was confirmed by the observation that the AN193-3101 transformant harboring pMU858, a pUC118 HindIII subclone in which entA was inactivated by inserting a DNA cassette coding for Km resistance into a unique EcoRV site (Fig. 1A), showed a growth similar to that of untransformed E. coli AN 193 cells when cultured in the presence of 250 mM DIP ( Fig. 2A). The role of the ATCC 19606 T entA ortholog in the acinetobactin-mediated iron acquisition process was confirmed further with the isogenic derivative 3069, which was generated by allelic exchange using pMU902 (Fig. 1A). This is a derivative of pEX100T, which does not replicate in ATCC 19606 T , harboring the entA::aph construct. Compared with the ATCC 19606 T parental strain, the 3069 derivative, which showed the predicted genetic arrangement by PCR and Southern blotting (data not shown), displayed a drastic growth defect (P = 0.0001) when cultured in M9 minimal medium containing increasing DIP concentrations (Fig. 3A). This response is similar to that displayed by the ATCC 19606 T s1 mutant impaired in BasD-mediated acinetobactin biosynthesis activity [14], which was used as a control. The CAS colorimetric assay showed that while the ATCC 19606 T succinate culture supernatants tested positive, no reaction could be detected with culture supernatants of the ATCC 19606 T 3069 and s1 mutants (data not shown). Furthermore, Arnow colorimetric assays showed that 3069 cells produced drastically reduced amounts of DHBA (P = 0.002), which were within the detection limit of the Arnow test (Fig. 3B). This finding was supported by the lack of crossfeeding of the S. typhimurium enb-7 reporter mutant (Fig. 3C), which uses DHBA as a precursor to produce enterobactin and grow under iron-chelated conditions. Finally, HPLC analysis of culture supernatants of cells grown in succinate medium showed the presence of two peaks with elution times of 8.925 and 10.272 min (Fig. 3D). Although these two peaks were absent in the sterile medium as well as in the supernatant of the ATCC 19606 T 3069 mutant, they could be detected in the ATCC 19606 T 3069 sample only when it was spiked with either pure DHBA or purified acinetobactin before HPLC analysis (Fig. 3D). These observations showed that the two peaks detected in ATCC 19606 T culture supernatants indeed correspond to DHBA and mature acinetobactin. Interestingly, the chromatograms shown in Fig. 3D and Fig. 4 indicate that ATCC 19606 T cells produce and secrete a significant amount of DHBA in addition to fully matured acinetobactin. A similar analysis of s1 succinate culture supernatants showed the presence of DHBA but not fully matured acinetobactin (data not shown), a result that is in accordance with the colorimetric data shown in Fig. 3B. Finally, the role of the ATCC 19606 T entA ortholog was confirmed with genetic complementation assays, which proved that electroporation of pMU951, a derivative of the shuttle vector pWH1266 harboring the entA wild-type allele expressed under the promoter controlling the expression of tetracycline resistance, was enough to restore the parental iron utilization phenotype in the ATCC 19606 T 3069.C derivative (Fig. 3A) as well as its capacity to produce DHBA (Fig. 3, panels B and C). This complementation was not observed with the ATCC 19606 T 3069.E derivative harboring the empty cloning vector pWH1266. 
Although not shown, there were no significant growth differences among the parental strain, the mutants and the complemented strains used in these experiments when they were cultured in LB broth without any selection pressure. Furthermore, the plasmid pMU951 was stably maintained in the ATCC 19606 T 3069.C strain as an independent replicon without detectable rearrangements. Taken together, all these results strongly indicate that the entA ortholog shown in Fig. 1A is essential for the production of DHBA, which is used by A. baumannii ATCC 19606 T cells as a key precursor for acinetobactin biosynthesis. The location of the entA gene outside the main locus involved in the biosynthesis, transport and secretion of acinetobactin resembles the arrangement of genes needed for the biosynthesis of anguibactin in the fish pathogen V. anguillarum 775 [41]. The pJM1 plasmid present in this strain contains most of the genes coding for anguibactin biosynthesis and transport functions, including an angA ortholog that is inactive because of a frameshift mutation [42]. Further analysis showed that the 2,3-dihydro-2,3-dihydroxy-benzoate dehydrogenase activity needed for anguibactin biosynthesis is indeed encoded by the chromosomal ortholog vabA, which is located within a 11-gene cluster that contains all the genetic determinants needed for the production and utilization of vanchrobactin [43]. Although this system is fully active in V. anguillarum RV22, an O2 serotype isolate that lacks pJM1, vanchrobactin is not produced in the serotype O1 V. anguillarum 775 (pJM1) strain because of an RS1 transposon insertion within vabF. Furthermore, in silico analysis of the predicted product of vabA and angA loci, after virtual correction of the frameshift present in the later gene, showed that they are only 32% identical [41]. These observations and the fact that the angA pJM1 plasmid copy is near transposable elements led to the hypothesis that transposition events resulted in the acquisition by horizontal transfer of two genes potentially coding for the same function that may have evolved independently [41]. A mechanism similar to this one could also explain the location of the ATCC 19606 T entA gene outside the acinetobactin cluster [17], a possibility that is supported by the recent observation that A. baumannii has an underappreciated capacity to rearrange its genome by swapping, acquiring or deleting genes coding for a wide range of functions, including those involved in iron acquisition [44]. Our results also provide strong support to our previous report that acinetobactin is the only high-affinity siderophore produced by this strain [14], when cultured under iron-chelated laboratory conditions, although it contains genetic determinants that could code for additional iron acquisition functions as deduced from recent comparative genomic analyses [16,17]. Considering these reports and the experimental data presented here, it is possible to speculate that the additional iron-acquisition related genes present in A. baumannii ATCC 19606 T are either not expressed or do not code for complete functional iron acquisition systems and represent remnants of DNA fragments acquired from other sources by horizontal gene transfer. Such outcomes, which are currently being explored, could be due to a situation similar to that reported for the fish pathogen V. anguillarum, where a vanchrobactin-producing ancestor acquired the pJM1 plasmid coding for the production and utilization of anguibactin [43]. 
Since anguibactin is potentially a better iron chelator than vanchrobactin, evolution and adaptation processes favored the emergence of V. anguillarum O1 serotype strains harboring the pJM1 plasmid and producing only anguibactin because of mutations in the vanchrobactin coding genes, but capable of using both anguibactin and vanchrobactin as it is the case with the 775 (pMJ1) strain [45]. Analysis of the ATCC 19606 T chromosomal region harboring entA and entB orthologs Sequence analysis of flanking DNA regions showed that a modE-modA-modB-modC gene cluster, which could code for an uncharacterized molybdenum transport system, locates upstream of ATCC 19606 T entA gene (Fig. 1A). Downstream, entA is separated by a 72-nt intergenic region from a predicted gene coding for an isochorismatase (EntB) ortholog, which is followed by a 524-nt intergenic region preceding a fur iron-dependent regulatory gene. The nucleotide sequence and genetic arrangement is the same as that reported for this strain by the Broad Institute Human Microbiome Project. Table 3 shows that the same gene cluster is also found in the genome of A. baumannii AB0057, AB307-294 and ACICU. The predicted product of the ATCC 19606 T entB gene depicted in Fig. 1A showed significant amino acid sequence similarity and the same length when compared with the products of the cognate AB0057 (AB57_1984), AB307-294 (ABBFA_001740), ACICU (ACICU_01791), and AYE (ABAYE1888) genes. The product of the ATCC 19606 T entB ortholog is also related to that of basF, which is located within the acinetobactin gene cluster originally described in this strain [18]. The product of basF is a 289-amino acid protein that contains the N-terminal isochorismatase lyase (ICL) domain, which is needed for DHBA biosynthesis from isochorismic acid, and the C-terminal aryl carrier (ArCP) protein domain, which is needed for tethering activated DHBA and chain elongation in the biosynthesis of siderophores such as enterobactin [46] and anguibactin [47]. In contrast, in silico analysis showed that the entB ortholog shown in Fig. 1A codes for a 213-amino acid residue protein that includes the N-terminal ICL domain but lacks the C-terminal ArCP protein domain. Based on this predicted protein structure, it is possible to speculate that this shorter EntB ortholog should code for the isochorismatase lyase activity. This possibility was examined by testing the iron-uptake proficiency of the E. coli AN192 entB mutant harboring either no plasmid or the derivatives AN192-3171 and AN192-3170 transformed with pMU925, which harbors the entA and entB genes shown in Fig. 1A, or pMU964, which harbors a copy of the basF gene present in the acinetobactin cluster, respectively. Fig. 2B shows that AN192-3171 (pMU925) and AN192-3170 (pMU964) grew significantly better (P,0.0001 and P = 0.0013, respectively) than the non-transformed AN192 strain when cultured in LB broth containing 250 mM DIP. This response indicates that both ATCC 19606 T entB orthologs complement the E. coli AN1932 entB mutant, allowing it to produce enterobactin and grow under ironchelated conditions. The complementation by the BasF ortholog is straightforward since this protein contains the ICL and ArCP protein domains needed for enterobactin biosynthesis. On the other hand, the complementation by the shorter ATCC 19606 T EntB ortholog can be explained by the previous observation that the mutation in E. 
coli AN192 affects only the functionality of the ICL domain of the protein produced by this derivative that was obtained by chemical mutagenesis [31]. This functional observation is further supported by our DNA sequencing data showing that the AN192 entB ortholog has two point mutations when compared with the parental AB1515 allele. One is a silent mutation that resulted in a G-to-A base transition at position 801 of the gene, which maps within the ArCP protein domain and produced no change in amino acid sequence. In contrast, the other mutation, which locates at position 593 and is also a G-to-A base transition, produced a Gly-to-Asp amino acid change at position 198 that maps within the ICL domain. This amino acid change, which is one residue away from Arg196 that is predicted to play a role in the interaction of the ICL domains in the functional EntB dimer protein [48], could be responsible for the lack of isochorismatase lyase in the AN192 mutant. Taken together, these results indicate that basF, which locates within the acinetobactin gene cluster, is the only fully functional entB ortholog present in the genome of the A. baumannii strains fully sequenced and annotated. Considering the different genetic context in which basF and the entB ortholog shown in Fig. 1A are found in different A. baumannii genomes and the differences in length (289 vs. 213 amino acid residues) and amino acid composition (52.4% similarity and 40.1% identity to one another) of their predicted products, it is possible to speculate that these two genes were transferred from unrelated sources by at least two independent horizontal mechanisms, one of which may have been driven by the need of acquiring the entA trait required for acinetobactin biosynthesis. These observations are in agreement with the recently described capacity of A. baumannii to rearrange its genome content [44], as proposed above for the presence of the entA ortholog in a genomic region outside the acinetobactin cluster. Analysis of the A. baumannii AYE entA ortholog The comparative genomic analysis of the ATCC 19606 T entA ortholog with other A. baumannii sequenced genomes showed that the clinical isolate AYE also harbors the gene cluster shown in Fig. 1A, which was annotated as ABAYE1887-ABAYE1894 [36]. However, this cluster contains an additional 210-nucleotide gene (ABAYE1890) ( Table 3), which was not included in any of the other fully sequenced A. baumannii genomes. ABAYE1890 codes for a hypothetical 69-amino acid protein and overlaps by 35 nucleotides with ABAYE1889, which codes for a predicted 229amino acid protein. Both genes were recently recognized by comparative genomic analysis as components of the siderophore biosynthesis gene cluster 5 [17]. However, it is important to note that the predicted product of ABAYE1889 is 26 amino acids shorter than the putative ACICU, AB0057 and AB307-0294 EntA orthologs. This observation suggests that the A. baumannii AYE ABAYE1889 entA gene may not code for a functional product needed for the biosynthesis of DHBA and acinetobactin. This possibility was supported by the HPLC analysis of AYE succinate culture supernatants, which showed an elution profile that does not include peaks corresponding to DHBA and acinetobactin that were detected in the ATCC 19606 T culture supernatant (Fig. 4). Interestingly, AYE succinate culture supernatants promoted the growth of S. typhimurium enb-7 in the presence of DIP in spite of the fact that it does not produce DHBA (data not shown). 
This observation indicates that A. baumannii AYE produces an uncharacterized non-catechol-based siderophore(s) capable of promoting growth under iron-chelated conditions. The failure of the ABAYE1889-ABAYE1890 genetic region to code for EntA activity was further confirmed by the fact that AN193-3179, a derivative transformed with pMU968 harboring the 2.8-kb AYE chromosomal region encompassing these predicted genes, grew as poorly as the non-complemented AN193 mutant when cultured in the presence of 250 mM DIP ( Fig. 2A). All these results indicate that the AYE isolate is a natural A. baumannii entA mutant that does not make acinetobactin because of the lack of DHBA production. HPLC analysis of AYE culture supernatants of cells grown in succinate medium supplemented with 100 mM DHBA confirmed this possibility since a peak with a retention time corresponding to acinetobactin could be detected only under this experimental condition (Fig. 4). Cloning, DNA sequencing and bioinformatic analysis of the 2.8kb AYE chromosomal region encompassing the ABAYE1889 and ABAYE1890 genes confirmed the original genome sequence report [36] and showed that the only difference between this region and the cognate regions of other A. baumannii chromosomes is the presence of an extra T at position 1,946,969 in the AYE genome. This single-base insertion, which could be due to DNA slippage, results in the two predicted annotated genes, neither of which code for a full-length EntA ortholog. Accordingly, in silico deletion of the extra T from the AYE genomic region results in a single predicted coding region, the product of which is an ortholog displaying the same number of amino acid residues predicted for the product of the ATCC 19606 T entA gene shown in Fig. 1A. All these results indicate that the AYE clinical isolate is a natural entA mutant incapable of producing acinetobactin, although this isolate tests positive with the CAS reagent. This situation could be similar to that of V. anguillarum 775 (pJM1) that only produces anguibactin but uses this siderophore as well as externally provided vanchrobactin to acquire iron under chelated conditions [43]. This is due to the inactivation of the vabF vanchrobactin chromosomal gene and the possibility that anguibactin is an iron chelator stronger than vanchrobactin, a condition that does not justify the production of two siderophores. Accordingly, the production and utilization of the AYE uncharacterized siderophore, which may have a higher affinity for iron than acinetobactin, could be mediated by genes located in cluster 1 identified by in silico genomic analysis [17] that remain to be characterized genetically and functionally. Furthermore, preliminary siderophore utilization bioassays showed that AYE cell-free succinate culture supernatants crossfeed the ATCC 19606 T s1 (basD) and 3069 (entA) acinetobactin production deficient mutants as well as the t6 (bauA) acinetobactin uptake mutant. These findings indicate that A. baumannii AYE acquires iron via an uncharacterized siderophore, which is different from acinetobactin but can be used by ATCC 19606 T cells via acinetobactinindependent mechanisms. All these results provide evidence supporting the ability of the ATCC 19606 T strain to use xenosiderophores produced by related and unrelated bacterial pathogens. Analysis of the ATCC 17978 chromosomal region harboring the entA ortholog The ATCC 17978 A1S_2562-A1S_2581 cluster (Table 2), cluster 2 according to Eijkelkamp et al. 
[17], includes the A1S_2579 gene coding for a putative EntA ortholog [39]. The role of this gene in DHBA production was confirmed by the observation that the AN193-2943 transformant harboring pMU804 (Fig. 1B) displays an iron-restricted response similar to that detected with AN193-3172 harboring pMU925, a response that was not detected with AN193-2944 harboring pMU807 ( Fig. 2A). The latter plasmid is a derivative of pMU804 with a transposon insertion within the annotated entA (A1S_2579) coding region (Fig. 1B). Detailed analysis of the nucleotide sequence and annotation of the A1S_2562-A1S_2581 cluster (GenBank accession number NC_009085.1) showed that because of potential DNA sequencing errors, this cluster is most likely composed of 18 predicted coding regions rather than the 20 genes originally reported [39]. Fig. 1B and Table 2 show that A1S_2563 and A1S_2564 could be a single genetic unit coding for a predicted siderophore interacting protein that belongs to the ferrodoxin reductase protein family. Our DNA sequencing data, which confirmed the DNA sequence originally reported [39], and in silico analysis showed that there are two potential coding regions between A1S_2568 and A1S_2570 ( Fig. 1B and Table 1), with one of them annotated as the A1S_2569 coding region and the other omitted in the original report. Our analysis showed that the predicted products of these two genes are truncated transposases found in a wide range of bacterial genomes including members of the Acinetobacter genus. We also observed that A1S_2573 and A1S_2574 most likely correspond to a single genetic unit coding for a predicted 2,3 dihydroxybenzoate-AMP ligase (EntE) (Fig. 1B and Table 1), which is needed for the activation of DHBA and further biosynthesis of DHBA-containing siderophores such as enterobactin and anguibactin [10]. Nucleotide and amino acid comparative analyses also showed that the products of A1S_2573/A1S_2574, A1S_2580 and A1S_2581 are significantly related to that of the basE, basF and basJ genes, respectively. The products of these three genes, which work together with EntA, are needed for the biosynthesis and activation of DHBA using chorismate as a precursor. Taken together, all these observations indicate that there is a potential redundancy in the A. baumannii ATCC 17978 functions needed for the biosynthesis and utilization of DHBA as a siderophore precursor. In contrast, and as it was observed with the other A. baumannii genomes, the ATCC 17978 entA ortholog, which is represented by the coding region A1S_2579 (Fig. 1B and Table 1), is a unique coding region that locates outside the bas-bau-bar gene cluster. Furthermore, the genetic arrangement and content of this ATCC 17978 gene cluster containing the entA ortholog is different from that shown in Fig. 1A and described in the genome of other A. baumannii strains. This observation resulted in the classification of this cluster as cluster 2 [17], which at the time that report was published was found only in the ATCC 17978 genome. However, genomic data recently uploaded into the BaumannoScope web site (https://www. genoscope.cns.fr/agc/microscope/about/collabprojects.php?P_id = 8) indicate that the same gene cluster is also present in the genome of the A. baumannii strains 6013113 (GenBank accession number ACYR02000000.2) and 6013150 (GenBank accession number ACYQ00000000.2). Interestingly, the ATCC 17978 A1S_2562-A1S_2581 gene cluster has the same genetic content and organization as that of the A. 
baylyi ADP1 ACIAD2761-ACIAD2776 gene cluster (GenBank accession number NC_005966.1) with the exception of the presence of the transposase coding regions. A gene potentially coding for a transposase fragment is located outside the A. baylyi ADP1 ACIAD2761-ACIAD2776 gene cluster, downstream of ACIAD2761 [49]. In contrast, the ATCC 17978 cluster 2 includes two putative genes coding for transposase-related proteins located between the A1S_2568 and A1S_2570 annotated coding regions ( Fig. 1B and Table 1), one of which was not included in the original report [39]. Furthermore, our previous analysis showed that cluster 2 includes perfect inverted repeats located at the ends of the cluster [40]. Taken together, these observations suggest that this particular cluster was mobilized by horizontal gene transfer among environmental and clinical Acinetobacter strains, which must acquire iron in different free-iron restricted ecological niches. Role of the ATCC 19606 T entA gene in virulence The role of the entA gene in the virulence of A. baumannii ATCC 19606 T was tested using A549 human alveolar epithelial cells and G. mellonella caterpillars as ex vivo and in vivo experimental models. The tissue culture assays showed that the number of ATCC 19606 T bacteria recovered after 24 h incubation was significantly greater than the s1 (P = 0.0048) and 3069 (P = 0.000045) derivatives (Fig. 5A), affected in acinetobactin biosynthesis at intermediate (basD) and early (entA) biosynthetic stages, respectively. It was also noted that the persistence of 3069 was significantly lower than that of the s1 mutant (P = 0.0024) (Fig. 5A). It is important to mention that incubation of A. baumannii ATCC bacterial growth. Infection of G. mellonella larvae showed that more than 40% of them died six days after injected with the parental ATCC 19606 T strain, a value that is significantly different (P = 0.0014) from that obtained with animals injected with sterile PBS (Fig. 5B). Infection of caterpillars with the 3069 entA mutant showed that the killing rate of this derivative is not statistically different from that of the PBS negative control (P = 0.6705) but significantly different from the killing rate of the parental strain (P = 0.0005). On the other hand, the s1 derivative is significantly more virulent than the 3069 mutant (P = 0.043) and almost significantly different from the parental strain (P = 0.0820). The killing rates of the s1 and 3069 mutants were corrected to values statistically indistinguishable from the parental strain when the inocula used to infect the larvae were supplemented with 100 mM FeCl 3 (Fig. 5C). This response shows that the virulence defect of the s1 and 3069 isogenic mutants is due to their inability to acquire iron when injected into G. mellonella. Taken together, these results demonstrate that inactivation of entA produces a more drastic reduction in the capacity of A. baumannii ATCC 19606 T cells to persist in the presence of A549 cells, which represent a target affected during the respiratory infections this pathogen causes in humans, or infect and kill G. mellonella larvae, an invertebrate host capable of mounting a complex innate immune response similar to that of vertebrate animals [50], when compared to the response obtained with the basD mutant. 
This response could be due to the fact that the 3069 derivative does not produce DHBA and acinetobactin intermediates, which are secreted by the s1 mutant and could have some functions in iron acquisition and virulence as it was shown with Brucella abortus [51,52]. Thus, these acinetobactin precursors together with DHBA could bind iron, although less efficiently than acinetobactin, and provide s1 cells with enough iron to persist and cause host injury, although to a much reduced extent when compared with the response obtained with ATCC 19606 T cells. Conclusions The work described in this report shows that a single A. baumannii entA functional ortholog, which is essential for the biosynthesis of the acinetobactin precursor DHBA, is located outside the acinetobactin gene cluster, which otherwise codes for all biosynthesis, secretion and transport functions related to iron acquisition mediated by this high-affinity siderophore. Although the same genetic arrangement is found in all fully sequenced and annotated A. baumannii genomes, the genetic context within which the entA ortholog is found varies among different clinical isolates. In one group of strains this gene is next to genes coding for putative molybdenum transport, as in ATCC 19606 T . In another group, entA is located within a large gene cluster, which could code for an alternative uncharacterized siderophore-mediated system, as in ATCC 17978. Interestingly, this cluster also includes genes coding for potential DNA mobility functions and is flanked by perfect inverted repeats, a feature that may explain its transfer by lateral processes. Nevertheless, in all cases examined the entA ortholog is always next to an entB ortholog, which codes for a protein that is able to catalyze the production of DHBA from isochorismic acid but cannot promote the completion of the acinetobactin biosynthesis process because of the lack of the Cterminal aryl carrier (ArCP) protein domain. This is in contrast to the presence of basF within the acinetobactin cluster that codes for a fully functional EntB ortholog. All these findings indicate that all genetic components needed for iron acquisition via the acinetobactin-mediated system involved horizontal transfer as well as complex chromosomal rearrangement processes. All these observations are in agreement with the recent observation that A. baumannii has the capacity to acquire, lose and/or shuffle genes, a number of which could code for virulence factors involved in the pathogenesis of the infections this bacterium causes in humans [44]. Our study also shows that the presence of all genetic determinants needed for the biosynthesis and utilization of acinetobactin does not warrant its active expression and may not reflect a virulence advantage when compared to other strains; the AYE clinical isolate is a natural entA mutant incapable of producing DHBA and acinetobactin. However, AYE is an ironuptake proficient strain that seems to acquire this essential metal due to the expression of a siderophore-mediated system that remains to be functionally characterized. This finding reinforces our initial observation that different A. baumannii clinical isolates can express different iron acquisition systems [12]. Finally, we show that acinetobactin intermediates and DHBA, which are produced in addition to acinetobactin in the ATCC 19606 T strain, play a role in the virulence of A. 
baumannii when tested using ex vivo and in vivo infection experimental models, although to a lesser extent when compared to the role of the fully matured acinetobactin-mediated system.
Linking seasonal N2O emissions and nitrification failures to microbial dynamics in an SBR wastewater treatment plant

Highlights
• Strong correlation of nitrite peaks, seasonal N2O emissions and microbial dynamics.
• Reactors with a stable microbial community do not exhibit nitrification failure.
• AOB are quite stable; NOB disappear in disturbed reactors.
• Standard engineering approaches do not improve plant performance.
• Loss and gain of NOB activity coincides with loss and gain of filamentous bacteria.

Introduction
Nitrous oxide (N 2 O) is the third most important greenhouse gas (GHG), contributing roughly 8% to the globally emitted GHG potential of anthropogenic origin ( IPCC, 2013 ). Additionally, it is considered the dominant ozone depleting substance in the stratosphere ( Ravishankara et al., 2009 ). Biological nitrogen removal during wastewater treatment can cause high N 2 O fluxes to the atmosphere with a significant contribution to global N 2 O emissions ( Vasilaki et al., 2019 ). In wastewater treatment plants (WWTP), emissions ranging from very low amounts up to a few percent of the total nitrogen load were shown to exhibit a strong seasonal pattern ( Gruber et al., 2020 ). Typically, emissions were high between March and June and low between July and November ( Chen et al., 2019 ). N 2 O in wastewater treatment systems can be produced by ammonia-oxidizing bacteria (AOB) and heterotrophic denitrifying bacteria (DNB) ( Schreiber et al., 2012 ). AOB can produce N 2 O through hydroxylamine oxidation and nitrifier denitrification ( Caranto and Lancaster, 2017 ;Wrage-Mönnig et al., 2018 ). DNB produce N 2 O as an intermediate during denitrification ( Von Schulthess and Gujer, 1996 ). Chemical oxidation of hydroxylamine to N 2 O is the only known abiotic source and mostly occurs in systems with high ammonium concentrations and high or low pH ( ≥ 8 or ≤ 5), such as in side stream treatment for reject water from sludge treatment ( Soler-Jofra et al., 2020 ). In general, the abiotic reactions are of minor importance in biological nitrogen removal systems ( Su et al., 2019 ). In activated sludge systems, high biological production and emissions of N 2 O have been linked to several patterns, such as i) ammonia or toxicity shocks and quickly changing process conditions, ii) low dissolved oxygen concentrations and increased concentrations of nitrite (NO 2 − ), iii) transient zones with alternating aerobic/anoxic conditions, and iv) limitation of organic substrate ( Vasilaki et al., 2019 ). However, these factors are not exclusive and could only partly explain emission patterns assessed in long-term monitoring campaigns ( Vasilaki et al., 2019 ). Statistical regression algorithms and mechanistic process modeling based on the activated sludge modeling framework have been applied with limited success to model N 2 O emissions from WWTP ( Ni and Yuan, 2015 ;Song et al., 2020 ;Vasilaki et al., 2018 ). Thus, to better understand the N 2 O emissions from WWTP and identify relevant mechanisms, new aspects may have to be taken into account. Among other factors, microbial community dynamics has been proposed in previous studies as a potential driver of long-term N 2 O dynamics ( Daelman et al., 2015 ). The activated sludge in a WWTP is a unique engineered ecosystem consisting of a complex microbial community that orchestrates the biological removal of pollutants in the wastewater ( Wu et al., 2019 ).
However, as with all complex ecosystems, minor environmental changes may trigger internal dynamics in activated sludge that result in substantial impacts on the microbial community and its performance ( Bürgmann et al., 2011 ;Griffin and Wells, 2017 ;Johnston and Behrens, 2020 ;Johnston et al., 2019 ;Shade et al., 2012 ). Previous studies have reported a reproducible, seasonally driven pattern for the bacterial alpha diversity at multiple WWTP ( Griffin and Wells, 2017 ;Johnston et al., 2019 ). Microbial diversity in temperate climates dropped dramatically at the beginning of the winter season (November and December), started to increase at the end of spring (April/May) and peaked at the end of autumn (October). Furthermore, these seasonal patterns appear to have a significant impact on the performance of valuable members involved in the nitrification but also other pollutant removal processes ( de Celis et al., 2020 ;Ju et al., 2014 ). Biological nitrogen removal through nitrification and denitrification in WWTP includes multiple species and can exhibit seasonal variation ( Ju et al., 2014 ). While denitrification can be performed by a large number of organisms and there is therefore a high degree of functional redundancy in most cases ( Lu et al., 2014 ), nitrification activity is linked to only a few specialized organisms ( Siripong and Rittmann, 2007 ). In conventional wastewater treatment with activated sludge, nitrification is typically a twostep process, with AOB oxidizing ammonium to NO 2 − and nitrite oxidizing bacteria (NOB) oxidizing nitrite to nitrate. In biofilm systems and activated sludge with high solid retention times (SRT), complete nitrification performed by a single organism ( Comammox ) can be important ( Cotto et al., 2020 ), but is expected to be a minor contributor to N 2 O emissions ( Han et al., 2021 ). Several factors such as insufficient solids retention times (SRT), low oxygen concentrations, low temperatures, elevated pH values and increased free ammonia concentrations have been linked to the loss of certain NOB species in activated sludge and NO 2 − accumulation ( Huang et al., 2010 ;Ren et al., 2019 ;Vuono et al., 2015 ). Similarly, yearlong community assembly studies in WWTP have reported lower abundances for nitrifiers during winter, especially for NOB from the Phylum Nitrospira ( Griffin and Wells, 2017 ). However, functional redundancy and niche differentiation for the NO 2 − oxidation process in the activated sludge microbiome is theoretically possible given different NOB species present, such as Nitro-spira, Nitrobacter and Ca. Nitrotoga ( Huang et al., 2010 ;Lucker et al., 2015 ) . Factors inducing a seasonal change in the NOB community of a full-scale WWTP and how such changes affect NO 2 − accumulation as well as N 2 O production have not yet been studied. Here, we test the hypothesis that seasonal NO 2 − accumulation and N 2 O emission episodes can be linked directly or indirectly to shifts in the activated sludge microbiome. Of interest for full-scale operation are changes in the nitrogen converting populations resulting in reduced nitrification performance and potentially causing increased N 2 O production. To address our research questions, we combined an extensive N 2 O measurement campaign over 1.5 years and 16S rRNA sequencing for microbial community analysis during two seasonal N 2 O emission episodes. Using the floating flux chamber method, as described in Gruber et al. 
(2020) , N 2 O emissions were assessed on six parallel SBR reactors in a Swiss WWTP. Using operational data and multivariate and ecological statistics, activated sludge composition analysis allowed us to uncover microbial dynamics that followed the trajectory of nitrification failures and N 2 O emission episodes. To the best of our knowledge, this is the first study to discuss shifts in microbial community composition as a potential cause for a seasonal N 2 O emission pattern and nitrite accumulation based on long-term data of a full-scale WWTP. Field site The study was performed at the municipal WWTP of Uster (Switzerland, 47°21′02.8″N 8°41′34.0″E). On average, the plant treats 16,000 m 3 of wastewater per day and is designed for a nutrient load of 45,000 person equivalents (PE) with an average loading of 35,000 PE. Detailed information on the influent characteristics can be found in Table S1, Supplementary Information (SI). After mechanical treatment by screening, grit chambers, sand and fat traps, and primary clarification, the wastewater enters the biological stage. The biological treatment step consists of six sequencing batch reactors (SBR) with a volume of 3000 m 3 each. On average, the total solids retention time (SRT) was 34 days and the aerobic SRT 10 days. Operating conditions of the SBRs are described in Table S2, SI. The SBRs were operated with dynamic cycle times depending on the same rules for all reactors (Table S3, SI). A yearly average SBR cycle includes a fixed sequence of process steps (total time = 3.5 h): 45 min feeding, 90 min reaction phase (30 min anoxic, 60 min aerobic), and 75 min settling and decanting. The total cycle length as well as the length of each step vary substantially over a year. The operation of the reaction and settling phases is adapted seasonally. During the warmer months and if sufficient nitrification capacity is available, a pre-anoxic phase is implemented. When nitrification performance is limiting, the reaction phase is fully aerated. The settling phase is adapted depending on the actual settling velocity. Following the biological treatment, the wastewater is polished in a rapid sand filtration and discharged into the environment. The SBRs are controlled and monitored with several online sensors and 24 h composite samples taken at multiple treatment steps of the WWTP (after primary clarifier, after biological treatment, and after filter). Except for the O 2 -probe, the online liquid sensors are situated in the analytics room of the WWTP where mixed liquor from the reactors is pumped to two identical monitoring trains equipped with multiple sensors ( Fig. 1 ). Each monitoring train receives mixed liquor from three reactors (R1, R3, R5 or R2, R4, R6). Each reactor is sampled for 5 min, consisting of a flushing period of the monitoring train to remove the mixed liquor from the previous reactor and a measurement phase. For the present study, the following online signals were used for further analysis: NH 4 + concentration, NO 3 − concentration, O 2 concentration, pH and TS concentration (Table S4, SI). Furthermore, data on wastewater flow, excess sludge flow, air flow, wastewater temperature, dosage of precipitant and sludge settling velocity were used to analyze process performance. To compare AOB and NOB activity among reactors, activities were estimated for each SBR cycle by subtracting the concentrations of NH 4 + and NO 3 − measured at the beginning and the end of an aeration phase and dividing by the duration of the aeration phase.
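To make the activity estimate described above concrete, the following minimal Python sketch computes volumetric AOB and NOB activities for a single aeration phase from the online NH 4 + and NO 3 − signals. The data frame, column names and units are illustrative assumptions for this sketch and do not correspond to the plant's actual tag names or control software.

```python
# Minimal sketch of the per-cycle activity estimate described above.
# Assumption: `phase` is a pandas DataFrame with a DatetimeIndex covering one
# aeration phase and columns "NH4" and "NO3" in mg N/L (illustrative names).
import pandas as pd

def aeration_phase_activities(phase: pd.DataFrame) -> dict:
    duration_days = (phase.index[-1] - phase.index[0]).total_seconds() / 86400.0
    nh4_removed = phase["NH4"].iloc[0] - phase["NH4"].iloc[-1]   # NH4+ oxidized, mg N/L
    no3_produced = phase["NO3"].iloc[-1] - phase["NO3"].iloc[0]  # NO3- formed, mg N/L
    return {
        "AOB_activity": nh4_removed / duration_days,   # mg N/L/d
        "NOB_activity": no3_produced / duration_days,  # mg N/L/d
    }
```

Expressed this way, both estimates are in mg N per liter and day, the same scale as the NOB activities of around 20 mgN/l/d reported in the results.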
During the second campaign, NO 2 − concentration was tracked online with UV/VIS sensors in both monitoring trains. The following operational data were used as input for a Pearson correlation analysis: oxygen concentration, total and aerobic SRT, anoxic cycle time, settling velocity, precipitant dosage, N 2 O emissions, estimated AOB and NOB activity and temperature. From weekly lab measurements, we extracted the following variables in the effluent of the biological treatment and after the sand filter: NO 2 − effluent concentration, NH 4 + effluent concentration, transparency determined with the Snellen method (subsequently referred to as transparency), and sludge volume index (SVI) ( Table S4, SI). N 2 O measurement and monitoring campaigns The N 2 O monitoring campaign was conducted at the Uster WWTP from February 2018 to July 2019. The emissions were assessed using an adapted version of the flux chamber for off-gas monitoring on WWTP. At least one flux chamber was installed on every reactor ( Fig. 1 ). A detailed description of the monitoring setup can be found in Gruber et al. (2020) . The emissions at Uster WWTP exhibited a strong seasonal pattern with two extended emission peaks (February 2018 to May 2018; March 2019 to May 2019) and low emissions between the two peaks. The study focuses on the processes around the two peaks, subsequently called campaign 1 and campaign 2. As stated above, the operation of Uster WWTP is adapted depending on wastewater flow and plant performance, changing significantly over a year. During campaign 1 and campaign 2, extended periods of process failure on the majority of reactors were observed with high NO 2 − effluent concentrations and bad settling qualities of the activated sludge. An overview of the WWTP operational changes and mitigation strategies is provided in Table 1 . Table 2 gives detailed information on sludge exchange for each event. Activated sludge sampling and DNA extraction The activated sludge sampling was performed on a weekly basis for selected reactors during the sampling campaigns. To reduce the number of samples, R4 was completely excluded from the sampling for the first campaign given the highly similar behavior of all reactors. During the second campaign, samples were collected from all reactors. Overall, we sequenced 53 sludge samples from campaign 1 and 47 samples from campaign 2. For each sample, a 50 ml tube of mixed liquor was collected when the reactors were fully mixed during the aeration phase or the anoxic mixing phase and immediately transported to the lab. In the lab, 2 ml tubes were filled with the mixed liquor and centrifuged at 6000 rcf and 4 °C for two minutes. The supernatant was withdrawn, and the procedure was repeated twice. Three aliquots of each sample were stored at −20 °C for further processing. Nucleic acids from the 1st campaign were extracted with the MoBio PowerSoil kit (Qiagen, Germany) following the standard operating procedure of the extraction kit. Nucleic acids from the 2nd campaign were extracted based on a method modified from Griffiths et al. (2000). Activated sludge samples from every time point were transferred to 1.5 ml Matrix E lysis tubes (MPbio) and 0.5 ml of both hexadecyltrimethylammonium bromide buffer and phenol:chloroform:isoamyl alcohol (25:24:1, pH 6.8) was added. The activated sludge was lysed in a FastPrep machine (MPbio), followed by nucleic acid precipitation with PEG 6000 on ice.
Nucleic acids were washed three times with ethanol (70%) and dissolved in 50 μl DEPC-treated, RNase-free water. For all samples, DNA quality and quantity were assessed by using agarose gel electrophoresis and a NanoDrop ND-2000c (Thermo Fisher Scientific, USA). Sequencing 16S rRNA gene amplicon sequencing from the 1st campaign was performed at the University of Basel (Switzerland) on an Illumina MiSeq platform, based on a paired-end approach (300 bp, V3-V4) and the primer pair 341f and 806r, resulting in an average number of 92,200 ± 34,700 sequences. Due to the COVID-19 outbreak and the constraints it entailed, we were not able to perform sequencing of the samples from the second campaign at the same sequencing service provider. Samples from the 2nd campaign were sequenced at DNASense ApS (Aalborg, Denmark, www.dnasense.com ), using the same approach and primers, resulting in an average number of 30,800 ± 5600 sequences. Although the same PCR chemistry (2 × 300 bp, V3/V4 region) and Illumina sequencer were used, the outcomes from the two sequence providers differed significantly in the number and quality of sequences, which made it particularly challenging to analyze both sequence sets together. Therefore, and due to the different DNA extraction protocols used, the microbial data from both campaigns were analyzed as separate datasets although they were obtained from the same WWTP. Sequence analysis and microbial community analysis Raw sequences from both sequencing runs were analyzed within the QIIME2 framework ( Caporaso et al., 2010 ). Amplicon sequence variants (ASVs) were produced with the DADA2 algorithm (Callahan et al., 2016) and taxonomically classified using the MiDAS database (Nierychlo et al., 2020). More information on sequence analysis and subsequent ecostatistics can be found in Section S2 (SI). N 2 O emission, plant performance and incomplete nitrification During our N 2 O monitoring campaign at Uster WWTP, the biological treatment went through two extended periods of severe nitrification and settling failure leading to high NO 2 − concentrations and turbidity in the effluent. A detailed overview of the performance and operation of the biological treatment during both periods is shown in Fig. 2 . Starting in March 2018 and April 2019, increased N 2 O emissions, very low nitratation performance (NO 2 − in effluent), bad settleability of the activated sludge (SVI) and a turbid effluent (low transparency value) were the most important process failure characteristics observed over a period of two to three months ( Fig. 2 ). After an extended transition phase of roughly one month, the reactors reverted to a satisfying treatment performance (as before the process failure period) and emitted very low amounts of N 2 O during both campaigns. Interestingly, the transition between phases was not synchronized between the different reactors. This asynchrony of the recovery is highlighted by the high standard deviations for the N 2 O emissions and the estimated NOB and AOB activity from mid-April 2018 to mid-May 2018 and in May 2019 ( Fig. 2 a; for individual reactor data see SI Figs. S1-3). During both campaigns, NO 2 − concentrations in the mixed effluent of all reactors reached very high values of up to 9.3 mgNO 2 − -N/l during campaign 1 and 4.9 mgNO 2 − -N/l during campaign 2, as shown in Fig. 2 b. While NO 2 − concentrations increased within a month from satisfying to peak concentrations, the return to normal concentrations took two to three months.
Although the rapid sand filtration for effluent polishing could reduce some of the produced NO 2 − , the effluent concentrations were still dramatically higher than the target value of 0.3 mgNO 2 − -N/l of the Swiss water protection law. The NO 2 − concentrations correlated negatively with the observed average NOB activity ( r = −0.61, p < 0.001, n = 81). While the NOB activity dropped by up to 100% to levels around 20 mgN/l/d, AOB activity decreased only slightly (campaign 2) or remained stable and increased later (campaign 1, cluster E). Therefore, NH 4 + effluent concentrations increased slightly but remained clearly below the discharge limit of 2 mgNH 4 + -N/L after the filter. The transparency of the effluent dropped in parallel with the decreasing NOB activity ( Fig. 2 b, Fig. S4, Figs. S5 and S6, SI). The sludge settling characteristics changed dramatically, leading to high SVI values and low sludge settling velocities ( Fig. 2 , Figures S5 and S6, SI). Both properties showed a medium negative correlation ( r = −0.51, p < 0.001, n = 332) and were heavily affected during both process failure phases. The WWTP emitted significant amounts of N 2 O during both campaigns. During peak days, up to 30% of the influent nitrogen load was emitted as N 2 O, resulting in a massive impact on the greenhouse gas balance of the WWTP. N 2 O emissions showed a close and highly significant positive correlation with NO 2 − concentrations in the effluent of the biological treatment ( r = 0.81, p < 0.001, n = 60). Generally, the emission pattern was highly variable. Under wet weather conditions, e.g., at the beginning of April 2018, N 2 O emissions dropped to very low levels and then peaked only a few days later when the influent wastewater amount returned to dry weather conditions. Effluent NO 2 − concentrations and transparency values from the biological treatment indicate that similar events of incomplete nitrification were observed in the spring seasons of preceding years (Fig. S3). Despite the evident periodicity of the nitrification failure episodes, the two campaigns indicate a different progression of process performance in different years. In campaign 1, NO 2 − rose and peaked rapidly, and the estimated NOB activity dropped accordingly to levels close to zero at the beginning of March. The effluent transparency mirrored the pattern of the NO 2 − concentrations. In campaign 2, the decline of NOB activity and the increase of the NO 2 − effluent concentration happened more gradually, with a peak in March, while the effluent transparency value reached its minimum one month before the NO 2 − concentrations. Interestingly, the process failure phenomenology was overall less dramatic in campaign 2 compared to campaign 1 ( Fig. 2 , Fig. 3 , Fig. S9). While all reactors performed similarly and exhibited a partial failure of nitrification and settling during campaign 1, R1 and R3 did not exhibit episodes of dramatic process instabilities during campaign 2. This fortuitous development allowed a comparative analysis of the characteristics of failing and functioning tanks in campaign 2. Elevated NO 2 − concentrations ( ≥ 1 mg NO 2 − -N/L) during aeration can to some extent be observed in all reactors (Fig. S7, SI). However, R1 and R3 during campaign 2 had enough nitrite oxidation and denitrification capacity to avoid a drastic long-term NO 2 − accumulation (Fig. S10, SI).
Additionally, the N 2 O emissions of R1 and R3 were clearly lower compared to the other reactors ( Fig. 3 c). The estimated AOB activity, however, was comparable in all reactors ( Fig. 3 a). After the transient loss of nitrification and settling performance, overall process performance returned to the previous levels. After sludge exchange in the low performing reactors, settling and nitrite oxidation performance increased significantly. Mitigation measures applied by the operators and correlation analysis In order to reduce the duration of the process failure phases in campaign 1 compared to previous years, the operators changed operation parameters according to the following four operational strategies ( Table 1 ): i) increase of aerobic SRT to retain more nitrifiers (see Fig. S8, SI), ii) increase the oxygen concentration during aeration to increase aerobic activity (see Fig. 2 c), iii) reduce or skip the anoxic reaction phase to allow lengthening the aeration phase ( Fig. 2 c), and iv) replacement of the activated sludge with sludge from a well running system (see Fig. 3 , Table 2 ). In the sec-ond campaign, dissolved oxygen and aerobic SRT were only slightly increased, since the strategies were not successful during campaign 1 ( Fig. 2 c). Aerobic reaction phases were extended by reducing or skipping the anoxic reaction phase in both campaigns ( Fig. 2 c). Overall, the strategies i), ii) and iii) were found insufficient, as they did not accelerate the recovery of nitrification performance ( Fig. 2 c, Figure S11: DO, aerobic SRT, anoxic time). The complete exchange of activated sludge (strategy iv) appeared to be the only successful strategy to recover treatment performance ( Fig. 3 ). In order to investigate potential causes for the seasonal process failure, Pearson correlation analysis was performed with standard operational parameters, performance indicators and influent indices (Fig. S11, SI). Although correlation analysis has been applied in previous N 2 O monitoring studies with limited success, WWTP operators often rely on strategies based on empirical correlations to address unexpected performance issues like incomplete nitrification. NO 2 − ( r = 0.8, p < 0.001, n = 59) and COD ( r = 0.71, p < 0.001, n = 59) effluent concentrations showed the highest correlations with N 2 O emissions. N 2 O emissions showed a moderate negative correlation with temperature ( r = −0.48, p < 0.001), and NOB activity ( r = −0.5, p < 0.001), as well as a weak negative correlation with anoxic cycle time ( r = −0.32, p < 0.001). While temperature only correlated on a daily average and is thus assumed to influence the emissions only indirectly, the latter two appear to be highly relevant variables for NO 2 − accumulation and N 2 O emissions. No other significant correlations with operational parameters were found. Overall, the correlation analysis does not yield any strategies to optimize plant performance, since all process optimization strategies applied were shown to be ineffective and therefore exhibited correlations with N 2 O contrary to the intended effect. Microbial community dynamics as a driver of N 2 O emissions and NO 2 − accumulation As we were not able to explain the observed N 2 O dynamics and concomitant nitrification failures based on WWTP operational parameters, we decided to investigate the role of microbial community dynamics as a potential driver. 
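For reference, the Pearson correlation screen applied to the operational data in the preceding paragraphs can be sketched as follows. The sketch assumes the variables are available as an aligned table of daily averages; the column names are placeholders, and this is not the analysis code actually used in the study.

```python
# Illustrative Pearson correlation screen between daily N2O emissions and other
# operational/performance variables. `daily` is assumed to be a pandas DataFrame
# of daily averages with placeholder column names such as "N2O", "NO2_effluent",
# "temperature", "aerobic_SRT", "anoxic_time", "NOB_activity".
import pandas as pd
from scipy import stats

def pearson_screen(daily: pd.DataFrame, target: str = "N2O") -> pd.DataFrame:
    rows = []
    for col in daily.columns:
        if col == target:
            continue
        pair = daily[[target, col]].dropna()          # pairwise complete observations
        r, p = stats.pearsonr(pair[target], pair[col])
        rows.append({"variable": col, "r": r, "p": p, "n": len(pair)})
    # Sort by absolute correlation strength, strongest first
    return pd.DataFrame(rows).sort_values("r", key=abs, ascending=False)
```

Such a screen only flags linear associations, such as the reported correlation between effluent NO 2 − and N 2 O, and by itself does not identify causal drivers.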
We used 16S rRNA gene sequencing analyses to obtain time-series data of the microbial community composition, with the goal of correlating the process performance with changes in the activated sludge microbiome. To identify distinct phases in the microbial community composition over time, we applied a hierarchical clustering approach to the ASV abundance table (amplicon sequence variants reflecting microbial "species") of all samples from the different reactors within the consecutive sampling campaigns. Dissimilarities of microbial community composition and resulting clusters are visualized in Fig. 4 . The resulting distinct clusters, based on the dissimilarities in microbial community composition, followed the temporal progression, and in campaign 2 additionally reflected the split between reactors with and without process failure. We therefore used these clusters to divide the campaigns into a sequence of distinct phases for subsequent analyses of microbial data. Within the 1st campaign we observed a significant (PERMANOVA; p < 0.05) change in the microbial community composition from cluster A to E, which was comparable for all reactors. In the second campaign, a similar temporal dynamic could also be observed for the communities in reactors experiencing process failure (R2, R4, R5 and R6) in clusters X, Y β , Z. However, the microbial community structure in reactor R1 and R3 from campaign 2 remained nearly unchanged after the initial transition from cluster X to Y α and did not change thereafter, in line with the stable nitrification performance ( Fig. 3 ). Interestingly, while they displayed lower N 2 O emissions and no process failures during the second campaign, these two reactors were characterized by impaired nitrification and particularly high N 2 O peaks during the first campaign. Notably, these reactors were op-erated identically to the others over the period of both campaigns, as long as nitrification worked sufficiently. The failing reactors (R2, R4, R5 and R6), however, shared a common clustering pattern, as already observed during the first year, ending with a significantly distinct community structure in summer (cluster Z) compared to the initial state in late fall (cluster X) or the stable reactors (Y α ). The alpha diversity index (Shannon), average N 2 O concentrations and the SVI all varied considerably between the temporal clusters ( Fig. 5 ). We found that species diversity significantly decreased in all reactors during process failure episodes, i.e., from cluster A to C in campaign 1 and from X to Y β to Z ( Fig. 5 a). While diversity was decreasing, N 2 O emissions and SVI tended to increase in both campaigns ( Fig. 5 b, c). As with diversity, we did not observe a substantial change for these parameters between cluster X and cluster Y α in campaign 2. The diversity of the activated sludge increased again from cluster D to cluster E (campaign 1), accompanied by decreasing N 2 O emissions and SVI. The observed increase in diversity at the end of campaign 1 could not be observed in campaign 2, since the recovery phase was not sampled. The strong link between microbial diversity and performance indicators for settling and nitrification is confirmed by correlation analysis (Fig. S12, SI). The Shannon diversity and two other indices (Simpson diversity and species evenness) were found to be significantly negatively correlated with N 2 O emissions, SVI values, and NO 2 − concentrations in effluent of the biological treatment during both campaigns. 
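A minimal sketch of the two community-level computations referred to in this section, per-sample Shannon diversity and hierarchical clustering of samples by their community dissimilarity, is shown below. The ASV table is assumed to be a samples-by-ASVs count matrix; the Bray-Curtis metric, average linkage and the fixed number of clusters are illustrative assumptions and not necessarily the exact settings used in the study.

```python
# Illustrative sketch: alpha diversity and hierarchical clustering of an ASV table.
# Assumption: `asv_counts` is a pandas DataFrame of raw counts (rows = samples,
# columns = ASVs). Metric, linkage and cluster number are placeholders.
import numpy as np
import pandas as pd
from scipy.spatial.distance import pdist
from scipy.cluster import hierarchy

def shannon_index(counts: np.ndarray) -> float:
    p = counts / counts.sum()
    p = p[p > 0]                              # ignore ASVs absent from the sample
    return float(-(p * np.log(p)).sum())

def community_structure(asv_counts: pd.DataFrame, n_clusters: int = 5):
    alpha = asv_counts.apply(lambda row: shannon_index(row.to_numpy()), axis=1)
    rel = asv_counts.div(asv_counts.sum(axis=1), axis=0)       # relative abundances
    dist = pdist(rel.to_numpy(), metric="braycurtis")          # sample dissimilarities
    tree = hierarchy.linkage(dist, method="average")           # dendrogram
    clusters = hierarchy.fcluster(tree, t=n_clusters, criterion="maxclust")
    return alpha, pd.Series(clusters, index=asv_counts.index, name="cluster")
```

In the study, the temporal phases (clusters A-E and X-Z) were defined from such a dendrogram of community dissimilarities; here the tree is simply cut into a fixed number of groups for illustration.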
A weak positive correlation was found with effluent transparency during campaign 1 (Fig. S12, SI). In order to identify which functional groups of the microbial community displayed the significant changes in abundance, we assigned all ASVs, based on their assigned genus and using the Global Database of Microbes in Wastewater Treatment Systems and Anaerobic Digesters (Nierychlo et al., 2020), either to the morphological group of filamentous bacteria or to a putative functional role in WWTP. Given the crucial importance of filamentous bacteria in WWTP ( Nierychlo et al., 2019 ;Speirs et al., 2019 ), we decided to include this category into our assignment. Therefore, in case filamentous ASVs could be assigned in addition to other putative functions (aerobic heterotrophs or fermenters), we used the morphological feature rather than the putative function. To quantify which ASVs substantially contributed to observed fluctuations in relative abundance and diversity changes, we performed a differential abundance analysis and expressed the magnitude of change between consecutive clusters as log2foldchange (Fig. S13, SI). A positive log2foldchange indicates a decrease in abundance over time while a negative log2foldchange means increasing counts. The assignment to high-level functional roles allows for comparison between the two campaigns. We found that the transitions from clusters A -> B -> C (campaign 1) and X -> Y β -> Z displayed the highest numbers in ASVs that significantly ( p < 0.05, Wald test) decreased in abundance (Fig. S13, SI; number of bubbles). The transitions from D -> E (campaign 1) and X -> Y α (campaign 2) were characterized by an increase in abundance of ASVs, which decreased in the earlier clusters. During the early transition from cluster A -> B and X -> Y β that corresponds to the initial development toward process failure in both campaigns, we observed an increase in abundance of aerobic heterotrophs and fermenting bacteria while filamentous bacteria decreased in abundance ( Fig. 6 , S13). The declining abundance of filamentous bacteria continued during the transition from cluster Y β to Z during campaign 2. Fermenting bacteria, mostly affiliated to the genera Arcobacter and Bacteroides , tended to increase from A -> B and X -> Y β in both campaigns. Interestingly, they decreased during phases with elevated NO 2 − concentrations and N 2 O emissions (i.e., campaign 1: B -> C and C -> D ; campaign 2: Y β -> Z ), respectively. This dynamic was accompanied by an increase in aerobic heterotrophs and a decrease in denitrifying bacteria (DNB). We also found that NOB were low in abundance during cluster C -> D (campaign 1) and Y β -> Z (campaign 2). Associated with a recovery of the process performance, the transition from cluster D -> E in campaign 1 was characterized by a re-increase in abundance of filamentous bacteria, DNBs and NOBs, while aerobic heterotrophs substantially decreased in abundance ( Fig. 6 , S13). We also observed a stabilization of the community for all reactors in cluster Z of campaign 2. In stark contrast to these dynamic cluster transitions, the shift from cluster X to Y α (stable reactors of campaign 2) entailed merely an increase in abundance for filamentous bacteria and AOB. Focusing on the temporal development of the microbial communities in reactor 1 and 3 (cluster Y α , Fig. 
6 , S13), we observed a surprisingly stable community with a significant increase (linear regression analyses, p < 0.05) in abundance of filamentous bacteria in comparison to the starting condition (cluster X), in contrast to the decreasing trend for this group in the other reactors. Given the crucial importance of nitrifying bacteria in municipal wastewater treatment, we dissected the microbial communities from both campaigns to elucidate the individual dynamics of AOB and NOB affiliated bacteria ( Fig. 7 ). During both campaigns, Nitrosomonas was the only detected bacterial genus affiliated with aerobic ammonium oxidation and its abundance did not change dramatically over the course of the sampling campaigns despite process disturbances. However, bacteria affiliated with NO 2 − oxidation displayed surprising dynamics in abundance. During both campaigns, the abundance of the dominant NOB ( Nitrospira ) significantly decreased during the periods with a low nitratation performance (campaign 1: cluster B, C, D; campaign 2: Y β , Z). During campaign 1, ASVs assigned to a different bacterium affiliated with NO 2 − oxidation ( Candidatus Nitrotoga) started to emerge in cluster D and became the dominant NOB fraction of the community in cluster E. Interestingly, Candidatus Nitrotoga was not present in the prior clusters of campaign 1, nor could it be detected during campaign 2. The recovery phases of R2, R4, R5, and R6 were not sampled during campaign 2 and it is therefore not clear if the species may have emerged later. However, it is likely that Nitrotoga did not appear in the second campaign, since the operators started to replace the activated sludge of the unsatisfyingly performing reactor one week after the last sludge samples were taken ( Fig. 3 ). In order to identify potential process parameters or environmental factors, which could have initiated these drastic changes in community structure, we performed a correlation-based analysis. Here, we used all ASVs that were present in at least 25% of the samples and sorted them into their putative functional groups. We determined the correlation of these groups with the same, averaged process parameters, as used for the process correlation analysis described above, for each sampling point of the treatment plant for each campaign (Fig. S14, SI). However, we were not able to find a large number of significant correlations after the p -value adjustment, which would allow us to make assumptions on what might have caused the initiation of the community change. Further, diverging results between the two consecutive campaigns, which can perhaps be attributed to differences in operation strategy of the reactors and different periods of the clusters (Fig. S14, SI), ultimately do not allow to identify drivers. Discussion The yearly N 2 O emissions at Uster WWTP are an example for a broadly observed pattern of seasonally driven N 2 O emission from WWTP. Most previous N 2 O monitoring campaigns at WWTP observed an emission pattern peaking in spring and reaching its minimum in autumn, such as the Kralingseveer WWTP ( Daelman et al., 2015 ), Avedøre WWTP ( Chen et al., 2019 ), Lucerne WWTP and Altenrhein WWTP ( Gruber et al., 2020 ). Hence, these monitoring campaigns might represent observations of the same phenomenon. 
Given the reported correlation of N 2 O emissions of NO 2 − concentration in two studies ( Daelman et al., 2015 ;Gruber et al., 2020 ), we hypothesize that seasonally increased NO 2 − concentrations in the biological reactors of these treatment plants are directly and functionally linked to the N 2 O emissions patterns. During both campaigns at Uster WWTP high N 2 O emissions were observed after substantially diminished NOB activity resulting in NO 2 − accumulation in the effluent, which suggests a high contribution of denitrification (nitrifier or heterotrophic) to N 2 O production ( Domingo-Felez et al., 2016 ;Wunderlin et al., 2013 ). Although the extent of nitrite accumulation in our monitoring campaign is extreme ( Fig. 2 ), seasonal nitrite accumulation has been previously reported for full-scale WWTPs and shown to be related to N 2 O emissions ( Castro-Barros et al., 2016 ;Philips et al., 2002 ;Randall and Buth, 1984 ). At the Vikinmäkki WWTP, a very similar case with substantial NOB failure could be observed in a continuously fed activated sludge process with denitrification and nitrification ( Kuokkanen et al., 2020 ). The Uster WWTP is designed following the standard guidelines (Section S1, SI). The strategies applied by the operator in campaign 1 to counter incomplete nitrification were shown to be unsuccessful ( Fig. 2 ; i.e.., increasing aerobic SRT and oxygen setpoints). They target typical key operation parameters aiming to support nitrifying bacteria ( Stenstrom and Poduska, 1980 ). Other reported causes for NOB loss and nitrite accumulation, such as high temperatures, elevated pH values and increased free ammonia concentrations ( Ren et al., 2019 ) can be clearly excluded for the case reported ( Fig. 2 , Fig. S11). Hence, the yearly recurring episodes (Fig. S4) of substantial nitrite accumulation followed by N 2 O emissions cannot be solved and explained using standard engineering approaches. In strong agreement with the microbial analysis, we find that the NOB loss correlates with important changes of the entire microbial community and thus the primary cause likely does not reside in the nitrifiers themselves. The clustering of the changing microbial community structure correlated surprisingly well with the changing nitrification performance and sludge characteristics in both campaigns ( Figs. 2 ,5 ). Our analysis of the microbial communities clearly revealed a progressive and quite well synchronized change of the community composition in all independent reactors ( Fig. 4 ) and that the respective species diversity negatively correlated with nitrite accumulation, changing sludge settleability and N 2 O emissions ( Fig. 5 , S5). With the exceptions of R1 and R3 during campaign 2, where the microbial community was very stable ( Fig. 4 ), the six reactors exhibited synchronized microbial commu-nities and reproducible impaired treatment performances. The high similarity of the activated sludge microbiome within different independent reactors of the same WWTP or even in the same region has been observed in previous studies ( Griffin and Wells, 2017 ). The microbial community analysis of the two campaigns revealed significant differences between the pre-and post-processfailure community compositions ( Fig. 4 ). Despite the differences in community structure, all reactors re-emerged to satisfying performances in N-removal ( Fig. 2 ) and displayed comparable diversity measures again at the end of campaign 1 and at the beginning of campaign 2 ( Fig. 5 ). 
We hypothesize that the destabilization of the activated sludge microbiome was initiated by the loss of certain key functional groups that maintain the sludge structure; this in turn triggered a cascading decline of other valuable members of the community, including NOB ( Van den Abbeele et al., 2011 ). Our observations on decreasing diversity and evenness ( Fig. 5 , S13) as well as the pronounced loss of specific microbial consortia during clusters A -> C and X -> Y β -> Z support this notion. Specifically, the observed decline in filamentous bacteria (mainly Chloroflexi ) after cluster A and X appears likely to have initiated the cascading effect on the community in both campaigns, as it provides a credible explanation for the reported changes in sludge settling ( Figs. 3 and 5 ). The visible change in transparency and settling velocity further supports the notion of lost sludge integrity ( Figs. 2 , 3 , S12). Filamentous members of the phylum Chloroflexi are known to support the structural integrity of activated sludge. Their ability to degrade complex polymeric organic compounds to low molecular weight substrates is very beneficial for other members of the community ( Kragelund et al., 2007 ;Nierychlo et al., 2019 ;Speirs et al., 2019 ). Burger et al. (2017) found a direct correlation between the abundance of filamentous bacteria and the strength of the floc, which further supports our findings. However, the mechanisms that lead to the decline of filamentous bacteria and NOB, while AOB are significantly less affected, remain unclear. Both loss of structural integrity (e.g., pin-point floc formation and washout) and loss of mutualistic interactions (e.g., substrate transfer) could potentially play a role ( Burger et al., 2017 ;Lau et al., 1984 ;Örmeci and Vesilind, 2000 ;Sezgin et al., 1978 ). Disturbance- or changing-condition-induced species loss can open up new niches within the sludge community that are prone to colonization by other bacterial consortia with ecological advantages under the given conditions (Vuono et al., 2016). We observed this phenomenon during campaign 1. While the NOB species Nitrospira declined substantially in abundance, another NOB species, Nitrotoga, emerged and took over as the dominant NOB in these reactors ( Fig. 7 ). During the transition phase between these two NOB species, we observed the highest N 2 O emissions ( Fig. 2 ). To our surprise, no sequences from the 2nd campaign could be annotated to the genus Nitrotoga . However, Nitrotoga was also not found during the first three clusters of campaign 1. We believe that, just as quickly as the cold-adapted Nitrotoga ( Lucker et al., 2015 ;Wegen et al., 2019 ) had emerged, it was replaced again by Nitrospira as the dominant NOB species during the warm summer months preceding campaign 2. In stark contrast to the NOB community, the AOB fraction ( Nitrosomonas ) remained comparably stable in abundance over the course of both campaigns. We speculate that the changing sludge morphology, initiated by the loss of filamentous bacteria, could also affect the observed abundance dynamics within the nitrifying community. Given the increased effluent turbidity after biological treatment due to diminished sludge integrity in the affected reactors ( Fig. 2 ), we speculate that the NOB fraction could be preferentially washed out in pin-point flocs. The washout of NOBs in turn leads to NO 2 − accumulation, as observed during both campaigns after clusters A and X, respectively.
As our results indicate, the exchange of activated sludge can work as a mitigation strategy, but it should be only applied in emergency cases for two reasons. Firstly, the transfer of significant amounts of sludge leads to lower treatment performance in the source reactor. Secondly, the replacement of sludge speeds up the system recovery but does not prevent system failure later during a season or in the following year. The results from campaign 2 and the well performing reactors R1 and R3 show that probably only small changes are needed to stabilize the microbiome, since the same operational strategies were applied in the disturbed and the satisfying reactors. Although the initial causes for impaired plant performance remain unknown, strategies to reduce process failure should aim for a stabilization of activated sludge microbiome already well before the problem becomes acute. As reported in previous studies, several strategies could be applied, such as (i) increase of oxygen concentration (Huang, 2010), (ii) increase SRT (Kim et al., 2011;Vuono et al., 2015 ) or (iii) maintaining a stable process operation strategy (Dytczak et al., 2008). Since strategies (i) and (ii) have been unsuccessfully applied during campaign 1 when the microbiome was already substantially disturbed, we hypothesize that the changes in operation should be implemented a few months before the expected phase of nitrification failure. Integrating a proactive management of the activated sludge microbiome in the operational strategy of a WWTP could be an asset for the mitigation of seasonally occurring nitrification failure and insufficient sludge settleability. Our study highlights the need for further detailed sampling campaigns and experimental work to uncover the chain of events that leads to community disturbance and ultimately to significant peaks in N 2 O emissions and NO 2 − accumulation. A better understanding of seasonal patterns of microbial population dynamics will be central to this objective. To investigate microbial dynamics as a potential cause or mediator of such patterns, further studies are required in three directions, i.e. (1) 16 s rRNA amplicon sequencing with a higher resolution (weekly sampling over a whole year), (2) seasonal assessment of microbial activity with metagenomics or multi-omics approaches, and (3) systematic assessment of the microbial community during tests of mitigation strategies and comparison with a reference system. In particular, multi-omics approaches could help to characterize the initial causes for strong dynamics in microbial communities. For seasonal studies, independent of the methods applied, it seems crucial to include not only species involved in the nitrogen cycle, but the whole activated sludge microbiome. Furthermore, future studies should always be coupled with spatially and temporally highly resolved long-term N 2 O and NO 2 − monitoring and extended process monitoring as at Uster WWTP. Ultimately, suitable targets (organisms, genes or community traits) that can be measured reliably and costeffectively would have to be characterized that are reliably linked to subsequent process failures -merely collecting microbial data does not automatically advance the operation of a WWTP. Our study clearly shows that extended discussions and a close collaboration between operators, engineers and microbiologists are required to take advantage of the full potential of microbial assays, to analyze the data appropriately and to suggest mitigation strategies. 
Conclusions
• NO2− accumulation correlates strongly with and is very likely the cause of the observed seasonal N2O emission peaks on a full-scale activated sludge SBR plant. While the AOB abundance and performance remained relatively stable throughout the campaigns, the NOB population disappeared and needed to re-establish.
• The phases of impaired nitrification and high N2O emissions coincided with a drastic change in the microbial community affecting multiple process-relevant species. The communities of reactors with high emissions differed significantly before and after the peak emission phases. On the contrary, reactors with a stable microbial community over the whole period did not exhibit increased N2O emissions.
• The NO2− oxidation on the SBR plant repeatedly underperformed even though (i) the important operating parameters (aeration and aerobic SRT) were set according to standard guidelines and (ii) common factors known to cause NO2− oxidation failure were not present. These results counter the notion that the accumulation of NO2− and the seasonal N2O emission pattern are issues uniquely related to the growth conditions of nitrifiers.
• Loss and re-establishment of NOB activity seems to coincide with loss and re-establishment of filamentous bacteria and the entailed bad sludge settling properties (impaired settleability and a turbid effluent). This has considerable practical implications, since measures to maintain complete nitrification might need to target floc structure rather than AOB and NOB growth conditions only.
• Regular, long-term microbial and physico-chemical monitoring of the activated sludge and a better understanding of its microbial community are likely important for understanding seasonal N2O emission patterns, while current standard engineering approaches could not explain the process failure. Appropriate operational strategies to avoid large community shifts still need to be identified.
Author contributions
W.G., A.J. and E.M. designed the study. All authors provided helpful feedback and suggestions throughout work on the study. J.R. was responsible for data collection of process performance data. W.G. performed the sludge sampling. R.N. and W.G. performed the laboratory work, sequencing and data analysis. R.N. and W.G. wrote the first draft of the manuscript. The manuscript was written by W.G. and R.N. with critical and helpful reviews from H.B., A.J. and E.M.
Data availability
Raw 16S sequences can be found on the NCBI Sequence Read Archive under the repository number PRJNA691692. All other data (species abundance tables as comma-separated tables, physico-chemical data sheets and R codes) are available from the Eawag Research Data Institutional Collection (Eric) at https://doi.org/10.25678/0003SA.
Declaration of Competing Interest
The authors declare no competing interest.
v3-fos-license
2023-03-12T15:33:18.124Z
2023-03-01T00:00:00.000
257445598
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2075-1729/13/3/734/pdf?version=1678333494", "pdf_hash": "fd2c88c8a3846238193deca8f9722d87a1e0b820", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43987", "s2fieldsofstudy": [ "Medicine" ], "sha1": "0ffda6ac5820eb99d40cae91fcde697874b262a8", "year": 2023 }
pes2o/s2orc
Relapse of Monoclonal Gammopathy of Renal Significance after mRNA COVID-19 Vaccination: A Case Report This case report represents the first suspected case of light chain deposition disease relapse associated with mRNA COVID-19 vaccination. The 75-year-old female patient of Greek ethnicity was admitted to the clinic for the investigation of worsening renal function detected on routine lab examinations, two weeks after she received the second dose of the Moderna COVID-19 vaccine (mRNA-1273). Rapidly progressive glomerulonephritis and anemia were the most notable findings. She had a history of LCDD, which had remained stable for four years. Serum protein immunofixation showed monoclonal kappa zones, and a bone marrow biopsy revealed 5% plasma cell infiltration. These, along with other investigations, established the diagnosis of LCDD recurrence. The patient was started on chemotherapy, which improved her immunological profile, but not her renal function. The patient has remained on hemodialysis since. The association between mRNA vaccinations and LCDD relapse may be grounds for investigations into the pathophysiology of MGRS, given the patient's previous long-term remission. This case report is not intended to directly inform changes in clinical practice. We must stress the importance of following all standardized vaccination protocols, especially in immunocompromised patients. Introduction The present case report is, to our knowledge, the first known incident of an adverse effect of the Moderna COVID-19 vaccine (mRNA-1273) in patients with MGRS due to light chain deposition disease. MGRS is characterized by increased serum levels of a monoclonal immunoglobulin (M-protein) produced by a non-malignant or pre-malignant B cell or plasma cell clone, causing organ damage, and more specifically in our case, renal disease. As MGRS is associated in its pathophysiology with dysfunctions of B cell maturation and proliferation, there are indications that vaccines targeting these processes, such as mRNA vaccines, may be implicated in exacerbations of the disease [1][2][3]. Furthermore, this patient's clinical course after vaccination is congruent with this hypothesis, based on the mechanistic understanding proposed by other research results from our clinical center [4]. Since early 2020, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the pathogen behind coronavirus disease 2019 (COVID-19), has had a tremendous medical and socioeconomic impact worldwide, and mass vaccination remains a foundational tool for effectively containing its effects on a large scale [5]. Many vaccines have been designed, tested, licensed, and released since the outbreak began and have been proven to be protective, mainly by inducing the activation of memory B cells (MBCs) and long-lived plasma cells (LLPCs). As with any new medical intervention, complications following their administration are sometimes observed, some of them being rare and only in specific populations [6,7]. Describing and understanding the possible side effects of vaccines in general, and each specific vaccine, is important for the improvement of patient care, the further understanding of newer vaccine technologies such as mRNA technology, and for building and retaining trust between patients and healthcare institutions. Furthermore, case reports such as this one can be the basis for explorations into the pathogenesis of LCDD and for the further elucidation of mRNA vaccine mechanisms of action.
Case Report A 75-year-old woman of Greek ethnicity presented to the Department of Nephrology of our hospital with rapidly declining renal function. More specifically, microscopic hematuria and notable proteinuria were detected on a routine laboratory examination that she had conducted for clinical monitoring of preexisting disease. Four years before the time of presentation, at the age of 71 years, she had a similar renal disturbance, which was diagnosed as light chain deposition disease. At the time, serum free light chain levels were elevated, and the kappa to lambda ratio was 22, while both serum and urine immunofixation revealed a monoclonal fraction of kappa light chains, as can be seen in Figure 1. A renal biopsy was taken, and it revealed mesangial proliferation, extended mesangial matrix, tubular atrophy with protein material deposition, interstitial infiltration, and kappa light chain deposits in the glomeruli, as can be seen in Figure 2. A bone marrow biopsy had unremarkable findings, showing only 5% plasma cell infiltration, thus excluding the diagnosis of lymphoma. Immunohistology staining for CD138 in the bone marrow biopsy revealed plasma cells. Unfortunately, cytogenetic studies for this patient were not available to our team.
The patient's previous medical history also included primary hypertension, hypothyroidism, osteoporosis, and nephrolithiasis, all of which were under proper medical management. At the time of the initial diagnosis, the patient was started on a corticosteroid regimen, with no additional chemotherapy. She showed a prompt response to the medication, with a rapid reduction in proteinuria, which allowed the treating physicians to avoid any escalations in treatment. Six months later her proteinuria had subsided, κ chains were no longer detectable in serum and urine samples, and her renal function had stabilized, as can be gleaned from her serum creatinine levels, shown in Figure 3. From then on, she entered follow-up on an outpatient basis, with few signs of disease exacerbations for more than 4 years, until 2021. In that year, two weeks after completing her COVID-19 vaccination with the mRNA-1273 (Moderna) vaccine (two doses separated by a 3-week interval), routine laboratory examinations revealed an acute decline in her renal function. Her serum creatinine levels increased from 1.1 mg/dL (a month previously) to 2.67 mg/dL two weeks after the second dose. In the same period, her eGFR, as calculated using the CKD-EPI equation, declined from 52 to 18 mL/min/1.73 m2. Proteinuria also worsened during the same period, from 0.75 g/24 h to 3 g/24 h, and active urine sediment with microscopic hematuria and red blood cell casts were detected in the latter urinalysis specimen.
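For orientation, the reported eGFR values (52 and 18 mL/min/1.73 m2) are consistent with the 2021 race-free CKD-EPI creatinine equation for a 75-year-old woman; the report only states that "the CKD-EPI equation" was used, so the exact version is an assumption. A minimal Python sketch:

def ckd_epi_2021(scr_mg_dl, age_years, female):
    # 2021 race-free CKD-EPI creatinine equation (assumed version)
    kappa = 0.7 if female else 0.9
    alpha = -0.241 if female else -0.302
    r = scr_mg_dl / kappa
    egfr = 142.0 * min(r, 1.0) ** alpha * max(r, 1.0) ** -1.200 * 0.9938 ** age_years
    return egfr * 1.012 if female else egfr

print(round(ckd_epi_2021(1.10, 75, True)))   # ~52 mL/min/1.73 m2
print(round(ckd_epi_2021(2.67, 75, True)))   # ~18 mL/min/1.73 m2

Reproducing the two reported values from the stated creatinine concentrations mainly serves as a consistency check on the magnitude of the decline in renal function.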
Patients with circulating M-protein are diagnosed as having monoclonal gammopathy of undetermined significance (MGUS) if the M-protein is <30 g/L. Typically, MGRS exhibits low levels of circulating M-protein, reflecting the small size of the underlying B cell or plasma cell clone [8]. Serum protein electrophoresis (PE) and immunofixation (IF), 24-h urine protein electrophoresis and immunofixation, and a serum free light chain (FLC) assay are included in the recommended diagnostic procedure, whereas a urine FLC assay provides no additional useful information [9][10][11]. International guidelines recommend using a serum FLC assay along with serum protein electrophoresis and immunofixation as an initial screening panel for monoclonal gammopathies [8,12]. Additional genetic tests and fluorescent in situ hybridization studies are helpful for clonal identification and for generating treatment recommendations. Flow cytometry can help identify small clones. Serum and urine protein electrophoresis and immunofixation, as well as analyses of serum FLC, were performed to identify the monoclonal immunoglobulin, which helped establish the diagnosis of MGRS relapse and could also be useful for assessing the patient's response to treatment. Finally, bone marrow aspiration and biopsy were conducted to identify the lymphoproliferative clone. This patient's immunological profile revealed very high levels of kappa chains, accompanied by monoclonal kappa zones on serum immunofixation and 9% plasma cell infiltration of bone marrow. Immunohistochemistry on the bone marrow biopsy also showed a predominance of CD138+ cells. These findings supported the diagnosis of recurrent LCDD and secondary renal involvement. In general, the chemotherapeutic agents used to treat MGRS are those that target plasma cells or other B cell neoplasms. Such agents include proteasome inhibitors (e.g., bortezomib, carfilzomib, and ixazomib), monoclonal antibodies (e.g., rituximab and daratumumab), alkylating agents (e.g., cyclophosphamide, bendamustine, and melphalan), immunomodulatory drugs (e.g., thalidomide, lenalidomide, and pomalidomide), and glucocorticoids (e.g., prednisone and dexamethasone) or human immunoglobulin replacement therapy [13]. In some patient populations, such as those with amyloidosis or monoclonal immunoglobulin deposition disease, the treatment strategy may also involve autologous hematopoietic cell transplantation and chemotherapeutic agents that do not require dose modification for kidney function. In our case, the patient was commenced on the standard treatment protocol, with bortezomib, cyclophosphamide, and dexamethasone. The therapeutic approach was organized around 21-day treatment cycles, with: • Intravenous cyclophosphamide 500 mg on the first day of each treatment cycle. She received only two treatment cycles; then, given the decline in her renal function, she was started on hemodialysis. Therefore, bortezomib doses had to be reduced to 0.7 mg/m2 on the treatment schedule outlined above, and administered after her dialysis sessions.
Despite treatment, her renal function deteriorated, and she entered hemodialysis. Four months later, her immunological profile had improved, with undetectable serum free light chains, but her renal function did not improve, and the patient remained on regular hemodialysis. The decline in renal function despite the improvement in the patient's immunological status was attributed to the aggravation of chronic renal lesions. Chronic renal involvement, including membranoproliferative lesions and tubulointerstitial inflammation, cannot be completely reversed by treatment; secondary focal segmental sclerosis and tubular atrophy are inevitable, as has been proven in other progressive glomerular diseases. Moreover, our patient had a mild baseline renal impairment with an episode of acute renal failure during the four-year follow-up, which was strongly indicative of chronic sclerotic lesions. Discussion The definition of MGRS was initially described in 2012 by the International Kidney and Monoclonal Gammopathy Research Group (IKMG) and later refined in 2017, when the diagnostic criteria for MGRS-related diseases were updated [1][2][3]. Monoclonal gammopathies (MGs) are directly associated with the dysregulation of B cell maturation and proliferation processes, as the responsible monoclonal immunoglobulin is produced in excess by an indolent B cell clone. Monoclonal gammopathy of undetermined significance (MGUS) is a disease entity defined by the presence of paraprotein and hematologic findings in line with, but unsatisfactory for the diagnosis of, multiple myeloma [14]. More specifically, MGUS is diagnosed with <10% bone marrow plasma cell representation and a lack of B cell aggregates [14]. MGRS is a term introduced to highlight that MGUS is not a benign disease that should only be monitored, as previously thought [1]. It highlights the renal damage caused by monoclonal gammopathy of undetermined significance, a disorder hematologically consistent with MGRS, but apparently without end-organ involvement [14]. Interestingly, MGRS was recently recognized and formally defined in the "5th edition of the World Health Organization Classification of Haematolymphoid Tumours: Lymphoid Neoplasms" as follows: "Monoclonal gammopathy of renal significance (MGRS) represents a plasma cell or B-cell proliferation that does not meet accepted criteria for malignancy but secretes a monoclonal immunoglobulin or immunoglobulin fragment resulting in kidney injury" [15]. The diagnosis of MGRS affecting the kidneys is established via renal biopsy in cases of high clinical suspicion. Characteristic findings in optical microscopy and immunofluorescence include the membranoproliferative or mesangial hyperplasia pattern of glomerulopathy, followed by monotypic immunoglobulin and complement deposition. Secondary causes of MGRS may include systemic diseases, malignancies, viral or bacterial infections, and overall, any factor that stimulates the proliferation of B cells [16]. Vaccines may be included in these factors, as B cell proliferation is their main target of action. Renal biopsy registries show that a diagnosis of light chain deposition disease (LCDD) is made in 0.3-0.5% of all kidney biopsies, with an identified underlying monoclonal gammopathy of undetermined significance in approximately 41% of these cases [17,18]. All other types of MGRS have unknown incidence and/or prevalence. Kidney lesions in MGRS are primarily caused by the abnormal deposition of monoclonal proteins.
Monoclonal proteins produced may be light chains, heavy chains, or intact whole immunoglobulins; they are produced by small, nonmalignant, or premalignant plasma cell or B cell clones. Deposition of these proteins may occur within the glomeruli, tubules, vessels, or the interstitium, depending upon the specific biochemical characteristics of the pathogenic light and/or heavy chains involved. The deposits can be categorized as organized or nonorganized. MGRS lesions with organized deposits can be further subdivided into those with fibrillar deposits, microtubular deposits, or crystal inclusions. MGRS lesions with nonorganized deposits include the monoclonal immunoglobulin deposition diseases (MIDDs; light chain, heavy chain, or light and heavy chain deposition diseases) and monoclonal gammopathy-associated proliferative glomerulonephritis, involving monoclonal immunoglobulin G (IgG), and rarely immunoglobulin A (IgA), immunoglobulin M (IgM), or light chain-only deposits. MGRS lesions without deposits include thrombotic microangiopathy associated with monoclonal gammopathy [19]. Other mechanisms regarding the pathogenesis of MGRS have also been described: 1. Secretion of high levels of vascular endothelial growth factor [20][21][22][23]. Circulating monoclonal immunoglobulin autoantibodies can target the phospholipase A2 receptor and induce a form of membranous nephropathy that can rapidly recur after kidney transplantation [24,25]. 4. Light chain (AL), heavy chain (AH), and heavy and light chain (AHL) amyloidosis. Extracellular deposition of amyloid in glomeruli, tubules, and/or vessels is characteristic of renal amyloidosis. In most cases, the M-protein-related amyloidosis is derived from fragments of monoclonal light chains (LCs), which are more often of the lambda (λ) than kappa (κ) isotype [26], and rarely from fragments of intact immunoglobulin (Ig) or heavy chains only. Amyloid is the only MGRS lesion that is Congo red-positive [8]. Kidney biopsy is crucial not only to establish a diagnosis of MGRS and differentiate it from MGUS, but also to describe the exact type of renal pathology. Patients with MGRS usually present with a progressive decline in kidney function, microscopic hematuria, proteinuria ranging from sub-nephrotic to overt nephrotic syndrome, electrolyte abnormalities, and/or proximal tubular dysfunction [8]. Although MGRS is commonly seen in patients 50 years old or older, it has also been reported in younger patients. Several studies of patients with MGRS have shown that kidney outcomes are closely associated with the hematologic response to chemotherapy [8,[26][27][28][29]. The treatment approach should be directed against the pathologic clone, with the primary goal of preserving kidney function. Regarding the recurrence of lymphoproliferative disorders, several environmental factors may be involved, such as bacterial infections and medications. Patients with MGRS are routinely vaccinated against seasonal influenza, H1N1, and Streptococcus pneumoniae before treatment initiation. Thus, this may be the main reason why there are no data regarding disease relapse following vaccination. Vaccination against COVID-19 aims to provoke an adequate cellular and humoral immune response, which includes a shift of B lymphocytes to specific subtypes capable of producing antibodies and establishing immune memory [30]. Recent studies have shown that levels of CD4+CD38+HLADR+ and CD8+CD38+HLADR+ activated T cells are substantially increased three weeks after vaccination [30].
Our results also showed that antibodies against the receptor binding domain of the SARS-CoV-2 S protein, as well as neutralizing antibodies, were raised in CKD patients after mRNA vaccination for COVID-19, with the main increase in the first and second month after the second dose [4]. This timeframe corresponds to the time our patient had a relapse. Similar effects from other vaccines regularly administered to these patients may be attenuated by routine vaccination before the initiation of treatment, thus hiding the effect. Vaccines in this category include those against seasonal influenza, H1N1, and Streptococcus pneumoniae. Conclusions Our case may be an example of the potential pathogenic effects of immune stimulation after vaccination in patients with monoclonal gammopathy of renal significance. Laboratory findings pointed to a relapse of the underlying chronic disease with an aggravation of chronic kidney lesions, as revealed on biopsy. The patient's previous long-term remission, along with the timing of the LCDD flare-up, are signs of a possible underlying association. Despite adequate medical treatment, the patient had to enter dialysis and has remained on it ever since. The introduction of new vaccines and vaccine technologies, such as the mRNA vaccine, may be a good opportunity to study plasma cell dysfunction, such as in this disease. The mechanism of action of vaccines, in that they stimulate B cells to proliferate and differentiate and produce immunoglobulins, may affect the balance sought after in patients with monoclonal gammopathies. It is possible that if more cases such as this one are
v3-fos-license
2021-07-27T06:23:24.274Z
2021-07-26T00:00:00.000
236431872
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/med.21844", "pdf_hash": "5b98524017912ca7fe130f13a78bad547f1e8227", "pdf_src": "Wiley", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43988", "s2fieldsofstudy": [ "Chemistry", "Biology" ], "sha1": "2f16a952c747e6ab539f3171243d42810cdfb8d0", "year": 2021 }
pes2o/s2orc
Current status and future prospects of nanomedicine for arsenic trioxide delivery to solid tumors Despite having a rich history as a poison, arsenic and its compounds have also gained a great reputation as promising anticancer drugs. As a pioneer, arsenic trioxide has been approved for the treatment of acute promyelocytic leukemia. Many in vitro studies suggested that arsenic trioxide could also be used in the treatment of solid tumors. However, the transition from bench to bedside turned out to be challenging, especially in terms of the drug bioavailability and concentration reaching tumor tissues. To address these issues, nanomedicine tools have been proposed. As nanocarriers of arsenic trioxide, various materials have been examined including liposomes, polymer, and inorganic nanoparticles, and many other materials. This review gives an overview of the existing strategies of delivery of arsenic trioxide in cancer treatment with a focus on the drug encapsulation approaches and medicinal impact in the treatment of solid tumors. It focuses on the progress in the last years and gives an outlook and suggestions for further improvements including theragnostic approaches and targeted delivery. In clinical studies, side effects attributable to ATO led to Grade 5 events 31 and to treatment 29 or study 37 discontinuation in patients with solid tumors. Well-known adverse effects of ATO, not only in patients with solid tumors but in APL patients as well, are QTc prolongation, 15,38 dermatological conditions like rashes or hyperkeratosis, 7,29 neurotoxicity, 15 and transaminase elevation. 14,15 What is more, the carcinogenic potential of arsenic compounds has been pointed out (see Martinez et al. 39 for a review) and carcinogenicity of ATO is considered an "important potential risk" by the European Medicines Agency (EMA). 12 The poor clinical outcome in solid tumors stands in contrast to the antiproliferative, proapoptotic effect of ATO in many solid cancers in preclinical in vitro and in vivo models. For this circumstance, different explanations are conceivable. First, as APL is a hematologic malignancy, intravenously administered ATO is located where it needs to act: in the blood. It does not need to accumulate at a specific tumor site nor pass the blood-brain barrier (BBB), as it does when acting on a solid (brain) tumor. Therefore, insufficient concentrations of ATO reaching the tumor site are considered the main obstacle in the treatment of solid tumors. 40 Second, a priori or acquired resistance towards ATO has been described in APL patients [41][42][43] and seen in solid tumor cell lines 44,45 and is likewise imaginable in solid tumors in the clinic. | Nanoparticles as an approach to overcome the shortcomings of ATO in solid tumors In recent years, the advent of nanoparticles as novel drug delivery systems (DDSs) has offered new possibilities for improved delivery of chemotherapeutic drugs, for example, by increasing their bioavailability, decreasing their effects on healthy tissue, or enhancing their uptake by tumor cells (see Sun et al. 46 for a review). Utilizing DDSs for ATO delivery has been proposed as an efficient tool to eliminate some of the drawbacks of ATO use in therapy, such as (i) rapid clearance of ATO and its products from the blood, 47 and (ii) low specificity. Due to rapid clearance, a therapeutic dose of ATO does not reach the tumor sites, and a simple dosage increase of ATO is not feasible due to its systemic toxicity.
However, utilizing DDSs offers an attractive approach to foster the antitumor effects of ATO and to possibly overcome the limitation of insufficient enrichment at the tumor site while reducing its adverse effects. Moreover, nanotechnology offers the possibility of tailoring the DDSs to target different types of solid cancers specifically, for instance, by attaching specific targeting ligands to the carrier surface. Different strategies of ATO delivery to solid tumor entities have been examined over the past several years. They differ regarding the encapsulation strategies, the kind of carrier material used and the type of tumor the ATO formulations aim to target. In this review, we focus on the newest developments over the last few years and highlight some of the older studies, which were reviewed previously. 48 2 | STRATEGIES OF ATO ENCAPSULATION ATO (As2O3) is an amphoteric oxide (i.e., a compound able to react both as a base and as an acid) and its aqueous solutions are weakly acidic (H3AsO3). ATO dissolves readily in alkaline solutions and forms arsenites with the following pKa values: H2AsO3− (pKa1 = 9.22), HAsO32− (pKa2 = 12.10), and AsO33− (pKa3 = 13.40). 49 The reported encapsulation strategies are summarized in Tables 1 and 2 and discussed in detail in the following chapter. | MATERIALS USED AS DDSs OF ATO DDSs can be classified based on the type of material which forms the nanocarrier as organic, inorganic and hybrid. Each of these groups has its advantages and disadvantages. For instance, liposomes often feature a low drug loading capacity and instability during storage. Meanwhile, mesoporous silica nanoparticles (MSNs) can load the drug efficiently due to the porous structure and extremely high specific surface area, but often exhibit a high burst drug release during systemic circulation. The pros and cons of each material class are shown and discussed in the text below. | Organic DDSs of ATO based on organic materials are summarized in Table 1 and include liposomes, proteins, dendrimers, and polymer nanoparticles. | Liposomes ATO is typically loaded into liposomes that have been pre-loaded with a transition metal(II) acetate salt, so that the incoming arsenite is precipitated as an insoluble metal(II) arsenite complex inside the liposome. During the reaction, protons are released, which react with acetate ions to form acetic acid. Consequently, the weak acid diffuses out of the liposome in exchange for ATO. Both the formation of insoluble metal(II) arsenite complexes and the efflux of acetic acid facilitate the ATO uptake and entrapment in a liposome (Figure 1). For such systems of drug encapsulation in liposomes, the term "nanobin" (NB) was proposed. 82 For instance, it was shown that nanobin encapsulation of ATO (NB(Ni, As)) significantly improved the pharmacokinetic properties of the drug and led to greater therapeutic efficacy compared with free ATO in an orthotopic model of triple-negative breast cancer. 84 In a follow-up work, the nanobins (NB(Ni, As)) were coated with a pH-sensitive polymer to enable pH-triggered drug release. 85 Nanobins were also used for co-encapsulation of arsenic and platinum drugs. 83 Liposomes can also be functionalized with various targeting ligands to enable ATO delivery to specific cells. For instance, ATO-loaded liposomes were functionalized with folate ligands and their cellular uptake and antitumor efficacy were evaluated in folate receptor (FR)-positive human nasopharyngeal epidermal carcinoma (KB) and human cervical carcinoma (HeLa) cells, as well as FR-negative human breast carcinoma (MCF-7) cells. 86 The uptake of folate-functionalized ATO-loaded liposomes by KB cells was three to six times higher than that of free ATO or liposomes without the targeting ligands.
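The pKa values quoted earlier in this section also indicate why active loading schemes based on metal-ion gradients are used in the first place: at physiological pH, dissolved ATO exists almost entirely as the neutral species H3AsO3, which is not retained by simple electrostatic or ion-trapping approaches. The following minimal Python sketch estimates the acid-base speciation from those pKa values; it is an illustrative calculation, not taken from the cited works.

def arsenite_speciation(pH, pKa=(9.22, 12.10, 13.40)):
    # Fractions of H3AsO3, H2AsO3-, HAsO3(2-), AsO3(3-)
    h = 10.0 ** -pH
    k1, k2, k3 = (10.0 ** -p for p in pKa)
    terms = [h ** 3, h ** 2 * k1, h * k1 * k2, k1 * k2 * k3]
    total = sum(terms)
    return [round(t / total, 3) for t in terms]

print(arsenite_speciation(7.4))    # ~98.5% neutral H3AsO3 at physiological pH
print(arsenite_speciation(10.0))   # mostly H2AsO3- under alkaline conditions

The predominance of the neutral species at physiological pH is consistent with the loading mechanism described above, in which precipitation as an insoluble metal(II) arsenite inside the liposome is what ultimately traps the drug.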
Zhang et al. 87 reported on nanobins (NB(Ni, As)) functionalized with urokinase plasminogen activator antibodies to promote targeted delivery to epithelial ovarian cancer cells (in which the urokinase system is overexpressed compared to normal cells). The targeted nanobins showed a fourfold higher uptake in ovarian cancer cells in comparison with nontargeted nanobins. In the last years, delivering ATO using liposomes has also been studied to examine whether liposomal-encapsulated ATO could reduce the drug toxicity and improve the efficacy of ATO in treating human papillomavirus (HPV)-associated cancers. Wang et al. 50 showed that ATO encapsulated into liposomes in the presence of Ni(II) ions induced apoptosis and reduced protein levels of HPV-E6 in HeLa cells more effectively than ATO alone. Akhtar et al. 51 altered the properties of liposomes such as size (from 100 to 400 nm) and surface charges and studied their influence on the efficiency of ATO delivery to cervical cancer cells. It was shown that neutral liposomes of 100 nm in size were the best-tested formulation, as they showed the least intrinsic cytotoxicity and the highest loading efficiency. When Mn(II) ions are used as the transition metal to efficiently encapsulate ATO inside a liposome, a drug nanocarrier suitable for magnetic resonance imaging (MRI), and thus theragnostic applications, can be prepared (Figure 1). 52 The formation of the Mn(II) arsenite precipitate in liposomes generates magnetic susceptibility effects, which can be detected as a dark contrast on T2-weighted MRI. When taken up by cells, due to the low pH in the endosome-lysosome, the Mn(II) arsenite complex decomposes, which results in a release of the As-drug and Mn(II) ions (i.e., a T1 contrast agent that gives a bright signal in MRI). The convertible MRI signals (dark to bright) make it possible to follow not only the ATO delivery but also its release. Moreover, the liposomes were functionalized with phosphatidylserine (PS)-targeting antibodies to enable a specific binding of the nanodrug to PS-exposed glioma cells. | Proteins Zhou et al. 88,89 investigated albumin as a DDS for ATO. Albumin microspheres as a DDS for ATO were prepared using a chemical crosslink and solidification method and the synthesis was optimized with regard to the particle size and drug loading. 88 In another work, ATO-loaded albumin microspheres were functionalized with a transactivating transcriptional activator peptide (i.e., a cell-penetrating peptide) and the nanodrug delivery into bladder cancer cells was evaluated. 89 The results indicated that the attached peptide enhanced intracellular permeation of the nanodrug by translocating the microspheres across the cell membrane. | Polymers Nanoparticles Another polymer examined as a DDS for ATO was poly(lactide-co-glycolide) (PLGA). 90 It was reported that the nanodrug had a better inhibition and promoted greater lactate dehydrogenase release in comparison to free ATO. In vivo, the ATO-NPs induced a significant decrease in the expression of DNA methyltransferases, while the expression of N-terminal-cleaved gasdermin E was upregulated. As a consequence, the nanoparticles inhibited the tumor growth more than free ATO or a control. Lian et al. 58 camouflaged their ATO-loaded nanoparticles with red blood cell membranes (RBCM) to evade immune clearance. Lu et al. 59 reported on a pH-responsive dendrimer based on polyamidoamine (PAMAM) as a DDS of ATO. The surface of the nanoparticles was functionalized with an αvβ3 integrin targeting ligand to enable targeted delivery to glioma.
In an in vitro BBB model, the attachment of the targeting ligand heightened the cytotoxicity of the ATO-loaded nanoparticles, due to an increased uptake by C6 (glioma) cells. In vivo, the tumor volume of C6 glioma-bearing rats was reduced by 61.5 ± 12.3% after intravenous administration of the nanodrug, and that was approximately fourfold higher than that of free ATO and twofold higher than that of the nanodrug without the targeting ligands. | Inorganic nanoparticles As inorganic carriers, two types of materials were intensively studied: materials based on metal (or metal oxide) nanoparticles and silica nanoparticles. The overview of inorganic DDSs for ATO is given in Table 2. | GdAsOx nanoparticles As metal nanoparticles for ATO delivery, GdAsOx NPs were proposed. To synthesize such a nanodrug, Chen et al. 60 co-precipitated As with Gd in the presence of dextran into GdAsOx NPs. It was proposed that the unloading of ATO from such nanoparticles could be triggered by endogenous phosphate ions present in the plasma and cytosol. In the release process, the arsenite ions would be exchanged by phosphate ions, and thus ATO release could be achieved. Indeed, the in vitro results showed that the nanoparticles gradually "dissolved" into fragments in a phosphate solution. In follow-up studies, the therapeutic effect of GdAsOx NPs on aggressive HCC was studied. 61,93 After administration of the ATO-NPs, arsenic accumulation within tumors was evaluated. It was found that the accumulation of the ATO-NPs was as much as 5%, which was ten times more than when only ATO was administered. 61 A related gadolinium-based ATO formulation was developed for transarterial chemoembolization of the tumor. 62 The nanoparticles were studied both in vitro and in vivo, and the results showed that the ATO-NPs caused severe necrosis via chemoembolization combinational therapy. | Mesoporous silica nanoparticles Nanoparticles formed by mesoporous silica have been extensively studied as DDSs, not only for ATO. 94 MSNs are a class of inorganic porous material, which comprise open mesoporous channels with a diameter of 0.1-10 nm. Furthermore, their outer surface can be modified by attaching various molecules including targeting ligands for tumor-specific drug delivery. The high material porosity enables high drug loading. However, due to nonspecific drug-material interactions, a burst drug release is often observed. To decrease the burst release and increase ATO loading, two main strategies were reported (Figure 4A,B): first, enhanced ATO binding via thiol 63,64 or amino functional groups 65,66 anchored on the surface of the mesoporous channels, and second, similar to the strategy for liposomes described above, an encapsulation of ATO in the presence of transition metal ions to form insoluble MAsOx complexes. [67][68][69][70] To increase the ATO loading even more, the second approach was applied to hollow MSNs (Figure 4C). [71][72][73] Thiol group and amino group functionalized MSNs Silica nanoparticles functionalized with thiol groups were used to bind ATO to develop a nanodrug for treating MDA-MB-231 triple-negative breast cancer (TNBC). 63 The inner and outer surfaces of MSNs were functionalized with thiol groups not only for the ATO binding, but also to conjugate targeting agents to the outer surface.
As a targeting ligand, cyclic RGD was attached. MSNs with MAsOx complexes To prepare silica nanoparticles with MAsOx complexes, two approaches were reported (Figure 4B): (i) loading presynthesized MSNs with a transition metal salt and subsequently with ATO, 67 and (ii) pre-synthesizing MAsOx nanoparticles and coating them subsequently with a shell of mesoporous silica. [68][69][70] The advantage of the first approach is that the MSNs can be combined with other nanoparticles before ATO is loaded. For instance, MSNs were combined with magnetic iron oxide nanoparticles to enable not only ATO delivery but also real-time monitoring by MRI. Another theragnostic agent combining ATO delivery and MRI was reported by Zhang et al., 68 who developed MnAsOx@SiO2 core-shell nanoparticles. In the synthesis, first manganese arsenite complexes were prepared by a co-precipitation of manganese acetate and aqueous ATO. Then tetraethyl orthosilicate was added to coat the MnAsOx nanocomplexes with a silica shell. In a subsequent step, the nanoparticles were decorated with a pH-low insertion peptide (pHLIP), which was added to target the acidic tumor microenvironment. The targeting ability was demonstrated in HCC cells. Fei et al. 74 prepared hybrid core-shell nanoparticles by coating HSNs (functionalized with amino groups) with a liposomal shell for controlled ATO release. The surface of the nanoparticles was functionalized with Arg-Gly-Asp (RGD) ligands to enable targeted delivery. In vitro, the ATO-NPs showed good biocompatibility and low toxicity on HepG2, MCF-7, and LO2 cells. Moreover, due to the attached ligand, enhanced cellular uptake and a reduced half-maximal inhibitory concentration (IC50 value) of the nanodrug could be detected. In addition, the targeting efficiency of the ligand-functionalized ATO-NPs was also confirmed in an H22 tumor-xenograft mouse model. | Hybrid Hybrid materials consist of at least two constituents at the nanometer or molecular level. Commonly one of these components is inorganic and the other one organic in nature. Many of the materials discussed in the two previous chapters (organic and inorganic materials) and summarized in Tables 1 and 2 already combine organic and inorganic components. | Metal-organic frameworks MOFs are porous crystalline coordination polymers. They comprise inorganic metal ions (or clusters) and organic ligands. 96 They exhibit outstanding properties including high internal surface area and chemical versatility. They have been suggested as promising materials for many different applications including gas storage, catalysis, and sensing, 97 but also drug delivery. 98 The most prevalent method described in published reports to capture drug molecules in MOFs is via noncovalent interactions. 98 In such cases, upon administration, the drug can easily diffuse from the material, and thus no control over the release is achieved. However, when administering toxic drugs such as ATO, having control of the drug release is crucial. Therefore, for ATO delivery, MOFs having possibilities to form a strong interaction with ATO, such as a chemical bond, were proposed and are summarized in Table 2. [Figure 5: Schematic illustration of HSNs loaded with ATO and its drug release with activatable T1 imaging inside cells, enabled by released Mn(II) ions.] One reported system combined a magnetic core with MOF clusters to prepare a core-shell structure, in which the core would be responsible for imaging via MRI, whilst the shell could function as a DDS for treatment of ATRT. Both the imaging and therapeutic activity were demonstrated in vitro.
Another reported MOF for ATO delivery, which also displayed a prominent pH-triggered behavior, was Zn-MOF-74. 78 Zn-MOF-74 consists of Zn(II) ions and 2,5-dihydroxybenzene-1,4-dicarboxylate ligands and, when desolvated, it contains a high density of vacant metal sites readily accessible for guest binding. It was shown that ATO could be successfully attached to these sites, and thus a high drug loading could be achieved. Moreover, it has been shown that the drug release, tested in a phosphate buffered saline, could be triggered by a pH change from 7.4 to 6.0. However, no additional biological studies have been reported. | BENEFITS OF UTILIZING NANOPARTICLES FOR ATO DELIVERY In addition to the chemical properties and encapsulation strategies of DDSs for ATO, the benefits which the ATO formulations offer are of great interest too. The most recent studies dealing with ATO-NPs can be assigned to five categories regarding the benefit(s) which the nanoparticle formulation(s) is/are supposed to yield:
• Improvement of pharmacokinetics,
• Targeted delivery via surface modification,
• Theragnostic properties,
• Enhancement of Transarterial Chemoembolization (TACE), and
• Enhancement of BBB crossing.
| Improvement of pharmacokinetics Since improvement of pharmacokinetics, such as controlled release or a prolonged blood circulation half-life, is such a crucial point when it comes to nanomedicine, almost all studies evaluated dealt with this subject in one way or another. | Controlled release of ATO The most favored approach to achieve controlled release of ATO was to ensure pH-triggered release from the respective nanoparticle. Since acidic pH is a well-known characteristic of tumor tissue, 99 making ATO release pH-dependent, with higher ATO release at lower pH values, ought to provide a kind of tumor-directed ATO delivery while sparing healthy tissue. pH-dependent release was achieved mainly through a pH-labile bond or attachment between ATO and the nanoparticle. 52,59,[69][70][71][72][73]75,76,78 Other researchers grafted pH-responsive material onto the surface of their nanoparticles to accomplish pH-dependent ATO release. 65,66,68 The degree of pH-selective release differed not only depending on the type of nanoparticle used, but on the exact composition of the respective nanoparticle. Inorganic phosphate (Pi)-triggered ATO release was another way of obtaining controllable release. All four studies following this approach 60-62,93 used gadolinium-based nanoparticles, in which the arsenic could be exchanged by phosphate ions. Chen et al. 60 reported an outstanding ON/OFF specificity for their GdAsOx nanoparticles, with no arsenic release in the absence of Pi in vitro. Fu et al. 61 and Zhao et al. 62 attempted to introduce Pi-triggered ATO drug-eluting beads (DEBs) for the improvement of TACE therapy (see below) for HCC. As occlusion of the hepatic artery is a key characteristic of TACE, and intracellular Pi supply is limited upon occlusion, the Pi deprivation slowed down the drug release, avoiding high plasma peak levels of arsenic within the first hours of treatment compared to ATO alone. 62,93 Of note is that none of the studies testing for disturbance of Pi levels in plasma observed lasting changes in plasma Pi. 60,93
| Prolonged blood circulation and sustained release of ATO In comparison to controlled release that is mediated by a defined trigger, sustained release of ATO eventually aims to prolong the circulation of ATO, allowing sufficient ATO concentrations to reach the tumor site before being metabolized and excreted. Controlled release can also lead to or be accompanied by sustained release. Zhao et al. 73 coated their pH-sensitive, ATO-containing HSNs with GSH and observed a higher retention time in blood, which they attributed to reduced interactions between the GSH-coated nanoparticles and serum proteins. Zhang et al. 68 observed that modifying their nanoparticles with pHLIP not only led to pH-dependent release of ATO but also prolonged nanoparticle blood circulation in mice. Similar observations were made by Tao et al. 66 as well as by Xiao et al., 65 who both grafted their nanoparticles with the pH-responsive PAA. The in vivo half-life of those PAA-coated nanoparticles was significantly prolonged compared with free ATO. 65,66 Independent of pH-dependency, Lian et al. 58 achieved sustained release in vitro by camouflaging their ATO-loaded SANs with RBCM. As the RBCM coating reduced the macrophage uptake in vitro and showed a higher antitumor effect in vivo, the authors hypothesized that RBCM-SANs could escape clearance by the immune system, enabling more ATO to reach the tumor site. 58 Two groups used RGD-conjugated nanoparticles as a targeted delivery system (see below) and observed sustained release, namely an enhanced half-life of ATO in vivo compared to uncoated nanoparticles and free ATO. 59,74 Coating with PEG can reduce the uptake of nanoparticles by the reticuloendothelial system and nanoparticle accumulation in the liver, and thereby increase the circulation lifetime. 100 By contrast, Chen et al. 60 achieved enhanced arsenic accumulation in the tumor via a different mechanism. Their Pi-triggered nanoparticles showed a 10-fold accumulation of arsenic in the tumor tissue compared to free ATO, which they ascribed to the EPR effect of nanoparticles. 60 The EPR effect was also considered a reason for the enhanced uptake of ATO-NPs in the tumor tissue observed by Tao et al. 66 and Huang et al. 69 Another nanoparticle system by Chi et al. 72 led to almost doubled arsenic uptake into HCC cells compared with free ATO, which the authors speculated might have been due to the rampant metabolism of tumor cells or easier internalization of nanoparticles via endocytosis. 72 Endocytosis was also identified as the most probable mechanism for the enhanced uptake of arsenic from ATO-NPs compared with free ATO by Hu et al. 55; likewise, they observed a doubling of the arsenic concentration. A similar increase of arsenic accumulation could be observed by Fu et al., 93 in whose study the arsenic level in rabbit VX2 tumors (a model for human HCC) was almost three times higher under treatment with ATO-NPs compared to free ATO. Long-term accumulation was described in a study by Zhao et al., 62 who showed that with their nanoparticles used in the TACE procedure, intratumoral arsenic could be detected as long as seven days after the TACE procedure. Free ATO in turn was close to zero after the same time. 62 The enhanced uptake of arsenic into HCC cells in the study of Zhang et al. 68 showed pH-dependency, wherefore the authors ascribed the accumulation in tumor cells to the pH-triggered release properties of their nanoparticles.
| Targeted delivery via surface modification A huge advantage of nanoparticles lies in their modifiable surface. In the past few years, several studies have shown, for instance, that chemical modification not only enabled nanoparticles to increase BBB penetration (see below), but also tuned the toxicity of nanoparticles as drug delivery vehicles. 59 Apart from a general diversification of nanoparticle characteristics, surface modification of nanoparticles holds great potential in terms of targeted therapy. Attaching targeting ligands directed towards specific structures on tumor cells or the tumor microenvironment could possibly lead to an enhanced antitumor effect while sparing healthy tissue. Recently, three studies evaluated nanoparticles modified with RGD for targeted delivery of ATO towards glioma, HCC, and TNBC cells. 59,63,74 RGD selectively binds αvβ3 integrins, which are overexpressed by endothelial cells of the tumor vasculature and by tumor cells. 102 Indeed, the authors showed that the tumor uptake of RGD-modified nanoparticles was higher compared to uncoated nanoparticles, which was accompanied by higher antitumor efficacy, namely lower tumor volume, a larger area of tumor necrosis in vivo 59,63,74 and longer survival 59,74 compared with uncoated ATO-NPs and ATO alone. Beyond that, Fei et al. 74 confirmed that the transport of their RGD-modified nanoparticles was effectively dependent on αvβ3 integrins. Another targeting ligand, lactobionic acid, was studied as a coating agent by Song et al. for HCC-directed ATO-NPs. Lactobionic acid is a disaccharide consisting of gluconic acid and galactose. The galactose-binding asialoglycoprotein receptor (ASGPR) is a receptor primarily expressed in the liver and not in other human tissues; therefore, it constitutes an interesting target for HCC-directed drug delivery. 103 The authors showed for two different nanoparticle compositions that surface modification with lactobionic acid led to a decreased toxicity of ATO-NPs in normal hepatocytes in comparison to the toxic effect in HCC cells in vitro. 56,57 However, in vivo, only a minimal reduction of tumor volume upon treatment with lactobionic acid-modified ATO-NPs could be detected compared with ATO alone. The authors saw the advantage of lactobionic acid-modified nanoparticles in sparing the healthy tissue compared to free ATO, as confirmed by H&E staining of the liver and kidney. 57 It is of note that the preference for HCC cells ought to be at least partly mediated by the EPR effect, as ASGPR is not only expressed on HCC cells but on normal hepatocytes as well. 103 Folic acid is yet another targeting ligand that aims at a receptor which is overexpressed on the surface of various cancers and has hence been identified as an attractive target for tumor-directed therapy: the folate receptor (see Assaraf et al. 104 for a review). Chi et al. 67 used folic acid to functionalize their nanoparticles for this purpose. Phosphatidylserine-targeting antibodies, in turn, were attached to liposomal ATO-NPs to target glioma cells. The respective in vitro study revealed that nanoparticle binding to glioma cells was PS-dependent. 52 However, in vivo experiments of this approach are still pending. Finally, Tao et al. 66 modified their nanoparticles with angiopep-2, a specific ligand of the lipoprotein receptor-related protein (LRP) receptor. As glioma and normal brain endothelial cells express the LRP receptor on their surface, the authors proposed that functionalization of the nanoparticle surface with angiopep-2 could lead to increased accumulation of ATO in glioma.
As a matter of fact, they verified that angiopep-2 modification led to a higher cellular uptake of nanoparticles by glioma and brain endothelial cells. The study revealed that targeted therapy with angiopep-2 was effective in vivo, as shown by a significantly decreased tumor volume, longer survival time and higher accumulation of the nanoparticles in tumor tissue. | Theragnostic properties Theragnostics describes the combination of therapy and diagnostics in one system. Visualization of drug-containing nanoparticles by integrating imaging agents into the nanoparticles is an attractive feature, as it allows for image-guided therapy. In contrast to the bright T1-imaging contrast of manganese, iron oxide displays negative enhancement in T2-weighted MRI. Ettlinger et al. 77 as well as Chi et al. 67 confirmed that nanoparticles with (superpara)magnetic iron oxide cores could be visualized via MRI. While the biocompatibility of magnetic iron oxide nanoparticles appears to be established, 105 further studies evaluating the in vivo distribution of ATO-NPs with iron oxide cores upon intravenous administration are pending. | Enhancement of TACE for HCC treatment For patients with intermediate-stage HCC, TACE has become a core treatment method. The method combines intra-arterial injection of a chemotherapeutic substance with embolization of tumor-feeding vessels. 106 In a randomized trial, it could be demonstrated that TACE using drug-eluting beads (DEB-TACE) leads to a better tumor response with reduced adverse side effects compared with conventional TACE. 107 HCC has been the tumor entity prevailing in the most recent studies on ATO nanoparticles for drug delivery (see Tables 1 and 2). Therefore, it is only logical that certain studies focused on assessing the value of ATO nano-DEBs (ATO-NDEBs) for TACE. The studies by Fu et al. 93 and Zhao et al. 62 both focused on ATO-NDEBs from which ATO could be released in a Pi-triggered manner (see above). While Fu et al. 93 emulsified their ATO-NPs in lipiodol, which is also used for conventional TACE, Zhao et al. 62 coated their ATO-NPs with dextran. Both groups administered their ATO-NDEBs intraarterially into VX2-tumor-bearing rabbits. They observed high intratumoral arsenic accumulation (see above) and low plasma arsenic levels compared with conventional TACE, indicating that the NDEB formulation prevented the rushing-out effect of ATO into the peripheral circulation. 62,93 Moreover, it was demonstrated that the liver and renal toxicity of the ATO-NDEBs was close to that of the sham group and much lower than the toxicity of conventional TACE with ATO, as confirmed by H&E staining and blood levels of liver and kidney markers. 93 | Enhancement of BBB crossing The second most prevalent tumor entity used in the evaluation of nanoparticle-based drug delivery of ATO is brain tumors, namely glioma and ATRT (see Tables 1 and 2). As mentioned before, ATO has been shown to be a potent GLI inhibitor (see above). GLI was first identified as amplified in human malignant glioma. 108 What is more, a subgroup of ATRT is characterized by an overexpression of GLI. 109 The desire to improve the characteristics of this potentially effective drug by nanoparticle encapsulation is therefore very reasonable. When it comes to brain tumors, the BBB constitutes a limiting factor for successful treatment, as most drugs cannot pass it (see Pardridge 110 for a review). This problem has been addressed by evaluating ATO-NPs for transport across the BBB.
While both studies showed higher BBB penetration of their modified ATO-NPs in vitro as well as higher antitumor efficacy in vivo, 59,66 the strategies differed. Tao et al. 66 used angiopep-2 as a targeting ligand for LRP receptors, present on both glioma cells and human brain endothelial cells (see above). A competition assay showed that transport of the NPs across the in vitro BBB model indeed relied on angiopep-2 targeting. Lu et al. 59 in turn coated their ATO-NPs with RGDyC, which is known to interact with integrin receptors expressed on the surface of neutrophils and monocytes. 111 The underlying idea was to target leukocytes in peripheral blood, stimulating phagocytosis of the NPs and eventually enabling uptake into the brain across the BBB upon leukocyte recruitment. 59,111 Indeed, their ATO-NPs showed higher efficacy in vivo, and also decreased the viability of glioma cells in an in vitro BBB model. Leukocyte targeting therefore cannot be the only explanation for the enhanced BBB uptake, which might also be at least partly attributable to the additional PEGylation the authors used. However, the exact mechanisms of RGDyC-mediated BBB crossing remain to be elucidated.

| CONCLUSION AND OUTLOOK

Evidently, interest in evaluating nanomedicine for ATO delivery to solid tumors has grown in recent years, especially for HCC and brain tumors. Many aspects must be considered when designing nanocarriers for ATO delivery: not only loading capacity, but also suitable carrier size, surface properties including the attachment of targeting ligands, options for triggered drug release, and combination with imaging agents to form theragnostics. Encouragingly, more and more researchers have taken their nanoparticles to the in vivo stage, which presumably provides a better approximation of NP efficacy than cell culture experiments. However, more data about the biodistribution, in vivo safety and stability of NPs have to be gathered before ATO-NPs can be taken to the clinical stage. Given the numerous advances and attempts that have been made in the past few years, we hope that this review can provide an impetus and inspiration for future research on ATO-NPs.

ACKNOWLEDGMENT

The authors gratefully acknowledge financial support by the Else Kröner-Fresenius-Stiftung (project no. 2016_A181).
Mapping mental health and the UK university sector: Networks, markets, data

The mental health and well-being of university staff and students in the UK are reported to have seriously deteriorated. Rather than taking this 'mental health crisis' at face value, we carry out network and discourse analyses to investigate the policy assemblages (comprising social actors, institutions, technologies, knowledges and discourses) through which the 'crisis' is addressed. Our analysis shows how knowledges from positive psychology and behavioural economics, disciplinary techniques driven by metrics and data analytics, and growing markets in digital therapeutic technologies work as an ensemble. Together, they instrumentalise mental health, creating motivational ecologies that allow economic agendas to seep through to subjects who are encouraged to monitor and rehabilitate themselves. 'Mental health' as a problem for UK universities has come to be largely defined through the outcomes of 'resilience' and 'employability' and is addressed through markets that enable training, monitoring, measuring and 'nudging' students and staff towards these outcomes.

Introduction

The 'mental health crisis' in UK universities is said to have deepened in the past five years, especially after a series of highly publicised student and staff suicides (BBC, 2018; Pells, 2018), as well as, more recently, during the COVID-19 crisis (Johnson and Kendall, 2020). Staff and student unions have pointed to unsustainable workloads, work insecurity, debt-related stress and cuts to student support (UCU, 2019; NUS, 2020). On the side of management, the representative organisation Universities UK (UUK) has declared mental health a 'strategic priority' (2017, n.p.), to be addressed by a 'whole university approach' (2020). The Office for Students (OfS), England's Higher Education regulator, has run competitions for projects to improve students' 'mental health outcomes' (OfS, 2018). Within critical university studies, much has been written about the emotional effects of UK higher education (HE) restructuring on students and staff, focusing especially on the stress of self-monitoring and the loss of solidarity in increasingly competitive environments (e.g. Hall and Bowles, 2016; Loveday, 2018; Morrish, 2019a). Indeed, while the preoccupation with student mental health extends back to the mid-twentieth century (Crook, 2020), and renewed interest in it was shown under New Labour (Baker et al., 2006), most contemporary critics link the current 'mental health crisis' to recent institutional transformations. The restructuring of UK HE itself is typically understood through the loose framing of 'neoliberalism' (Smyth, 2017). While useful for locating the ideological lineage of similar public sector restructuring methods across different contexts and countries, this framing hinders tracing the changes produced by successive layers of restructuring in UK HE. Neoliberal New Public Management approaches have been implemented since the 1980s. However, over the past decade, universities became financially dependent on tuition fees -increased in 2017 to £9,250 per year -and are ever more subjected to metrics that feed into a marketplace of qualifications. Rapid institutional expansion has been achieved through unprecedented levels of borrowing (McGettigan, 2015), while 'soft privatisation' has embedded private sector services in HE infrastructure, creating new pathways for governance, measurement and intervention (Cone and Brøgger, 2020).
Meanwhile, students graduate with increasing debt, whose repayment is dependent on earnings. Thus, the economic model of UK universities now directly links the labour market performance of graduates to the sustainability of HE debt. How might this context of HE restructuring shape conceptions of, and institutional responses to, the 'mental health crisis' in UK universities? We draw on approaches of critical studies of youth and education policy, including those focusing on mental health, that emphasise the power of networks and assemblages of policy production, encompassing governmental organisations, businesses and technologies (e.g. Ball, 2016;McGimpsey et al., 2017;Williamson, 2021). Our study situates HE restructuring as part of a broader dispositif (Foucault, 1980), whose heterogeneous elements contribute to the 'variegated ecology of knowledge and expertise' (Bacevic, 2019: 88) making up universities' mental health policy. We offer an exploratory network analysis of these elements, encompassing social and technical entities (actors, infrastructures, documents, events), combined with discourse analysis of policy, grey literature and relevant media. The policy is thus examined through mapping the management and business networks that contribute to it; the broader political agendas and networks served by it; and how these agendas materialise through particular technologies and relations. Our findings show that key elements of contemporary university restructuring -metrics, data, outsourcing, digital education tools -are also employed in mental health interventions. While the 'student mental health crisis' is predominantly an effect of insufficient support services, the 'solution' continues this trend. We replicate observations that link the positive mental health and well being agenda in education to New Labour policy networks (McGimpsey et al., 2017). These policy assemblages tie population mental health to national economic productivity, promote the metricisation, digitalisation and datafication of HE, and establish algorithmic behavioural economics in mental health governance. Technological solutions to student mental health -new apps and learning analytics -are promoted aggressively as the optimal, and labour-saving, approaches by the UK government, UUK, OfS and emerging therapeutic markets. Our network analysis identifies these markets, which are forming around the procurement of resilience and wellbeing workshops, digital mental health apps, and learning analytics for mental health. While our map cannot represent the circulation of value in these markets, we nevertheless note investment in labour-saving technologies, the exploitation of free staff and student labour, and intensified attempts to extract value from student data. Our critical analysis of the management of mental health in UK HE is not intended as a wholesale critique of psychotherapy discourse. We agree with Wright (2008; that the politicisation of private suffering in the public domain through therapeutic discourse has powerfully challenged gendered and patriarchal dynamics, in part by politicising vulnerability (Butler, 2006). Instead, our analysis draws attention to particular business networks, technologies and socio-economic/socio-technical assemblages formed around a narrow range of explicitly chosen therapeutic technologies and discourses, which are designed to meet the predetermined business needs of universities. 
The latter are, in turn, shaped by policies designed to produce an economic motivational ecology within which options are severely limited. Networked assemblages of policy production We draw on education policy research that analyses how 'neo-liberal policy networks' (Junemann et al., 2016: 537) are constituted and produce subjectivities. These approaches follow connections between governmental organisations, civil society and for-profit businesses, as well as the circulation of policy discourses, money (Ball, , 2016Au and Ferrare, 2015), and education technologies (Williamson, 2019), so as to map the accelerated, networked neoliberal policymaking Peck and Theodore (2015) have termed 'fast policy'. The Deleuzian concept of 'assemblage' is used increasingly in this research, to indicate 'complex social formations as made up of a whole array of trans-scalar and temporally multiple orders/levels/components and flows' that include not only social entities but also 'cultural forms, discourse, representation, subjectivities and affectivities' (Youdell and McGimpsey, 2015: 119). Conceptualising 'the university' as an assemblage allows us to de-reify its historical form and explore how its elements are constituted and transformed through social processes (Bacevic, 2019). Here, we use the term 'assemblage' in the Foucauldian sense, as a subset of a dispositif: 'a thoroughly heterogeneous ensemble. . .', 'the system of relations that can be established between these elements', which 'has a dominant strategic function' (Foucault, 1980: 196). In this approach, the network does not represent social power, but, instead, it displays the institutional and organisational avenues, discourses, technologies, knowledges and regulatory ecologies through which power is exercised and comes to be enacted or resisted (Ball and Olmedo, 2013). Our method thus combines (i) network analysis; and (ii) discourse analysis of policy and other grey literature. (i) Network analysis: We map actors, market relationships, organisational arrangements, events, products and technologies relevant to universities' latest mental health agendas, policies and interventions. Our mapping aims to reveal 'influencers', managerial hierarchies, the composition of policy communities and associated markets and infrastructures facilitating particular approaches (Jalili, 2013) to mental health. We store data using Neo4J (2014), a NoSQL graph database, which stores information as objects ('nodes') and relationships between objects ('edges'). This technology offers additional dimensions to traditional maps of education policy social networks (e.g. Morris et al., 2020), in that nodes are also projects, products, technologies, events and documents, allowing us to map multiple levels of social-material relationships (e.g. employment relationships, producer or exchange relationships) between nodes. We also trace the temporal mobility and productivity of nodes engaged in policy-making labour (Ball, 2016): commissioning and authoring reports; organising and speaking at policy events; fostering trust at trade fairs (Komljenovic, 2019). The static snapshots of the network we offer in this article cannot represent this temporal information. The power of Neo4J can be exploited by using network visualisations, which we anticipate making available on our project website, mapukhe.net. 
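To make the storage model concrete, the following is a minimal, illustrative sketch of how heterogeneous nodes (organisations, products, documents) and typed relationships could be recorded and queried with the official Neo4j Python driver. It is not the project's actual schema or data: the node labels, relationship types, connection URI, credentials and example records are all placeholders introduced for illustration.

```python
# Illustrative sketch only: a tiny policy-network graph in Neo4j.
# Labels, relationship types, URI, credentials and records are hypothetical.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def add_edge(tx, org, product, relation):
    # MERGE avoids duplicating nodes or edges if the same pair is added twice.
    tx.run(
        "MERGE (o:Organisation {name: $org}) "
        "MERGE (p:Product {name: $product}) "
        "MERGE (o)-[r:RELATES {type: $relation}]->(p)",
        org=org, product=product, relation=relation,
    )

def linked_organisations(tx, product):
    # Return every organisation connected to a given product, with the edge type.
    result = tx.run(
        "MATCH (o:Organisation)-[r:RELATES]->(p:Product {name: $product}) "
        "RETURN o.name AS org, r.type AS relation",
        product=product,
    )
    return [(record["org"], record["relation"]) for record in result]

with driver.session() as session:
    # Hypothetical example records, for illustration only.
    session.execute_write(add_edge, "ExampleUniversity", "ExampleWellbeingApp", "procures")
    session.execute_write(add_edge, "ExampleVendor", "ExampleWellbeingApp", "produces")
    print(session.execute_read(linked_organisations, "ExampleWellbeingApp"))

driver.close()
```

In a schema of this kind, temporal information (dates of events, report publication, contract award) could be attached as further properties on nodes or edges, which is one way the temporal mobility of policy actors described above might eventually be represented.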
Data are not objective representations of fact: their collection, the form they acquire, their analyses and their uses are political and ideological (Beer, 2016;Gitelman, 2013;Prinsloo, 2019). This also applies to network analysis. Our lens, what we 'map' and leave out, is shaped by our aim to trace avenues and vehicles of power in UK HE policy networks and guide resistance to deleterious socio-political transformations. But network analysis has also been a governance tool -e.g. to locate 'troubled families' associated with the 2011 UK riots (McGimpsey et al., 2017: 914-915). Such collections of data actively produce their objects of knowledge which, simultaneously, are subject to reshaping through intervention (e.g. rearranging 'troubled' family networks). Given that data and tools are world-making, we ought to consider their limitations for critical social studies. One potential limitation is that network visualisations can allow room for reductive theorisation, for example by rendering the social as an accumulation of interpersonal relationships. The network map is not an exhaustive representation of social reality, but only a guide for research, always incomplete and lacking dimensions. Nodes, though similarly visualised, are not equivalent to one-another, nor do we posit equivalence between human and 'non-human' nodes -an often criticised feature of actor-network theory (Law, 1992;Kirsch and Mitchell, 2004). Our map can help identify the heterogeneous ensemble through which a strategy or dispositif is relayed, but it does not display the governance strategy itself, the subjects it seeks to produce or the circuits that reproduce societal power relations (class, racialisation, gender and ability), which themselves shape policy. Finally, it tells us little about the level of encounter with subjects. These are all a matter for qualitative research, analysis and theorisation. (ii) Discourse analysis: The dispositif and policy assemblages we are studying encompass knowledges, discourses and subjectivities. We examine how these figure in the vocabularies, frameworks, explanatory models and ommissions of strategies and interventions, and what behaviours, outcomes and affectivities they seek to produce. Textual and discourse analysis also guides the network analysis, by following the cross-referencing of influential policy documents, agendas and slogans. Our corpus of analysis includes publicly available policies, grey literature and business media surrounding staff and student mental health in 2010-2020 (see Table 1). Ethics: The project received ethical approval from the Economics, Law, Management, Politics and Sociology Ethics Committee, University of York. We map information that is not only publicly available (e.g. products organisations sell) but much of it (e.g. board membership) is required to be public by law for UK registered entities. Mapping of this type does however raise ethical issues because aggregated data are more than the sum of their parts: they allow us to view a system as a whole, which is materially different from individual organisations and their board membership. However, we argue this research meets a public interest -that the system should be made visible to provoke further investigation of -and, indeed, resistance to -the strategies and assemblages comprising this site of policy creation at a moment of rapid sector restructuring. 
Mental health crisis Our research shows intensifying policy interest in student mental health around 2017, although the 'mental health crisis' has been in HE media currency since at least 2013 (NUS, 2013). In response to high-profile student suicides, UUK commissioned a report on student mental health by the thinktank Institute of Public Policy Research (IPPR) (Thorley, 2017). The report speaks of 'dramatic increases' (3) in demand for counselling and disability services and in disclosure of mental health conditions. Yet it analyses the 'crisis' in absolute numbers and not in proportion to the number of students and staff at different institutions. Although mental illness disclosure rates are quoted to have risen from less than 0.5% in 2006 to 2% in 2016 (21), this is far below the prevalence rate of mental disorders in the 16-24 age group, at 18.9% in 2014 (McManus et al., 2016). An analysis that takes volumes into account shows, for example, that at the University of Liverpool, where student numbers rose between 2013 and 2017 from 21,345 to 28,795 (HESA, 2021), students seeking counselling also rose from 526 to 997 (University of Liverpool, 2018), demand thus increased from 2.5 to 3.5%. This is still far from the proportion of students who might need help, but overwhelms the counselling and mental health service, which only expanded from 18 to 25 staff. IPPR cautions about the risks of 'students dropping out of university' and 'reputational damage' (37), linking student mental illness to income loss by universities. It proposes a 'whole-university approach' whose priority must be 'to promote positive mental health and wellbeing' (52) and only secondarily to 'enable access' to support and care (56). While they recommend increased funding, this is in a context where universities are 'redesigning elements of their counselling provision' because of 'a huge growth in demand' (66). Thus, recommendations include training for academics, security/accommodation staff and student 'peer-supporters' (52-53); 'workshops to build resilience' (54); 'onsite' 'NHS mental health specialists' and 'strong relationships with external providers' (61-62); 'early intervention' by 'monitoring' students with 'intelligent use of data and analytics' (59); and, ironically, 'robust data and evidence' (68). IPPR appears to have been commissioned to provide evidence in favour of a service restructuring already under way. We note key elements that welcome new markets: extracting additional unpaid labour from staff and students, outsourcing services, and creating new opportunities for charities and for-profit providers. The emphasis on 'prevention' defines the kind of services to be procured: workshops, digital tools and data analytics. Following IPPR's report, in September 2017, UUK published its agenda on student mental health, entitled #Stepchange. Echoing IPPR, it championed a 'whole university approach' driven by 'leadership, co-production, information, inclusivity, research and innovation' (UUK, 2017: np). In December 2017, the Department for Education Green Paper on young people's mental health (Greening and Hunt, 2017) commended UUK's approach, endorsing collaboration between 'student welfare, accommodation and security services', 'innovation in data linkage and analytics' and a 'new national strategic partnership' between 'tertiary education providers, local authorities, and health and care commissioners and providers'. 
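As a quick check of the proportional reasoning applied to the University of Liverpool figures quoted above, the short script below recomputes the share of students seeking counselling and the caseload per counsellor. The numbers are those reported in the text; the calculation itself is ours, added for illustration.

```python
# Worked check of the proportional-demand figures quoted above (Liverpool, 2013 vs 2017).
students_2013, counselling_2013, staff_2013 = 21_345, 526, 18
students_2017, counselling_2017, staff_2017 = 28_795, 997, 25

share_2013 = counselling_2013 / students_2013 * 100  # ~2.5%
share_2017 = counselling_2017 / students_2017 * 100  # ~3.5%

print(f"2013: {share_2013:.1f}% of students sought counselling")
print(f"2017: {share_2017:.1f}% of students sought counselling")

# The service expansion (18 -> 25 staff) lags the growth in caseload per counsellor.
print(f"cases per counsellor: {counselling_2013/staff_2013:.1f} (2013) "
      f"vs {counselling_2017/staff_2017:.1f} (2017)")
```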
Since then, regional Health Service Networks have been set up to facilitate partnerships for 'innovation' among NHS, universities and external providers that facilitate new markets and business-led research on mental health. Resilience, employability, analytics Recent research on youth and education policy following the network/assemblage approach has highlighted increased policy activity in youth mental health. McGimpsey et al. (2017) describe projects for youth 'happiness', 'wellbeing' and 'resilience' promoted by New Labour figures such as Richard Layard and associated think-tanks (The Young Foundation, the New Economics Foundation, New Philanthropy Capital). Williamson (2021) has mapped similarly theoretically underpinned global policy trends around the introduction of 'social and emotional learning' (SEL) curricula in education, supported by philanthropic institutions (Bill and Melinda Gates Foundation, Chan-Zuckerberg Initiative) and international organisations (OECD, World Bank, UNESCO, World Economic Forum). Driven and validated by psychometric and econometric data to demonstrate 'value for money', SEL has created new profit opportunities for educational technology corporations ranging from global-level conglomerates (e.g. Pearson) to smaller startups. The incursion into education of 'deliverology' (Barber et al., 2011) -a term to be discussed shortly -and behavioural economics have also been documented as parallel trends Bradbury et al., 2013). Our analysis of the policy assemblage relating to mental health in UK HE shows that it, similarly, comprises knowledges from positive psychology and behavioural economics, disciplinary techniques driven by data and metrics, and digital educational technologies. These work together as an ensemble to link mental health with economic productivity, establish 'nudges' and digital self-monitoring as therapeutic modalities and promote data-driven interventions. The lasting influence of New Labour policy networks and preoccupations in this area is known, but ought to be more fully appreciated. Baker et al. (2006) suggested that New Labour's 'social inclusion' and 'widening participation' agendas forced universities to provide additional student mental health services, as well as expand staff's pastoral role. Our mapping confirms a strengthening relationship between (student) mental health and the labour market, but also the subsumption of student mental health under a broader agenda of HE restructuring. Positive psychology, behavioural economics, and 'deliverology' jointly shape policy on mental health in universities. Their interconnectedness is evident in the movements across boundaries (public, private, governmental, academic) of three highly networked actors: Richard Layard, Michael Barber and David Halpern. They are linked by their positions in New Labour government, charities and think tanks; by co-authoring policy; and by their combined influence under subsequent Coalition and Conservative governments. We map these actors' networks in Supplemental Figure 1 (supplemental material, online only). The policy strategies they have espoused are part of a broader governmental dispositif shaping approaches to mental health in UK HE. 
Richard Layard, already mentioned for promoting positive psychology curricula in schools, is a key figure in the LSE Centre for Economic Performance, well known for establishing in the government agenda the measurement and cultivation of 'happiness' at the service of the national economy (Ahmed, 2010;Binkley, 2011;Cederström and Spicer, 2015;Pickersgill, 2019). Positive psychology and other therapeutic modalities that lend themselves to measurement, like Cognitive Behavioural Therapy, are preferred for their 'cost-effectiveness', measurability (Frijters et al., 2019) and cultivation of 'skills' for 'emotional resilience' against 'adversity', especially in young people (Hale et al., 2011). As we discuss next, this discourse figures in the policy and marketing of products for student mental health: resilience is to be cultivated aiming at student productivity, retention and employability. There is now a sizable literature critical of the 'neoliberal individualism' and 'vulnerability' promoted by governmental projects for positive psychology in education (see review in Cabanas and Illouz, 2019: 50-81). Framing the problem as an opposition between individualist and collectivist ethos, this criticism often neglects to register how the 'happiness' agenda subsumes concern about individual suffering under the presumed common good of economic growth. As argued by Layton (2020) and Wright (2008;, following Butler (2006), the cultivation of 'resilience' can entail a denial of vulnerability and interdependence; above all, a repudiation of dependence on social welfare. Indeed, resilience training was first tested on US soldiers aiming to reduce their healthcare needs (Howell, 2015), and has been widely and coercively implemented in workfare programmes in the UK (Friedli and Stearn, 2015) and elsewhere (Ylöstalo and Brunila, 2020). Layard was also initiator of the NHS programme Improving Access to Psychological Therapies (IAPT) -a therapeutic factory where newly trained mental health workers process high caseloads following standardised guidelines. Their stringently monitored targets include getting patients off sick pay and state benefits. Rizq (2012: 7) describes how this setting compels therapists to 'disavow the realities of suffering, dependence and vulnerability and turn away from the complexities of managing those in psychological distress.' The high caseloads processed by university counsellors could entail similar kinds of disavowal. While psychology's intertwining with labour economics has been a constant in its history (Roberts, 2020), these links are now consolidated, subjecting therapeutic practices directly to employability outcomes. These methods of implementing public sector reform overlap with Barber's 'deliverology', invented within Tony Blair's Delivery Unit (Richards and Smith, 2006) and imposed in health and education sectors across the world, through the edtech multinational Pearson and the consultancy McKinsey, where Barber subsequently held senior roles. It is composed of 'the formation of a delivery unit, data collection for setting targets and trajectories, and the establishment of routines' (Barber et al., 2011: np). Barber's and deliverology's influence on UK HE could not have been more direct. 
As first chair of the OfS (which replaced the HE Funding Council for England in 2017), he oversaw the introduction of the Teaching Excellence Framework (TEF) in 2016-2017, which evaluates teaching quality by the metrics of the National Student Survey (NSS), student retention and graduate employability. The TEF is, in effect, an instrument to engineer particular policy outcomes under the guise of quality management, and to normalise an economistic discourse on HE learning (Morrish, 2019b). As in many cases of public sector marketisation, metrics simulate market structures (Muller, 2018). The metric of 'employability' seeks to contain the investment risk of student loans, particularly when the post-2008 financial crisis and the COVID-19 pandemic render graduate low pay and unemployment more likely. Student retention, employability and satisfaction metrics are now core aims of HE institutions; but -TEF rhetoric aside -teaching alone cannot achieve them. Institutional concern with student mental health and wellbeing becomes another vehicle towards these aims. OfS has also elicited compliance with its policy agenda through funding competitions for university projects. The two most recent, including the 2018 competition (OfS, 2018), have addressed student mental health and encourage digital interventions and partnerships with external organisations (charities, private companies, NHS). This resonates with the restructuring agenda in healthcare and education. Cost-cutting through digital tools and external services is announced in the NHS Long Term Plan (NHS England, 2019), which UUK (2020) and OfS (2020) reference to justify digitisation and data analytics for student mental health. Analytics are promoted to assess the risk of student distress and suicide. Yet, as we show below, mental health risk operates as a proxy for the risk of student drop-outs, associated, again, with retention and employability metrics. Winning bids in OfS's (2018) competition 'Achieving a Step Change in Mental Health Outcomes for All Students' include two projects worth noting: University of Lincoln's collaborative project with mental health mobile apps Fika and UniHealth, and Northumbria University's mental health analytics project with Microsoft, Civitas and The Student Room Group app 'Enlitened'. These bring together positive psychology (the modality and discourse of Fika), data analytics and behavioural economics (UniHealth and Enlitened). The latter two provide behavioural 'social marketing' (Crawshaw, 2013) through health lifestyle messaging, self-tracking, and data mining via user surveys and clicks. They are combined with training students to offer unpaid 'peer support' (University of Lincoln, 2018). The toolset of behavioural economics concerns the third key actor in our map, David Halpern: Chief Analyst in Blair's Strategy Unit, collaborator with Richard Layard (O'Donnell et al., 2014) and Michael Barber, and founder of the Behavioural Insights Unit (now a 'Team' -BIT) under the Coalition government. Known for establishing libertarian paternalism as the UK's 'default policy option' (Jones et al., 2014), BIT now wins high-value UK public sector bids as an independent company partially owned by NESTA. Halpern also founded the 'What Works' government research units, whose representative advises UUK's Mental Health in Higher Education group. Behavioural economics harmonises with the broader dispositif by designing minimal-cost interventions to direct citizen behaviour 'in an environment where "there is no more money"' (Cadman, 2014).
BIT now also promote data analytics, e.g. in the NHS Long Term Plan to predict 'which cases cycle back into the system' (Kirkman and Harper, 2019); to alert Ofsted about 'at risk' schools (Williamson, 2017), and to prevent university student drop-outs. Currently dominant approaches to student mental health are thus part of a 'late-neoliberal' post-financial-crisis policy context of 'smart social investments' (McGimpsey, 2017), which seek to yield the maximum number of 'resilient', self-regulating and productive subjects at minimum cost. The discourses and toolsets of positive psychology and behavioural economics, whose libertarian paternalist style exercises power by 'incitement, provocation, intensification, and seduction' (Lambert, 2020: 50) are combined with the coercive techniques of deliverology. Metrics, digital technologies and data analytics are arranged into motivational ecologies to compel or induce feelings, actions and behaviours consistent with intended economic outcomes. Next, we look at the markets emerging as part of this assemblage, and their role in making and enacting this policy agenda. New markets and actors Our network analysis reflects the intertwining of governmental organisations, civil society and for-profit businesses in policy assemblages, which is characteristic of 'fast' neoliberal policy-making (Peck and Theodore, 2015). Powerful not-for-profit HE organisations, such as UUK and JISC (formerly Joint Information Systems Committee) facilitate new markets in mental health by nurturing links between the sector as a whole, individual universities and private providers, through policy workshops, consultancy and tenders. Outsourcing is the most direct element of 'redesigning counselling provision' (Thorley, 2017). For example, University of Bath has outsourced counselling, mental health, disability and wellbeing services to Spectrum.Life, and the London Universities Purchasing Consortium has contracts with a range of occupational health and wellbeing companies including OHWorks, Duradiamond and Monkey Mind Ltd. Other universities have advertised tenders, including London South Bank, Portsmouth, Newcastle, East Anglia and Northumbria. Alongside direct outsourcing, we identify three new and expanding markets, whose therapeutic modalities, technologies and discourses enact or 'deliver'the policy agendas we describe. They are: (a) wellbeing and mental health workshops and training; (b) digital tools for mental health; and (c) learning analytics for mental health. Finally, (d) we discuss plans to increase flows and sharing of student data between universities and private providers in the name of student mental health. (a) Workshops and training Recommended by IPPR (Thorley, 2017) and promoted by UUK and OfS, positive mental health and resilience workshops are increasingly run by forprofit social enterprises (e.g. Mental Health First Aid England), charities (e.g. Mind, Charlie Waller Memorial Trust, Student Minds), and collaborations between charities and private providers (e.g. Positive Group). We map this market in Supplemental Figure 2 (supplemental material, online only). The 'Mentally Healthy Universities' programme, worth £1.5 million and run by Mind (2019) in partnership with Goldman Sachs, aims to train students and staff 'to support their own mental health and that of others', in line with UUK's 'Stepchange' framework. 'Stepchange' adopts guidelines from the 'University Mental Health Charter' (Hughes and Spanner, 2019) by charity Student Minds. 
Workshops run by Student Minds range from peer-led courses on depression (e.g. 'Positive Minds') to resilience workshops (e.g. 'Sustain Your Brain', run 'in collaboration with Positive Group, a specialist consultancy focusing on the science of sustainable high performance' (Student Minds, 2014: 11)). Students' unpaid labour in peer-led programmes is said to 'enable them to develop their own skills and employability' (5). The workshops are a site at which power is exercised seductively, promising individual success and wellbeing, while enacting the cost-saving agenda of self-regulation, productivity, employability and service outsourcing. Critical ethnographic research in mental wellbeing workshops would be needed to explore how power and resistance operate in these HE settings, which are likely to be different from those in workfare (Friedli and Stearn, 2015). Yet we might glean from the rise in digital tools and analytics for mental health a desire to replace at least a proportion of mental health workshops by technologies that hardly employ any specialist staff. (b) Digital tools for mental health The rapid growth in digital tools for mental health (Bucci et al., 2019), especially for youth (Fullagar et al., 2017), is well documented. In universities, it enacts the labour-saving and student self-regulation agenda, incorporating positive psychology and behavioural economics discourses and methods. As shown on our network map -see Supplemental Figure 3 (supplemental material, online only) -significant players in online therapy tools are, in 2021, SilverCloud and Togetherall: more than half of UK universities and a large number of NHS trusts have contracts with at least one. They provide selfadministered programmes based on cognitive behavioural therapy (CBT) and other types of 'self-help courses'. Online communication with a counsellor or with other users is optional. Mobile apps are being widely adopted, including via OfS funded projects. Fika won over 35 UK university contracts between 2019 and 2020. Marketed as a 'mental fitness' tool consisting of short 'exercises' inspired by positive psychology -typically videos of students or sportspersons outlining their obstacles and coping methods -it targets 'motivation' and 'performance'. Students' ability to 'manage stress', find 'meaning', and maintain 'positivity' and 'focus' are rated and traced. Mental health serves academic performance, and indeed, Fika markets itself as a tool to enhance employability and 'to save universities millions by boosting student retention' (Hazlegreaves, 2019). Fika not only seeks to become integrated into university curricula (Bennett, 2020), but also to influence the direction of research in this area. It has gained funding by partner institutions to collaborate with their psychology researchers and research students -their labour now serving this market/agenda. Another category of mobile apps are designed to nudge healthy behaviour while delivering 'intelligence' from mined data to universities and plugging into learning analytics projects. Already mentioned, Enlitened (by Student Room Group), which has Mary Curnock Cook, former CEO of Universities and Colleges Admissions Service, on its advisory board, and Unihealth (by Thrive Ltd of BabyCentre) have both been piloted as part of OfS funded projects. However, after resistance to these forms of monitoring by students and staff, these contracts have not been extended. 
At Exeter, for example, students complained about Enlitened promotional talks during lectures and app surveys bypassing the student union (Church, 2019). Exeter UCU (2019) raised concerns that Enlitened data could monitor staff performance and passed a motion against it. Yet app entrepreneurs persist. UniWellBeing, adopted by 11 universities over 2020, is the next project of CampusM's CEO. CampusM was widely purchased by UK universities, but was criticised for tracking geolocation data to record attendance (Wellington, 2020). UniWellBeing can plug into Collabco's MyDay student app, used by many universities. In line with the current mental health agenda, it combines self-monitoring for mental/physical 'wellbeing', health messaging and nudging, data mining, and marketing of additional services (e.g. financial advice). (c) Learning analytics 'Learning analytics' are widely adopted to attract and retain students, promising to generate superior insights about their behaviour, as well as 'deliver increased efficiency' (UUK, Civitas and JISC, 2016: 2) -an imaginary of ever-expanding knowledge of subjects and their futures (Prinsloo, 2019). Analytics comprise databases and algorithms to mine, integrate and process data generated by students through the registration process and their movement through security infrastructure to access proprietary resources (rooms, library, platforms). Informing, as already discussed, 'smart' interventions targeted to 'at risk' individuals and institutions, they are 'practical relays of policy objectives' (Williamson, 2018: 1). The turn towards analytics for mental health emerged with UUK's call to 'align learning analytics to the mental health agenda' so that institutions can 'identify changes in students' behaviours ... address risks and target support' (UUK, 2017: n.p.). One of its major promoters is JISC, the not-for-profit company that mediates provision of digital infrastructure to UK universities. JISC provides its own learning analytics service and procurement platform. A map of this market is in Supplemental Figure 4 (supplemental material, online only). DTP Solutionpath's product StREAM currently leads this market, with its much publicised algorithm tracking 'student engagement' to predict student performance. 'No engagement' alerts enable (self-)monitoring and comparisons with cohort scores (known in behavioural economics as a 'social norm nudge'). Although the system accurately predicts student drop-outs, there is little evidence it helps prevent them (Foster and Siddle, 2020), and it has been criticised for fostering competition and anxiety among students (Jivet et al., 2017: 82). Nonetheless, 'nudging' students' behaviour through analytics monitoring, alerts and self-tracking, including to manage mental health, remains attractive, bringing the BIT to a symposium at Nottingham Trent University (2017), alongside JISC's Chief Innovation Officer, the OfS Head of Procurement, and psychometric analytics company Thomas International. We notice, here, a discourse that collapses student mental health into student retention, blurring the distinction between the risk of mental distress and the risk of withdrawal, instrumentalising the former to manage the latter. The aim is to avoid 'financial and reputational implications' and 'students dropping out of university'. Analytics algorithms that can identify those 'at risk' of mental distress and suicide are still at an experimental stage (Duffy et al., 2020), yet the rush to seize market opportunities renders the evidence base an afterthought.
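To illustrate the kind of logic such systems embody, the following is a deliberately simplified, hypothetical sketch of a 'no engagement' alert combined with a cohort comparison (the 'social norm nudge' described above). The weights, thresholds and field names are invented for illustration and do not describe StREAM or any other named product.

```python
# Hypothetical sketch of an engagement-alert pipeline with a cohort comparison.
# Weights and thresholds are invented; this is NOT any vendor's actual algorithm.
from statistics import mean

def engagement_score(events: dict) -> float:
    # Illustrative weighting of routinely collected interaction counts.
    weights = {"vle_logins": 1.0, "library_visits": 0.5, "attendance": 2.0}
    return sum(weights[k] * events.get(k, 0) for k in weights)

def weekly_alerts(cohort: dict) -> list:
    scores = {sid: engagement_score(ev) for sid, ev in cohort.items()}
    cohort_avg = mean(scores.values())
    messages = []
    for sid, score in scores.items():
        if score == 0:
            messages.append(f"{sid}: no recorded engagement this week (alert raised)")
        elif score < 0.5 * cohort_avg:
            # The comparison with peers is the 'social norm nudge'.
            messages.append(f"{sid}: engagement {score:.0f} vs cohort average {cohort_avg:.0f}")
    return messages

cohort = {  # hypothetical students and event counts
    "s1": {"vle_logins": 12, "library_visits": 3, "attendance": 8},
    "s2": {"vle_logins": 2, "attendance": 1},
    "s3": {},
}
print("\n".join(weekly_alerts(cohort)))
```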
As is typically the case with digital mental health technologies (Bucci et al., 2019), policy recommendations and adoption precede research. Critical research on the implications of learning analytics for mental health will be crucial, given that analytics research is currently led by the same teams tasked with implementation (e.g. Foster and Siddle, 2020). (d) Student data as resource The expansion of algorithmic student data processing into the area of mental health raises ethical issues around data privacy, value extraction from data, and the actions that might follow algorithmic profiling operations. JISC's (2020) code of practice for mental health analytics states that, although universities should ideally seek consent before collecting special category data and acting on analytics, under certain scenarios they do not have to request explicit consent (including when using the 'substantial public interest' justification under Data Protection Act 2018 (17)). Data sharing can also occur under the same premise. The 2017 Green Paper's endorsement of accommodation and other service providers' involvement in students' mental health has opened the way for actors such as the HE legal consultants Pinsent Masons (Watson and Blackey, 2018) and the British Property Federation (2019) to urge a 'much freer flow of information between providers and the institutions'. Going further, the JISC Horizons Group (2019) propose a 'wellbeing data trust' 'to enable a variety of organisations to share sensitive data related to student wellbeing' (12). In the name of their mental health and the 'public interest', sensitive student data are becoming (presumably freely) accessible to private companies. This not only adds layers of mediation to students' ability to control institutional responses to analytics alerts, but allows the valorisation of their data to develop services and products -including proprietary algorithms -sold back to students and universities. Conclusion Fifteen years on from Baker et al.'s (2006) analysis, 'mental health' as a problem for UK universities is largely defined through economic outcomes and, in turn, addressed through new markets that train, monitor, measure and 'nudge' students and staff towards these outcomes. The policy assemblage we have identified encompases key governmental technologies and disciplinary discourses that gained credibility under New Labour and are now embedded within mental health institutional structures as well as in commercial production targeting the HE sector. The discourse of positive psychology, training subjects to develop as competitive human capital (Binkley, 2011), combines with behavioural economics to become enacted through workshops, digital apps and analytics platforms procured as part of universities' restructuring of counselling services. Largely self-administered, these new 'interventions' correspond not only to the institutional rationality of cost-effectiveness, but also to the anticipation that, by producing students as self-regulating and resilient subjects, universities can improve retention, attainment and employability metrics, vital for competing in a restructured HE marketplace. This is another case of 'smart social investment' (McGimpsey, 2017) in post-financial crisis neoliberal governance. It comprises a drive to know subjects and to produce institutional and government-level 'outcomes' through accessing expanded masses of data and automating their analysis, as well as intervening to 'train' subjects cost effectively. 
In this way, policy agendas create motivational ecologies that allow them to seep through from the higher levels of governance, aiming to reduce the costs of social reproduction, down to the everyday level of individual students-subjects who are encouraged to monitor and rehabilitate themselves. Far from recognising vulnerability as a universal condition, or how social oppression affects the emotional dimensions of learning (Martinez-Cola et al., 2018), these interventions erase such awareness, geared to foster academic performance and employability. Our analysis contributes to the critique of governmental techniques and therapeutic industries promoting happiness, productivity and resilience (Ahmed, 2010;Binkley, 2011;Cederström and Spicer, 2015;Cabanas and Illouz, 2019) by demonstrating how these approaches to the student 'mental health crisis', especially in their digitised, automated form, are part of a broader policy assemblage that channels the contemporary restructuring of UK HE. The implications of these new technological interventions on the emotional life of students and staff are yet to be adequately explored, but we can already comment on their function of 'masking a practice which itself remains silent' (Foucault, 1980: 96), namely, disguising an instrumental HE strategy under a discourse of institutional concern, care and intervention. Although value circuits are not represented on our map, we can nevertheless see new layers of exploitation in the reducing ratio of counselling staff, the use of (typically unpaid) student and staff labour to deliver or legitimise interventions, and the valorisation of student data by external service providers. Meanwhile, excluded from the field of intervention are areas of UUK's 'whole university' known to cause stress to students: finance departments issuing penalties for unpaid fees or rents; competitive study environments heightened by learning analytics 'nudging'; stressed, overloaded and precarised staff, and risking student and staff health to contain financial risk during the COVID-19 crisis (Morrish, 2020). Resistance to these processes has already taken place, but the policy assemblage we describe develops at a rapid pace. Our research is a starting point, which, we hope, will help students and staff reclaim the agenda on mental health in universities.
Application of polyoxometalates and their composites for the degradation of antibiotics in water medium

INTRODUCTION

With the rapid advancement of medical science, the world has witnessed a surge of new-age drugs and pharmaceutical compounds (PCs), which are widely utilized to combat different kinds of diseases and protect human lives. The use of these modern PCs [1] helps in fighting a wide range of diseases and, as a result, life expectancy has also risen. A natural outcome of this development is the huge global demand for PCs and antibiotics. In particular, the COVID and post-COVID period witnessed a huge surge in drug and antibiotic consumption [2] throughout the world. A recent review reports that the pharmaceutical industry consumes around 22% of the freshwater available for industrial purposes [3]. Manufacturing generates huge quantities of wastewater containing residues of different antibiotics. These compounds are organic molecules with stable structures that resist degradation under natural conditions. The prevalence of these recalcitrant compounds in water bodies is therefore a potential threat to entire ecosystems. For example, the accumulation of large quantities of tetracycline (TC), a broad-spectrum antibiotic often used against several diseases, can lead to the development of antibiotic resistance genes [4]. The spread of these antibiotic-resistant genes (ARGs) reduces the therapeutic potential of antibiotics against pathogens. Apart from TCs, other antibiotics, such as macrolides, sulfonamides, and fluoroquinolones, are also capable of inducing ARGs, which often spread among different microorganisms via horizontal gene transfer [5]. Moreover, during the treatment of different categories of industrial wastewater, these compounds are often only partially degraded, producing more toxic intermediates. Li et al. highlighted an important result in this respect for the photocatalytic degradation of acyclovir using a g-C3N4/TiO2 photocatalyst under visible light irradiation [6]. One of the intermediates produced, guanine, is almost three times as toxic as the parent compound. Rizzo et al. reported similar findings [7]. Degradation of a wastewater treatment plant effluent containing mixtures of amoxicillin, carbamazepine, and diclofenac was carried out using a heterogeneous TiO2-based catalyst. However, the results indicate that complete degradation was not possible under the experimental conditions.
Hence, due to the several risks associated with residual PCs in water bodies, they have been marked as notable emerging pollutants of the present industrial period. Environmental scientists and engineers are carrying out research to develop technologies for eradicating these compounds from water bodies. Conventional biological processes often fail due to the non-biodegradable nature of the antibiotics. Advanced techniques such as adsorption, catalysis, and membrane filtration are often recommended by research groups for the abatement of PCs. Adsorption is a versatile water treatment technique that is useful for a wide category of pollutants [8-10]. Different novel adsorbents have already been reported in the literature, and many promising ones are yet to be explored for this purpose. However, adsorption does not lead to the ultimate mineralization of the organic compounds, which is a serious drawback. Membrane filtration is a costly technique and may not be feasible for large-scale industrial use. Catalytic degradation, by contrast, can be regarded as a green technology for eliminating PCs. However, developing a promising catalyst with good degradation performance and reusability is a challenging task for the research community. Many new-age materials have emerged as promising catalysts. Recently, we reviewed the efficiency and applicability of g-C3N4-based composite catalysts for the degradation of PCs [11]. The use of photocatalysis for cleaning water and sewage has been studied extensively [12]. Over the past few decades, researchers have developed a variety of semiconductors, including TiO2 [13] and ZnO [4], for photocatalytic applications. Very recently, polyoxometalates (POMs) have drawn the attention of scientists for pollutant degradation [14]. Yu et al. successfully synthesized three distinct manganese POMs: [Mn3(H2O)3(AsW9O33)2]12-, [Mn3(H2O)5(PW9O34)2]9-, and [Mn3(H2O)3(SbW9O33)2]12- [15]. Among these three, [Mn3(H2O)3(SbW9O33)2]12- was found to be superior to the other two in terms of catalytic activity for water oxidation. Neither manganese oxide nor Mn2+ (aq.) was detected in the course of the photocatalytic oxidation of water. Cai and Hu studied the effects of light-emitting diode (LED) and ultraviolet A (UVA) radiation on the photocatalytic breakdown of sulfamethoxazole (SMX) and trimethoprim in a continuous photoreactor containing TiO2 [16]. Irradiation for 20 min resulted in >90% degradation of SMX and trimethoprim at an initial concentration of 400 ppb. Antimicrobial activity during the photocatalytic process was also examined using a reference strain of Escherichia coli. A concomitant reduction in residual activity against bacteria was observed for each dilution of trimethoprim that was eliminated.
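For orientation, the reported >90% removal of SMX and trimethoprim within 20 min can be translated into an apparent rate constant if pseudo-first-order kinetics are assumed. This kinetic model is our assumption for illustration and is not stated in the cited study.

```python
# Back-of-the-envelope estimate assuming pseudo-first-order kinetics,
# C/C0 = exp(-k*t); the model is assumed here, not reported in [16].
import math

removal = 0.90   # fraction degraded (the study reports >90%)
t = 20.0         # irradiation time in minutes

k_app = -math.log(1.0 - removal) / t      # apparent rate constant, min^-1
half_life = math.log(2.0) / k_app         # corresponding half-life, min

print(f"apparent rate constant k >= {k_app:.3f} min^-1")
print(f"corresponding half-life  <= {half_life:.1f} min")
```

Under these assumptions, the figures imply an apparent rate constant of at least about 0.12 min^-1 and a half-life of no more than about 6 min, which gives a rough sense of how fast such TiO2-based photocatalytic systems can operate at ppb-level antibiotic concentrations.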
POMs, which share photocatalytic features with conventional semiconductors, have recently been employed as efficient photocatalysts for the removal of organic compounds from polluted water. POMs are made up of clusters of three or more transition metal oxyanions bound together by shared oxygen atoms [17]. The metal atoms exist in high oxidation states and belong to either group V or group VI of the transition elements. Molybdenum (VI), tungsten (VI), niobium (V), tantalum (V), and vanadium (V) are examples of transition metals [Figure 1] that can have an unfilled d electron configuration [18]. Ammonium phosphomolybdate, with the anion formula (PMo12O40)3- and a Keggin-type structure, was the first POM to be prepared, by Berzelius in 1826, and it was further characterized in 1934. Other POMs were discovered following this work [19]. For example, the phosphotungstate anion [(PW12O40)3-] has a phosphate group at its core surrounded by a framework of twelve octahedral tungsten oxyanions [20]. POMs are of considerable interest to scholars in a variety of subject areas, including the biological sciences, chemistry, molecular electronics, and materials science. This is because POMs possess a variety of distinctive characteristics, including a high degree of tunability, strong oxidizing ability, and high acidity. POMs have been employed as photocatalysts for the degradation of organic dyes [21] and pharmaceutical impurities [22]. Recently, we used ammonium phosphomolybdate as a photocatalyst for the hydroxylation of benzophenone [23]. Additionally, POMs have been used as photooxidation catalysts for the oxidation of a wide variety of organic compounds, including alcohols [24], olefins [25], and others. POMs find use in areas where their redox properties, photochemical action, ionic charge, and conductance are advantageous. About 80%-85% of the published research papers on POMs relate to their use as catalysts, while the remaining 15%-20% discuss coatings, membranes, and thin films [26]. In reality, rising cancer rates and bacterial resistance to antimicrobials are two of the world's most pressing health issues. In addition, new medications to combat SARS-CoV-2 infection are urgently needed in light of the current pandemic crisis. Many scientists are now interested in POMs [27-29] as potential replacements for traditional antiviral, antibacterial, and anticancer drugs. In this article, we summarize the findings of significant studies conducted in the twenty-first century on the environmental applications of POMs for the eradication of antibiotics such as TC, ciprofloxacin (CIP), SMX, and others. These studies investigate the stability of POMs as well as their future potential.
The aim of the current paper is to highlight the efficiencies of POM-based catalysts in degrading different PCs in an aqueous medium. Most of the papers cited here have been published within the last five years. Most of the studies reported in the literature have been performed with TC. Hence, the degradation of TC and some other important drugs, such as CIP and SMX, is the main focus of this review.

PREVALENCE OF ANTIBIOTICS AND CONTRIBUTING FACTORS TO ANTIBIOTIC RESISTANCE

Many recent studies have shown the existence of PCs and antibiotics in natural water bodies and wastewater streams. They are released from industrial units along with the discharged wastewater. Often, a large fraction that is not metabolized inside the animal body is also released into the environment through urine. Hou et al. reported concentrations of TC, oxytetracycline (OTC), and chlortetracycline (CTC) in the treated effluent of a wastewater plant of 11.9, 334.3, and 1.8 mg/L, respectively [30]. Among the several types of wastewater, these compounds are most prevalent in hospital wastewater. Hospital wastewater is a complex matrix consisting of analgesics, anesthetics, β-blockers, psychiatric drugs, anti-inflammatories, etc. It is expected that, in comparison to municipal wastewater, hospital wastewater possesses 2-150 times higher concentrations of PCs [31]. As a result of the large discharge of PCs and their residues in wastewater, they also reach groundwater naturally via seepage. Hence, they are detected in groundwater in different parts of the world. Dai et al., in a recent review article, reported the occurrence of TC in shallow groundwater in China at a concentration of 184.2 ng/L [32]. Javid et al. also found traces of TC in surface water and groundwater in some parts of Iran [33].
Antibiotics are widely utilized in the treatment of bacterial diseases in both humans and animals [34][35][36] .Even at extremely low concentrations, pharmacological formulations maintain their bioactivity and bioaccumulation.As a result, they permeate vital biological cycles and wreak havoc on the bodies of many different kinds of organisms, which ultimately develop immunity to antimicrobial treatments.Antibiotics degrade and self-degrade in vivo, producing more hazardous chemicals than the initial drug [37,38] .TC, CIP, and SMX are commonly used antibiotics that have become increasingly prevalent in bacterial strains.The widespread and inappropriate usage of these antibiotics has led to the development and emergence of antibiotic resistance mechanisms such as altered drug uptake, drug target alternation, and drug inactivation [39] .This has resulted in reduced susceptibility to these antimicrobial agents, rendering them ineffective in treating bacterial infections.The emergence and spread of antibiotic resistance mechanisms arise from complex interactions between numerous factors.Overuse and inappropriate usage of antibiotics in both human and veterinary medicine, as well as the inclusion of antibiotics in animal feed, have been identified as significant contributors to the increasing prevalence of resistance [40] .In addition to these factors, non-compliance with treatment, uncontrolled use of antibiotics, and unsanitary environments could also play a role in facilitating the development and spread of antibiotic resistance.The use of substandard and counterfeit drugs, as well as the unauthorized sale of antibiotics without prescription in certain regions, further exacerbates this problem [41] .Figure 2 shows antibiotics and the factors that contribute to their presence throughout the entire ecosystem.This issue is of utmost concern, as antibiotic resistance has the potential to cause devastating consequences both within healthcare systems and throughout society at large.It is associated with higher mortality rates, prolonged illness, and increased healthcare costs due to the need for additional diagnostic tests and antibiotic therapies.It is imperative that healthcare professionals, policymakers, and the public work together to combat antibiotic resistance through responsible use of antibiotics, promoting awareness of the issue, and supporting ongoing research into effective strategies for addressing this global threat.Furthermore, it is important to address the contribution of other factors such as disinfectant usage, as their effect on resistance prevalence can be difficult to evaluate but may represent a critical piece in the puzzle of combating this issue.Therefore, there is an urgent need to develop and implement comprehensive strategies that prioritize the appropriate use of antibiotics in human and veterinary medicine while also addressing the issue of counterfeit drugs and the unauthorized sale of antibiotics [42] .In addition to these measures, improved hygiene practices and surveillance systems, as well as the development of new effective antimicrobial agents, will be essential in combating antibiotic resistance and reducing its impact.Previous studies have revealed that the prevalence of antibiotics, including TC, CIP, and SMX, is significant in contributing to the development and spread of antibiotic resistance [43] .Addressing this complex issue will require a multifaceted approach that involves cooperation among healthcare professionals, policymakers, and the public.The 
emergence and prevalence of antibiotic resistance pose a serious threat that must be addressed with urgency.Failure to take action could result in catastrophic consequences for efforts to combat infectious diseases and public health more broadly.As such, minimizing the impact of antibiotic resistance requires a coordinated response that ensures the availability of effective antibiotics and vaccines and widespread access to rapid and reliable diagnostics. POMS AGAINST DIFFERENT ANTIBIOTICS DEGRADATION The photocatalytic degradation of antibiotics on semiconductor materials, particularly n-type semiconductors with suitable band gaps such as CdS [44][45][46] , TiO 2 [47][48][49] , and ZnO [50][51][52] , has shown great promise.These photocatalysts can efficiently convert solar energy into chemical energy and promote redox reactions [53][54][55] .However, it is common for electrons and holes to recombine, which results in a low level of photocatalytic activity [56][57][58] .POMs are a form of metal-oxygen anion nanocluster that is made up of plentiful oxygen atoms and highly oxidized early transition metals (such as Mo, Nb, Ta, V, and W) [59][60][61][62][63] .The process of precipitation is frequently used in the production of POMs.This method includes dissolving metal salts in a solvent (usually water) and then adjusting the pH level and temperature of the resulting mixture until it meets the required parameters [64][65][66][67] .The functionalization of pure inorganic POMs is important because it makes it possible to tailor the physical and chemical properties of POMs to specific requirements, hence opening the door to the development of further applications in the real-world environment. The usefulness of POMs can be improved in three basic ways, e.g., (i) the solubility of POMs, which are normally negative charges, can be altered by exchanging their counter-ions with organic cations [68,69] ; (ii) due to the high surface oxygen content of POMs, they can be used as inorganic ligands to coordinate with metal ions and construct high-dimensional coordinated complexes, which is made possible by their chemical structure [70,71] ; (iii) POMs can be subjected to covalent modification with organic ligands, which paves the way for the rational development of inorganic-organic hybrid materials that are founded on POMs [72] .The extraordinary physicochemical properties of POMs, as well as their structural diversity and straightforward synthesis processes, can be of use in a wide variety of sectors, including photo-and electro-catalysis, pharmaceuticals, magnetism, and energy storage and conversion, to name just a few of the many possible applications [73][74][75][76] .POMs, because of their semiconducting properties, have recently been demonstrated to be good candidates for the effective photocatalytic elimination of antibiotic pollutants [77][78][79][80][81] . 
Advantages of POMs

POMs have a structure that is analogous to that of semiconductors. Like semiconductors, POMs have a valence band (VB) that is filled with electrons and a conduction band (CB) that is empty. As can be seen in Figure 3A, the mechanism of photocatalysis on POM catalysts is, in general, comparable to that of semiconductor catalysts. Photo-generated holes (h+) and electrons (e-) are created when the catalyst is irradiated with energy equal to or greater than its band gap, driving electrons from the VB to the CB. In aqueous media, the positive holes (h+) react with water and surface hydroxide to form hydroxyl radicals (•OH). Because of its powerful oxidizing capability, the hydroxyl radical acts as the active species in the oxidation and degradation of pharmaceutical products [Figure 3B] [82]. Moreover, when photocatalytically active semiconductors such as TiO2 are placed atop POMs, the photocatalytic activity of the POMs increases significantly [83]. In such a configuration, the POM typically acts as an electron scavenger, collecting the electrons generated when light excites the semiconductor and forming a reduced POM- species. This delays the recombination of h+/e- pairs, which ultimately increases the efficiency with which hydroxyl radicals are created by h+ from the semiconductor. In the meantime, the reduced POM- species donates an electron to dissolved oxygen in the solution, creating superoxide (•O2-) radicals, which react further with water to produce hydroxyl radicals (•OH) and/or hydrogen peroxide (H2O2); these reactive species in turn oxidize the pharmaceutical products [Figure 3C] [84]. Keggin-type POMs are frequently used in the photocatalytic destruction of emerging pharmaceutical contaminants because of their suitable band gap, high stability, and simple preparation. Other types of POMs also have potential applications once their band gap has been modified by appropriate loading. Employing POMs as photocatalysts offers several advantages:
1. POMs contain a high concentration of transition metals (including Mo, Nb, Ta, V, and W) and a large number of surface-accessible active sites.
2. The photocatalytic performance of POMs can be further enhanced by tuning the band gap of their structures, which can be accomplished by modifying the heteroatoms (such as P and Si) or the valence states of the metal atoms.
3. Matrix materials (such as carbon nanomaterials, TiO2, and other supports) and organic ligand-functionalized POMs can provide a synergistic effect between the various constituents.
4. Single-crystal X-ray diffraction provides a definitive molecular structure of POMs, which is useful for investigating the correlation between structure and function at the atomic level.
As a result, researchers are delving deeper into the design of frameworks, the development of photocatalytic properties, and the analysis of the conversion mechanisms and kinetics of POM-based catalysts for the photodecomposition of emerging pharmaceutical contaminants. Recent studies have shown that POM-based photocatalysts are highly effective at degrading antibiotics such as TC, CIP, and SMX.
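To make the band-gap discussion above more concrete, the sketch below converts a band-gap energy into the corresponding absorption-edge wavelength via E = hc/λ. It is an illustrative calculation only: apart from the 3.12 eV value quoted later for the Sm2MoO6/ZnO/rGO composite, the band-gap figures are assumed round numbers rather than measured values for any specific POM.

```python
# Illustrative only: longest wavelength a photocatalyst can absorb, from its band gap.
PLANCK_EV_NM = 1239.84  # h*c expressed in eV*nm

def absorption_edge_nm(band_gap_ev: float) -> float:
    """Absorption-edge wavelength (nm) corresponding to a band gap (eV)."""
    return PLANCK_EV_NM / band_gap_ev

examples = [
    ("hypothetical Keggin-type POM", 3.5),   # assumed value
    ("TiO2 (anatase, typical)", 3.2),        # commonly cited, not from this review
    ("Sm2MoO6/ZnO/rGO composite", 3.12),     # value reported later in the text
]
for name, eg in examples:
    edge = absorption_edge_nm(eg)
    regime = "UV" if edge < 400 else "visible"
    print(f"{name}: Eg = {eg} eV -> edge ~ {edge:.0f} nm ({regime})")
```

A catalyst whose absorption edge falls below roughly 400 nm needs UV excitation, which is why band-gap narrowing through heteroatom substitution or composite formation, as discussed above, is central to achieving visible-light activity.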
Application of POMs and their composites in TC degradation TCs represent a family of antibiotics, most of which are broad-spectrum drugs such as TC, doxycycline, minocycline, etc.However, sarecycline is a narrow-spectrum drug that is useful for treating acne, a dermatologic condition [85] .They are often prescribed to fight against bacterial infections of the skin, intestine, respiratory tract, etc. TC restricts the bacteria from producing proteins required for further growth.Hence, their spread and growth are prevented.The widespread use of antibiotics such as TCs has led to the presence of these substances in environmental matrices, including surface water and groundwater.Therefore, there is an urgent need to remediate antibiotic pollution through effective methods such as degradation.POMs and their composites have gained attention as potential candidates for the degradation of TC.Recent studies have suggested that POMs and their composites with g-C 3 N 4 nanosheets, TiO 2 , and polyoxotungstates (PW 12 ) exhibit great potential for the degradation of TC [86] .Moreover, POMs have shown effectiveness in degrading other antibiotics, such as sulfasalazine [87] and OTC [88] .Furthermore, the research has indicated that the efficiency of TC degradation can be influenced by various factors such as pH, temperature, light intensity, soil type, and composition.Organic carbon present in the environment may also have an effect on the degradation of TC due to its influence on adsorption and desorption equilibria.Therefore, the application of POMs and their composites in the degradation of TC presents a promising avenue for addressing the issue of antibiotic pollution in the environment.Chen et al. further demonstrated the potential of POMs and their composites in TC removal from water [89] .Their study utilized polyvinylpyrrolidone-modified nanoscale zero-valent iron prepared through the liquid-phase borohydride method for TC degradation.The findings of their research indicated that the POM-based composite exhibited excellent catalytic performance, with a degradation efficiency of up to 98.4% within 2 h. In one of our recent works, microporous ammonium phosphomolybdate has been explored as the active catalyst for TC degradation purposes under ambient aerobic conditions [90] .The schematic is shown in Figure 4.With the initial concentration of TC being 20 mg/L, the dose of catalyst 0.75 g/L, almost complete degradation (~98%) took place within 90 min of reaction time.Detailed experimental investigation revealed that singlet oxygen and hydroxyl radicals played a major role in the degradation process.Further, it was observed that chemical oxygen demand (COD) removal was nearly 98%, while total organic carbon (TOC) removal was around 63.6% at the end of the reaction period.It indicates that intermediates formed during the catalytic oxidation were resistant to the mineralization process. Many other POM-based novel catalysts of the recent era showed promising performance towards TC degradation in aqueous medium.Beni et al. reported the preparation of Au nanoparticles loaded POM/ zeolite imidazolate nanocomposite as the active catalyst for TC degradation purposes [78] .The newly synthesized catalyst exhibited excellent performance in a wide pH range.However, at neutral pH, the removal efficiency was obtained highest.Shi et al. 
prepared a composite material from Cs3PMo12O40 and g-C3N4 to develop a highly efficient Z-scheme photocatalyst for the removal of various recalcitrant pollutants from water [79]. Under visible light exposure, 83.11% degradation was achieved in the case of tetracycline hydrochloride, with a rate constant of 0.01255 min-1. More recently, Sun et al. prepared a novel photocatalyst from a combination of phosphotungstic acid, Fe2O3, and carbon nanotubes by means of a hydrothermal process and microwave irradiation [91]. The synergy between the Keggin structure of phosphotungstic acid and Fe2O3 made it a highly efficient catalyst for TC oxidation under visible light irradiation. Heng et al. used Lindqvist-type K7HNb6O19 in different ratios with reduced graphene oxide (RGO) to synthesize a series of heterogeneous photocatalysts [92]. Among all the prepared catalysts, the one formed using 0.5 mM K7HNb6O19 showed excellent catalytic performance towards TC photodegradation, achieving a removal efficiency of 74.69% in 9 min of reaction time. Yang et al. reported the promising application of a Fe-POM/Bi2MoO6 composite catalyst for TC degradation over a wide pH range (3-11) [93]; the removal process proceeded via the photo-Fenton technique. Zhu et al. developed a composite catalyst from cobalt acetate, H4PMo11VO40, and biochar to degrade different antibiotics in wastewater [94]. Degradation progressed in the presence of peroxymonosulfate (PMS); the catalyst completely degraded TC within 30 min of reaction time at a PMS concentration of 0.17 mM and a catalyst dose of 0.15 g/L.

In conclusion, the application of POMs and their composites in TC degradation demonstrates promising results (as summarized in Table 1) for addressing antibiotic pollution in environmental matrices. The widespread use of antibiotics, specifically TC, has resulted in their presence in matrices such as surface water and groundwater, and there is an urgent need to develop effective interventions for remediation. The use of POMs and their composites in antibiotic degradation therefore presents an exciting prospect for tackling this environmental contamination problem. As antibiotic residues can contribute to bacterial resistance, posing a serious risk to humans and other animals alike, new approaches such as photocatalysis have been explored to address the issue. The combination of POMs with nanoparticles has attracted significant attention because of the unique properties of this approach [106]. Various studies have reported the potential of POM-based composites in removing other commonly used antibiotics, such as SMX [81] and CIP [107,108]. These studies demonstrate the ability of POM-based composites to effectively reduce the concentration of antibiotics in environmental matrices, leading to a significant reduction in the risk of bacterial resistance and other adverse effects on human health and the environment.
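Since most of the studies summarized above report either a removal efficiency after a fixed time or an apparent pseudo-first-order rate constant k (assuming Ct = C0·exp(-kt)), the short sketch below shows how the two quantities are related. The model choice and the plug-in numbers (about 98% TC removal in 90 min for the ammonium phosphomolybdate system, and k = 0.01255 min-1 for the Cs3PMo12O40/g-C3N4 composite) are taken from the text purely for illustration; actual kinetic fits would use the full concentration-time profiles.

```python
import math

def k_from_removal(removal_fraction: float, t: float) -> float:
    """Apparent pseudo-first-order rate constant from one (removal, time) point."""
    return -math.log(1.0 - removal_fraction) / t

def time_to_removal(k: float, removal_fraction: float) -> float:
    """Time needed to reach a target removal fraction for a given rate constant."""
    return -math.log(1.0 - removal_fraction) / k

# ~98% TC removal in 90 min (ammonium phosphomolybdate example in the text)
print(f"apparent k ~ {k_from_removal(0.98, 90.0):.4f} min^-1")

# k = 0.01255 min^-1 (Cs3PMo12O40/g-C3N4 example): time to reach 83.11% removal
print(f"time for 83.11% removal ~ {time_to_removal(0.01255, 0.8311):.0f} min")
```

Comparisons of this kind should be read with caution, since reported rate constants depend strongly on catalyst dose, light source, and initial drug concentration.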
Additionally, factors such as pH, temperature, light intensity, and soil type and composition have been reported to affect the persistence of TC antibacterials [109]. Therefore, careful consideration and management of these factors are necessary for the optimal performance of POM-based composites in tackling environmental antibiotic pollution. Because antibiotic pollution is driven mainly by the ubiquitous use of antibiotics such as TC, the importance of effective interventions for remediation cannot be overstated. Future studies should investigate the synergistic effect of POM-based composites with other remediation techniques, such as activated carbon adsorption and bioremediation, in order to develop comprehensive and sustainable approaches for managing antibiotic pollution in the environment. Furthermore, the development of innovative and sustainable antibiotic alternatives is crucial to reducing the widespread use of antibiotics that contributes to environmental pollution. The scientific community has also explored other approaches, such as photocatalysis, to address antibiotic pollution in the environment [37].

Application of POMs and their composites in CIP degradation

CIP, an antibiotic belonging to the quinolone class, is extensively used in medical treatment, animal husbandry, agriculture, and aquaculture. It is used against various conditions, including infections of the bones and joints, endocarditis, gastroenteritis, urinary tract infections, respiratory tract infections, etc. Owing to this versatility, it is consumed globally on a large scale. After usage or production, a considerable quantity of residues enters the environment and easily infiltrates water bodies, contributing to the emergence of antibiotic-resistant strains and causing significant pollution of aquatic ecosystems. Moreover, CIP residues are toxic to the human central nervous system, liver, kidneys, and circulatory system. Photocatalysis has been widely recognized as an effective method for remediating CIP in water, offering advantages such as low energy consumption, cost efficiency, and operational convenience [110]. However, most catalysts suffer from drawbacks such as rapid recombination of photoinduced electron-hole pairs and slow reaction rates, limiting their practical application in water purification [111]. Consequently, the development of efficient materials with exceptional performance in treating CIP in the environment remains a challenging task. POM-based photocatalysts have emerged as promising candidates in this respect: they demonstrate strong photocatalytic degradation capability and have found extensive application in the treatment of wastewater containing CIP pollutants [112]. In recent times, several noteworthy studies have explored the use of POMs as photocatalysts in the degradation of CIP. Brahmi et al.
conducted a study to assess the efficacy of newly developed POM/polymer composites for removing CIP from water [88] .The experiments conducted during the study highlighted the crucial role of the phosphomolybdic acid-based composite in completely degrading CIP.Through simple photolysis under UV-visible light irradiation, CIP could be effectively eliminated from water.However, the study did not experimentally demonstrate the recyclability of the developed immobilized photocatalyst.Nonetheless, regeneration capability was tested on the photodegradation of a selected dye, and it was observed that the composite's effectiveness decreased starting from the 5th cycle.This reduction may be attributed to the complete reduction of the POM metallic center, necessitating a slow reoxidation with air or rapid reoxidation with strong oxidants such as H 2 O 2 . He et al. conducted a synthesis of nitrogen-deficient g-C 3 N 4 loaded with POMs porous photocatalysts featuring P-N heterojunctions [108] .This synthesis involved the formation of chemical bonds between nitrogen-deficient C + in g-C 3 N x and bridging oxygen in POMs, including phosphomolybdic acid (PMA), phosphotungstic acid (PTA), and silicotungstic acid (STA).Adsorption and photocatalysis experiments were conducted to assess the efficacy of g-C 3 N x /POM nanosheets in the removal of CIP [Figure 5], employing synergistic effects of adsorption and photocatalysis.Among the composites prepared with different mass ratios, g-C 3 N x /POMs-30 (30% wt.%) demonstrated the highest degradation ability.Under visible light, g-C 3 N x /PMA-30, g-C 3 N x /PTA-30, and g-C 3 N x /STA-30 achieved CIP degradation up to 93.1%, 97.4%, and 95.6%, respectively, within a mere 5-min duration.The incorporation of Keggin-type POMs into porous g-C 3 N 4 nanosheets resulted in enhanced light absorption and improved efficiency in separating electron-hole pairs, thereby resulting in a much higher photocatalytic activity.Free radical scavenging and ESR free radical capture experiments confirmed that •OH and •O 2 -were effective radicals for the degradation of CIP. Application of POMs and their composites in SMX degradation Like TC and CIP, SMX is another widely used drug.It is one of the most prominent members of the sulfonamide group of drugs often used for the treatment of bacterial infections, urinary tract infections, bronchitis, etc. [113] .It works well against both gram-positive and gram-negative bacteria such as Streptococcus pneumoniae, Escherichia coli, Klebsiella species, etc.It is widely used for human and veterinary medication purposes and is classified as a high-priority drug.It is generally consumed orally with water.Some studies have been reported in recent times regarding the application of POM-based composite catalysts for degrading SMX in water medium.Zhang et al. 
prepared a novel ternary composite catalyst, Cdots/SrTiO3/NH4V4O10, and applied it to the oxidative abatement of drug compounds such as SMX, CIP, and aureomycin hydrochloride [114]. Under optimized conditions, an excellent degradation efficiency of 94.7% was achieved for SMX. Experiments were carried out with an initial drug concentration of 15 mg/L under simulated sunlight. The reaction proceeded via a Z-scheme mechanism, and the catalyst showed good reusability over four cycles. A type II heterojunction mechanism was also proposed; however, it was not consistent with the ESR results. Schematics of both mechanisms (type II and Z-scheme) are shown in Figure 6. Detailed investigations revealed that hydroxyl radicals played a crucial role in the photodegradation of SMX.

Yang et al. tested the photocatalytic activity of Fe-polyoxometalate-decorated Bi2MoO6 nanosheets towards the degradation of TC [93]. It showed promising performance in TC removal. Moreover, as reported by the authors, the intermediate products were not found to be harmful, and the degradation efficiency was retained throughout the wide pH range of 3-11. Apart from TC, the newly synthesized catalyst was also tested against SMX, where remarkable performance was achieved. In another work, Zhang et al. explored the photocatalytic efficiency of the composite catalyst N-SrTiO3/NH4V4O10 towards the oxidative degradation of SMX in aqueous solution [115]. NH4V4O10, SrTiO3, and N-doped SrTiO3 were abbreviated as NVO, STO, and NSTO, respectively. Scanning electron microscopy (SEM) showed that NVO possessed a layer-like structure, whereas STO revealed a nanosphere-like morphology. Interestingly, owing to N doping, the diameter of the N-STO microspheres was reduced in comparison to pristine STO. Moreover, SEM and transmission electron microscopy (TEM) images of the developed catalyst showed that N-STO was uniformly loaded on the surface of NVO and that a heterojunction was successfully formed. Experimentally, a 30 wt% loading of N-SrTiO3 (NSN-30) gave the most efficient catalyst. The traditional type II heterojunction principle was initially proposed as the degradation mechanism; however, owing to a contradiction with the EPR results, the S-scheme mechanism was found to be more suitable.

Figure 6. Z-scheme charge transfer mechanism at the 5CSN-8 interface [114]. NVO: NH4V4O10; SMX: sulfamethoxazole; STO: SrTiO3.

Liu et al. utilized an Ag3PO4/Bi4Ti3O12 heterojunction catalyst for the degradation of the SMX drug [116]. The composite catalyst exhibited higher removal efficiency than pure Ag3PO4 or pure Bi4Ti3O12. The photocatalytic reaction was performed under irradiation from a 300-W xenon lamp fitted with a filter (λ > 400 nm), with an initial SMX concentration of 5 mg/L. Around 80% oxidation took place within 40 min, with a reaction rate constant of 0.035 min-1. It was deduced that the photocatalytic degradation occurred via a direct Z-scheme mechanism and that h+ was the main species responsible for SMX removal. POM-based catalysts explored for degrading CIP and SMX are compiled in Table 2.
Application of POMs and their composites in degradation of other antibiotics

Apart from the commonly used pharmaceuticals mentioned in the previous sections, some other antibiotics reported in the literature have been successfully degraded with POM-based catalysts. One reported study prepared a POM-based catalyst [Figure 7] and deployed it for the degradation of metronidazole [117]. The addition of NaCl slightly improved the degradation efficiency: in the absence of NaCl, nearly 71% degradation was achieved after illumination under UV light for 150 min, whereas in the presence of NaCl a removal efficiency of 81% was obtained. In addition, solution pH played a significant role in metronidazole oxidation; at an acidic pH (~3), nearly 67% degradation occurred, while at a higher pH (~9), only 34% oxidation took place. In another study, Wang et al. [118] examined POM-based catalysts for antibiotic degradation; in the presence of the second catalyst reported there, a promising removal of 90.61% was obtained for ceftiofur. Moreover, it showed excellent reusability over five cycles without any appreciable loss of catalytic activity.

Brahmi et al. prepared a novel POM/polymer composite for the eradication of four model PCs, namely CIP, OTC, ibuprofen (Ibu), and erythromycin, through a photocatalytic mechanism [88]. Different results were obtained for the different drugs. For CIP and OTC, removal was feasible by simple photolysis under UV light exposure, even in the absence of the catalyst. For Ibu, by contrast, the presence of the POM/polymer composite drastically enhanced the degradation efficiency, whereas for erythromycin the removal efficiency was diminished in the presence of the POM/polymer catalyst.

The decatungstate anion (W10O32 4-) has been reported as an efficient POM for the degradation of organic pollutants. Cheng et al. utilized sodium decatungstate (Na4W10O32) as a sacrificial agent for the degradation of the pharmaceuticals sulfasalazine and sulfapyridine [87]. In the presence of H2O2, W10O32 4- is reduced to W10O32 5-, leading to the generation of hydroxyl radicals. Li et al. reported the synthesis and application of a novel POM-metal organic framework (MOF) composite, PW12@MFM-300(In), for the photocatalytic degradation of sulfamethazine in water [119]. Around 98% degradation took place within two hours under visible light irradiation. Further experimental analysis revealed that the indium-oxygen cluster and the organic bridging ligands of the MOF host acted as quantum dots and antennas. On exposure to visible light, photo-generated electrons are transferred from the valence band of MFM-300(In) to the conduction band of PW12, producing the photocatalytic activity; these photo-generated electrons are then captured by H2O2 to produce strong hydroxyl radicals that facilitate the reaction.

Figure 7. (Caption fragment) ...3- nucleus and polygonal frame [117].

Selvakumar et al. used a Sm2MoO6/ZnO/rGO composite for the photocatalytic degradation of the Ibu drug molecule [120]. The composite catalyst was prepared by a simple hydrothermal method, and the experimental investigation (UV-DRS study) revealed that it possessed the lowest band gap (3.12 eV) among Sm2MoO6, ZnO, and Sm2MoO6/ZnO. Under visible light, almost complete (96.73%) degradation of Ibu took place within 90 min.
From the self-assembly of β-cyclodextrin and POM, a composite material was produced and deployed for the catalytic degradation of various dyes as well as antibiotic compounds such as nitrofurazone, TC, and berberine in the presence of H 2 O 2 [22] .The schematic of the composite formation and organic compound degradation is shown in Figure 8. Briefly, the catalytic potential was tested against an initial concentration of 0.033 mM of nitrofurazone.In the composite catalyst, the content of POM was kept as 0.055 mM and 0.03 mM polycationic per-6-deoxy-6-ethylenediamine-β-cyclodextrin (EDA-CD).After 15 min from the start of the reaction, 50 µL of H 2 O 2 was added, and the degradation percentage was monitored from time to time.Complete catalytic degradation of nitrofurazone occurred within 19 min under the exposure of a 50-W mercury lamp. Yang et al. prepared novel POM@MOF hybrid-derived hierarchical hollow Mo/Co bimetal oxides and deployed them for levofloxacin degradation purposes via peroxymonosulfate activation [121] .The as-prepared catalyst showed promising performance and excellent reusability.Moreover, it has been found workable .EDA-CD: Ethylenediamine-β-cyclodextrin; POM: polyoxometalate. throughout a wide pH range of 3-11, making it suitable for diversified types of wastewater.A tabular representation [Table 3] is provided related to the catalytic degradation of other drugs by application of POM-based composites. STABILITY OF POMS IN SOLUTION A catalyst should be chemically stable enough to be suitable for real field applications.It is already mentioned earlier that POMs are formed due to the lowering of pH of the neutral solutions of salts of transition metals, e.g., Mo, W, V, Nb, etc.In many studies, the stability of the POM-based catalysts has been reported as one of the crucial parameters affecting the overall removal efficiency of the target pollutant.Cheng et al. studied the effect of the solution pH on the stability of the composite photocatalyst as well as the removal efficiency of sulfasalazine and sulfapyridine [87] .The degradation rate constant of both the PCs was enhanced at a lower pH of 3 and decreased at a higher pH of 6.Moreover, as per other reported studies, the stability of decatungstate gets diminished at a higher pH.However, some other studies reported a reverse phenomenon regarding the degradation and stability of the catalyst with respect to the solution pH.Xing et al. reported that the optimum pH for POM-anchored zinc oxide nanocomposite to degrade TC is around 7 [103] .Liu et al. found that a high pH condition is favorable for the degradation [98] .In the study of degradation of five PCs, TC, ofloxacin (OFL), norfloxacin (NOR), CIP, SMX by application of Fe-POM nanodots decorated Bi 2 MoO 6 nanosheets, it was quite interesting to observe that the catalytic efficiency as well as the stability of the nanocomposite was retained in a wide pH range (3-11) [93] .The removal efficiency of TC throughout a wide pH range is shown in Figure 9. .TC: Tetracycline. Aureliano, in his recent review article, highlighted the importance of the stability of the POM-based composites regarding environmental application [123] .It is mentioned that decavandate (V 10 ) is often explored for various environmental applications.Many studies reported that in the presence of proteins (actin, Ca 2+ -ATPase), the stability of V 10 is improved significantly. 
CURRENT KNOWLEDGE GAP AND FUTURE PERSPECTIVE Undoubtedly, POMs are excellent new-age materials that can be effectively utilized for catalytic degradation of PCs in water bodies.They possess several distinguishing properties that can be of immense help in view of wastewater treatment.However, there are several issues and a huge knowledge gap that must be bridged in order to efficiently use these catalysts for industrial purposes. Recepoglu et al., in their recent review, highlighted the fact that relatively less research work has been carried out on forming magnetic composites with the POMs [124] .Formation of magnetic composite often helps in easy recovery of the catalyst material and helps in recyclability.Bastami and Ahmadpour [122] prepared a novel POM magnetic nanohybrid catalyst for the degradation of Ibu under solar light irradiation. MOFs have been proven to be an excellent modifier for POMs.However, to date, only a handful of MOFs have been explored for this purpose, as highlighted by D'Cruz et al. in their recent review article [125] .Other MOFs need to be explored in order to know their suitability for the purpose. For industrial applications, one of the most important criteria is the suitability of the catalyst in the presence of other coexisting ions.Hence, more studies are recommended to have a thorough idea regarding the influence of other ions on POM-based catalysts for PC degradation purposes. Lan et al., in their recent critical review, described several drawbacks that need to be overcome in the near future for making POM-based composites as novel materials for real wastewater treatment [126] .Most of the POM-based catalysts are based on some classic clusters such as Keggin, Dawson, and Anderson.So, it is clear that more exploration is still required at the atomic level.Additionally, a detailed study of the mechanism is recommended.One of the most critical drawbacks of the POM-based composites is their leaching into the medium during the course of the reaction.It is mentioned that most of these composites are prepared through hydrothermal, impregnation, self-assembly, and sol-gel methods, and they remain attached to the support material by simple non-covalent interaction.Hence, there is a huge possibility that these materials will get detached from the support materials after some cycles of catalytic reaction. Recepoglu et al. mentioned some major drawbacks apart from describing the universal application of POMbased composites for wastewater treatment [124] .It is also mentioned that more effort is required to monitor the design of the POM composite in order to achieve minimal leaching during the course of the reaction.Moreover, as it is an emerging field in the area of wastewater engineering, more studies on real wastewater effluents, along with modeling, are required.Emphasis is needed on the application of POM composites for the large-scale wastewater treatment plant as well as for commercialization purposes. CONCLUSIONS POM-based catalysts have proved to be an excellent new-age material for pharmaceutical wastewater remediation.These novel catalysts have been found promising in degrading commonly encountered antibiotics such as TC, SMX, CIP, Ibu, etc. 
Reactive oxygen species such as hydroxyl radicals and singlet oxygen often play a vital role in the mineralization of PCs. In addition to single-compound systems, these catalysts have also shown good performance in degrading wastewater effluents containing multiple drugs; hence, they are well suited to treating real wastewater. However, most of the catalysts reported have only been demonstrated at the laboratory scale, so further research on scale-up is recommended. Only a handful of POMs have been explored so far, and there is great scope for developing new catalysts by forming composites with other new-age materials.

Figure 1. Change of oxidation states of transition metals.
Figure 2. Antibiotics and their breakdown products are likely to travel and end up in a variety of ecosystems.
Figure 3. The photocatalytic mechanism of catalysts represented in three different ways: (A) a diagrammatic photocatalytic scheme of POM-based catalysts; (B) the mechanism of pure semiconductor photocatalysts; and (C) the mechanism of POM-based composite photocatalysts. CB: conduction band; POM: polyoxometalate; VB: valence band.
Figure 4. Schematic of TC degradation by application of microporous ammonium phosphomolybdate as catalyst.
Figure 5. The proposed mechanism for the photodegradation of CIP on the surface of g-C3Nx/POMs nanocomposites [108]. CB: conduction band; CIP: ciprofloxacin; PMA: phosphomolybdic acid; VB: valence band.
Figure 8. Possible self-assembly models of EDA-CD and POM, and hybrid nanoparticles, chemical formula, and corresponding cartoon representations of EDA-CD and POM [22].
Figure 9. Degradation of TC over a wide range of pH [93].
Expected Anomalies in the Fossil Record The problem of intermediates in the fossil record has been frequently discussed ever since Darwin. The extent of `gaps' (missing transitional stages) has been used to argue against gradual evolution from a common ancestor. Traditionally, gaps have often been explained by the improbability of fossilization and the discontinuous selection of found fossils. Here we take an analytical approach and demonstrate why, under certain sampling conditions, we may not expect intermediates to be found. Using a simple null model, we show mathematically that the question of whether a taxon sampled from some time in the past is likely to be morphologically intermediate to other samples (dated earlier and later) depends on the shape and dimensions of the underlying phylogenetic tree that connects the taxa, and the times from which the fossils are sampled. Introduction Since Darwin's book On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life [2], there has been much debate about the evidence for continuous evolution from a universal common ancestor.Initially, Darwin only assumed the relatedness of the majority of species, not of all of them; later, however, he came to the view that because of the similarities of all existing species, there could only be one 'root' and one 'tree of life' (cf.[11]).All species are descended from this common ancestor and indications for their gradual evolution have been sought in the fossil record ever since.Usually, the improbability of fossilization or of finding existing fossils was put forward as the standard answer to the question of why there are so many 'gaps' in the fossil record.Such gaps have become popularly referred to as 'missing links', i.e. missing intermediates between taxa existing either today or as fossils. Of course, the existence of gaps is in some sense inevitable: every new link gives rise to two new gaps, since evolution is generally a continuous process whereas fossil discovery will always remain discontinuous.Moreover, a patchy fossil record is not necessarily evidence against evolution from a common ancestor through a continuous series of intermediates -indeed, in a recent approach, Elliott Sober (cf.[11]) applied simple probabilistic arguments to conclude that the existence of some intermediates provides a stronger support for evolution than the non-existence of any (or some) intermediates could ever provide for a hypothesis of separate ancestry.Moreover, some lineages appear to be densely sampled, whereas of others only few fossiliferous horizons are known (cf.[10]).This problem has been well investigated and statistical models have been developed to master it (see e.g.[6], [7]), [12]). In this paper, we suggest a further argument that may help explain missing links in the fossil record.Suppose that three fossils can be dated back to three different times.Can we really expect that a fossil from the intermediate time will appear (morphologically) to be an 'intermediate' of the other two fossils?We will explore this question via a simple stochastic model. 
In order to develop this model, we first state some assumptions we will make throughout this paper: firstly, we will consider that we are sampling fossil taxa of closely related organisms and which differ in a number of morphological characteristics.We assume this group of taxa has evolved in a 'tree-like' fashion from some common ancestor; that is, there is an underlying phylogenetic tree, and the taxa are sampled from points on the branches of this tree. It is also necessary to say how morphological divergence might be related to time, as this is important for deciding whether a taxon is an intermediate or not.In this paper, we make the simplifying assumption that, within the limited group of taxa under consideration (and over the limited time period being considered), the expected degree of morphological divergence between two taxa is proportional to the total amount of evolutionary history separating those two taxa.This evolutionary history is simply the time obtained by adding together the two time periods from the most recent common ancestor of the two taxa until the times from which each was sampled (in the case where one taxon is ancestral to the other, this is simply the time between the two samples).This assumption on morphological diversity would be valid (in expectation) if we view morphological distance as being proportional to the number of discrete characters that two species differ on, provided that two conditions hold: (i) each character has a constant rate of character state change (substitution) over the time frame T that the fossils are sampled from, and (ii) T is short enough that the probability of a reverse or convergent change at any given character is low.We require these conditions to hold in the proofs of the following results.We will discuss other possible relations of morphological diversification and distance towards the end of this paper.When the tree consists of only one lineage from which samples are taken at times T1, T2 and T3, then clearly the distance d1,3 is always larger than d1,2 and d2,3.Consequently, E1,3 > max{E1,2, E2,3}.For samples taken from different lineages of a tree, the distance d1,3 of one particular sample from time T1 to the one of T3 can be smaller than the distance of either of them to the sample taken at time T2.Yet in expectation we always have E1,3 > max{E1,2, E2,3} for two-branch trees.For more complex trees this can fail as we show in Example 2.7. The simplest scenario is the case where the three samples all lie on the same lineage, so that the evolutionary tree can be regarded as a path (cf. Figure 1).In this case, the path distance (and hence expected morphological distance) between the outer two fossils is always larger than the distance that either of them has from the fossil sampled from an intermediate time.But for samples that straddle bifurcations in a tree, it is quite easy to imagine how this intermediacy could fail; for example, if the two outer taxa lie on one branch of the tree and the fossil from the intermediate time lies on another branch far away (cf. Figure 2).But this example might be unlikely to occur, and indeed we will see that if sampling is uniform across the tree at any given time, in expectation the morphological distances remain intermediate even for this case (cf. 
Figure 2).Yet for more complex trees, this expected outcome can fail, and perhaps most surprisingly, the distance between the earliest and latest sample can, in expectation, be the smallest of the three distances in certain extreme cases.Thus, in order to make general statements, we will consider the expected degree of relatedness of fossils sampled randomly from given times.Our results will depend solely on the tree shape (including branch lengths) of the underlying tree and the chosen times. Results We begin with some notation.Throughout this paper, we assume a rooted binary phylogenetic tree to be given with an associated time scale 0 < T 1 < T 2 < T 3 .The number of T i -lineages (of lineages extant at time T i ) is denoted by n i .For instance, in Figure 3, the number n 1 of T 1 -lineages is 3, whereas the numbers n 2 and n 3 of T 2 -and T 3 -lineages are both 5.If not stated otherwise, extinction may occur in the tree.Every bifurcation in the tree is denoted by b i , where b 0 is the root.Note that in a tree without extinction, the total number of bifurcations up to time T 3 (including the root) is n 3 − 1.For every b i let t i denote the time of the occurrence of bifurcation b i .We may assume that the root is at time t 0 = 0. Now, for every b i , we make the following definitions: where n l j,i denotes the number of descendants the subtree with root b i has at time T j to the left of its root b i , and n r j,i is defined analogously for the descendants on the right hand side of b i . It can be seen that bifurcations for which at least one branch of offspring dies out in the same interval where the bifurcation lies always have P j,k i -value 0. Consequently, if either t 0 < t i < T 1 or T 1 < t i < T 2 or T 2 < t i < T 3 and one of b i 's branches becomes extinct in the same interval, respectively, then P j,k i is 0 for all j, k.Note that the number P j,k i denotes the number of different paths in the tree from time T j to time T k in the subtree with root b i and in which no edge is taken twice. Example 2.1.Consider the tree given in Figure 3. Here, the values P j,k i for bifurcation b 1 corresponding to time t In the sampling, select uniformly at random one of the T i -lineages as well as one of the T j -lineages to get the expected length E i,j of the path connecting a lineage at time T i with one at time T j in the underlying phylogenetic tree.Then, the expectation that a fossil from the intermediate time T 2 also will be an intermediate taxon of two taxa taken from T 1 and T 3 , respectively, refers to the assumption that E 1,3 > max{E 1,2 , E 2,3 }.We will show in the following lemma that this last inequality can fail and describe the precise condition for this to occur.Moreover, we later show that E 1,3 can be strictly smaller (!) than both E 1,2 and E 2,3 -that is the temporally most distant samples can, on average, be more similar than the temporally intermediate sample is to either of the two.Note that if P j,k i is 0, the corresponding branch does not contribute to the expected Figure 3: A rooted binary phylogenetic tree with three times T1, T2, T3 at which taxa have been sampled.The dotted branches refer to taxa that do not contribute to the expected distances from one of these times to another and thus are not taken into account.On the other hand, bifurcation b2 at time t2 shows that extinction may have an impact on the expected values.Such branches have to be considered. 
distance from one time to another.We can therefore assume without loss of generality that all bifurcations b i have at least one descendant on their left-hand side and at least one on their right-hand side, each in at least one of the times T 1 , T 2 , T 3 .In Figure 3, branches that therefore need not be considered are represented with dotted lines. In order to simplify the statement of our results, for all bifurcations b i set Lemma 2.2.Given a rooted binary phylogenetic tree with times 0 < T 1 < T 2 < T 3 and the root at time t 0 = 0.Then, E 1,3 ≤ E 1,2 if and only if Proof. (1) every T 3 -lineage has an ancestor in T 1 ways along the root In the above bracket, the three summands refer to different paths from time T 1 to time T 3 .The first summand belongs to those paths that go directly from T 1 to T 3 and thus have length T 3 − T 1 .There are n 3 such ways as every T 3 -lineage has an ancestor in T 1 .The second summand sums up all paths going along one of the bifurcations b i for i = 0.For every i, there are by definition exactly P 1,3 i such paths.Similarly, the third summand refers to all paths along the root b 0 , whose length is determined by taking the distance from T 1 to the root plus the distance from there to T 3 . Hence, there are no values 0 < T 1 < T 2 < T 3 such that T 3 − T 2 fulfills the required condition, and so E 1,3 > E 1,2 for all choices of T i .Conversely, suppose i:0<ti<T1 Then, select T 1 , T 2 with 0 < T 1 < T 2 and set Then, T 3 > T 2 and Corollary 2.4.If either (i) n 1 = 2 or (ii) no extinction occurs in the tree and Proof.(i) Note that if n 1 = 2, obviously only one bifurcation, say b î (for some î such that 0 ≤ t î < T 1 ), contributes to the number n 1 of lineages at time T 1 , all the branches added by additional bifurcations become extinct before T 1 .Thus: P 1,3 î , P 1,2 î = 0 and P 1,3 i , P 1,2 i = 0 for all i = î.Analogously to the proof of Lemma 2.2 we have for n 1 = 2: î .Thus, n 2 = P 1,2 î and = 0 for all i = î.Thus, i:0<ti<T1 (ii) In this case, obviously i for all i : 0 < t i < T 1 and therefore i:0<ti<T1 Lemma 2.2 essentially states that the expected degree of relatedness from taxa of time T 1 to taxa of time T 3 can be larger than the one to taxa of time T 2 , but it requires the distance from T 2 to T 3 to be "small enough".Whether such a solution is feasible can be checked via Corollary 2.3.Lemma 2.2 shows already how the role of intermediates depends on the times the fossils are taken from.Corollary 2.4(i) on the other hand shows how the tree itself has an impact on the expected values: if the tree shape (including branch lengths) is such that at time T 1 only two taxa exist, then the just mentioned scenario cannot happen as the condition of Corollary 2.3 is not fulfilled.However, we can prove an even stronger result, namely that not only E 1,3 < E 1,2 is possible, but E 1,3 < min{E 1,2 , E 2,3 } can be obtained for a suitable choice of times T 1 , T 2 , T 3 .For this, we need the following lemma.Lemma 2.5.Given a rooted binary phylogenetic tree with times 0 < T 1 < T 2 < T 3 and the root at time t 0 = 0. Then E 1,3 ≤ E 2,3 if and only if As in the proof of Lemma 2.2, we have (cf.( 3)) (5) Analogously, Thus, which holds precisely if With the help of the two lemmas we can now state the following theorem. 
Theorem 2.6.Given a rooted binary phylogenetic tree with times 0 < T 1 < T 2 < T 3 and the root at time 0.Then, E 1,3 ≤ min{E 1,2 , E 2,3 } if and only if the following two conditions hold: Proof.The Theorem follows directly from Lemmas 2.2 and 2.5. The following example demonstrates the influence of times 0 < T 1 < T 2 < T 3 according to the above theorem. Discussion The analysis of the fossil record provides an insight into the history of species and thus into evolutionary processes.Stochastic models can provide a useful way to infer patters of diversification, and they form a useful link between molecular phylogenetics and paleontology [8].Such models would greatly benefit from incorporation of potential fossil ancestors and other extinct data points to infer patterns of evolution.In this paper we have applied a simple model-based phylogenetic approach to study the expected degree of similarity between fossil taxa sampled at intermediate times. 'Gaps' in the fossil record are problematic [10] as they can be interpreted as 'missing links'.Therefore, numerous studies concerning the adequacy of the fossil record have been conducted (see, for example, [3], [9], [13]), and it is frequently found that even the available fossil record is still incompletely understood.This is particularly true for ancestor-descendant relationships (see, for instance, [4], [5]).For example Foote [5] reported the probability that a preserved and recorded species has at least one descendant species that is also preserved and recorded is on the order of 1%-10%.This number is much higher than the number of identified ancestordescendant pairs.Thus, it remains an important challenge to recognize such pairs [1].This is also essential with regard to ancestor-intermediate-descendant triplets, as it is possible that there are in fact fewer 'gaps' than currently assumed, i.e. that intermediates are present but not yet recognized.Such issues have an important bearing on any conclusions our results might imply concerning the testing of hypotheses of continuous morphological evolution, or concerning the shape of the underlying evolutionary tree based on the non-existence of certain intermediates. Another challenge is to investigate different phylogenetic models for describing the expected degree of morphological separation between different fossil taxa sampled at different times.Our findings strongly depend on the assumption that morphological diversification is proportional to the distance in the underlying phylogenetic tree.This is justified if morphological difference is proportional to the number of differing discrete characters, that each of these characters changes at a constant rate over the time period of sampling, and that homoplasy is rare.This last assumption requires the rate of character change to be sufficiently small in relation to the time period of the sampling -the appearance of reverse or convergent character states will lead to a more concave (rather than linear) relationship between morphological divergence and path distance.A similar concave relationship might be expected for continuous morphological evolution as described by neutral Brownian-motion. 
Thus, the impact of different assumptions on the role of intermediates could be further investigated. But even if we assume that diversification is proportional to time, there may be other ways to measure 'distance' that could be usefully explored; for instance, one could define the distance between two taxa to be the maximum (rather than the sum) of the two divergence times of the taxa back to their most recent common ancestor. This definition of distance allows the degree of relatedness to be higher for taxa on the same clade than for other taxa. In this case, there exist analogous results to Lemmas 2.2 and 2.5 (results not shown), but the formulae are somewhat different, particularly for Lemma 2.5.

Figure 2: For samples taken from different lineages of a tree, the distance d1,3 of one particular sample from time T1 to the one of T3 can be smaller than the distance of either of them to the sample taken at time T2. Yet in expectation we always have E1,3 > max{E1,2, E2,3} for two-branch trees. For more complex trees this can fail as we show in Example 2.7.
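The expected distances E_{i,j} defined above can be computed directly by enumerating sampled lineages on a given tree. The sketch below does this for a small hypothetical tree that is not one of the figures in the text: the root splits at time 0 into two branches, the left branch splits again at time 2, there is no extinction, and samples are taken at T1 = 1, T2 = 3 and T3 = 5. All of these choices are assumptions made purely for illustration.

```python
from itertools import product

def lineages_at(t):
    """Lineages alive at time t in the toy tree (root split at 0, 'L' splits at 2)."""
    return ["L", "R"] if t < 2.0 else ["LL", "LR", "R"]

def path_distance(a, ta, b, tb):
    """Total evolutionary time separating sample (a, ta) from sample (b, tb)."""
    if a.startswith(b) or b.startswith(a):           # same lineage / ancestor-descendant
        return abs(ta - tb)
    t_mrca = 2.0 if (a[0] == b[0] == "L") else 0.0   # split times of the toy tree
    return (ta - t_mrca) + (tb - t_mrca)

def expected_distance(ti, tj):
    """E_{i,j}: mean path distance over uniformly sampled lineage pairs."""
    pairs = list(product(lineages_at(ti), lineages_at(tj)))
    return sum(path_distance(a, ti, b, tj) for a, b in pairs) / len(pairs)

T1, T2, T3 = 1.0, 3.0, 5.0
for name, ti, tj in [("E_1,2", T1, T2), ("E_2,3", T2, T3), ("E_1,3", T1, T3)]:
    print(f"{name} = {expected_distance(ti, tj):.3f}")
```

For these particular hypothetical choices the enumeration gives E_1,2 = 3.0, E_2,3 of about 5.11 and E_1,3 = 5.0, so the temporally outermost samples are, on average, slightly closer than the middle sample is to the latest one; this is the kind of reversal that the lemmas above characterize.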
Genetic diversity of clinical Pseudomonas aeruginosa isolates in a public hospital in Spain Background Pseudomonas aeruginosa is an important nosocomial pathogen that exhibits multiple resistances to antibiotics with increasing frequency, making patient treatment more difficult. The aim of the study is to ascertain the population structure of this clinical pathogen in the Hospital Son Llàtzer, Spain. Results A significant set (56) of randomly selected clinical P. aeruginosa isolates, including multidrug and non-multidrug resistant isolates, were assigned to sequence types (STs) and compared them with their antibiotic susceptibility profile classified as follows: extensively drug resistant (XDR), multidrug resistant (MDR) and non-multidrug resistant (non-MDR). The genetic diversity was assessed by applying the multilocus sequence typing (MLST) scheme developed by Curran and collaborators, and by the phylogenetic analysis of a concatenated tree. The analysis of seven loci, acsA, aroE, guaA, mutL, nuoD, ppsA and trpE, demonstrated that the prevalent STs were ST-175, ST-235 and ST-253. The majority of the XDR and MDR isolates were included in ST-175 and ST-235. ST-253 is the third in frequency and included non-MDR isolates. The 26 singleton sequence types corresponded mainly to non-MDR isolates. Twenty-two isolates corresponded to new sequence types (not previously defined) of which 12 isolates were non-MDR and 10 isolates were MDR or XDR. Conclusions The population structure of clinical P. aeruginosa present in our hospital indicates the coexistence of nonresistant and resistant isolates with the same sequence type. The multiresistant isolates studied are grouped in the prevalent sequence types found in other Spanish hospitals and at the international level, and the susceptible isolates correspond mainly to singleton sequence types. Background Pseudomonas aeruginosa is a non-fermenting Gramnegative bacterium that is widely distributed in nature. The minimum nutritional requirements, tolerance to a wide variety of physical conditions and intrinsic resistance against many antibiotics explain its role as an important nosocomial pathogen. Certain bacterial clones have been distributed worldwide and, in most cases, associated with multiresistance patterns [1][2][3]. Because the number of active antibiotics against P. aeruginosa is limited, it is a priority to perform a strict and regular follow up of the resistance patterns in individual hospitals. In the microbiology laboratory of the Hospital Son Llàtzer (Mallorca, Spain) the number of isolates of P. aeruginosa is increasing annually. In 2010, the number of isolates of P. aeruginosa was 1174, being the second pathogen isolated after Escherichia coli. When the P. aeruginosa resistance pattern of the P. aeruginosa isolates from this hospital were compared with the latest Spanish surveillance study of antimicrobial resistance [4], it was revealed that the resistance levels of the isolates in our hospital were higher against all of the antibiotics commonly used in the treatment of infections caused by P. aeruginosa, contributing to therapeutic difficulties. The introduction of molecular techniques has led to significant progress in both bacterial identification and typing. In P. aeruginosa, several schemes for molecular typing have been used, such as ribotyping [5], PCRbased fingerprinting [6], or pulsed-field gel electrophoresis (PFGE) [7], which is considered the 'gold standard' technique. Curran et al. 
(2004) developed a multilocus sequence typing (MLST) scheme that discriminates P. aeruginosa isolates by differences in the sequences of seven genes: acsA, aroE, guaA, mutL, nuoD, ppsA and trpE, providing a comprehensive database that allows the comparison of results obtained in different locations for different sample types [8]. Since this work, MLST has been applied in several studies of P. aeruginosa to better understand the epidemiology of infections in patients with cystic fibrosis and to study multiresistant clones. The main objective of our study is to characterise, at the molecular level, the isolates of P. aeruginosa analysed routinely in the Hospital Son Llàtzer. A significant set of randomly selected clinical isolates (fifty-six), including multidrug and non-multidrug resistant isolates, was further studied to determine the population structure of this clinical pathogen in our hospital and to compare it with other Spanish and international multicentre surveillance studies. P. aeruginosa culture collection A total of 56 isolates of P. aeruginosa from 53 specimens recovered from 42 patients of the Hospital Son Llàtzer were randomly selected between January and February 2010. Three samples showed two distinct colony morphologies, and both types of each isolate were studied by MLST to establish possible differences between them (these morphologies are labelled by the number of the isolate, followed by the letter a or b). Isolates from different origins were taken as part of standard care (Table 1). The hospital is a tertiary teaching hospital with 377 beds and serves a catchment population of approximately 250,000 inhabitants. All of the P. aeruginosa isolates were isolated and cultured on Columbia agar with 5% sheep blood (bioMérieux, Marcy d'Etoile, France). Cultivation and incubation of the plates were performed under routine laboratory conditions (24 h at 37°C). The study was approved by the research board of our hospital. Individual patient consent was not sought, as isolates were derived from routine diagnostics and data were processed anonymously. Phenotypic and antibiotic susceptibility characterisations The 56 isolates were biochemically and phenotypically characterised using the automated VITEK®2 GN method (bioMérieux, Marcy d'Etoile, France) and the oxidase reaction test. Their antibiogram profiles were established by the disk diffusion method on Mueller-Hinton agar plates (bioMérieux, Marcy d'Etoile, France) following CLSI recommendations for all antibiotics, except for fosfomycin, which followed the French Microbiology Society recommendations [9,10]. Borderline values were assessed by the E-test method (bioMérieux, Marcy d'Etoile, France). The antibiotics tested were amikacin, aztreonam, cefepime, ceftazidime, ciprofloxacin, colistin, gentamicin, fosfomycin, imipenem, levofloxacin, meropenem, piperacillin-tazobactam and tobramycin. For the isolates resistant to imipenem and/or meropenem, the determination of metallo-β-lactamases (MBLs) using E-test strips with imipenem-EDTA was performed (bioMérieux, Marcy d'Etoile, France). The classification of multiresistance was performed according to Magiorakos et al. [11]. The isolates were classified according to the resistance pattern as multidrug resistant (MDR, non-susceptible to at least one agent in three or more antimicrobial categories), extensively drug resistant (XDR, non-susceptible to at least one agent in all but two or fewer antimicrobial categories; i.e.
bacterial isolates remain susceptible to only one or two categories), pandrug-resistant (PDR, non-susceptible to all agents in all antimicrobial categories), and non-multidrug resistant (non-MDR). DNA extraction: PCR amplification and DNA sequencing Bacterial genomic DNA for PCR amplification was obtained as previously described [12]. The housekeeping genes acsA, aroE, guaA, mutL, nuoD, ppsA and trpE were amplified and sequenced for the 56 isolates using the primers described previously [8]. The PCR conditions were slightly modified. The reactions were performed using an Eppendorf thermocycler, with an initial denaturation step at 96°C for 2 min, followed by 35 cycles of denaturation at 96°C for 1 min for all of the genes, primer annealing for 1 min at a temperature depending on the gene (55-58°C for aroE and nuoD; 58°C for acsA and guaA; and 58-60°C for mutL, ppsA and trpE), and primer extension at 72°C for 1 min for all of the genes, with the exception of aroE (1.5 min). A final elongation step was performed at 72°C for 10 min. The PCR amplification reactions were performed as previously described [12]. The amplified products were purified with Multiscreen HTS PCR 96-well filter plates (Millipore). Sequencing reactions were carried out using the ABI Prism BigDye Terminator version 3.1, and the sequences were read with an automatic sequence analyser (3130 Genetic Analyzer; Applied Biosystems). Sequence analysis and allele and nucleotide diversity Sequence analysis was performed as described previously [12]. Individual phylogenetic trees and concatenated analyses of the sequenced gene fragments were constructed [12]. The allelic and nucleotide diversities were calculated from the gene sequences using the DnaSP package [13]. For each isolate, the combination of alleles obtained at each locus defined its allelic profile or sequence type (ST). The ST and allele assignments were performed at the P. aeruginosa MLST website (http://pubmlst.org/paeruginosa/). If a sequence did not match an existing locus in the database, it was designated as a "new" allele. Moreover, new STs that did not match any allele combination in the database were also numbered as "new". The clustering of the STs and the split decomposition were performed as previously described [12]. The new nucleotide sequences of each different allele of each locus determined in this study and the new sequence types were sent to the curator, Eleanor Pinnock, for introduction into the P. aeruginosa MLST website (http://pubmlst.org/paeruginosa/). The diversity and rarefaction indices for the statistical analysis were calculated using the PAST v.2.0 program [14]. The coverage index (C) was calculated as C = 1 - (n/N), with n being the number of sequence types and N the number of strains analysed. Description of the bacterial isolates In total, 227 P. aeruginosa isolates were obtained from 145 patients between January and February 2010. The antibiotic resistance patterns for these isolates were 21.4% XDR, 17.2% MDR and 61.4% non-MDR. In total, 56 P. aeruginosa isolates from 53 specimens were randomly chosen from the different groups of antibiotic resistance and further studied. Three of them showed two different colony morphologies, and both types were studied by MLST. The isolates were classified according to the resistance pattern as 21.4% MDR, 37.5% XDR and 41.1% non-MDR. The antibiotic pattern and the individual profiles are shown in Table 2.
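The coverage index defined above and the diversity indices reported later (Diversity analysis) can be computed directly from the list of ST assignments. The following minimal sketch (Python, standard library only) is illustrative rather than the authors' code; it assumes the index definitions used by the PAST program (Dominance D as the sum of squared proportions, Simpson as 1 - D, Shannon H with natural logarithms, and Evenness as e^H divided by the number of STs), and the example ST labels are hypothetical, not the study data.

```python
import math
from collections import Counter

def diversity_summary(st_assignments):
    """Coverage index and basic diversity indices from one ST label per isolate."""
    counts = Counter(st_assignments)
    n_isolates = len(st_assignments)        # N, number of isolates analysed
    n_types = len(counts)                   # n, number of distinct sequence types
    props = [c / n_isolates for c in counts.values()]

    coverage = 1 - (n_types / n_isolates)            # C = 1 - (n/N), as in the Methods
    dominance = sum(p * p for p in props)            # Dominance (D)
    simpson = 1 - dominance                          # Simpson index (1 - D)
    shannon = -sum(p * math.log(p) for p in props)   # Shannon (H), natural logarithm
    evenness = math.exp(shannon) / n_types           # Evenness (e^H / S)

    return {
        "isolates": n_isolates, "sequence_types": n_types, "coverage": coverage,
        "dominance": dominance, "simpson": simpson,
        "shannon": shannon, "evenness": evenness,
    }

# Hypothetical example (not the study data): three prevalent STs plus ten singletons.
example = ["ST-175"] * 12 + ["ST-235"] * 6 + ["ST-253"] * 4 + [f"new-{i}" for i in range(10)]
print(diversity_summary(example))
```

Running the same function on the full set of isolates and on the multiresistant subset separately reproduces the two coverage values discussed in the Diversity analysis section.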
MLST analysis A total of 2,882 nucleotides were analysed for the 56 isolates: 390 bp for acsA, 498 bp for aroE, 373 bp for guaA, 442 bp for mutL, 366 bp for nuoD, 370 bp for ppsA and 443 bp for trpE. The number of polymorphic sites in the seven loci studied varied from 69 (aroE) to 11 (nuoD) in the 56 isolates studied. The number of alleles ranged from 20 for acsA to 6 for nuoD. The guaA and trpE genes exhibited 15 different alleles, mutL and ppsA exhibited 14 different alleles and aroE exhibited 10 different alleles. The allelic and nucleotide diversities are shown in Table 3. The acsA, aroE, mutL and ppsA genes displayed new alleles not previously described. The three isolates with two different colony morphologies presented the same allelic profile, although one of them (PaC36a and PaC36b) had different antibiotic susceptibility profiles. The allelic profiles for the different isolates and for each gene analysed are given in Table 1. The new alleles and the new sequence types not previously described are indicated with an asterisk. The MLST analysis of the 56 isolates showed 32 different sequence types. Individual phylogenetic trees for each gene were constructed and, to build a more robust phylogeny, a concatenated analysis considering the seven genes was also performed (Figure 1). Two isolates with a mucoid phenotype, PaC7 and PaC16, both isolated from the same patient (number 6), were not included in the analysis because we were unable to amplify and sequence the mutL gene. All of the clinical isolates studied, except PaC46 and PaC49, were related, with similarities between 98.5% and 100%. PaC46 and PaC49 belonged to the same clonal complex and shared a 99.8% similarity between them, less than 95.8% similarity with the other clinical isolates and 95.7% with P. aeruginosa PA7, considered to be an outlier of the species [15]. The corresponding genes of P. aeruginosa PA7 and PAO1 have a similarity of 91.6%, and this percentage is lower when other species of the genus are considered. A SplitsTree was constructed with all of the isolates analysed (Figure 2), and recombination was observed. The most abundant sequence types observed were ST-175, ST-235 and ST-253. Patients and antibiotic resistance pattern Thirty-five isolates were single isolates (one per patient), and, in seven patients, more than one isolate of P. aeruginosa was obtained during the two-month period studied (patients 1 and 8, four isolates each; patients 6, 9, 29, 32 and 38, two isolates each) (see Table 1). In two patients (9 and 38), all of the isolates studied belonged to the same ST and had the same antibiotic resistance profile. Isolates with different STs were obtained from three patients (patients 1, 6 and 8). Four isolates were obtained from patient 1 during the time of the study: PaC1 and PaC52 from a wound sample, and PaC49 and PaC51 from rectal smears. PaC1 and PaC52 were isolated one month apart, belonged to the same ST and showed the same antibiotic resistance profile with the exception of gentamicin (intermediate susceptibility). PaC49 and PaC51 were assigned to different STs and showed differences in the antibiotic resistance profile. The two isolates from patient 6 showed the same antibiotic profile (with the exception of meropenem).
Four isolates with slight differences in the antibiotic profile were recovered from patient 8 (PaC10 and PaC19, from urine samples, were isolated three days apart; PaC32 was from a rectal smear and PaC40 was of respiratory origin). Isolate PaC10 was assigned to a different ST based on differences in the guaA allele, although it belonged to the same clonal complex. Two isolates were obtained the same day from patient 29 from two different samples (catheter and blood); both isolates showed the same ST but presented differences in their antibiotic profile and in the production of MBLs, as detected by phenotypic methods. Two isolates from patient 32, obtained from different origins two weeks apart, showed differences in piperacillin/tazobactam susceptibility but belonged to the same ST (see Tables 1 and 2). Population structure and susceptibility to antibiotics No statistically significant relationships could be established in our study between antibiotic resistance and other variables such as sex, age of the patients, sample origin or ST, probably because of the limited sample size. However, a statistically significant association was observed between the prevalent ST (ST-175) and multiresistant isolates (p = 0.003). Diversity analysis To assess the extent of the diversity analysed in the study, a rarefaction curve was constructed. Despite the high diversity of the sequence types, the number of different sequence types relative to the number of isolates analysed did not reach saturation, indicating that the diversity was higher than detected, a finding that was confirmed when the coverage index (C) was calculated (51%). Additional isolates should be analysed to ascertain completely the population structure of clinical P. aeruginosa present in our hospital. Diversity was evaluated using the Dominance (D), Shannon (H), Simpson and Evenness indices, and the values obtained for each index (0.075, 3.087, 0.925 and 0.684, respectively) indicate a highly diverse sample. However, when only the diversity of the multiresistant isolates (MDR and XDR) was considered, the rarefaction curve was closer to saturation and the coverage index was higher (62.5%), indicating that this diversity was more completely sampled. This result was also supported by the diversity indices (D of 0.1621, H of 2.303, Simpson of 0.8379 and Evenness of 0.6255). Discussion The role of P. aeruginosa as a pathogen and its implication in nosocomial outbreaks has been widely studied. The present study focused on the analysis of the population structure and diversity of P. aeruginosa clinical isolates randomly chosen from the different patterns of antibiotic resistance in a single hospital. The isolates included different antibiotic non-susceptibility profiles (21.4% MDR, 37.5% XDR and 41.1% non-MDR). The MLST analysis showed a high diversity, as reported in other previous studies. The 56 isolates were grouped into 32 different sequence types, 12 sequence types that were previously described (including 34 isolates) and 20 new ones (including 22 isolates).
The singleton sequence types (26 isolates) corresponded mainly to the non-MDR isolates (16 isolates). Twenty-two of the isolates corresponded to new sequence types (not previously defined), of which 12 isolates were non-MDR, 6 isolates were MDR and 4 isolates were XDR. The clinical isolates studied showed a variable number of polymorphic sites and alleles, indicating the variability of the isolates selected. It is remarkable that we found new alleles (not previously described) of four genes: acsA, aroE, mutL and ppsA. The analysis of the seven loci demonstrated that the prevalent STs were ST-175, ST-235 and ST-253. ST-175 is widely distributed worldwide [17] and was the most frequently isolated ST in this study, with twelve isolates obtained from eight patients. This ST is also the most prevalent in the studies of García-Castillo et al. and Cholley et al. [16,17]. ST-175 has been reported as a contaminant of the hospital environment and a coloniser of respiratory secretions in cystic fibrosis patients, and has been associated with multiresistant isolates of P. aeruginosa. All of the isolates included in this group were multiresistant (eleven XDR and one MDR); all were sensitive to colistin, 90% were sensitive to amikacin, 37% to aztreonam and nearly 10% to ceftazidime and cefepime. All of the isolates were resistant to the other antibiotics tested, and only one of them was MBL-positive. ST-235 was the second most frequently isolated sequence type, with six isolates (from five patients); four isolates were XDR, one isolate was MDR, and another was non-MDR. This ST has been involved in the dissemination of genes encoding MBLs and has been associated with multiple resistance mechanisms [18], although Cholley et al. described strains of the same ST of which none was MBL-positive [17]. Three of our five isolates were non-MBL producers. In a previous study performed in another Majorcan hospital, ST-235 was described as producing the VIM-13 β-lactamase [19]. ST-179, previously described in Mallorca as a VIM-2 producer, was also MBL-positive [19]. The third most abundant sequence type was ST-253, with four isolates. These isolates were obtained from two patients; two isolates were MDR, and two were non-MDR. Only one isolate was colistin-resistant; it corresponded to ST-244, previously reported in Korea as the ST most frequently associated with colistin non-susceptibility while remaining susceptible to other antibiotics [20]. Our isolate was recovered in a mixed culture with Morganella morganii and Serratia marcescens, both inherently resistant to colistin. The high discriminatory power of MLST profiling allowed differentiation among isolates obtained from the same patient at different dates and sampling sites. When the specimen was associated with the site of infection, the sequence type or clonal complex obtained and the antibiotic resistance profiles were the same. Conclusions The present results indicate that the P. aeruginosa isolates show a significant frequency of recombination and a panmictic, net-like population structure, as suggested by Kiewitz and Tümmler [21]. The population structure of clinical P. aeruginosa present in our hospital indicates the coexistence of nonresistant and resistant isolates with the same sequence type. The multiresistant isolates studied are grouped in the prevalent sequence types found in other Spanish hospitals and at the international level, and the susceptible isolates correspond mainly to singleton sequence types.
v3-fos-license
2017-08-27T16:53:42.396Z
2015-09-24T00:00:00.000
14679744
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://ijponline.biomedcentral.com/track/pdf/10.1186/1824-7288-41-S1-A46", "pdf_hash": "d2caa1d6ea8f402f65b6f7ab3a6835a1fc287c27", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43997", "s2fieldsofstudy": [ "Medicine" ], "sha1": "a068c57aed1c1a8bfe2c850d3cebc61204a43f6a", "year": 2015 }
pes2o/s2orc
Specific formulas for preterm infants, how and when Both ESPGHAN (2010) and AAP (2012) stated that "all preterm infants should receive human milk" for the many short-term and long-term benefits [1,2]. All kinds of breast milk (fresh from the infant's own mother or pasteurized donor milk) for preterm infants should be fortified to meet the recommended requirements. When human milk is not available, the only alternative is represented by formulas for preterm infants (PTF). The ideal PTF composition has not yet been definitively established, particularly for ELBW infants. Table 1 shows the main recommendations for nutrients [1-4]. A study compared the use of a soy-based formula (with calcium, phosphorus and vitamin D) with a PTF. Infants taking soy showed lower growth and lower protein and albumin levels [5]. ESPGHAN in 2006 concluded that soy-based formulas should not be used in premature infants [6]. The use of hydrolyzed formulas has not shown a preventive role against cow's milk protein allergy; it has proven helpful in improving feeding tolerance (acceleration of intestinal transit time and faster achievement of full enteral feeding), but it has a reduced nutritional value (especially protein intake) [7-11]. A recent study evaluated the usefulness of a thickened formula in reducing GERD-related apnea of prematurity. The authors concluded that these formulas are not effective in reducing the number of GERD-related apneas [12]. Table 1 (main recommendations for nutrients; recovered fragment of the original table): Protein, g: 3,3-3,6 and 3,6-4,1. Protein supply needs to compensate for the accumulated protein deficit observed in almost all small preterm infants. The quality of the provided protein may interfere with the recommended intake because the infant does not require proteins as such but specific amino acids; whey-predominant protein with reduced glycomacropeptide and α-lactalbumin enrichment could be used to optimize the amino acid profile. Carbohydrates, g: 10,5-12. Given the relatively reduced intestinal lactase activity, the lactose content could be relatively reduced and replaced by glucose polymers, maintaining the low osmolality of the formulas. Lipids, g: 4,4-6. In order to improve fat absorption, an important quota of fat could be given as medium-chain triglycerides, with a maximum of 30-40% of the lipid content. Calcium, mg: 110-130. The calcium to phosphorus ratio (1,5-2) may be an important determinant of calcium absorption and retention.
Phosphate, mg: 55-80. Iron, mg: 1,7-2,7. Iron is essential for brain development, and prevention of iron deficiency is important. Prophylactic enteral iron supplementation (given as a separate iron supplement) should be started at 2 to 6 weeks of age (2-4 weeks in extremely-low-birth-weight infants) and should be continued after discharge, at least until 6 to 12 months of age depending on diet. In recent reviews, post-discharge formulas do not seem to provide benefits, mainly because of the heterogeneity of the studies [3,13]. They may be useful for infants with GA <33 weeks, particularly those <30 weeks, with growth at discharge below the 10th percentile (the ESPGHAN recommended their use up to 40 weeks, and for a further 12 weeks if necessary) [3]. Studies on GOS and FOS showed an increase of bifidobacteria in the stool, a reduction in stool viscosity and an acceleration of intestinal transit time, resulting in easier achievement of full enteral feeding [14,15]. A role in the prevention of NEC and LOS has also been suggested. Even though they have shown a beneficial role, further studies are needed to establish the type and dose [16]. Several RCTs and recent reviews have shown a benefit of probiotics in reducing NEC and in the achievement of full enteral feeding [17][18][19]. Further studies are needed to establish dose, strains and routes of administration [1]. Lactoferrin, both human and bovine, seems to have a significant role as a protective agent against NEC and LOS [20][21][22][23]. The available data do not yet allow a recommendation for formula supplementation with these substances with functional properties.
v3-fos-license
2023-10-11T05:06:03.774Z
2023-10-01T00:00:00.000
263800180
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": null, "oa_url": null, "pdf_hash": "973e64ef9e1512fe3a2cacfc497991dfd5bbca56", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:43999", "s2fieldsofstudy": [ "Medicine" ], "sha1": "973e64ef9e1512fe3a2cacfc497991dfd5bbca56", "year": 2023 }
pes2o/s2orc
Frequency of Medical Claims for Diastasis Recti Abdominis Among U.S. Active Duty Service Women, 2016 to 2019 Background: Diastasis recti abdominis (DRA) is a condition in pregnant and postpartum women. Proposed risk factors include age, sex, multiparity, cesarean delivery, diabetes, gestational weight gain, and high birth weight. This study aims to estimate the prevalence of DRA using medical claims data among U.S. active duty service women (ADSW) and determine associated risk factors. Materials and Methods: We conducted a cross-sectional study of ADSW aged 18 years and older in the U.S. Army, Air Force, Navy, and Marine Corps during fiscal years (FYs) 2016 to 2019. Utilizing claims data, we identified ADSW with a diagnosis of DRA during the study period. Risk factors, including age, race, socioeconomic status, branch of service, military occupation, delivery type, and parity, were evaluated through descriptive statistics, chi-square tests, and logistic regression analysis. Results: A total of 340,748 ADSW were identified during FYs 2016 to 2019, of whom 2,768 (0.81%) had a medical claim for DRA. Of those with deliveries during the study period, 1.41% were multiparous and 84.53% had a cesarean delivery. Increased risk of DRA was found in ages 30 to 39 years, Black women, ranks representing a higher socioeconomic status, and women with overweight and obese body mass indices. Conclusions: Although the prevalence of DRA, defined as a medical claim for DRA, in the study population is low, subpopulations may be disproportionately affected by the condition. Further research could potentially detail the impact of DRA on the functional impairment and operational readiness of ADSW in the U.S. military and any possible means of prevention. Introduction DRA is likely underdiagnosed, as documented by health care providers, due to low patient reporting or lack of provider awareness or understanding of the condition. As a result, most published studies on DRA have small sample sizes. 1,3 To date, there is no consensus on the risk factors for developing DRA; however, proposed and researched factors include older age, sex, multiparity, cesarean delivery, diabetes, gestational weight gain, and high birth weight. 1,2,4,5 Furthermore, one study reported an increased likelihood of DRA among women exposed to frequent heavy weight lifting. 5 Treatment options for DRA include both surgical and nonsurgical methods, but there is a lack of clear clinical practice guidelines for best practices for either method. 6 Despite the lack of established best practices, treatment often begins with physical therapy and may escalate to surgical repair if insufficient improvement is made. Literature and research studies focusing on women and pregnancy-related conditions are lacking. An often overlooked population for research is the Military Health System (MHS), which serves over 1 million active duty service women (ADSW), including women at risk for DRA. 7 The MHS is a bifurcated system that provides its beneficiaries, including active duty service members, retirees, and their dependents, access to care in two ways: through direct care from providers at military treatment facilities or through private sector care, from civilian fee-for-service facilities that accept the TRICARE insurance benefit. 8 Moreover, ADSW constitute 17% of the U.S. military force 7 and many serve in military occupations with physical requirements of heavy lifting and/or the operation of heavy machinery. 8
All service members must also pass periodic physical fitness tests that assess cardiovascular endurance, muscular strength, and functional mobility, many components of which necessitate abdominal strength. DRA is of particular concern for ADSW given these occupational and physical fitness requirements. For postpartum ADSW in particular, the presence of DRA can impair their ability to meet physical fitness requirements and successfully return to duty. Despite the potential impact on readiness, aside from a case study, little is known about DRA in ADSW. 9 The aim of this study is to determine the prevalence of DRA among ADSW and associated risk factors using medical claims data. Data source and study design We used the MHS Data Repository (MDR) to conduct a cross-sectional study of ADSW in the U.S. Army, Air Force, Navy, and Marine Corps during fiscal years (FYs) 2016 to 2019. The MDR houses administrative and health care claims data for MHS beneficiaries, including active duty service members, retirees, and their dependents; however, claims data do not capture care delivered in combat zones or through the Veterans Health Administration. 10 Data from the MDR have been used in previous studies investigating the health of ADSW. 8,11 Study population Using the Defense Enrollment Eligibility Reporting System (DEERS) in the MDR, we identified all ADSW aged 18 years and older from FY 2016 to 2019. Women in the National Guard or Reserves, both active and inactive, were excluded due to their inconsistent access to care in the MHS. Utilizing International Classification of Diseases, 10th Revision (ICD-10) codes, Medicare Severity Diagnosis-Related Group (MS-DRG) codes, and Current Procedural Terminology (CPT) codes, we identified ADSW with a diagnosis of DRA, with and without pregnancy, and concurrent treatments or repair procedures. A full list of the ICD-10 and CPT codes used can be found in Supplementary Appendix Table SA1. The earliest record within the study period was retained. Grouping variables (risk factors and characteristics) Risk factors for DRA, including age, parity, mode of delivery, and body mass index (BMI), were identified for ADSW in the study population. BMI classification was determined using the following standard categorization: underweight (<18.5 kg/m 2 ), healthy weight (18.5-24.9 kg/m 2 ), overweight (25-29.9 kg/m 2 ), and obese (≥30 kg/m 2 ). For those with DRA, BMI was calculated for the FY of diagnosis, and for those without DRA, BMI was calculated for the earliest FY of their DEERS record during the study period. Mode of delivery was identified using the ICD-10 and MS-DRG codes in Supplementary Appendix Table SA1. Parity was determined by identifying all deliveries per person during the study period that were at least 40 weeks apart and preceded the date of DRA diagnosis. For those women who received a DRA diagnosis at the beginning of the study period (FY 2016), records from FY 2015 were reviewed for deliveries that occurred in the year preceding the diagnosis. Physical therapy, reevaluation physical therapy, and surgical repair were identified and coded as binary variables (0 = no and 1 = yes) and marked as "yes" if the patient received therapy at any time after the diagnosis of DRA. The combination of treatments received or the hierarchy of occurrence was not evaluated in this study.
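The grouping-variable definitions above (the standard BMI cut-offs and the rule that deliveries count toward parity only if they precede the DRA diagnosis and fall at least 40 weeks apart) translate directly into code. The sketch below is an illustrative Python rendering of those definitions, not the authors' claims-processing pipeline; the function names and example values are hypothetical.

```python
from datetime import date, timedelta

def bmi_category(bmi: float) -> str:
    """Standard BMI categorization used for the grouping variables."""
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "healthy weight"
    if bmi < 30:
        return "overweight"
    return "obese"

def parity_before_diagnosis(delivery_dates, diagnosis_date):
    """Count deliveries that precede the DRA diagnosis and are at least
    40 weeks apart from the previously counted delivery."""
    min_gap = timedelta(weeks=40)
    parity, last_counted = 0, None
    for d in sorted(d for d in delivery_dates if d < diagnosis_date):
        if last_counted is None or d - last_counted >= min_gap:
            parity += 1
            last_counted = d
    return parity

# Hypothetical example:
print(bmi_category(27.4))                                             # "overweight"
print(parity_before_diagnosis([date(2016, 3, 1), date(2017, 2, 1)],
                              date(2018, 1, 1)))                      # 2
```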
Patient demographics, such as race, branch of service, rank, BMI, and military occupational specialty (MOS), were obtained from the health care claim at the time of DRA diagnosis or, for those without DRA, from the earliest DEERS record within the study period. Statistical analyses Descriptive statistics were used to analyze patient demographics and military service-related characteristics (age group, race, military service rank, branch of service, MOS, and BMI category) for the total population and by DRA diagnosis. The prevalence of DRA in ADSW was calculated and expressed as a percentage. Group differences between ADSW with and without a DRA diagnosis were analyzed utilizing the chi-square test for independence. Univariate logistic regression analysis was performed on each categorical variable to assess its association with DRA diagnosis in ADSW. To control for confounding factors, a subsequent multivariable logistic regression was performed and adjusted for all six predictive factors. Any observations with missing values were automatically removed from the logistic regression analyses. For all analyses, p-values <0.05 were considered statistically significant, and analyses were conducted using SAS, version 9.4. The study was considered exempt by the Institutional Review Board of the Uniformed Services University of the Health Sciences. Results We identified a total of 340,748 ADSW from the MDR in FYs 2016 to 2019, of whom 2,768 (0.81%) had a medical claim for DRA during the study period. Descriptive demographic data for the total study population of ADSW and ADSW with DRA during the study period are presented in Table 1. Due to a very small number of ADSW being classified as underweight, this category of BMI was censored from the descriptive statistics. The majority of ADSW in the total study population were between 20 and 29 years of age (51.62%), White (57.46%), in a junior enlisted rank (62.02%), and categorized as being of healthy weight (39.59%). DRA also appears to disproportionately affect Black ADSW, who compose ~26% of the total study population but carry ~38% of DRA diagnoses. Likewise, the frequency of claims for DRA in Black ADSW was 1%, more than the proportion experienced by each of the other racial groups. Similarly, those classified as obese compose over 23% of DRA diagnoses but make up ~9% of the total study population. However, the condition was most frequent in ADSW who were overweight (41.51%), followed by those with healthy weight (33.06%). The frequency of claims in obese ADSW was 2% compared with 1% for overweight ADSW. Among ADSW with a medical claim for DRA, ~63% received physical therapy or were referred to a physical therapist for reevaluation and just over 8% underwent a surgical intervention. For ADSW with DRA, we identified whether or not they had given birth during the study period and the mode of delivery. The majority had not given birth during the study period nor in the year immediately preceding their diagnosis (none, n = 2,393 [86.45%]; delivered once, n = 336 [12.14%]; and delivered twice, n = 39 [1.41%]) (Fig. 1). Of those who gave birth, the majority had a cesarean delivery (n = 317 [84.53%]) (Fig. 2).
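For readers wishing to reproduce this style of analysis on comparable claims data, the adjusted odds ratios reported in the next paragraph follow from a standard multivariable logistic regression of the DRA indicator on the six categorical predictors. The sketch below is an illustrative Python (statsmodels) equivalent; the original analyses were performed in SAS 9.4, and the data frame and column names here are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def adjusted_odds_ratios(df: pd.DataFrame) -> pd.DataFrame:
    """Multivariable logistic regression of the DRA indicator (0/1) on six
    categorical predictors; rows with missing values are dropped, mirroring
    the analysis described above. Column names are hypothetical."""
    formula = ("dra ~ C(age_group) + C(race) + C(rank) + "
               "C(branch) + C(occupation) + C(bmi_category)")
    fit = smf.logit(formula, data=df.dropna()).fit(disp=False)
    ci = fit.conf_int()
    return pd.DataFrame({
        "aOR": np.exp(fit.params),   # adjusted odds ratios
        "ci_low": np.exp(ci[0]),     # lower bound of the 95% CI
        "ci_high": np.exp(ci[1]),    # upper bound of the 95% CI
        "p_value": fit.pvalues,
    })
```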
Those with fewer medical claims for DRA when compared with their referent group included ADSW aged 18 to 20 years and 40 years or older (aOR 0.16, 95% CI 0.13-0.21, and aOR 0.52, 95% CI 0.42-0.63, respectively), those who were Asian/Pacific Islanders (aOR 0.77; 95% CI 0.65-0.91), and those in all other branches of service, with those in the Navy having the highest protective odds (aOR 0.49; 95% CI 0.43-0.54). Similar results were observed in the unadjusted regression analyses, excluding the unadjusted analyses of military occupation. Discussion We identified 340,748 ADSW, of whom 2,768 had a medical claim for DRA from FY 2016 to 2019. Overall, prevalence was <1% and, of those diagnosed through claims data, the majority received the standard treatment of physical therapy. In consideration of findings from both the unadjusted and adjusted logistic regression analyses, all six proposed risk factors showed statistically significant associations with DRA among ADSW in the study population. Of these, age, race, service, rank, and BMI significantly increased the odds of ADSW being diagnosed with DRA. Although the prevalence of DRA in this study population was low (0.81%) and estimated to be significantly lower than in adult women in the general population (27%-100% during pregnancy and 30%-68% in the postpartum period), 1,4 our findings suggest that subpopulations of ADSW may be disproportionately affected by the condition and warrant further study. For example, ADSW under the age of 20 or over the age of 40 were least likely to suffer from the condition, which may be due, in part, to their reduced likelihood of being either pregnant or postpartum when compared with women between 20 and 29 years of age. Conversely, ADSW between the ages of 30 and 39 years were disproportionately affected by DRA. This corresponds with literature showing higher prevalence of DRA in women 20 to 40 years of age regardless of the mode of delivery. 12 Additional risk factors for DRA, such as cesarean delivery and parity, were examined in this study, and results showed that 11% of ADSW with DRA underwent a cesarean section and 12% had at least one pregnancy during the study period or in the year prior. These percentages are low given evidence in the literature suggesting that both cesarean deliveries 1 and parity increase the risk of DRA. 1,4,5 Additional subpopulations of ADSW appear disproportionately affected by DRA. Black ADSW compose roughly 27% of the total population but bear nearly 38% of DRA diagnoses. The racial disparity results from this study differ from past studies that show a higher percentage of DRA in White or Asian women. 13 Many racial disparities in health outcomes are mitigated in the MHS 14 ; however, as this study demonstrates, inequities still exist. Overall, very few studies assess race when determining the prevalence of DRA, which is concerning given the growing body of evidence showing racial disparities in women's health. 15,16 As with previous studies, rank is used as a proxy for socioeconomic status when analyzing MHS data. 17,18 In this analysis, senior officers carried greater risk of DRA. This result is interesting given the existing evidence that enlisted service members traditionally experience worse health outcomes overall compared with service members of officer rank. 19,20
Additionally, it can be speculated that junior enlisted women may have less structural access to medical care or that their complaints are not being documented. Alternatively, a higher prevalence in senior officers may be associated with the age-related risk described above. Aside from the senior officer rank, the other factor showing the greatest odds for DRA in this study was BMI. Overweight and obese ADSW had greater odds of DRA compared with those with normal weight. This is in accordance with risk factors identified in some studies but conflicts with other studies showing no increased risk with BMI. 4,21,22 The high percentage of overweight and obese ADSW is concerning given readiness standards; however, it is not surprising. Numerous studies over the years have cited increasing rates of overweight and obesity in the armed forces. 23,24 While some weight gain in this population may be attributed to pregnancy, most women return to their previous BMI category postpartum, suggesting that the weight before pregnancy may be an important factor to focus on in terms of prevention. 25 Treatment in this study appears consistent with recommendations. Treatment of DRA ranges from conservative approaches (core strengthening exercises) to surgical repair, with most providers favoring nonsurgical treatment. In our population, 63% of women with a diagnosis of DRA had an associated referral or visit for physical therapy. This finding is promising and indicates by proxy that the majority of women who were seen within the health care system received the standard-of-care treatment. For those ADSW not seen within the health care system, it is important to note that, over the last two decades, the Services have implemented evidence-based, comprehensive postpartum physical training plans aimed at proactively addressing the physical needs of postpartum women. 26 It is possible that postpartum patients might alternatively enroll in a military-specific program. While studies have evaluated the individual effectiveness of military programs, there is a need to comprehensively evaluate utilization and long-term functional outcomes between different arms of care (usual care vs. health care system vs. military physical therapy and training programs). Our study evaluates DRA diagnosis through medical claims data among ADSW, a universally insured population with no financial barriers in access to care. The finding of <1% of DRA cases in our population compared with estimated general population rates is unanticipated. One author's clinical experience with pregnant ADSW suggests that postpartum physical ailments such as DRA and urinary incontinence are widespread and undertreated. We hypothesize that the low prevalence rate in our study is due, in part, to electronic health record underreporting/undercoding or underdiagnosis by health care providers. Both of these factors could lead to low prevalence, as defined by ICD-10 coding, and echo other studies describing difficulties defining and studying this condition. 1,3 While musculoskeletal (MSK) assessments and rehabilitation plans are critical to returning the service member to full duty, providers often lack confidence and/or training in MSK medicine or women's health. Additionally, these services may be lacking or hard to access, as shown in The Women's Reproductive Health Survey of Active-Duty Service Members. 27
Another consideration is that ADSW have fitness requirements that include a focus on core or abdominal strength, measured by performance of the plank, among others. These ADSW could be considered stronger than the average civilian and possibly tougher, not voicing their complaints, further leading to the underreporting and underdiagnosis of DRA among ADSW. Secondary analyses could compare DRA medical claims between ADSW and female dependents in the MHS to determine if there are differences between the two populations. Many factors come into play impacting DRA among ADSW, and our study can help guide future research as subpopulations at an increased risk for DRA among ADSW are identified. Future qualitative research would allow us to more fully understand whether the prevalence of DRA is actually lower among ADSW or whether the condition is being underreported. 1,3 Limitations This study had several limitations. First, the diagnosis of DRA was limited to women reporting their condition to a medical provider and the provider concurrently coding the condition correctly. We are most likely underreporting the true prevalence in our study population. Additionally, the use of claims data has the potential for coding errors and inadequate specificity for a condition, and as such, we are unable to describe factors impacting prevalence, as determined by claims data. Second, this study does not capture data for any health care received outside of the TRICARE benefit. Ascertainment of prior pregnancy was limited to 1 year before DRA diagnosis, and pregnancies outside of this window were not included. Although this study identified ADSW with a DRA diagnosis and any concurrent treatments or repair interventions, the study failed to include any related information pertaining to the basis of the clinical visit, such as complaints of pain or discomfort, loss of function, or issues with mobility. Given the lack of consensus on the level of functional impairment which can be attributed to DRA, more information is needed to understand the basis behind the complaint, diagnosis, subsequent interventions, and clinical outcomes. Conclusions This study using medical claims data lays the foundation for DRA research in ADSW. Our findings show that although the medical claims of DRA in the total population of ADSW are low, subpopulations may be disproportionately affected by the condition. Future research should include investigations into the disparities between subpopulations of ADSW at greater risk of developing DRA, which could provide information about the impact of DRA on functional impairment and operational readiness and possible means of prevention. Table 1. Demographics of Active Duty Service Women, Fiscal Years 2016-2019. Columns: Total ADSW (N = 340,748); ADSW with diastasis recti (N = 2,768); n (column %); Row %; Chi-square p-value. a Missing occupation excluded from the table. b Censored due to small cell size and to prevent back calculation. ADSW, active duty service women; BMI, body mass index.
v3-fos-license
2022-09-14T06:18:08.359Z
2022-09-01T00:00:00.000
252209581
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.nature.com/articles/s41594-022-00819-2.pdf", "pdf_hash": "91c5d51e77ea54cbd5c6721884c02c1368793dce", "pdf_src": "Springer", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44002", "s2fieldsofstudy": [ "Biology" ], "sha1": "4feb40ed1c45bf823b73ec52633fc75d7fbaf49a", "year": 2022 }
pes2o/s2orc
Isoform-resolved mRNA profiling of ribosome load defines interplay of HIF and mTOR dysregulation in kidney cancer Hypoxia inducible factor (HIF) and mammalian target of rapamycin (mTOR) pathways orchestrate responses to oxygen and nutrient availability. These pathways are frequently dysregulated in cancer, but their interplay is poorly understood, in part because of difficulties in simultaneous measurement of global and mRNA-specific translation. Here, we describe a workflow for measurement of ribosome load of mRNAs resolved by their transcription start sites (TSSs). Its application to kidney cancer cells reveals extensive translational reprogramming by mTOR, strongly affecting many metabolic enzymes and pathways. By contrast, global effects of HIF on translation are limited, and we do not observe reported translational activation by HIF2A. In contrast, HIF-dependent alterations in TSS usage are associated with robust changes in translational efficiency in a subset of genes. Analyses of the interplay of HIF and mTOR reveal that specific classes of HIF1A and HIF2A transcriptional target gene manifest different sensitivity to mTOR, in a manner that supports combined use of HIF2A and mTOR inhibitors in treatment of kidney cancer. P recise regulation of transcription and translation is required to define patterns of protein synthesis in healthy cells. Nevertheless, attempts to understand disease have often focused on a single pathway of transcriptional or translational control, despite their simultaneous dysregulation. For instance, two major pathways that link the cellular environment to gene expression, the HIF and mTOR pathways, are both dysregulated in many cancers. The most common kidney cancer, clear cell renal carcinoma, manifests upregulation of HIF owing to defective function of its E3 ubiquitin ligase, the von Hippel-Lindau tumor suppressor (VHL), and hyperactivation of mTOR 1,2 . In addition, microenvironmental tumor hypoxia increases the activity of HIF 3 and also acts on translation via mTOR and other pathways [4][5][6][7][8] . HIF mediates responses to hypoxia through a well-defined role in transcription, but recent studies also report a role for it in translation. In the presence of oxygen, two isoforms of HIFα (HIF1A and HIF2A) are ubiquitinated by VHL and degraded. This prevents the formation of transcriptionally active heterodimers with HIF1B 3 . In addition, HIF2A is reported to regulate translation via non-canonical cap-dependent translation, mediated by eukaryotic translation initiation factor 4E family member 2 (EIF4E2) 9 . It was further reported that a large subset of genes, including HIF transcription targets, are translationally upregulated by the HIF2A-EIF4E2 axis, resulting in induction of protein in hypoxic cells, even when HIF-dependent transcription was ablated by HIF1B knockdown 10 . Evaluation of this action of HIF is important given efforts to treat VHL-defective kidney cancer through HIF2A-HIF1B dimerization inhibitors 11,12 , whose action to prevent transcription might be circumvented by effects of HIF2A on translation. mTOR forms two different complexes, mTORC1 and mTORC2. mTORC1 controls translation via phosphorylation of EIF4E binding protein (EIF4EBP) 13,14 . When mTORC1 is inhibited, such as by nutrient deprivation, unphosphorylated EIF4EBP binds to EIF4E and this blocks the EIF4E-EIF4G1 interaction, which is necessary to form a canonical translation initiation complex 14 . 
In contrast, mTORC2 controls cell proliferation and migration by phosphorylating AKT serine/threonine kinase and other targets 13 . Comprehensive characterization of the regulation of gene expression by the HIF-VHL and mTOR pathways is crucial to understanding the biology of VHL-defective kidney cancer, particularly as agents targeting both these pathways are being deployed therapeutically 15,16 . Although mTOR has been reported to be inhibited by HIF under hypoxia 8,17 , its interactions with the HIF system are poorly understood. In part, this reflects the lack of efficient methods to measure translational efficiency and to interface such methods with transcriptional data. Most existing methods capable of pan-genomic analysis rely on one of two principles; assessment based on the position of ribosomes on mRNAs by ribosomal foot-printing (ribosome profiling), or assessment of the number of ribosomes on mRNAs by polysome profiling 18 (see also Supplementary Information). Such methods have provided valuable information on translational control. This has enabled the definition of mRNA features that regulate translational efficiency 19,20 and has facilitated analyses of interventions on pathways that regulate translation 14,21 . However, scaling these methods to permit multiple comparisons remains a challenge. Moreover, reliance on internal normalization, as used in the majority of studies, allows changes in global translation to confound the measurements of transcript-specific translational efficiency 22 . Furthermore, ribosomal profiling cannot readily distinguish the translational efficiency of overlapping transcripts such as those generated by alternate TSSs. Resolution of specific transcripts by their TSS provides important insights into the mode of translational regulation 19,23,24 and is particularly important when assessing translation in the setting of a large transcriptional change, as occurs in cancer 25,26 . Here we describe a new method, high-resolution polysome profiling followed by sequencing of the 5′ ends of mRNAs (HP5),
that addresses these challenges, and demonstrate its use in defining the interplay between transcriptional and translational regulation by the HIF-VHL and mTOR signaling pathways in VHL-defective kidney cancer cells. Results Establishment of HP5 workflow. HP5 encompasses two key features. First, through the use of external RNA standards, it robustly measures ribosome load of mRNAs. Second, by the exclusion of mRNA or cDNA purification steps before the first PCR amplification and multiplexing of samples at an early stage of the protocol, the method enables the processing of a large number of samples. (Fig. 1a and Extended Data Fig. 1). We first evaluated the basic performance of HP5 using RCC4 VHL cells, in which constitutive upregulation of HIF in VHL-defective RCC4 cells is restored to normal by stable transfection of VHL (Extended Data Fig. 2). We obtained an average of 3.3 million reads per fraction, with ~80% of reads mapping to mRNA (Supplementary Data 1). Importantly, HP5 successfully generated each library from 100-fold less total RNA than a similar method (~30 ng compared with 3 µg) 19 . HP5 was highly reproducible: principal component analysis of mRNA abundance data demonstrated tight clustering of each polysome fraction, across three clones of RCC4 VHL cells (Fig. 1b). Furthermore, the 5′ terminus of HP5 reads precisely matched annotated TSSs in RefSeq or GENCODE at nucleotide resolution, confirming the accuracy of 5′ terminal mapping (Fig. 1c). To further test the performance of HP5, we compared the polysome distribution of a set of TSS-defined mRNA isoforms analyzed by both HP5 and RT-qPCR. Very similar results were obtained, verifying that HP5 can accurately resolve the translation of these isoforms (Extended Data Fig. 3a). We then examined the overall relationships between translational efficiency and selected mRNA features, including those with known associations with translational control. Translational efficiency was calculated as the mean ribosome load for each of 12,459 mRNA isoforms resolved by their TSS from 7,815 genes. Using a generalized additive model, we found that the four most predictive features together explained around 36% of variance in mean ribosome load between mRNAs (Extended Data Fig. 3b). Notably, coding sequence (CDS) length showed the clearest association with mean ribosome load: values were greatest for mRNAs with a CDS length of around 1,000 nucleotides (nt) and declined progressively as the CDS became longer (Fig. 1d), probably owing to a lower likelihood of re-initiation of translation by mRNA circularization 27 . In agreement with previous studies [28][29][30] , analysis of HP5 data identified the negative effect on translation of upstream open reading frames (uORFs) and RNA structures near the cap, as well as the positive effect of the Kozak sequence ( Fig. 1e and Extended Data Fig. 3c,d). Importantly, the association of mean ribosome load with mRNA features that affect translation extended to comparisons between mRNA isoforms arising from alternative TSS usage (Fig. 1f). Overall, HP5 reproduced and extended known associations between mRNA features and translation, verifying its performance in the measurement of translational efficiency at transcript resolution (see Supplementary Information for further validation of the method). mTOR-dependent translational regulation greater than reported. 
We next applied HP5 to the analysis of mTOR pathways, which are frequently dysregulated along with hypoxia signaling pathways in VHL-defective kidney cancer. To analyze translational changes that arise directly from mTOR inhibition, RCC4 VHL cells were treated for a short period (2 hours) with Torin 1, an ATP-competitive inhibitor of mTORC1 and mTORC2 (ref. 31 ). mTOR inhibition globally suppressed translation, as shown by a marked reduction in polysome abundance (Fig. 2a). Measurements of changes in translational efficiency were initially analyzed at the level of the gene. This provided the first direct display of both a general reduction in translation by mTOR inhibition and of its heterogeneous effects on individual genes across the genome (Fig. 2b). To assess the performance of HP5 against other methods, we next compared the HP5 data on translational responses to mTOR inhibition with data in four previous studies that reported mTOR hypersensitive genes 14,21,32,33 . Although the mTOR hypersensitive genes identified by these studies did not always strongly overlap, HP5 revealed the translational downregulation of mTOR targets identified in each of the four previous studies (Extended Data Fig. 4a,b). By contrast, at least within these studies, ribosome profiling appeared less powerful in identifying the mTOR hypersensitive genes defined by polysome profiling (Extended Data Fig. 4b). Note that one caveat to this is that ribosomal load is not a direct measure of translational efficiency, as translation can be regulated not only by initiation but also elongation 22 . mTOR has been reported to regulate a wide range of processes by different mechanisms 13 , while the identification of the direct translational targets has been more limited, for instance, involving proteins that function in translation itself. Our data confirmed many of these known mTOR translational targets, as well as the previously described resistance of many transcription factors 14 . Importantly, our data also demonstrated directly that the translation of genes encoding proteins with many other functions, such as in different metabolic pathways, and in proteasomal degradation is hypersensitive to mTOR inhibition (Fig. 2c). The accurate resolution of the TSS provided by HP5 also offered an opportunity to improve the understanding of transcript-specific mRNA features associated with mTOR hypersensitivity or resistance. mTOR has been shown to regulate mRNAs with a 5′ terminal oligopyrimidine (TOP) motif in a tract-length-dependent manner 34 . Our analysis confirmed this (Fig. 2d). By contrast, although it has been reported that TOP motifs starting between +2 and +4 nt downstream of the cap mediate mTOR control 14 , the high-resolution analysis permitted by HP5 revealed that any such association with Torin 1 sensitivity was much weaker if the TOP motifs did not start immediately after the cap (Fig. 2d). Although these data confirmed the importance of the TOP motif for translational regulation by mTOR, the proportion of mRNAs containing a TOP motif immediately after the cap was low (only 6% of mRNAs had a TOP motif of more than 2 nucleotides, Extended Data Fig. 5a) compared with the global extent of translational alteration by mTOR inhibition, suggesting that additional mechanisms contribute to the mTOR sensitivity 24 . 
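The TOP-motif analysis described above reduces to measuring the length of the pyrimidine run at a given offset from the cap in each TSS-resolved 5′ end sequence. The function below is a minimal illustrative sketch of that measurement (Python); it is not the authors' pipeline, and the example sequences are hypothetical.

```python
def top_motif_length(five_prime_seq: str, start: int = 1) -> int:
    """Length of the pyrimidine (C/T) run beginning at 1-based position `start`
    of a TSS-resolved 5' end sequence (start=1 means immediately after the cap)."""
    run = 0
    for base in five_prime_seq[start - 1:].upper():
        if base in ("C", "T"):
            run += 1
        else:
            break
    return run

# Hypothetical examples:
print(top_motif_length("CTTTCCGAAG"))           # 6: pyrimidine tract right after the cap
print(top_motif_length("GCTTTTAAGG", start=2))  # 5: tract starting at position +2
```

Sweeping `start` over the first few positions downstream of the cap gives the length-by-start-position classification used in the analysis above.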
To explore this, we examined the interaction of Torin 1-induced changes in translation with uORF frequency and CDS length, the two most important mRNA features affecting translational efficiency under mTOR-active conditions (Extended Data Fig. 3b). We observed that uORF number retained only a very weak association with mean ribosome load under mTOR inhibition (Fig. 2e). With respect to CDS length, the increased translational efficiency of mRNAs with a CDS of close to 1 kb was not observed upon mTOR inhibition (Fig. 2f and Extended Data Fig. 5b). Rather, there was a progressive increase in mean ribosome load with increasing CDS length, as might be expected if CDS length was not affecting translational initiation. These differences suggest that mTOR pathways also impinge on the translational effects of these mRNA features. For instance, EIF4EBP activation by mTOR inhibition might prevent mRNAs from forming a loop through blocking EIF4E and EIF4G1 interactions. Note that an association of mRNA length with mTOR sensitivity was also observed but was slightly weaker (Extended Data Fig. 5c). Interactions between the mTOR sensitivity of mRNAs and features such as the TOP motif or number of uORFs were also observed when comparing mRNA isoforms of the same gene (Extended Data Fig. 5d). Overall, the analyses revealed that the extent of translational regulation by mTOR is greater than previously reported.
(Fig. 2 legend, panels b–f: Abs (254 nm), absorbance at 254 nm. b, Comparison of the MRL of genes with and without Torin 1 (data presented are the mean of 2 and 3 independent RCC4 VHL clones). c, Box plots showing changes in translational efficiency of genes (expressed as log2 fold change in MRL) with Torin 1, among different functional classes; responses within a functional class were compared against responses for all other genes using the two-sided Mann-Whitney U test; classes that are hypersensitive and resistant to mTOR inhibition are colored red and blue, respectively (P < 0.05); numbers of genes in each class are indicated in parentheses; known mTOR regulation by any mechanism or by translation is indicated above the box plots. d, Changes in translational efficiency with Torin 1 as a function of TOP motif (pyrimidine tract) length and starting position with respect to the mRNA cap (individual panels); MRL for mRNAs with the indicated TOP motif length was compared to that without a TOP motif using the two-sided Mann-Whitney U test; boxes representing fewer than ten mRNAs are faded. e,f, MRL as a function of uORF number (e) or CDS length (f) in the presence (purple) or absence (blue) of Torin 1; for e, MRL for mRNAs with the indicated uORF number was compared with that of those without a uORF using the two-sided Mann-Whitney U test. Box plots show the median (horizontal lines), first to third quartile range (boxes), and 1.5× the interquartile range from the box boundaries (whiskers). *P < 0.05, **P < 0.005. P values were adjusted for multiple comparisons using Holm's method. Details of the sample sizes and exact P values for c–f are summarized in the Supplementary Information.)
Comparison of RCC4 and RCC4 VHL cells revealed a modest net downregulation of translation, with more genes showing reduced translational efficiency in VHL-defective RCC4 cells.
Particularly striking, in view of the reported role of HIF2A in translational upregulation 9,10, was the absence of clear upregulation in translational efficiency in VHL-defective RCC4 and 786-O cells, either generally or for those genes reported to be translationally upregulated by HIF2A 9,10 (Fig. 3a,b and Extended Data Fig. 6a), although we confirmed strong induction of HIF2A in both of these cell lines (Extended Data Fig. 2). It is possible that HIF2A upregulates the translation of only a small number of mRNAs, for instance a subset of HIF-induced mRNAs. We therefore compared changes in mRNA abundance induced by VHL with changes in translational efficiency. However, we saw no correlation between regulation of transcript abundance and translation, as might have been anticipated if a set of HIF transcriptional targets were also regulated by translation (Spearman's ρ = 0.02 and −0.003, P = ~0.1 and ~0.8 for changes in translational efficiency against changes in mRNA abundance in RCC4 and 786-O cells, respectively; Fig. 3b and Extended Data Fig. 6b). Because HIF2A's ability to promote translation has been proposed to be mediated by EIF4E2 (ref. 9), we engineered EIF4E2-defective 786-O and 786-O VHL cells by CRISPR-Cas9-mediated inactivation and examined the effects on translational efficiency. In both 786-O and 786-O VHL cells, EIF4E2 inactivation weakly but globally downregulated the translational efficiency of genes (Fig. 3c). If co-operation of EIF4E2 and HIF2A had a major role in translation, it would be predicted that EIF4E2 inactivation would have a larger effect in the absence of VHL. However, we observed no evidence of this, for either global translation or reported HIF2A-EIF4E2-target genes 9,10 (Fig. 3c, compare upper and lower panels, and Extended Data Fig. 6c). Finally, to exclude the possibility that HP5 analysis did not capture the effect of HIF2A-EIF4E2-dependent translational regulation, we used immunoblotting to examine changes in the abundance of proteins encoded by reported target genes of HIF2A-EIF4E2 (refs. 9,10), as a function of VHL or EIF4E2 status in 786-O cells. This further confirmed that the effect of the HIF2A-EIF4E2 pathway was considerably weaker than or undetectable compared with that of HIF2A-VHL-dependent transcriptional regulation (Extended Data Fig. 6d). Taken together, the data revealed little or no role for the HIF2A-EIF4E2 axis in regulation of translation under the analyzed conditions. Although we did not observe systematic upregulation of translational efficiency, either of HIF transcriptional targets or other genes in VHL-defective cells, we did observe downregulation of translational efficiency, particularly in RCC4 cells.
(Fig. 3 legend, in part: Only TOP motifs starting immediately after the cap were analyzed. MRL for mRNAs with the indicated TOP motif length was compared to that without a TOP motif using the two-sided Mann-Whitney U test. Box plots show the median (horizontal lines), first to third quartile range (boxes), and 1.5× the interquartile range from the box boundaries (whiskers). *P < 0.05, **P < 0.005. P values were adjusted for multiple comparisons using Holm's method. Details of the sample sizes and exact P values are summarized in the Supplementary Information.)
To examine whether this might reflect interaction of HIF and mTOR pathways, we first compared the gene-specific effects on translation that are associated with VHL-defective status in RCC4 cells with those observed by inhibition of mTOR in RCC4 VHL cells.
This revealed a moderate, but highly significant, correlation between responses to the two interventions in RCC4 cells (Pearson's r = 0.33, P < 1 × 10 −10 , Fig. 3d left panel). Furthermore, mRNAs with a longer TOP motif were more strongly repressed by VHL loss in RCC4 cells ( Fig. 3e upper panel). Earlier work has suggested that induction of HIFα, particularly the HIF1A isoform, can suppress mTOR pathways 8,35 . Consistent with this, we observed that VHL loss in RCC4 cells was associated with a significant upregulation of mRNAs that encode negative regulators of mTOR (BNIP3 and DDIT4) or its target, the translational repressor EIF4EBP1 (Extended Data Fig. 6e). In contrast, in 786-O cells, which do not express HIF1A, we observed less downregulation of translation by VHL loss, less association of any gene-specific effects with mTOR targets (defined either by responsiveness to Torin 1, or the length of the TOP sequence) and weaker regulation by VHL of mRNAs that repress mTOR pathways ( Fig. 3d right panel, Fig. 3e lower panel, and Extended Data Fig. 6e). Although VHL may have other effects on gene expression beyond regulation of HIF, the findings suggest that modest downregulation of translation occurs in RCC4 cells, most likely as a consequence of HIF1A-dependent actions on mTOR pathways. HIF promotes alternate TSS usage to regulate translation. Although transcription may regulate translation by promoting alternative TSS usage and altering the regulatory features of the mRNA, the effects of HIF on this have not been studied systematically. To address this, we first compared 5′ end sequencing (5′ end-seq) reads from total (that is, unfractionated) mRNAs in RCC4 VHL versus RCC4 and identified 149 genes with a VHL-dependent change in TSS usage (false-discovery rate (FDR) < 0.1). For these genes, we defined a VHL-dependent alternative TSS (which showed the largest change in mRNA abundance with VHL loss). Discordant regulation of the alternative and other TSSs (that is, up versus down) was rare (9/149): following VHL loss, the alternative TSS was induced in 85 genes and repressed in 64 genes (Supplementary Data 2). To test the generality of these findings and to consider the mechanism, we performed similar analyses of alternative TSS usage among these 149 genes in sets of related conditions and compared the results (Extended Data Fig. 7). A strong correlation (Pearson's r = 0.60, P < 1 × 10 −10 ) was observed with alternative TSS usage in 786-O VHL versus 786-O cells. In contrast, there was no correlation with the alternative TSS usage in 786-O VHL versus 786-O cells in which HIF transcription had been ablated by CRISPR-Cas9-mediated inactivation of HIF1B (Pearson's r = −0.01, P = ~0.9) indicating that the effects were dependent on HIF. In keeping with this, a strong correlation was observed between changes mediated by loss of VHL in RCC4 and those induced by hypoxia in RCC4 VHL cells (Pearson's r = 0.85, P < 1 × 10 −10 ). We next sought to determine the effects of HIF-dependent altered TSS usage on mRNA translation by comparing the different isoforms of the same genes. Among the 129 genes whose CDS could be predicted for different isoforms, 71 (55%) have differences in predicted CDS (Supplementary Data 2). Among 117 genes whose different mRNA isoforms were expressed at sufficient levels for calculation of mean ribosome load, 75 (64%) have differences in translational efficiency (FDR < 0.1, Extended Data Fig. 8 and Supplementary Data 2). 
We again found an inverse relationship between the translational efficiency of mRNA isoforms and the number of the uORFs (see Extended Data Fig. 9 for overall analysis and examples). We then examined which of two modes of regulation contributes the most to VHL-dependent changes in translation of these genes: (1) the effect of VHL on translation is a direct consequence of the altered TSS usage, or (2) the effect of VHL on translation is observed across all transcripts associated with these genes, irrespective of their TSS. To assess this, we recalculated changes in translational efficiency for each gene, omitting either the effect of (1) or (2) from the calculation and compared the results with the experimental measurement, as derived from both parameters. The correlation was much stronger using (1) than (2) (Pearson's r = 0.83 and r = 0.54, P < 1 × 10 −10 and P < 1 × 10 −5 respectively, Fig. 4a), indicating that the changes in translational efficiency of these genes were primary due to altered TSS usage. Importantly, some of the largest effects on translation were associated with alternative TSS usage (y axis of Fig. 4a). Of these, Max-interacting protein 1 (MXI1), an antagonist of Myc proto-oncogene (MYC) 36 , showed the most striking increase in translational efficiency upon VHL loss (Fig. 4a,b). 5′ end-seq identified the three most abundant MXI1 mRNA isoforms, defined by alternative TSS usage (TSS1-TSS3, Fig. 4c), in RCC4 cells. TSS2 and TSS3 isoforms were the dominant isoforms in HIF-repressed RCC4 VHL cells. However, the TSS1 transcript (which has been reported to be HIF1A dependent 37 and bears a different CDS than the other isoforms) was strongly upregulated in VHL-defective RCC4 cells (Fig. 4d). Notably, TSS2 and TSS3 mRNA each contain an uORF that is excluded from TSS1 by alternative first exon usage (Fig. 4c). Consistent with the negative effects of uORFs on translation, the TSS1 mRNA isoform was much more efficiently translated than were the TSS2 and TSS3 isoforms (Fig. 4e). Thus, alternative TSS usage associated with VHL loss specifically upregulated the translationally more potent isoform, enhancing overall translation. Interestingly, the isoform that is orthologous to this transcript in mice has been reported to manifest stronger transcriptional repressor activity 38 . Taken together, these findings indicate that alternative TSS usage makes major contributions to altered translational efficiency among a subset of HIF-target genes. Sensitivity to mTOR among classes of HIF target gene. Since concurrent dysregulation of HIF and mTOR pathways is frequently observed, we sought to determine how HIF-dependent transcriptional regulation and mTOR-dependent translational regulation interact. Comparison of changes in translational efficiency with mTOR inhibition in RCC4 VHL cells with those in RCC4 cells showed a strong correlation, with the slope of the regression line being slightly less than 1 (Pearson's r = 0.89, P < 1 × 10 −10 , slope = 0.85; Fig. 5a), indicating that mTOR inhibition regulates translation similarly, regardless of HIF status. The effect of mTOR inhibition was slightly weaker in VHL-defective cells, probably reflecting a small negative effect of HIF1A on mTOR-target mRNAs, as outlined above. We also analyzed the effect of mTOR inhibition on the expression of genes involved in the HIF signaling pathway. This revealed that two oxygen-sensitive 2-oxoglutrarate-dependent dioxygenases, FIH1 and PHD3 (ref. 
39), were more strongly downregulated than other HIF-pathway-related genes, indicating that mTOR has the potential to affect the cellular responses to hypoxia by several mechanisms (Extended Data Fig. 10a). We then considered the relationship of HIF-dependent changes in transcription to mTOR-dependent changes in translation. Somewhat surprisingly, we observed no overall association between the two regulatory modes (Spearman's ρ = 0.04, P < 1 × 10−3; Fig. 5b). However, more detailed examination of the data revealed that distinct functional classes of mRNAs responded differently. Among transcripts that were induced in VHL-defective cells, those encoding glycolytic enzymes were hypersensitive to mTOR inhibition, whereas the translation of genes classified as involved in angiogenesis or vascular processes was much more resistant (P < 1 × 10−6, Mann-Whitney U test, Fig. 5c, Extended Data Fig. 10b,c and Supplementary Data 3). To confirm this, we re-analyzed published data using ribosome profiling 14,21 and observed a similar contrast (Extended Data Fig. 10d). Consistent with our overall findings that mRNAs with no uORF and/or a CDS around 1 kb in length were hypersensitive to mTOR, a higher proportion of glycolytic genes were found to bear these features than of genes associated with angiogenesis or vascular processes (Extended Data Fig. 10e). Overall, these findings indicate that full upregulation of the glycolysis pathway requires both HIF and mTOR activity, as would be predicted to occur in VHL-defective kidney cancer with mTOR hyperactivation 2. Of the two mTOR complexes, it is widely accepted that mTORC1 regulates translation 13. Interestingly, the protein level of HIF1A has been shown to be positively regulated by both mTORC1 and mTORC2, whereas HIF2A is dependent on only mTORC2 activity 40. This raises the question of whether the HIF-induced, mTOR-resistant genes that function in angiogenesis or vascular processes might be principally regulated by HIF2A and hence transcriptionally, as well as translationally, resistant to mTORC1 inhibition. To this end, we interrogated pan-genomic data on HIF binding 41.
(Fig. 4 legend, in part: a (in part), simulations (1) and (2), respectively; the blue line indicates the linear model fit by ordinary least squares, and the gray shade shows the standard error; the right panel (the same data as in the upper panel of Fig. 3a) is provided to reference the distribution of changes in translational efficiency amongst the subset of genes manifesting alternative TSS usage against that of all expressed genes. b, Proportion of MXI1 mRNA distributed across polysome fractions; the line indicates the mean value, and the shaded area shows the s.d. of the data from the three independent clones. c, Schematics of the 3 most abundant mRNA TSS isoforms of MXI1; the 5′ and 3′ UTR are colored white, and the position of uORFs is indicated by red arrows. d, mRNA abundance of each MXI1 mRNA TSS isoform estimated as transcripts per million (TPM) from 5′ end-seq data; data presented are the mean of the measurements of the three independent clones. e, Similar to b, but the proportion of each MXI1 mRNA TSS isoform in RCC4 cells is shown separately.)
HIF-binding sites near the glycolysis genes had a lower HIF2A/HIF1A binding ratio than did other genes (P = ~0.003, Mann-Whitney U test, Fig. 5d). This contrasted with a higher HIF2A/HIF1A binding ratio for angiogenesis or vascular-process genes induced in VHL-defective RCC4 cells (P = ~0.009, Mann-Whitney U test, Fig. 5d).
Consistent with this, mRNAs of HIF-target angiogenesis or vascular-process genes were also upregulated to a greater extent than other HIF-target genes upon VHL loss in 786-O cells, which express only HIF2A (P = ~0.007, Mann-Whitney U test, Extended Data Fig. 10f). This suggests that they are primarily HIF2A targets, as well as resistant to effects of mTOR inhibition on translation, consistent with a role in correcting a hypoxic and nutrient-depleted environment.
(Fig. 5 legend, in part: b, Spearman's rank-order correlation coefficient was used to assess the association (n = 8,580). c, Analysis of changes in translational efficiency of genes produced by Torin 1 among the specified functional classes of genes whose mRNAs were induced by VHL loss; functional classes were defined by gene ontology and KEGG orthology; the distributions are shown using kernel density estimation and compared using the two-sided Mann-Whitney U test (n = 12 and 29 for glycolysis and angiogenesis or vascular-process genes, respectively). d, Relative ratio of HIF2A and HIF1A binding at the nearest HIF-binding sites to genes induced by VHL loss, among the specified functional classes of genes; HIF2A and HIF1A binding across the genome were analyzed by ChIP-seq; the ratios within a functional class were compared against the ratios for all other genes using the two-sided Mann-Whitney U test (n = 12, 25, and 268 for glycolysis, angiogenesis or vascular process, and others, respectively). Box plots show the median (horizontal lines), first to third quartile range (boxes), and 1.5× the interquartile range from the box boundaries (whiskers).)
Discussion Using a new technology to measure the ribosome load of mRNAs resolved by their TSS, we have characterized the pan-genomic interplay of HIF- and mTOR-dependent transcriptional and translational regulation in VHL-defective kidney cancer cells. Importantly, the increased throughput of the technology and use of external normalization enabled us to directly compare translational effects across the genome for a larger number of interventions than most studies to date. Our analysis revealed that mTOR inhibition heterogeneously downregulates translation of a very wide variety of mRNAs and demonstrated the hypersensitivity of many genes encoding metabolic enzymes. This suggests a greater role for translational alterations in gene expression and metabolism in mTOR-dysregulated cancer than previously thought. Our findings confirmed that the HIF pathway primarily regulates transcription, but also revealed that HIF1A represses global translation moderately via mTOR and that HIF regulates the translation of a subset of genes bidirectionally through alternative TSS usage. HIF-dependent alternative TSS usage was often associated with altered translational efficiency and/or altered CDS. Apart from these transcripts, we were surprised to find little or no evidence for HIF-dependent upregulation of translation in VHL-defective cells, in contrast to previous reports of a major role for HIF2A in promoting EIF4E2-dependent translation. The original studies demonstrated this action of HIF2A in hypoxia and in VHL-defective cells (786-O) 9,10, as used in this study, but the effect size of HIF2A-dependent translational regulation was not compared with other interventions, such as mTOR inhibition.
Although we cannot exclude small effects on some targets, our findings indicate that, at least under the conditions of our experiments, the role of HIF2A-EIF4E2 in promoting translation is at best very limited, even for the genes reported to be regulated by this pathway 9,10 . Previous studies have reported that HIF inhibits mTOR activity through the transcriptional induction of antagonists of mTOR signaling 8,43 , raising a question as to whether the use of mTOR inhibitors constitutes a rational approach to the treatment of VHL-defective cancer. Our comparative analysis of interventions revealed that the mTOR inhibition by HIF was very much weaker than that by pharmacological inhibition, offering a justification for this therapeutic approach. To pursue this further, we compared transcriptional targets of HIF and translational targets of mTOR across the genome. Although little or no overall correlation was observed, these analyses revealed marked differences in mTOR sensitivity among HIF transcriptional targets, according to the functional classification of the encoded proteins. HIF1A-targeted genes encoding glycolytic enzymes were hypersensitive to mTOR, whereas HIF2A-targeted genes encoding proteins involved in angiogenesis and vascular process were resistant to mTOR inhibition. Clinically approved mTOR inhibitors primarily target mTORC1 (ref. 16 ), and are therefore unlikely to affect HIF2A abundance 40 . Our results suggest that they are unlikely to affect the expression of these classes of HIF2A-target gene. Recently, a new class of drug that prevents HIF2A from dimerizing with HIF1B and hence blocks HIF transcriptional activity has shown promise in the therapy of VHL-defective kidney cancer 11,12,16 . Given that we observed few, if any, effects of HIF2A on translation, our results suggest that the combined use of these HIF2A transcriptional inhibitors, together with mTOR inhibitors, should therefore be considered as a rational therapeutic strategy for this type of cancer. Online content Any methods, additional references, Nature Research reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at https://doi.org/10.1038/ s41594-022-00819-2. Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons. org/licenses/by/4.0/. © The Author(s) 2022 Methods Overview of the cell line and experimental conditions. 
VHL-defective kidney cancer cell lines, RCC4 and 786-O, were from Cell Services at the Francis Crick Institute and were maintained in DMEM (high glucose, GlutaMAX Supplement, HEPES, Thermo Fisher Scientific, no. 32430100) with 1 mM sodium pyruvate (Thermo Fisher Scientific, 12539059) and 10% FBS at 37 ˚C in 5% CO 2 . Cells were confirmed to be of the correct identity by STR profiling and to be free from mycoplasma contamination. Hypoxic incubation was performed using an InvivO 2 workstation (Baker Ruskinn) in 1% O 2 and 5% CO 2 for 24 hours. To inhibit mTOR, cells were treated with 250 nM of Torin 1 (Cell Signaling Technology, no. 14379) for 2 hours. An To prepare the gRNA, 100 µM of crRNA and 100 µM of tracrRNA (Integrated DNA Technologies, no. 14899756) were annealed in duplex buffer (Integrated DNA Technologies, 11-01-03-01) by incubation at 95 ˚C for 5 minutes, then at room temperature for 30 minutes. Cas9-gRNA RNP was formed by mixing 10 µM of the annealed tracrRNA-crRNA and 16.5 µg of TrueCut Cas9 protein (Thermo Fisher Scientific, A36498) in PBS, followed by incubation at room temperature for 30 minutes. The RNP was transfected into 786-O cells or 786-O VHL cells (pools of cells were used for HIF1B inactivation, whereas clone 1 of each sub-line was used for EIF4E2 inactivation). Transfections were performed using a 4D-Nucleofector System (Lonza) with a SF Cell Line 4D-Nucleofector X Kit L (Lonza, V4XC-2024) and the EW-113 transfection program. The transfected cells were cultured in DMEM (high glucose, GlutaMAX Supplement, HEPES) with 1 mM sodium pyruvate and 10% FBS at 37 ˚C in 5% CO 2 for at least 3 days, and single clones were isolated using flow cytometry. Inactivation of the target genes was confirmed by Sanger sequencing of the gRNA target region using TIDE analysis 78 and by immunoblotting. The cytoplasmic lysate was homogenized by passage through a 25-G syringe needle 5 times. To remove debris, the lysate was centrifuged at 1,200g for 10 minutes at 4 ˚C, and the supernatant was collected. This material was centrifuged again at 1,500g for 10 minutes at 4 ˚C, and the supernatant was collected. The protein and RNA concentrations were measured using 660-nm Protein Assay Reagent (Thermo Fisher Scientific, 22660) with Ionic Detergent Compatibility Reagent (Thermo Fisher Scientific, 22663) and Qubit RNA BR Assay Kit (Thermo Fisher Scientific, Q10210), respectively. Lysate was then normalized according to the protein concentration, and 500 µL of the normalized lysate was overlaid on the sucrose gradient, as prepared above. The gradient was ultracentrifuged at 287,980g (average; 55,000 r.p.m.) for 55 minutes at 4 ˚C, with max acceleration and slow deceleration using an Optima LE-80K Ultracentrifuge and SW55Ti rotor (Beckman Coulter). The sucrose gradient was fractionated according to the number of associated ribosomes (from 1 to 8 ribosomes; material lower in the gradient was pooled with the 8 ribosome fraction), as determined by the profile of the absorbance at 254 nm using a Density Gradient Fractionation System (Brandel, Model BR-188). The fractionated samples were then snap-frozen on dry ice. External control RNA addition and RNA extraction. Equal amounts of external control RNA were added to the polysome-fractionated samples after thawing the snap-frozen samples on ice. Commercially available external control RNA, including the ERCC RNA Spike-In Mix-1 kit (Thermo Fisher Scientific, 4456740) that we used, does not have a canonical mRNA cap. 
This can influence the template-switching reaction efficiency. Thus, the amount of external control RNA added to the polysome-fractionated samples was determined by preliminary experiments, so as to result in a library containing around 0.1% of reads from the external control RNA. RNA was extracted from 150 µL of the fractionated samples using an RNA Clean & Concentrator-5 kit (Zymo Research, R1016), using the same procedure to extract RNA from unfractionated cell lysate (described above), and was eluted into 10 µL of water. For a subset of samples, as indicated in Supplementary Data 1, half of the input volume was used, and RNA was eluted into 8 µL of water. The integrity of the purified RNA was confirmed using a Bioanalyzer (Agilent); the median value of RNA integrity number (RIN) for the samples from RCC4 VHL cells was 9.5, indicating that the RNA was largely intact. 5′ end-seq protocol. Primer sequences. The sequences of oligonucleotide primers used for 5′ end-seq are summarized in Supplementary Data 4. All the primers were synthesized and HPLC-purified by Integrated DNA Technologies. The 5′ end-seq method involves the following steps. Step 1: reverse transcription and template switching. cDNAs with adapter sequences at both the 5′ and 3′ ends were generated from full-length mRNAs using a combined reverse-transcription and template-switching reaction. ); and 72 ˚C for 5 minutes; and the mixture was then held at 4 °C. The number of PCR cycles for each amplification was determined by a pilot experiment using quantitative PCR (qPCR) to ensure that the amplification was at the early linear phase. The amplified cDNA library was purified using ProNex beads, as above, and eluted into 26 µL of 10 mM Tris-HCl, pH 7.4. The purified cDNA library was quantified using a Qubit dsDNA HS Assay Kit (Thermo Fisher Scientific, Q32851). Step 5: tagmentation. Tagmentation with Tn5 transposase was performed on 90-ng aliquots of the cDNA library using an Illumina DNA Prep kit (Illumina, 20018704), according to the manufacturer's instructions. Step 6: PCR amplification of mRNA 5′-end library. The 'tagmented' library was attached to the beads of an Illumina DNA Prep kit. Limited-cycle PCR amplification was performed by adding 50 µL of the following reaction mix (2.5 µL of 10 µM each of the PCR primer 2 forward/reverse, 20 µL of Enhanced PCR Mix (supplied with an Illumina DNA Prep kit), and 27.5 µL of water) and using a program of 68 ˚C for 3 minutes; 98 ˚C for 3 minutes; 98 ˚C for 45 seconds, 62 ˚C for 30 seconds, and 68 ˚C for 2 minutes (3 cycles); and 68 ˚C for 1 minute; and it was then held at 10 ˚C. The PCR primers used here anneal to the TSO and an adapter added by tagmentation, and thus specifically amplify DNA fragments containing 5′ ends of mRNAs. The amplified mRNA 5′-end library was purified using ProNex beads, as above, and eluted into 25 µL of 10 mM Tris-HCl (pH 7.4). The mRNA 5′-end library was reamplified by preparing a PCR reaction mix (10 µL of the mRNA 5′-end library, 25 µL KAPA HiFi HotStart ReadyMix, 2.5 µL of 10 µM each of PCR primer 3 forward/reverse (containing i5 and i7 index sequences), and 12.5 µL water), and the mixture was kept at 98 ˚C for 3 minutes; 98 ˚C for 20 seconds, 62 ˚C for 15 seconds, and 72 ˚C for 30 seconds (5 cycles (cycle number determined by a pilot experiment to define the early linear phase, as described above)); and 72 ˚C for 5 minutes; and it was then held at 4 °C. 
The mRNA 5′-end library was again purified using ProNex beads (1.4:1 vol/vol ratio of beads to sample) according to the manufacturer's instructions, and eluted into 20 µL of 10 mM Tris-HCl, pH 7.4. The purified mRNA 5′-end libraries were multiplexed again and then sequenced on HiSeq 4000 (Illumina) using paired-end (2×100 cycles) and dual-index mode. RT-qPCR. RNAs extracted from polysome-fractionated samples were converted into cDNAs using the same protocol as the 5′ end-seq protocol described above, except that the anchored oligonucleotide dT primer (Integrated DNA Technologies, 51-01-15-08) was used, and the TSO was omitted from the reaction. The cDNA was purified using an RNA Clean and concentrator-5 kit (Zymo Research, R1016) according to the manufacturer's instructions. qPCR was performed using TaqMan Fast Advanced Master Mix (Thermo Fisher Scientific, 4444557) according to the manufacturer's instructions with the mRNA isoform-specific primers and Taqman probes summarized in Supplementary Data 4. All the primers were synthesized by Integrated DNA Technologies. Quantification of mRNAs in each fraction was normalized to the quantification of ERCC-0002 RNA in the same fraction. Prior to the high-throughput DNA-sequencing data analysis, sequencing data from the technical replicates were concatenated. Data are presented as the mean value of the biological replicates. TSS boundaries and their associated mRNA isoforms were identified by 5′ end-seq of total (unfractionated) mRNAs. The TSSs assigned to a particular gene were those mapping within 50 base pairs of that gene locus, as specified by RefSeq and GENCODE. The abundance of the mRNA isoform associated with each TSS is the number of reads starting from that TSS. The gene-level mRNA abundance is the sum of these isoforms for the relevant gene. Statistics. The correlation of two variables was analyzed with the cor.test function of R to calculate statistics on the basis of Pearson's product moment correlation coefficient or Spearman's rank correlation coefficient. The difference between two distributions was tested using the two-sided Mann-Whitney U test (for two independent samples) or the two-sided Wilcoxon signed-rank test (for paired samples). To analyze the effect size of the Wilcoxon signed-rank test, the matched-pairs rank biserial correlation coefficient 53 was calculated using the wilcoxonPairedRC function of the rcompanion package (2.3.26) 54 . Kernel density estimation was performed using the geom_density function of the ggplot2 package with the parameter, bw = SJ. Sequencing read alignment. Read pre-processing. The sequence at positions 1-22 of read 1 is derived from the TSO and was processed before mapping. First, the UMI located at positions 10-16 was extracted using UMI-tools (1.0.1) 55 . Note that the UMI was not used in the analyses because we found that the diversity of UMI was not sufficient to uniquely mark non-duplicated reads. Next, the library was demultiplexed using an index sequence located at positions 1-8, after which the constant regions of the TSO located at position 9 and positions 17-22 were removed using Cutadapt (2.10) 56 with the parameters, -e 0.2-discard-untrimmed. Definition of TSS peaks and boundaries. To define TSS clusters, we considered two widely used peak callers, paraclu (9) 59 and decomposition-based peak identification (dpi, beta3) 60 software. 
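As an illustration of the statistical calls described in the Statistics paragraph above, the base-R and ggplot2 equivalents might look as follows; the toy data and all variable names are placeholders invented here for runnability and are not the authors' code.

```r
library(ggplot2)
set.seed(1)

# Toy placeholder data (not from the paper), purely so the calls below run.
x <- rnorm(100); y <- 0.3 * x + rnorm(100)
group_a <- rnorm(50); group_b <- rnorm(50, mean = 0.5)
before <- rnorm(30); after <- before + rnorm(30, mean = 0.2)
changes <- data.frame(mrl_log2fc = c(group_a, group_b),
                      gene_class = rep(c("class_1", "class_2"), each = 50))

cor.test(x, y, method = "pearson")         # Pearson's product-moment correlation
cor.test(x, y, method = "spearman")        # Spearman's rank correlation
wilcox.test(group_a, group_b)              # two-sided Mann-Whitney U test
wilcox.test(before, after, paired = TRUE)  # two-sided Wilcoxon signed-rank test
p.adjust(c(0.01, 0.04, 0.2), method = "holm")  # Holm multiple-comparison adjustment

# Kernel density estimation with the Sheather-Jones bandwidth (bw = "SJ").
ggplot(changes, aes(x = mrl_log2fc, colour = gene_class)) + geom_density(bw = "SJ")
```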
Our preliminary analysis indicated that paraclu software was more accurate in determining total peak area, whereas dpi was more accurate in resolving peaks within multimodal clusters. To obtain the most accurate resolution and quantification of TSS clusters, we therefore combined the strengths of these programs and included information from an existing large-scale database using the following four-step procedure. Step 1: definition of cluster areas. Using the standard workflow of paraclu software on pooled data from normoxic cells, RCC4, RCC4 VHL, 786-O, and 786-O VHL, cluster areas of 5′ termini were identified. Step 2: definition of TSS clusters within cluster areas. The cluster areas defined above were further resolved by combining the above data with FANTOM5 data and using dpi software, as was originally used for FANTOM5, to resolve bona fide subclusters within the data. Internal sub-cluster boundaries were defined as the midpoint between adjacent dpi-identified peaks. Step 3: quality controls and filters. Artifactual clusters of 5′ termini, potentially generated by internal TSO priming, were filtered on the basis of a low (<15%) proportion of reads bearing non-genomic G between the TSO and mRNA, as the template-switching reaction commonly introduces such bases at the mRNA cap but not following internal priming 4. Since mitochondrial mRNAs are not capped, these transcripts were filtered if they did not overlap an annotated site. A further filter was applied to remove TSS subclusters of low-abundance mRNA isoforms whose biological significance is unclear; low abundance was defined as ≤10% of the most abundant mRNA isoform for the relevant gene in any of the analyses. Step 4: final assignment of TSS boundaries. To provide the most accurate identification of the TSS peaks and their boundaries, the resolved and filtered peaks from step 3 were mapped back onto the input cluster areas as defined in step 1, and boundaries were set at the midpoint between filtered peaks. Assignment of transcripts to TSS. To identify mRNA features that might affect translational efficiency, we used base-specific information on 5′ termini and assembled paired-end reads starting from each TSS (StringTie software, 2.1.2 (ref. 61)) to define the primary structure of the 5′ portion of the transcript. We then used homology with this assembly to assign a full-length transcript from RefSeq and GENCODE. The CDS of the assigned transcript was then used for the analysis. In a small number of cases, where this TSS was downstream of the start codon, we took the most upstream in-frame AUG sequence to redefine the CDS. The most abundant primary structure from each TSS and its CDS were then used for calculation of the association of mRNA features with mean ribosome load (see below). Details of this process are given in the computational pipeline. mRNA feature evaluation. Features within the mRNA (for example the TOP motif or structure near the cap) were evaluated at base-specific resolution using the following formula: mRNA feature value for an mRNA TSS isoform = Σ_{i=1..n} (mRNA feature value_i × mRNA abundance_i) / Σ_{i=1..n} (mRNA abundance_i), where i is a base position within the TSS, n is the linear sequence extent of the TSS, mRNA feature value_i is the value of the mRNA feature for the isoform transcribed from position i, and mRNA abundance_i is the mRNA abundance of the isoform transcribed from position i (a minimal R sketch of this weighted average is given below). The values were rounded to the nearest integer; a rounded value of 0 was taken as the absence of the feature. All non-overlapping uORFs, starting from an AUG, were identified using the ORFik package (1.8.1) 62.
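As a minimal illustration of the abundance-weighted feature value defined above (this sketch is not the authors' pipeline; the input vectors are assumptions):

```r
# Abundance-weighted value of an mRNA feature across the base positions of a TSS
# cluster, rounded to the nearest integer as described above. 'feature_value' and
# 'mrna_abundance' are assumed numeric vectors indexed by base position i = 1..n.
tss_feature_value <- function(feature_value, mrna_abundance) {
  round(sum(feature_value * mrna_abundance) / sum(mrna_abundance))
}

# Example: three positions within a TSS with uORF counts 0, 1, 1 and read
# abundances 80, 15, 5 give a weighted (rounded) feature value of 0.
tss_feature_value(c(0, 1, 1), c(80, 15, 5))
```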
Kozak consensus score was calculated by the kozakSequenceScore function of the ORFik package. Using the mode including G-quadruplex formation, the minimum free energy (MFE) of predicted RNA structures was estimated using RNALfold (ViennaRNA package, 2.3.3) 63 . The MFE of RNA structures near the cap was that of the first 75 nucleotides. The MFE of the region distal to the cap was that of entire 5′ UTR minus the first 75 nucleotides. The position of a TOP motif was defined as the position of the 5′ most pyrimidine base, and its length was defined as that of the uninterrupted pyrimidine tract from that base. The effect of HIF-dependent alternate TSS usage on CDS was defined by alteration in the genomic position of the start codon (Extended Data Fig. 8 and Supplementary Data 2). Expressed isoforms of a gene were defined as those with an abundance greater than 10% of that of the most highly expressed isoform of the same gene in either RCC4 VHL or RCC4 cells. Genes associated with angiogenesis or vascular process were defined by referencing to gene ontology (GO) 65 database: GO:0003018, vascular process in circulatory system; GO:0001525, angiogenesis. Analysis of existing literatures describing mTOR targets. In the analyses comparing HP5 data with previously published studies reporting the effects of mTOR inhibition 14,21,32,33 , we followed the definition of mTOR hypersensitive genes in the original reports; for Hsieh et al. and Larsson et al., the genes showing changes in translation with PP242 were used; for Morita et al., genes described in Fig. 1b of the paper 33 were used. Since the data of Thoreen et al. were obtained using mouse cells, we mapped mouse genes to human genes using the gorth function of the gprofiler2 package (0.1.9) 66 . Since Hsieh et al. did not supply values for changes in translational efficiency for all genes, we took this data from Xiao et al. 67 , who calculated the relevant values using the data from the original report. To define known activities of mTOR via any mode of regulation except translational regulation (as indicated in Fig. 2c, first row), we considered review articles by Saxton et al. 13 and Morita et al. 68 . Known systematic translational downregulation by mTOR inhibition (as indicated in Fig. 2c, second row) was defined from previous genome-wide studies listed above 14,21,32 . A class of targets was defined as systematically regulated if ≥10% of genes in the class were identified as mTOR hypersensitive or resistant in any of these previous studies 21,32 or highlighted in the original report. Analyses of differential mRNA expression upon VHL loss. The identification of differentially expressed genes and the calculation of log 2 (fold change in mRNA abundance) upon VHL loss were performed using the DESeq2 package (1.28.0) 69 . Genes with an FDR < 0.1 and either log 2 (fold change) > log 2 (1.5) or < -log 2 (1.5) were defined as upregulated or downregulated, respectively. HIF-target genes (as considered in Extended Data Fig. 10f) were defined as those upregulated upon VHL loss in RCC4 cells. For this analysis, genes with very low expression in both 786-O and 786-O VHL cells, as identified by the DESeq2 package, were excluded from the analysis. Analysis of alternative TSS usage upon VHL loss. Genes manifesting alternative TSS usage upon VHL loss were identified using the approach described by Love et al. 70 . 
Briefly, TSSs for mRNA isoforms with very low abundance were first filtered out using the dmFilter function of the DRIMSeq package (1.16.0) 71 with the parameters min_samps_feature_expr = 2, min_feature_expr = 5, min_samps_feature_prop = 2, min_feature_prop = 0.05, min_samps_gene_expr = 2, min_gene_expr = 20. The usage of a specific TSS relative to all TSSs was then calculated by DRIMSeq with the parameter add_uniform = TRUE. The significance of changes in TSS usage upon VHL loss for a particular gene was analyzed by the DEXSeq package 72. The FDR was calculated using the stageR package (1.10.0) 73, with a target overall FDR < 0.1. For genes with significant changes in VHL-dependent TSS usage, a VHL-dependent alternative TSS was selected as that showing the largest fold change upon VHL loss (FDR < 0.1), and a base TSS was selected as that showing the highest expression in the presence of VHL. In these calculations, the DESeq2 and apeglm (1.10.0) 74 packages were used to incorporate data variance to provide a conservative estimate of fold change and standard error. To provide the highest-stringency definition, genes manifesting VHL-dependent alternative TSS usage were further filtered by a proportional change > 5%, an absolute fold change > 1.5, and the significance of the difference in fold change between the alternate TSS and the base TSS (assessed by non-overlapping 95% confidence intervals). For the comparative analysis of the VHL-dependent alternate TSS usage in various conditions (Extended Data Fig. 7), genes with very low expression that did not meet a criterion of 20 read counts in more than 1 sample were excluded. Calculation of mean ribosome load. Mean ribosome load was calculated using the following formula: MRL = Σ_{n=1..8} (n × normalized read count of the mRNA in the fraction with n associated ribosomes) / Σ_{n=1..8} (normalized read count of the mRNA in the fraction with n associated ribosomes) (see the short R sketch below). The mRNA abundance values for each polysome fraction were normalized by the read count of the external control using the estimateSizeFactors function of the DESeq2 package. Very-low-abundance mRNAs that did not meet a criterion of six read counts in more than six samples were excluded. Statistical analysis of differences in polysome distribution. VHL-dependent alternative TSS mRNA isoforms. To define VHL-dependent alternative mRNA isoforms with a different translational efficiency with reference to all other isoforms from the same gene, the significance of changes in their polysome profile was determined by considering the ratio of mRNA abundances as a function of polysome fraction using the DEXSeq package (1.34.0) 72. The false-discovery rate (FDR) was calculated using the stageR package 73, with the target overall FDR < 0.1. Differentially translated mRNA isoforms from the same gene. In the analysis of the two most differentially translated mRNA isoforms transcribed from the same gene (for Fig. 1f), each of these isoforms was tested for statistically significant differences from all other isoforms of the same gene using the same analysis as above. Changes in response to mTOR inhibition. To identify genes that were hypersensitive or resistant to mTOR inhibition, genes manifesting a significant change in polysome distribution upon mTOR inhibition, compared to the population average, were first identified using the DESeq2 package 72 with the internal library size normalization and the likelihood ratio test.
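For illustration only, the mean ribosome load defined above could be computed as in the following R sketch; the matrix layout and names are assumptions made here and this is not the authors' code.

```r
# Mean ribosome load (MRL) from external-control-normalized counts. 'norm_counts'
# is an assumed matrix with one row per mRNA and one column per polysome fraction,
# ordered from 1 to 8 associated ribosomes.
compute_mrl <- function(norm_counts, n_ribosomes = seq_len(ncol(norm_counts))) {
  as.numeric(norm_counts %*% n_ribosomes) / rowSums(norm_counts)
}

# Example: an mRNA with normalized counts 10, 20, 40, 30 in the 1-4 ribosome
# fractions has MRL = (10*1 + 20*2 + 40*3 + 30*4) / 100 = 2.9.
compute_mrl(matrix(c(10, 20, 40, 30), nrow = 1))
```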
The genes with a significant change (FDR < 0.1) were classified as hypersensitive or resistant to mTOR inhibition if the log2 fold change of the mean ribosome load was lower or higher than the median of all expressed genes. Simulation of changes in translational efficiency omitting a parameter. Log2(fold change) in mean ribosome load of a gene upon VHL loss can be expressed by the following formula: log2 fold change in gene MRL = log2[Σ_i (MRL_no VHL,i × % mRNA abundance_no VHL,i)] − log2[Σ_i (MRL_VHL,i × % mRNA abundance_VHL,i)]. In this formula, i is mRNA isoform i (out of n mRNA isoforms), MRL_no VHL,i or MRL_VHL,i is the mean ribosome load of isoform i in RCC4 or RCC4 VHL cells, and % mRNA abundance_no VHL,i or % mRNA abundance_VHL,i is the percentage abundance of isoform i relative to that of all isoforms in RCC4 or RCC4 VHL cells. To assess the contribution of alternative TSS usage to changes in mean ribosome load of a gene, we tested a simulation that omitted the VHL-dependent changes in translational efficiency within each mRNA isoform using the following formula: simulated log2 fold change = log2[Σ_i (MRL_average,i × % mRNA abundance_no VHL,i)] − log2[Σ_i (MRL_average,i × % mRNA abundance_VHL,i)]. In this formula, MRL_average,i is the combined average of MRL_no VHL,i and MRL_VHL,i as defined above. When values for either of MRL_no VHL,i and MRL_VHL,i are missing, these values are excluded from the calculation of the average. To assess the contribution of VHL-dependent changes in translational efficiency within each mRNA isoform to changes in mean ribosome load of a gene, we tested a simulation which omitted the VHL-dependent changes in TSS usage using the following formula: simulated log2 fold change = log2[Σ_i (MRL_no VHL,i × % mRNA abundance_average,i)] − log2[Σ_i (MRL_VHL,i × % mRNA abundance_average,i)]. In this formula, % mRNA abundance_average,i is the combined average of % mRNA abundance_no VHL,i and % mRNA abundance_VHL,i defined above. When values for either of MRL_no VHL,i and MRL_VHL,i are missing, these genes were excluded from the analysis. Generalized additive model to predict mean ribosome load. A generalized additive model was used to predict mean ribosome load of mRNAs from the preselected mRNA features. To test the model, a cross-validation approach was deployed to predict the MRL of the top 50% expressed genes on 4 randomly selected chromosomes, which were excluded from the training data used to derive the model. To provide an accurate estimate of the model's performance, this process was repeated ten times, and the median value of the coefficient of determination (R2) was calculated. For model construction, the gam function of the mgcv package (1.8-31) 75 of R was used, deploying thin-plate regression splines with an additional shrinkage term (with the parameter bs = 'ts') and restricted maximum likelihood for the selection of smoothness (with the parameter method = 'REML'). The analysis was restricted to mRNAs with a 5′ UTR length longer than 0 nt and a CDS length longer than 100 nt; 5′ UTR and CDS length were log10-transformed, and the MFE values of RNA structures were normalized by the segment length (nt). Principal component analysis. Library-size normalization and a variance-stabilizing transformation were applied to the mRNA abundance data using the vst function of the DESeq2 package 69 with the parameter blind = TRUE. Principal component analysis of the transformed data was performed for genes showing the most variance (top 25%) using the plotPCA function of the DESeq2 package. GO or KEGG orthology enrichment analysis. GO or KEGG orthology enrichment analysis of the selected set of genes compared to all the expressed genes in the data was performed using the gost function of the gprofiler2 package 66. Analysis of HIF2A/HIF1A binding ratio near VHL-regulated genes. HIF1A and HIF2A ChIP-seq data from Smythies et al.
41 were used to analyze HIF-binding sites across the genome. HIF1A-or HIF2A-binding sites were defined as the overlap of the peaks identified by ENCODE ChIP-seq pipeline (https://github. com/ENCODE-DCC/chip-seq-pipeline2) and those by MACS2 software (2.2.7.1) 76 . For this purpose, the ChIP-seq reads were aligned to the human genome using Bowtie2 software, and the aligned reads were analyzed by ENCODE ChIP-seq pipeline to identify the peaks. The blacklist filtered and pooled replicate data generated by the pipeline were analyzed by MACS2 software with the following parameters (callpeak -q 0.1-call-summits). The position of the binding sites was defined as the position of the hypoxia response element (HRE, RCGTG sequence) closest to the peak summits identified by MACS2 software. If the binding site did not contain an HRE within 50 bp of the peak summit, it was filtered out. Data on HIF1A and HIF2A binding, as defined above, were merged, and the HIF2A/ HIF1A binding ratio was estimated using the DiffBind package (2.16.0) 77 with the parameters minMembers = 2 and bFullLibrarySize = FALSE. Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability Sequence data generated during this study are available from ArrayExpress (HP5: E-MTAB-10689, 5′ end-Seq of total mRNAs: E-MTAB-10688). Additional unprocessed data are provided as Source data. The following reference data were used; human genome: hg38, obtained via BSgenome.Hsapiens.UCSC.hg38 (1.4.3); human transcripts: RefSeq57 (GRCh38.p13) and GENCODE58 (GENCODE version 34: gencode.v34.annotation.gtf). Processed data files are provided as Supplementary Data and Source Data. The list of samples that were analyzed for this study is provided as Supplementary Data. Source data are provided with this paper. code availability Extended Data Fig. 1 | Overview of 5′ end-Seq protocol. Schematic representation of the 5′ end-Seq protocol (see also Methods). 1. The reverse transcription is primed with an adapter containing an oligo (dT) sequence. The reverse transcriptase used for 5′ end-Seq adds additional non-templated cytidine residues beyond the cap, to the 3′ end of the cDNAs. This polycytidine sequence anneals to a polyriboguanosine sequence contained in the template switch oligo (TSO), and the reverse transcriptase switches the template from the mRNA to the TSO to add the complementary sequence of the TSO at the 3′ end of the cDNAs. An indexing sequence contained in the TSO to identify the sample source of the cDNAs is reverse transcribed in this process. 2. Unused primers and RNA are degraded using the combination of a single-stranded DNA specific exonuclease (exonuclease I), an enzyme cleaving DNA at deoxyuridine (Thermolabile USeR II enzyme), and RNase H. This step leaves the adapter sequence of the TSO (the constant region) annealing to the cDNA due to the high melting temperature of this duplex, which protects the cDNA from exonuclease I. 3. The full-length cDNA library is amplified using limited cycle pCR amplification. 4. The libraries from different samples are multiplexed and the multiplexed libraries are amplified using pCR and an optimized cycle number. 5. Amplified libraries are fragmented and adapter tagged using tagmentation. 6. mRNA 5′ end library suitable for high-throughput DNA sequencing is generated using pCR amplification with primers annealing to the TSO and the appropriate tagmentation adapter. Fig. 2 | establishment of cell lines. 
Immunoblotting analysis of RCC4 or 786-O cells re-expressing either wild type VHL or empty vector alone (n = 3 or 4 experiments in independent clones of RCC-4 and 786-O cells). The successful reintroduction of VHL was confirmed by the expression of VHL protein and degradation of HIF1A and/or HIF2A protein. Similar protein loading across lanes was confirmed by total protein staining. Note that multiple species of VHL were observed consistent with previous studies. In part, they arise from an internal start codon in VHL that produces an 18 kDa isoform 81 , but the precise origin of additional species has not been established 82 . Fig. 3 | mRNA features predicting mean ribosome load. (a) proportion of mRNA isoforms in relation to the total mRNA across polysome fractions for selected genes, as measured by Hp5 (upper panel) or RT-qpCR (lower panel). The line indicates the mean value while the shaded area shows the standard deviation of assays using 3 independent clones for the Hp5 data, or 2 technical replicates for the RT-qpCR data, respectively. The examples have been selected to compare data on genes where Hp5 defined different mRNA isoforms (the schematics are shown below the line plots). In some cases, the resolution provided by RT-qpCR was less than Hp5, in which case integration of the Hp5 data was performed to permit quantitative comparisons between Hp5 and RT-qpCR. *Different upstream or downstream mRNA isoforms not resolved by RT-qPCR and are grouped. **Downstream mRNA isoform not separately resolved by RT-qPCR therefore resolved species comprise upstream and upstream + downstream mRNAs. (b) proportion of variance in mean ribosome load (MRL) between mRNAs that is explained by a single mRNA feature (expressed as R 2 ) using a generalized additive model (mean ± s.e.m of 10 iterations of cross validation). The significance of mRNA features in predicting MRL was determined by the Wald test. Length, log10 sequence length (nucleotides, nts); Structure (near cap, first 75 nts; distal to cap, rest of the 5′ UTR), inverse of minimum free energy per nucleotide of predicted RNA structures; Kozak consensus, match score to the consensus sequence. The analysis identified that CDS length, uORF number, stability of RNA structures near cap, and Kozak consensus score were the four most predictive features. (c) MRL as a function of the stability of RNA structures near cap. mRNAs were ranked by their RNA structural stability, and split into 5 groups according to the rank; the intervals of the stability are indicated on the x-axis. MRL for mRNAs with less stable structures was compared with the most stable group using the two-sided Mann-Whitney U test. (d) Similar to c, but MRL as a function of Kozak consensus score. The median value of MRL for all mRNAs is shown by a dashed line. MRL for mRNAs with the indicated Kozak consensus score was compared to that with the score of 0.642 to 0.712, using the two-sided Mann-Whitney U test. boxplots showing changes in translation upon mTOR inhibition as measured by the indicated study (Hp5, left-hand panel; ribosome profiling, centre and right-hand panels) for the genes identified as mTOR hypersensitive in each of the previous studies 14,21,32,33 . In each panel, the left-hand boxplot shows the changes in mean ribosome load (MRL) or translational efficiency of all expressed genes in that study; horizontal line, median value. 
Responses of mTOR hypersensitive genes identified by the indicated study were compared against responses for all expressed genes using the two-sided Mann-Whitney U test. * p < 0.05, ** p < 0.005. p values were adjusted for multiple comparisons using Holm's method. Details of the sample sizes and exact p values for (b) are summarized in Supplementary Information. boxplots show the median (horizontal lines), first to third quartile range (boxes), and 1.5× interquartile range from the box boundaries (whiskers). Fig. 5 | HP5 refined mRNA features influencing the mTOR sensitivity of mRNAs. (a) proportion of mRNAs as a function of the TOp motif length (n = 9,589). (b) boxplots showing changes in translational efficiency of mRNAs with Torin 1 (log2 fold change in mean ribosome load, MRL) as a function of CDS length. Responses of mRNAs with the indicated CDS length were compared against responses of all other mRNAs using the two-sided Mann-Whitney U test; classes more downregulated or less downregulated compared to all other mRNAs (that is hypersensitive or resistant to mTOR inhibition, p < 0.05) are colored red or blue respectively. (c) boxplots showing MRL as a function of transcript length, in the presence (purple) or absence (blue) of Torin 1. (d) boxplots illustrating associations between mRNA features (length of TOp motif, left panel; number of uORFs, right panel) and sensitivity to mTOR inhibition, for alternate TSS mRNA isoforms of same gene. The mRNA isoforms are classified as sensitive or resistant based on their sensitivity to mTOR inhibition (the isoform with a larger or smaller mean ribosome load, MRL, log2 fold change with Torin 1, respectively). When more than two isoforms were expressed from the same gene, the isoforms with the largest and smallest MRL log2 fold change were selected for the analysis. The comparison was performed by the groups binned by their difference in MRL fold change of the two isoforms (x-axis). Distributions of the length of TOp motifs or the number of uORFs were compared using the two-sided Wilcoxon signed rank test. (a-d) Data are for RCC4 VHL cells. boxplots show the median (horizontal lines), first to third quartile range (boxes), and 1.5× interquartile range from the box boundaries (whiskers). * p < 0.05, ** p < 0.005. (b and d) P values were adjusted for multiple comparisons using Holm's method. Details of the sample sizes and exact p values for (b-d) are summarized in Supplementary Information. Fig. 8 | VHl dependent alternate TSS usage generates mRNAs with an altered translational efficiency. The plot shows the differences in translational efficiency (expressed as mean ribosome load, MRL) between mRNA isoforms that are generated from VHL-dependent alternative TSSs and all other isoforms transcribed from the same gene. Data are for RCC4 cells. Genes are sorted by log2 fold difference in MRL; significant differences in polysome distribution (FDR < 0.1) are indicated by black colouring. The magnitude of changes in alternative TSS usage is shown by the size of point. Genes whose alternate TSS isoform contains a different predicted CDS are indicated with a check mark; NA, CDS could not be predicted. Genes with too little alternate TSS isoform expression for MRL calculation were excluded from the analysis (see Methods).
Experimental Determination of Impure CO2 Alteration of Calcite Cemented Cap-Rock, and Long-Term Predictions of Cap-Rock Reactivity

Cap-rock integrity is an important consideration for geological storage of CO2. While CO2-bearing fluids are known to be reactive to certain rock-forming minerals, impurities including acid gases such as SOx, NOx, H2S or O2 may be present in injected industrial CO2 streams at varying concentrations, and may induce higher reactivity to cap-rock than pure CO2. Dissolution or precipitation of minerals may modify the porosity or permeability of cap-rocks and compromise or improve the seal. A calcite cemented cap-rock drill core sample (Evergreen Formation, Surat Basin) was experimentally reacted with formation water and CO2 containing SO2 and O2 at 60 °C and 120 bar. Solution pH was quickly buffered by dissolution of calcite cement, with dissolved ions including Ca, Mn, Mg, Sr, Ba, Fe and Si released to solution. Dissolved concentrations of several elements including Ca, Ba, Si and S had a decreasing trend after 200 h. Extensive calcite cement dissolution with growth of gypsum in the formed pore space, and barite precipitation on mineral surfaces, were observed after reaction via SEM-EDS. A silica- and aluminium-rich precipitate was also observed coating grains. Kinetic geochemical modelling of the experimental data predicted mainly calcite and chlorite dissolution, with gypsum, kaolinite, goethite, smectite and barite precipitation and a slight net increase in mineral volume (decrease in porosity). To better approximate the experimental water chemistry, the reactive surface areas of (1) calcite cement had to be decreased to 1 cm2/g; and (2) chlorite increased to 7000 cm2/g. Models were then up-scaled and run for 30 or 100 years to compare the reactivity of calcite cemented, mudstone, siderite cemented or shale cap-rock sections of the Evergreen Formation in the Surat Basin, Queensland, Australia, a proposed target for future large scale CO2 storage. Calcite, siderite, chlorite and plagioclase were the main minerals dissolving. Smectite, siderite, ankerite, hematite and kaolinite were predicted to precipitate, with SO2 sequestered as anhydrite, alunite, and pyrite. Predicted net changes in porosity after reaction with CO2, CO2-SO2 or CO2-SO2-O2 were however minimal, which is favourable for cap-rock integrity. Mineral trapping of CO2 as siderite and ankerite, however, was only predicted in the CO2 or CO2-SO2 simulations. This indicates a limit on the injected O2 content may be needed to optimise mineral trapping of CO2, the most secure form of CO2 storage. Smectites were predicted to form in all simulations; they have relatively high CO2 sorption capacities and provide additional storage.
Introduction

Cap-rocks traditionally act as low porosity and permeability structural seals of CO2 plumes stored geologically in high porosity reservoirs. Cap-rocks consisting of shales, mudstones, siltstones, sandstones, carbonates or evaporites have been documented internationally [1-6]. In many cases, these sealing units are thick interbedded formations of variable lithology and mineralogy. When stored at depths greater than ~800 m and temperatures above 31 °C, CO2 exists as a supercritical fluid. Water can dissolve into the buoyant plume (forming wet supercritical CO2), and CO2 can dissolve in formation water to form a weak carbonic acid [7-9]. CO2-rich fluids have been shown to be reactive to some rock-forming minerals, especially carbonates, grain coating Fe-oxides, and more reactive silicates such as plagioclase and Fe-rich clays [10-13].

CO2 streams from industrial sources, e.g., coal-fired post combustion capture (PCC), oxyfuel firing, cement or steel processing, have been reported to contain ancillary (or impurity) gases such as N2, Ar, O2, and acid forming gases such as NOx, SOx, H2S [14,15]. The type and concentration of these gases depend on the capture and purification process, with generally <5% impurity gases recommended for CO2 storage, although acid gas streams with up to ~25% H2S have been successfully stored in Canada or Iceland, for example [16-18]. The majority of storage demonstrations, experimental and modelling studies, however, use pure food grade CO2, and studies using impure CO2 are needed.

The Precipice Sandstone and Evergreen Formation of the Surat Basin, Australia, form a storage reservoir and seal pair that has been reported to have high prospectivity for CO2 storage, and is currently undergoing a study of its future feasibility as a large scale storage site [19,20]. The Precipice Sandstone is a quartz rich, low salinity, fresh to brackish aquifer, increasing in depth towards the central part of the Basin [21,22]. The Precipice Sandstone outcrops in the north, and reaches depths of ~1236 m near the West Wandoan 1 well (Figure 1), and ~2134 m near Cabawin 1 further south. The lower Precipice Sandstone reservoir ranges from ~60-115 m thick in the central fairway, and pinches out to the west. The Evergreen Formation has been reported to contain interbedded shales, mudstone, sandstone, and carbonate cemented sections [23,24]. This formation has a total thickness of ~175 m near the West Wandoan 1 well. Generally more mineralogical and petrophysical data are available from the northern parts of the basin, with a more limited number of wells drilled in the deeper central region. The West Wandoan 1 well, for example, drilled for a small scale CO2 storage demonstration feasibility study, has reported micro-CT calculated porosities for the Evergreen Formation of 0.1-20.4%, and helium porosities of 4.6-21.5% (Figure 1) [25].
Calcite is kinetically one of the most reactive rock-forming mineral phases to CO2 bearing fluids, therefore calcite cemented zones may be the most susceptible to mineral and porosity changes [26,27]. Minerals with high available surface areas, such as clays or Fe-oxide grain coatings, have also been shown elsewhere to be the most reactive mineral phases in sandstone reservoirs or cap-rocks, demonstrating the importance of the minerals' reactive surface areas [28]. This study aims to use experimental CO2-SO2-O2-water-cap-rock reaction data to modify and validate a geochemical model at the experimental scale, using reactive surface areas as a variable. Geochemical models will subsequently be upscaled to estimate mineral and porosity alteration with a higher degree of confidence over longer timescales for common cap-rock compositions including mudstones and shales.

Materials and Methods

Drill core was sampled at 1056.10-1056.18 m KB from the Evergreen Formation of the West Wandoan 1 well (latitude −26.181622, longitude 149.812422) in the Surat Basin (Figure 1). This core depth section has been characterized mineralogically previously (Table 1) [29]. Reported reservoir temperature and pressure near this well are ~60 °C and 120 bar.
Table 1. Quantified minerals in the cap-rock from 1056.10-1056.18 m, where QEMSCAN was previously reported on two adjacent sub-plug slices [29] and is reported in area %. XRD was performed on a powdered section of the whole core, and also on the subsample after reaction, reported in wt.%. * Fe-rich chlorite/chamosite.

A 1.5 cm3 cube, a block and offcuts were cut from the core. Unstressed N2 gas permeability was measured on the cube in the vertical and two horizontal directions, with the methods described previously [30]. Brine permeability was also attempted but was below the resolution of the technique (<0 mD). Scanning electron microscopy with energy dispersive spectroscopy (SEM-EDS) was performed on uncoated block surfaces before and after reaction on a JEOL JSM-6460LA environmental SEM with a Minicup EDS, and a TM3030 with a Bruker EDS. In addition, the block was broken after reaction and the inner surfaces surveyed with SEM-EDS. Disaggregated grains collected from the bottom of the reactor were dried, fixed to a carbon stub, and also analyzed by SEM-EDS. After reaction the core offcuts were also dried and crushed, and powder X-ray diffraction (XRD) performed with the methods reported elsewhere [30].

For reaction, the core blocks and offcuts (6.27 g) were submerged in 100 mL of a low salinity water with 1500 mg/kg NaCl in reactors based on Parr vessels. The reactors and general methods have been reported previously in detail and so are only briefly described below [31]. The reactors were maintained at 60 °C, purged of air, and pressurized to 120 bar initially with inert N2 gas to perform a water-rock soak and obtain a baseline water measurement. After 7 days, the reactors were sampled and depressurized, then repressurised to 120 bar with a gas mixture containing 0.2% SO2, 2% O2 and a balance of CO2. At this temperature and pressure CO2 was in a supercritical state. The experiment was then run for 720 days and fluid sampled periodically. After the reaction, the reactor was depressurized, and the remaining fluid (referred to as quench) was sampled. Solution pH and conductivity were immediately measured on fluid sampling. The degassing of CO2 on sampling will however have resulted in the pH rising as dissolved CO2 exsolves in ex situ samples. Therefore the experimentally measured pH will be higher than the in situ and predicted pH. Sampled fluid was filtered, diluted ten times, and acidified with ultrapure nitric acid for analysis by Inductively Coupled Plasma Optical Emission Spectroscopy (ICP-OES) with a Perkin Elmer Optima 3300DV and Inductively Coupled Plasma Mass Spectrometry (ICP-MS) as described previously [32]. Selected sample aliquots were not acidified and were analyzed for bicarbonate alkalinity by titration, and sulphate and chloride concentrations by ion chromatography (ALS Environmental). The bicarbonate concentration can be expected to represent a maximum value owing to CO2 degassing. No precipitated carbonates were identified in sampled waters.
The experiment was also geochemically modelled at 60 °C using the React module of Geochemist's Workbench 9 (GWB) and a thermodynamic database based on the EQ3/6 database [33,34]. The experimental N2-water-rock soak data was used as the initial water chemistry for 100 mL of fluid, and the mass of minerals determined in the core was supplemented by SEM-EDS information for the reacted subsample (Table 2, "Exp WW1CalCem"). The methods are similar to those described in detail previously, with the fugacity of CO2 calculated for 60 °C and 120 bar from the work of Duan and Sun, and with a mass of SO2 and an O2 fugacity chosen to approximate the experiment data (e.g., the dissolved sulphate concentration) [31,35]. The input kinetic and thermodynamic parameters are given in Table 2 and the Supplementary Material and are from published sources [36-40].

Minerals were input as script files, with an Fe-rich chlorite (Fe:Mg 3:1), ordered ankerite (Ca:Fe:Mg 1:0.7:0.3), and siderite (Fe:Mg 0.9:0.1) composition most closely representing those minerals observed in drill core [31]. The reactive surface areas were modified to improve the prediction of the experimental water chemistry. The initial mineral surface areas were based on a geometric calculation from SEM observations of grain size and morphology [41]. These were decreased to account for surface coating and armouring as described previously [31]. Reactive surface areas of minerals were then increased or decreased to improve the fit to the experimental water chemistry. The minerals observed via SEM to be corroded, or to have dissolved completely during the experiment, guided the changes. Calcite mainly controlled the Ca concentration, and in this case mainly chlorite controlled the Fe and Mg concentrations. Illite, plagioclase, kaolinite and K-feldspar surface area modification additionally improved the Si, Al and K concentrations. The final reactive surface area values used are in Table 2; the pre-exponential crystal nucleation factors Γ were also modified for kaolinite and goethite, with the values used given in the Supplementary Material (Table S1). Geochemical models were also run at 60 °C for an upscaled calcite cemented cap-rock with the mineral input from XRD data for 10 kg of rock, with an amount of fluid added to occupy the pore space from the measured rock porosity (Table 2, "WW1 CalCem", and Supplementary Material), as described elsewhere [42]. Reactive surface areas were upscaled by decreasing by a factor of 10-100 based on methods described elsewhere [27,42]. The water chemistry used was from published data equilibrated with the minerals [22]. The mudstone cap-rock "WW1 MudS" input data is from 981 m drill core from the same well (West Wandoan 1), and this model was also run at the same PT conditions.

The siderite cemented and shale cap-rocks "Cab1SidCem" and "Cab1Shale" are based on drill core data from the Cabawin 1 well in the deeper part of the Basin [43]. These were run at 70 °C with a fugacity of CO2 for 200 bar pressure based on reported pressure and temperature data for this well [44]. The water chemistry was based on published water chemistry data equilibrated with the rock, and the porosities are given in the Supplementary Material [22].
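As context for how the reactive surface areas in Table 2 enter the simulations, the sketch below shows the standard transition-state-theory style rate law used by kinetic geochemical codes such as GWB, in which the dissolution or precipitation rate of each mineral scales linearly with its reactive surface area; the mineral name, rate constant and saturation values are illustrative assumptions, not values from this study.

```python
def mineral_rate(k, surface_area_cm2_per_g, mass_g, Q_over_K):
    """
    Simplified TST-style rate law (mol/s) commonly used in kinetic geochemical codes:
        rate = k * A * (1 - Q/K)
    k   : rate constant (mol/cm2/s) at the run temperature
    A   : reactive surface area (cm2) = specific area (cm2/g) * mineral mass (g)
    Q/K : ion activity product over equilibrium constant (saturation state);
          rate > 0 means dissolution, rate < 0 means precipitation (supersaturated).
    """
    A = surface_area_cm2_per_g * mass_g
    return k * A * (1.0 - Q_over_K)

# Illustrative only: halving the calcite surface area from 2 to 1 cm2/g halves
# its dissolution rate, which is the lever used to tune the fit to the measured Ca trend.
k_calcite = 1.0e-10  # assumed rate constant, mol/cm2/s
for As in (2.0, 1.0):
    print(As, mineral_rate(k_calcite, As, mass_g=100.0, Q_over_K=0.2))
```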
Calcite Cemented Caprock

The sampled drill core was previously characterized by XRD, and sub-plug slices by QEMSCAN, and contains mainly quartz, plagioclase, K-feldspar, illite and calcite (Table 1) [29,30]. Micro-CT porosities of sub-plugs from this core depth section were previously reported at 4.4-6.8%, helium porosity at 6.8%, and mercury intrusion porosity of adjacent material at 8.2% [25].

Experimental Results

The N2 permeabilities of the core cube were low at 0.96 mD vertical, 1.28 mD horizontal, and 2.55 mD horizontal.

SEM-EDS before reaction showed calcite cemented framework grains, with the calcite EDS signatures indicating trace Mn and Fe content (Figure 2). Pore filling clays including illite and Fe-Mg-chlorite were partly covered by the calcite. Trace amounts of sphalerite or ZnFeS, organic matter, Ti-oxide, and apatite were occasionally present (Figure 2E). Both a Na-plagioclase (albite), and Ca-Na-plagioclases close to oligoclase and andesine compositions, were observed.

After reaction, on the outer rock cube surface, calcite cement had been dissolved, exposing framework grains and pore filling clays (Figure 3A-D). Some chlorite and illite surfaces appeared altered; however other framework silicates did not show obvious corrosion features. A Ca-sulphate mineral was occasionally observed growing from the pore space formed by calcite dissolution, and barite crystals were mainly clustered on those precipitates (Figure 3E,F, and Supplementary Material Figures S3 and S5).
On the inner cube surface, calcite cement had also been dissolved (Figure 4A). Chlorite near the outer edges of the cube had a corroded appearance, with material precipitated on it (Figure 4D). Precipitated coatings on framework grains were present, including with a smectite-like morphology (Figure 4C). Barite crystals were also present over the surfaces (Figure 4D and Supplementary Material). A precipitated Si-Al layer covered grains on inner cube surfaces and was occasionally intermingled with barite crystals, indicating that at least part of the barite had precipitated during the reaction rather than on reactor depressurization (Figure 5A,D,E). Again a Ca-sulphate mineral had precipitated, "growing" out of the formed pore space, and was in higher abundance on the inner surfaces (Figure 5B,C). Occasionally illite had a skeletal or corroded appearance indicating that it had been altered (Figure 5F).

Disaggregated grains collected from the bottom of the reactor had Fe-oxide coatings, and occasional barite crystal coatings (Supplementary Material Figure S6). Reacted core offcuts were analyzed by XRD, which indicated a total loss of calcite, and formation of a smectite (montmorillonite) phase and kaolinite (Table 1). It should be noted however that the smaller changes may also be affected by rock heterogeneity. No Ca-sulphate minerals were detected, but these were likely below detection.
The solution alkalinity was 109.8 mg/kg as bicarbonate after the N2-water-rock soak. Ex situ measured pH initially decreased from 6.12 to 5.07 after CO2-SO2-O2 gas addition, and was subsequently buffered to 5.61 (Table 3). Solution conductivity increased after CO2-SO2-O2 gas addition, along with the concentration of several dissolved elements including Al, Ba, Ca, Fe, Si, Mn, Mg, S and Sr, which subsequently decreased (Table 3, Figures 6 and 7). Dissolved Rb increased gradually during the reaction, likely from clays (Table 4, Figure 7). Dissolved Ca and Mn were strongly correlated, from the dissolution of calcite (Figure 7). Dissolved sulphate and chloride concentrations were measured at the end of the reaction to be 1206.7 and 972.0 mg/kg respectively.

Table 3. Solution pH and conductivity (mS/cm) during the cap-rock reaction; note this was measured ex situ immediately on sampling. Dissolved element concentrations (mg/kg), and the associated detection limit (DL) and effective detection limit multiplied by the dilution factor (DL*DF). <DL indicates the measurement was below detection. Time zero refers to the sample after the N2-water-rock soak. Quench refers to the residual fluid in the reactor after depressurization. The associated error for the analyses by ICP-OES is <5%.

Table 4. Water chemistry sampled during the cap-rock reaction, with dissolved element concentrations (µg/kg), and the associated detection limit (DL) and effective detection limit multiplied by the dilution factor (DL*DF).
<DL indicates the measurement was below detection. Quench refers to the residual fluid in the reactor after depressurization. The associated error for the analyses by ICP-MS is <10%.

Modelling the Experiment

The modified mineral reactive surface areas used to achieve an improved approximation to the experimental results are given in Table 2 ("AsMod"). The simulated in situ pH was calculated to decrease to 3.8 and then increase to 4.5 (Figure 6A). The experimental dissolved Ca and sulphate trends were captured by calcite dissolution and gypsum precipitation (Figure 6B). The calcite reactive surface area necessitated a decrease to 1 cm2/g to approximate the experimental data, which is reasonable given the cementing nature of the calcite would have decreased its available surface area. The predicted sulphate concentration after 720 days is lower than the measured concentration assuming all measured dissolved S was present as sulphate. However, it is in better agreement with the measured sulphate concentration at 720 days; this indicates that part of the measured total dissolved S existed as a different S species, or that the model overestimates gypsum precipitation. Dissolution of a small amount of chlorite, illite, plagioclase, and pyrite/sphalerite was also needed to capture the experimental data, with precipitation of barite, kaolinite, and goethite (Supplementary Material Figure S7). The reactive surface areas of chlorite and illite were increased to 7000 cm2/g to improve the Mg, Al, Fe and Si prediction overall, with dissolution of plagioclase, kaolinite, and K-feldspar additionally contributing to dissolved Si, Al and K. Dissolved concentrations of lower concentration elements such as Si and Fe were however harder to replicate, likely owing to the presence of Fe also in the calcite structure, which was not in the calcite used in the model, and potentially the fast release of Fe and Si from ion exchange with clays, also not included in the model. Occasional amorphous silica appeared to be present mixed with calcite and may have been another Si source in the experiment. The Si and Al trends were replicated by predicted precipitation of kaolinite (Figure 6C,D). Precipitation of kaolinite appeared to be initially overestimated, with predicted concentrations of Si and Al lower than the experimental data (Supplementary Material Figure S7). The kaolinite script file was subsequently modified to change the pre-exponential crystal nucleation factor (Γ) from 2E10 to 2E11 to improve the prediction (Supplementary Material Table S1). Fe-oxide coatings were observed in the experiment, and the decreasing experimental dissolved Fe trend was most closely replicated with the precipitation of goethite. Again goethite precipitation appeared to be initially overestimated, with a much lower predicted Fe concentration than observed experimentally. The goethite reactive surface area was decreased to 0.0001 cm2/g and, in the script file, Γ was modified from 1E10 to 9E10 to improve the predicted trend (Supplementary Material Table S1). Hematite was also saturated in the model; however allowing its precipitation resulted in a very low predicted Fe concentration (Supplementary Material Figure S7). Generally the precipitation of amorphous or oxyhydroxide Fe minerals would be expected on the short timescale of the experiment, with hematite precipitation over longer timescales, therefore goethite was used for the experiment simulation [45]. Additionally, a smectite (Na-nontronite) was saturated in the model and smectite precipitation was observed in the experiment. Allowing
its precipitation resulted in a very low predicted Si concentration (Supplementary Material Figure S8). This indicates that a small amount of smectite may have also precipitated in the experiment.

Overall the net predicted change in mineral volume was a slight net increase of 1.2% of the initial total volume, indicating potentially a slight decrease in porosity.

Calcite Cemented Cap-Rock

During the upscaled calcite cemented cap-rock reaction with CO2-SO2-O2, pH was buffered to 4.93 after 30 years (Figure 8A). Calcite, andesine, pyrite, chlorite and albite were the main minerals dissolving (Figure 8B), with initially gypsum and subsequently anhydrite precipitating along with kaolinite, smectite, hematite, alunite and barite. The amounts of mineral change (<20 cm3) were however small compared to the total rock volume (3735 cm3). Overall there was a slight net increase in mineral volume, owing to the higher molar volume of anhydrite, with only a 0.1% change, indicating a slight loss of porosity.

On reaction with pure CO2, pH was buffered to 5.32 (Figure 8C). Calcite initially dissolved and subsequently re-precipitated as the pH was buffered. Andesine, chlorite, and albite also dissolved; smectite, ankerite, siderite and kaolinite precipitated (Figure 8D). Again there was a very slight increase in mineral volume, with a 0.01% change, indicating overall a negligible loss of porosity.
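To make the link between the reported mineral volume changes and porosity explicit, the short calculation below shows how a net change in mineral (solid) volume translates into a porosity change for the quoted rock volume; the starting porosity and the exact volume increment used here are rough illustrative assumptions consistent with the numbers quoted in the text, not reported values.

```python
def new_porosity(bulk_volume_cm3, porosity_initial, delta_mineral_volume_cm3):
    """Porosity after a net change in mineral (solid) volume within a fixed bulk volume."""
    pore_volume = bulk_volume_cm3 * porosity_initial - delta_mineral_volume_cm3
    return pore_volume / bulk_volume_cm3

# Illustrative: a ~4 cm3 net gain in mineral volume (~0.1% of a 3735 cm3 rock volume)
# reduces an assumed 10% starting porosity only marginally.
phi = new_porosity(bulk_volume_cm3=3735.0, porosity_initial=0.10, delta_mineral_volume_cm3=3.7)
print(round(phi, 4))  # ~0.099
```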
The pH was buffered to 5.19 on reaction with CO2-SO2, with again chlorite, calcite, andesine and albite mainly dissolving, along with siderite and ankerite. Smectite, ankerite, siderite and kaolinite were again formed, along with anhydrite and pyrite (Figure 9E,F).

Siderite Cemented Cap-Rock

The siderite-cemented cap-rock was based on core from the deeper Cabawin 1 well. Geochemical models for this well were run out to 100 years, since deeper sections of the Precipice Sandstone are a more likely target for longer term CO2 storage. The predicted pH was buffered to 5.08 with CO2-SO2-O2 by dissolution of albite, siderite, K-feldspar and chlorite after 100 years (Figure 10A,B). Smectite, kaolinite, alunite, and hematite were predicted to be precipitated. With pure CO2, albite, chlorite, K-feldspar and siderite were altered to smectite, siderite and chalcedony/amorphous silica, with pH increasing to 5.10. For the reaction with SO2, the pH increased from 30 to 100 years to 4.94 (Figure 10E). Albite, siderite, K-feldspar, chlorite and illite dissolved, with siderite and illite later re-precipitating (Figure 10F). Smectite, chalcedony, Fe-siderite, kaolinite, pyrite and barite were also predicted to precipitate.
Experimental Results and Relevant Comparison Studies

In the experimental reaction of the calcite cemented cap-rock, CO2 dissolved to form carbonic acid, and the co-injected SO2 and O2 dissolved to form a stronger sulphuric acid (Equation (1)). Oxidised Fe from chlorite dissolution in the presence of co-injected O2 precipitated as goethite (Equation (5)). Barite precipitation was observed, with the source of the Ba likely from the calcite or clay structure, and sulphate from the co-injected SO2 (Equation (6)).
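The numbered reaction equations referred to above are not reproduced in this extract; the block below gives plausible textbook forms of the acid-generation, goethite and barite reactions described, written from standard aqueous geochemistry rather than taken from the original equation list, so the stoichiometries should be treated as illustrative.

```latex
% Plausible forms of the reactions described in the text (not the original numbered equations)
\begin{align}
\mathrm{SO_2 + \tfrac{1}{2}O_2 + H_2O} &\rightarrow \mathrm{2H^+ + SO_4^{2-}} && \text{(sulphuric acid generation)}\\
\mathrm{Fe^{2+} + \tfrac{1}{4}O_2 + \tfrac{3}{2}H_2O} &\rightarrow \mathrm{FeOOH_{(goethite)} + 2H^+} && \text{(oxidation of Fe released from chlorite)}\\
\mathrm{Ba^{2+} + SO_4^{2-}} &\rightarrow \mathrm{BaSO_{4(barite)}} && \text{(barite precipitation)}
\end{align}
```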
There are relatively few published experimental studies reacting rock core with impure CO2 containing SO2 or O2, compared to pure CO2 studies. Renard and co-workers reacted a dolostone with saline brine and a gas mixture of CO2 with small amounts of SO2, O2, N2, and Ar at 150 °C and 150 bar [46]. They observed complete dissolution of calcite (in agreement with our work), with partial dissolution of dolomite and clays, and pyrite oxidation to hematite. Vermiculite clay and barite were precipitated, along with a Ca-sulphate phase, anhydrite, which is more stable than gypsum at the higher temperature employed. With pure CO2 reaction they instead observed formation of the smectite beidellite (from illite alteration), in agreement with the pure CO2 cap-rock models performed here, where beidellites were predicted to be formed. Clay and feldspar rich cap-rocks have also been reacted with CO2 containing SO2 and O2 and a low salinity brine at 60 °C and 120 bar [47]. In that case the pH initially decreased to ~2, as only small amounts of carbonate minerals were present to dissolve and buffer pH; chlorite and other silicates were also observed to dissolve. Precipitation of clays and Fe-oxide minerals contributed to a decrease in the meso-porosity of the cap-rocks after reaction. A calcite rich mineral assemblage was reacted at 110 °C with CO2 and SO2 by Chopping and Kaszuba [48]. They also noted a strong pH buffering from the dissolution of calcite, either with or without the co-injection of SO2, in agreement with the geochemical modelling in this study. The precipitation of the Ca-sulphate anhydrite trapped dissolved S species at the higher temperature employed in their experiments. The precipitation of gypsum or anhydrite observed in studies co-injecting SO2 may be significant, as these minerals have high molar volumes and can reduce porosity, potentially sealing cap-rock; additionally, they provide a sink for mineral trapping of SO2.

Comparison of Modelling Outputs to Natural Analogue or Field Trial Observations

Modifying and validating geochemical models with experimental or field data by the alteration of parameters such as mineral reactive surface areas has been performed by several authors, while other researchers have employed adjustments to other model parameters, e.g., using an adjustable incongruent factor, especially where the exact dissolving mineral composition is not known or not available for input [4,27,31,49]. Since model outputs are subject to the amount of available input data and user selection, comparing model outputs to natural analogues of CO2 storage is useful to determine if the outputs are reasonable.

In the longer term cap-rock reaction models performed here with pure CO2 or SO2-CO2, calcite was dissolved, with gypsum or anhydrite formed when SO2 was co-injected. K-feldspar and plagioclase were converted to kaolinite and chalcedony (Equations (7) and (8)), or smectite was formed from, e.g., plagioclase (Equation (9)).

In the long-term cap-rock models with CO2, SO2 and O2, the oxidizing conditions resulted in predicted formation of minerals containing oxidized Fe3+, including hematite and nontronite, and also sulphate minerals including alunite (Equations (12) and (13)).
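Again, the numbered equations are not included in this extract; a commonly cited form of the feldspar-to-kaolinite reaction of the kind referenced for Equations (7) and (8) is sketched below, written from standard weathering stoichiometry as an illustration rather than copied from the original.

```latex
% Illustrative K-feldspar alteration to kaolinite plus silica under acidic, CO2-charged conditions
\begin{equation}
\mathrm{2KAlSi_3O_8 + 2H^+ + H_2O \rightarrow Al_2Si_2O_5(OH)_4 + 2K^+ + 4SiO_{2(aq)}}
\end{equation}
```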
Sites of natural CO2 accumulation or "natural analogues" give unique insights into the long term CO2-water-rock reactions in various reservoir or cap-rock types [50]. In the siliciclastic Ladbroke Grove Field, Australia, Fe-rich chlorite was altered to siderite, and calcite or plagioclase altered to ankerite/Fe-dolomite, by naturally occurring CO2 and CH4 [51,52]. Kaolinite and quartz were also precipitated, with local porosity maintained or slightly increased. Higgs and co-workers demonstrated that while the reservoir showed localized porosity generation, tighter rocks (such as cap-rocks) in the CO2-rich Kapuni Field, New Zealand, tended to undergo kaolinite or carbonate cementation, reducing porosity and permeability [53]. They observed dissolution of K-feldspar and plagioclase, and kaolinitisation of mica. Formation of illite/smectite, siderite, ankerite, calcite, ferroan dolomite and quartz cement occluded porosity.

Exhumed sandstones in or near Green River, USA, have been altered by leaking CO2 ± CH4 or H2S bearing fluids [28]. Alteration of K-feldspar to kaolinite and illite, dissolution of Fe-hydroxide grain coatings, and precipitation of ferroan carbonates (Fe-dolomite, ankerite or siderite) and Fe-oxide were reported in the Navajo Sandstone, Utah, by CO2-CH4 bearing fluids [50]. CO2-H2S leakage at Green River has altered cap-rocks, with the dissolution of K-feldspar, dolomite and hematite locally increasing porosity [6]. The precipitation of ferroan dolomite, gypsum, Fe-oxides, pyrite, Cu-sulphides and illite/smectite over centimeter scales has then decreased the porosity.

The Madison Limestone on the Moxa Arch, USA, has been exposed to supercritical CO2, H2S and dissolved sulphate and hydrogen sulphide over 50 million years [54]. Secondary anhydrite, pyrite, and native S have reportedly filled porosity in dolomite, and calcite is also present. Analcime and dolomite re-precipitation was also reported. The CarbFix project in Iceland has sequestered CO2 and H2S from a geothermal power plant into basalts [17]. Mineral trapping of CO2 as calcite and siderite has been reported, along with trapping of S as pyrite. Chalcedony, analcime and kaolinite were also saturated in sampled fluids.
Our predicted mineral alterations are in good agreement overall with the natural analogue studies above, predicting alteration of plagioclase, chlorite, and carbonates, and precipitation of ferroan carbonates, kaolinite, smectite, pyrite and gypsum/anhydrite ± Fe-oxides. The long-term predictions of cap-rock reactivity reported in our study did not result in significant predicted changes to porosity, which is expected to be favorable for cap-rock integrity. However, while mineral trapping of CO2 as siderite or ankerite was predicted for pure CO2 or CO2-SO2 reactions, when O2 was present in the gas mixture mineral trapping was generally not predicted. Instead smectites, sulphate minerals and Fe-oxides were predicted to form. In simulations of a reservoir scale CO2 injection field trial (CO2 containing 50 ppm O2) into the Frio Sandstone, dissolution of calcite and oxyhydroxides and precipitation of dolomite, ankerite, kaolinite, nontronite and montmorillonite were predicted [55]. The predicted changes in porosity were also low over 1000 years, at ~0.002 vol %, in reasonable agreement with our study. Our results are also generally in agreement with Gaus and co-workers, who predicted only a very small decrease in porosity after CO2 reaction of a clayey cap-rock at Sleipner, which may improve the sealing capacity in the lowest meters [56]. Seismic observations after a CO2 injection field trial at Sleipner, and associated modelling, have shown growing CO2 accumulations under thin (~1 m) mudstone beds in the Utsira sand [57]. This is also evidence for the field scale integrity of mudstone toward CO2 reaction. However, reactive transport modelling of CO2 and H2S or SO2 reaction with a saline sandstone reservoir by Xu and co-workers resulted in more significant predicted changes to porosity, with increases in the reservoir, and decreases in the far field from precipitation of carbonates, alunite, anhydrite and pyrite [58]. Although the magnitude of porosity change is less in our study (which may be somewhat site specific), the minerals predicted to form with co-injection of impurities are in agreement.
Significance

The cap-rock lithologies studied here are generally representative of many cap-rocks worldwide. The experimental results, and the long-term modelled cap-rock reactions with dissolved pure CO2, CO2-SO2 or CO2-SO2-O2, show mineral alterations, but no significant net porosity changes are predicted. This is favorable, as it indicates CO2 seal integrity is not likely to be significantly affected by mineral corrosion from acid gases. It should be noted however that while initial generalizations can be made, changes will be site specific and predictions are limited by available data. This and other studies show that in more mineralogically reactive formations, co-injected SO2 can be expected to be sequestered as minerals including pyrite, anhydrite/gypsum, barite, and alunite. For the Evergreen Formation cap-rocks studied here, predicted siderite and ankerite precipitation mineral trapped CO2 for pure CO2 or CO2-SO2 reactions. However, for the CO2-SO2-O2 reactions siderite and ankerite were not predicted; instead oxidized Fe minerals, Fe-oxides and nontronite, were formed. In sites where ferroan carbonates, rather than calcite, are predicted to be the mineral traps for CO2, this suggests that a limit would be needed on the co-injected O2 content to optimize mineral trapping. In all predictions performed here smectites were formed. These have relatively high CO2 sorption capacities, and their formation may be favorable for providing additional storage potential through gas sorption.

Potential Issues, Limitations, and Future Work

Although our study predicts only minor changes in net porosity, minerals were altered. The alteration of minerals may result in rock mechanical property changes. Hangx and co-workers (2013) reported no change in mechanical properties or rock strength of calcite cemented cap-rock, as the framework grains were not dissolved on the experimental CO2-brine reaction timescale [59]. However, they noted that longer term dissolution of silicates or precipitation of minerals may affect these parameters. In addition, other parameters such as cap-rock hardness could be altered by, e.g., replacement of K-feldspar (Mohs hardness of 6) by kaolinite (hardness of 2). Arman and co-workers recently reported a decrease in sandstone or siltstone drill core toughness and hardness after CO2-brine reaction, which could affect the mechanical properties of cap-rock [60]. Other potential site specific questions related to storage reservoirs have not been addressed here, e.g., acidification near the wellbore, release of trace metals from pyrite oxidation or Fe-oxide reduction, fines migration and its effect on permeability, etc.
Future work is suggested on coupled geochemical, petrophysical and mechanical parameter changes to reservoir or cap-rocks via experimental CO2-fluid reactions, field injection studies, and natural analogue studies, especially for O2 co-injection. Further studies of natural analogue sites, especially those where S or O2 bearing fluids were present with CO2, are suggested to understand alterations on a geological timescale. The Surat, Bowen, and Eromanga Basins have recently been shown to have previously undergone natural CO2 alteration, especially around faults; this deserves further work to understand the long term potential for CO2 storage mineral trapping and metal sequestration [61,62]. The models performed here were limited by the availability of data on the Evergreen Formation in the central and southern Surat Basin. For the Surat Basin, further data are needed on the reservoir and cap-rock, especially in the deeper central basin areas, including mineralogies, porosities and permeabilities of drill core to populate predictive models. Uncertainties also remain around the expected temperature in the region of the plume, since several field studies globally have reported significant cooling below reservoir temperature which persists over long time scales; this would have an effect on the reaction rates.

Conclusions

• Experimental CO2-SO2-O2 reaction of calcite cemented cap-rock resulted in calcite dissolution and chlorite corrosion, pH buffering, and gypsum, barite, goethite and clay precipitation.
• To model the experimental data, the reactive surface area needed for calcite cement was low at 1 cm2/g, and high for silicates including plagioclase (300 cm2/g) and the clays chlorite and illite (7000 cm2/g).
• Upscaled longer-term calcite cemented, siderite cemented, mudstone or shale cap-rock reactivity models predicted minimal net changes to porosity, favorably indicating cap-rock integrity was likely not significantly affected at these conditions.
• Smectite formation was predicted in all the long-term reactions; smectite has a high CO2 sorption capacity, favorable for trapping.
• Mineral trapping of CO2 as siderite and ankerite was only predicted to occur with pure CO2 or CO2-SO2. With O2 present, smectites, sulphate and oxide minerals were instead predicted to form. A limit on the co-injected O2 content may be needed to optimize CO2 mineral trapping, the most secure form of storage.

Supplementary Materials: The following are available online at http://www.mdpi.com/2076-3263/8/7/241/s1, Figure S1: SEM image and EDS element maps after reaction, Figure S2: SEM and EDS spectra of corroded chlorite, Figure S3: SEM images and EDS spectra of the inside surface of the block after reaction, Figure S4: SEM images and EDS spectra of the exposed surface after reaction, Figure S5: SEM images and EDS spectra of precipitated material, Figure S6: SEM images and EDS spectra of disaggregated grains, Table S1: Geochemical modelling parameters, Table S2: Cap-rock porosities, Figure S7: Additional experimental models, Figure S8: Additional experimental models.

Author Contributions: J.K.P. conceived and designed the experiments; J.K.P. performed the experiments; J.K.P. and G.K.W.D. analyzed the data and performed SEM-EDS; G.K.W.D. performed permeability and XRD measurements; J.K.P. performed geochemical modelling and wrote the manuscript.

Funding: Part of this work was funded by the UQ Surat Deep Aquifer Appraisal Project (UQ-SDAAP) 2016001337.
For their contribution and support, UQ would like to acknowledge: the Commonwealth Government of Australia and ACA Low Emissions Technology Pty Ltd. (ACALET).

Acknowledgments: Part of this work was funded by the UQ Surat Deep Aquifer Appraisal Project (UQ-SDAAP). For their contribution and support, UQ would like to acknowledge: the Commonwealth Government of Australia and ACA Low Emissions Technology Pty Ltd. (ACALET). The information, opinions and views expressed here do not necessarily represent those of The University of Queensland, the Australian Government or ACALET. Researchers within or working with the UQ-SDAAP are bound by the same policies and procedures as other researchers within The University of Queensland, which are designed to ensure the integrity of research. A. Garnett, S. Golding, J. Undershultz and A. La Croix are acknowledged for support and helpful discussions. D. Kirste is thanked for providing mineral reaction scripts and helpful discussions. M. Mostert of the UQ environmental geochemistry lab is acknowledged for performing ICP-OES and MS, and D. Biddle is thanked for assistance with experiments and permeability measurements. We acknowledge the facilities, and the scientific and technical assistance, of the Australian Microscopy and Microanalysis Research Facility at the Centre for Microscopy and Microanalysis, The University of Queensland. M. Grigorescu, J. Esterle, R. Heath and CTSCo Pty Ltd. are thanked for access to drill core and data. Two anonymous reviewers are thanked for their comments that significantly improved this manuscript.

Conflicts of Interest: The authors declare no conflict of interest. The founding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Figure 1. (A) Map of the Surat Basin, Australia, also showing the Bowen Basin and Clarence Moreton Basin. The West Wandoan 1 and Cabawin 1 wells are marked in green; (B) Generalized stratigraphic column showing the Precipice Sandstone and overlying Evergreen Formation.

Figure 2. SEM images of the 1053 m calcite cemented cap-rock before reaction (A) surface view of the framework grains with calcite cement; (B) and (C) quartz and plagioclase framework grains cemented by calcite; (D) EDS spectrum of the calcite cement containing Mn and possibly Fe; (E) plagioclase with areas of illitisation, and FeZn sulphide; (F) chlorite surface. Qz = quartz, Cal = calcite, Chl = chlorite, Pl = plagioclase, Sp = sphalerite or FeZn sulphide, Ilt = illite.
Figure 3. SEM images of the exposed surface of the 1053 m calcite cemented cap-rock sub-sample after reaction (A) surface view of the framework grains with calcite cement dissolved; (B) quartz and silicate framework grains remaining with calcite cement dissolved; (C) chlorite surface with alteration; (D) quartz, plagioclase and K-feldspar grains remain with open porosity formed; (E) gypsum growing out of pore space; (F) precipitated gypsum and bright barite crystals. Chl = chlorite, Pl = plagioclase, Qz = quartz, Kfs = K-feldspar, Gp = gypsum/Ca-sulphate.

Figure 4. SEM images of the inner surface of the cap-rock sub-sample after reaction (A) surface view of a Ca-Na-plagioclase (andesine) grain with surrounding calcite cement dissolved; (B) Ca-Na-plagioclase surface; (C) smectite precipitate layer; (D) corroded chlorite surface with precipitated material in the center; (E) corroded chlorite surface and bright barite crystal; (F) pore filling clay revealed by calcite dissolution, and bright barite precipitates.

Figure 5. SEM images of the inner surface of the cap-rock sub-sample after reaction (A) surface view of a Si-Al-containing precipitated layer on grains; (B) gypsum growing in pore space; (C) gypsum crystals; (D) Si-Al containing precipitated layer; (E) barite crystals on and in the Si-Al precipitated layer; (F) skeletal pore filling clay. Gp = gypsum/Ca-sulphate, Als = precipitated aluminosilicate layer.
Figure 6.Modelled data as lines and experimental data shown as symbols for calcite cemented cap-rock reaction (A) Solution pH; (B) concentration of major ions; (C) concentration of minor ions; (D) predicted change in mineral mass in grams (g) where positive values indicate precipitation, and negative values dissolution.Note experimentally measured dissolved Si and S were converted to SiO 2 and sulphate for comparison to the modelled outputs, assuming all of the measured dissolved S was present as sulphate.Geosciences 2018, 8, x FOR PEER REVIEW 12 of 24 Figure 7 . Figure 7. Experimental water chemistry during reaction of the cap-rock (A) solution electrical conductivity; (B) Dissolved concentrations of Sr, Mn, Ba and Ti; (C) correlation of dissolved Ca and Mn; (D) Concentrations of dissolved Nb, Ga, Ge, and Rb. Figure 8 . Figure 8. Up-scaled calcite cemented cap-rock, reaction over 30 years.Reaction with CO 2 -SO 2 -O 2 (A) predicted pH; (B) predicted change in mineral volumes (delta cm 3 ), where positive values indicate mineral precipitation, and negative values mineral dissolution; Reaction with CO 2 (C) predicted pH; (D) predicted change in mineral volumes; Reaction with CO 2 -SO 2 (E) predicted pH; (F) predicted change in mineral volumes. Figure 9 . Figure 9. Up-scaled mudstone cap-rock reaction over 30 years.Reaction with CO 2 -SO 2 -O 2 (A) predicted pH; (B) predicted change in mineral volumes (delta cm 3 ) where positive values indicate precipitation, and negative values dissolution; Reaction with CO 2 (C) predicted pH; (D) predicted change in mineral volumess; Reaction with CO 2 -SO 2 (E) predicted pH; (F) predicted change in mineral volumes. Figure 10 . Figure 10.Up-scaled siderite cap-rock reaction over 100 years.Reaction with CO 2 -SO 2 -O 2 (A) predicted pH; (B) predicted change in mineral volumes (delta cm 3 ) where positive values indicate precipitation, and negative values dissolution; Reaction with CO 2 (C) predicted pH; (D) predicted change in mineral volumes; Reaction with CO 2 -SO 2 (E) predicted pH; (F) predicted change in mineral volumes. Figure 11 . Figure 11.Up-scaled shale cap-rock reaction over 100 years.Reaction with CO 2 -SO 2 -O 2 (A) predicted pH; (B) predicted change in mineral volumes (delta cm 3 ), where positive values indicate precipitation, and negative values dissolution; Reaction with CO 2 (C) predicted pH; (D) predicted change in mineral volumes; Reaction with CO 2 -SO 2 (E) predicted pH; (F) predicted change in mineral volumes. Table 2 . Reactive surface areas and mineral proportions used in geochemical models.* As are the initial reactive surface areas assigned to each mineral.Asmod are the modified reactive surface areas to model the experiment data.Asres are the up-scaled reactive surface areas.# Used for precipitation only.
v3-fos-license
2017-06-21T23:05:03.803Z
2013-03-07T00:00:00.000
10639138
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmcmusculoskeletdisord.biomedcentral.com/track/pdf/10.1186/1471-2474-14-82", "pdf_hash": "55b58e4aad26f13ac763a9803b10abf4e6e63592", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44004", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "sha1": "284b7c3ffd94dcf6081cb74556ab9e8679a771b9", "year": 2013 }
pes2o/s2orc
Effects of a progressive aquatic resistance exercise program on the biochemical composition and morphology of cartilage in women with mild knee osteoarthritis: protocol for a randomised controlled trial Background Symptoms associated with osteoarthritis of the knee result in decreased function, loss of working capacity and extensive social and medical costs. There is a need to investigate and develop effective interventions to minimise the impact of and even prevent the progression of osteoarthritis. Aquatic exercise has been shown to be effective at reducing the impact of osteoarthritis. The purpose of this article is to describe the rationale, design and intervention of a study investigating the effect of an aquatic resistance exercise intervention on cartilage in postmenopausal women with mild knee osteoarthritis. Methods A minimum of 80 volunteers who meet the inclusion criteria will be recruited from the local population through newspaper advertisements. Following initial assessment volunteers will be randomised into two groups. The intervention group will participate in a progressive aquatic resistance exercise program of 1-hour duration 3 times a week for four months. The control group will be asked to maintain normal care during this period. Primary outcome measure for this study is the biochemical composition of knee cartilage measured using quantitative magnetic resonance imaging; T2 relaxation time and delayed gadolinium-enhanced magnetic resonance imaging techniques. In addition, knee cartilage morphology as regional cartilage thickness will be studied. Secondary outcomes include measures of body composition and bone traits using dual energy x-ray absorptiometry and peripheral quantitative computed tomography, pain, function using questionnaires and physical performance tests and quality of life. Measurements will be performed at baseline, after the 4-month intervention period and at one year follow up. Discussion This randomised controlled trial will investigate the effect a progressive aquatic resistance exercise program has on the biochemical composition of cartilage in post-menopausal women with mild knee osteoarthritis. This is the first study to investigate what impact aquatic exercise has on human articular cartilage. In addition it will investigate the effect aquatic exercise has on physical function, pain, bone and body composition and quality of life. The results of this study will help optimise the prescription of aquatic exercise to persons with mild knee osteoarthritis. Trial Registration ISRCTN65346593 Background Osteoarthritis (OA) of the lower limb is a leading cause of decreased function and quality of life [1]. It has been estimated that the prevalence of symptomatic OA of the knee is between 7-33% with an increase in prevalence with age and is the most common site of symptomatic OA [2][3][4][5][6]. Early signs of OA in articular cartilage, which is constituent for the initiation and progression of OA, are characterised with loss of proteoglycans, breakdown of the collagen matrix and increased water content [7]. As the disease progresses there is fibrillation of the cartilage, changes in the subchondral bone, formation of osteophytes and thickening of the synovium [8][9][10][11] and as such OA is considered a whole joint disease. 
These modifications within the joint lead to the gradual development of clinical symptoms such as stiffness, decreased range of motion and pain [12] which cause a decrease in joint proprioception [13] and inhibits muscle activation [14,15] leading to a decrease in activity. This disuse results in a lowering of aerobic capacity, muscle strength and muscle mass and ultimately a decrease in functional capacity and increased dependence [16,17]. Additionally, reduced muscle strength is a risk factor for future pain [17], self-reported knee instability [18] and increased risk of falling [19]. These in combination cause the extensive social and medical costs to society as a direct or indirect result of OA. Although there is no known cure for OA the diseaserelated factors such as impaired muscle function and reduced aerobic fitness can be improved and maintained with therapeutic exercise [20,21]. Previous systematic reviews have demonstrated that exercise has positive effects on pain and function for people with symptomatic OA of the knee [21][22][23] and is recommended as one of the primary non-pharmaceutical treatment modalities in current OA guidelines [24][25][26][27][28][29]. Exercising in water is also strongly recommended in these guidelines. There is evidence to suggest that therapeutic aquatic exercise has a short term positive effect on pain and function in persons with OA of knee and/or hip similar to that of land training [30,31]. There is good evidence to support the use of strength exercises in the management of symptoms resulting from OA [32] however, there is conflicting evidence that therapeutic aquatic exercises can improve strength of lower limb muscles in persons with OA [33][34][35][36][37][38][39][40]. It is thought that the benefits from aquatic exercise are primarily a result of the decreased effects of gravity. Buoyancy reduces compressive and shear forces on joints and thus offers a comfortable training medium for patients with OA [41]. Previously, one restriction in OA research was the lack of non-invasive in vivo techniques to quantify the structure and acute changes in cartilage. Advances in magnetic resonance imaging (MRI) have made mapping of the articular cartilage and loading related changes possible [42]. The "delayed Gadolinium Enhanced MRI of Cartilage" (dGEMRIC) technique utilizes a paramagnetic contrast agent gadolinium (Gd-DTPA 2-) to detect early reduction of glycosaminoglycan (GAG) from the matrix, a phenomenon considered to represent the onset of the degenerative process of cartilage [43]. Measurement of T2 relaxation time, sensitive to degeneration of tissue collagen and the orientation of collagen fibres in the extracellular matrix, has been developed to detect early degeneration or senescent changes of cartilage [44,45]. In addition, the assessment of morphological properties from three-dimensional MRI measurements enables assessment of tissue changes at a macroscopic scale [46] which have been found to be reliable, responsive and valid methods for mapping the volumetric data of articular cartilage [47][48][49]. There is still a lack of evidence that human cartilage can adapt to mechanical loading in a similar way to other tissues such as bone and muscle. Animal studies have suggested that physical exercise can improve tissue integrity by increasing the GAG content and indentation stiffness in load bearing cartilage [50,51]. In a crosssectional study Tiderius et al. 
[52] concluded, based on dGEMRIC measurements, that GAG content was higher in regularly exercising individuals than in sedentary subjects. Additionally, observations by Teichtahl et al. [53] suggest that vigorous physical activity is associated with a reduced rate of patella cartilage volume loss in asymptomatic subjects. To date, only one randomised intervention study investigating the direct effect of exercise on biochemical composition of human cartilage [54] has been published. Roos et al. [54] reported a positive effect of a moderate four months exercise on the GAG content, measured with dGEMRIC in subjects with high risk of knee OA. Another study by Cotofana et al. [55] provides no evidence that a 3-month exercise intervention in untrained middle-aged women can significantly alter cartilage morphology in the knee joint. Furthermore, the optimal type or intensity of exercise for improvement in cartilage is not known and longitudinal effects of training are needed to determine the exercise response once OA is established. In particular, there are no studies investigating the effect non-impact training such as therapeutic aquatic exercise has on the structures related to and progression of OA in the knee joint. Therefore we plan to investigate the effects of an intensive aquatic resistance exercise program on the biochemical composition and morphology of the knee cartilage as well as its effect on physical function in postmenopausal women with mild knee osteoarthritis. In addition, we plan to discover if the possible benefits of exercise on cartilage, symptoms and physical function can be maintained one year after training period. The purpose of this article is to describe the rationale, design and intervention of a study investigating the effect an aquatic resistance exercise intervention has on the cartilage in postmenopausal women with mild knee osteoarthritis. Study design The design of this study will be a 4-month randomised controlled exercise intervention study (RCT) with a 16month follow up (Trial registration: ISRCTN65346593). After baseline measurements the voluntary participants will be randomly assigned into the two arms of the study, an aquatic resistance strength training group and a control group. All the outcome measurements will be performed at baseline, after the 4-month intervention and at follow up 12 months after cessation of training. Participants and selection criteria Volunteer postmenopausal women, between the ages of 60-68 year-old, will be recruited through a series of local newspaper advertisements and will be gathered from the county of Central Finland which has a population of approximately 275 000. Inclusion eligibility, (see below), will be initially assessed using a structured telephone interview. The telephone questionnaire includes questions concerning degree of knee pain, current level of physical activity and past medical history. Suitable participants will be taken forward and they will undergo weight bearing x-ray imaging of both knees. An experienced radiologist and orthopaedic physician will assess the images grading the degree of OA in the tibiofemoral and patellofemoral joints using the Kellgren-Lawrence grading (K/L 0-IV) [56]. Those participants who have a KL score of I (possible osteophytes) or II (definite osteophytes, possible joint space narrowing), will be included in the next stage of eligibility assessment and undergo a medical and physiotherapy screening. 
At this point any possible physical or medical limitations to full participation in the intervention will be assessed e.g. severely restricted joint range of movement (ROM), excessive laxity of knee joint, possible physical disabilities and abnormalities found from resting echocardiogram. Subjects will be excluded if they have at least one of the following criteria; BMI > 34, resting pain in knee VAS > 50/100, known loose particles in knee joint, acute inflammation in knee joint, knee intra-articular steroid injection in previous 3 months or oral steroid medication treatment in the previous 12 months, undergoing treatment for osteoporosis or T-score for femoral neck bone mineral density (BMD, g/cm 2 ) lower than −2.5 i.e. indicating osteoporosis as measured with DXA [57][58][59], previous cancer or radiotherapy, suffer from type I or II diabetes, cardiac disease, diagnosed rheumatic disease (other than OA), undergone surgical procedure to knee (excluding menisectomy or arthroscopy if over 12 months ago) or joint replacement surgery in lower limbs. Additional exclusion criteria are problems that would prevent MRI imaging, including electronic or magnetic implants e.g. pace maker, metal within body e.g. internal bone fixations, artificial aortic heart valve, metal particles in eyes, large tattoos on lower limb, claustrophobia or possible allergy to the contrast medium. Further, fasting blood samples will be taken to analyse Krea to ensure kidney function for normal removal of contrast medium from the body. All those participants fulfilling all the inclusion criteria will be included into the study and undergo the baseline measurements. Figure 1 shows the flow chart describing the selection and measurement procedure for the whole study. Sample size The sample size and power calculations have been estimated for the primary end points of this study, i.e. the dGEMRIC and T2 variables. Based on data from Roos et al. [54] and Tiderius et al. [55] it is estimated that 30 subjects are needed, at 80% power, to detect a mean ± SD difference of 40 ± 40 msec in the dGEMRIC between groups [54]. It is estimated that dropout rate will be about 20% at the 16 months follow up, consequently at least 70 subjects will need to be recruited. Randomisation and blinding The subjects will be randomly allocated into either of the two arms of the study by an external statistician blinded for the intervention and study participants and will only be provided with a randomisation number for each participant and severity of OA in knee according to x-ray classification. A computer generated block randomisation of size of ten, stratified according to Kellgren-Lawrence grading 1 and 2, will be used to ensure equal distribution of severity of OA within each group and equal group size. As with all exercise intervention studies blinding of the subject from the intervention is not possible. Researchers (BW, MM, AH) will be blinded to the allocation of groups as well as blinded from the interventions and measurement except for pQCT (MM) and DXA (BW) measurements. Due to practical limitations the physical therapists providing the intervention will also be performing the physical performance measurements. All statistical analyses will be completed by a statistician (HK), who is blinded to the participants and measurements. Primary outcomes This research project will have two primary outcome measures. 
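To make the sample-size reasoning above concrete, a generic two-sample calculation is sketched below in Python. It is illustrative only: the protocol does not state the test, sidedness or alpha level behind the published figures of 30 subjects and at least 70 recruits, so the inputs assumed here (two-sided comparison of means, alpha = 0.05, normal approximation) are assumptions, not the study's exact method.

```python
import math
from scipy.stats import norm

def n_per_group(diff, sd, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for comparing two means
    (two-sided test assumed)."""
    d = diff / sd                               # standardised effect size
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return math.ceil(2 * (z_a + z_b) ** 2 / d ** 2)

def recruit_for_dropout(n_needed, dropout=0.20):
    """Number to recruit so that n_needed subjects are expected to complete."""
    return math.ceil(n_needed / (1 - dropout))

# Primary end point: a 40 ms dGEMRIC difference with SD 40 ms
print(n_per_group(diff=40, sd=40))        # about 16 per group under these assumptions
print(recruit_for_dropout(2 * 30))        # 75 recruits if 30 per group must complete
```

The stratified block randomisation described above (blocks of ten, stratified by Kellgren-Lawrence grade I versus II) could likewise be generated along the following lines; the actual allocation list will be produced by the external statistician, and the seeds, stratum sizes and arm labels used here are placeholders.

```python
import random

def block_randomise(n_subjects, block_size=10,
                    arms=("aquatic exercise", "control"), seed=0):
    """Block-randomised allocation list for one stratum: each block of ten
    contains five of each arm in shuffled order, keeping group sizes balanced."""
    assert block_size % len(arms) == 0
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n_subjects:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)
        allocations.extend(block)
    return allocations[:n_subjects]

# One independent allocation list per Kellgren-Lawrence stratum
schedule = {"KL grade I": block_randomise(40, seed=1),
            "KL grade II": block_randomise(40, seed=2)}
```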
Delayed gadolinium-enhance magnetic resonance imaging of cartilage (dGEMRIC), sensitive to the distribution of GAG, will be used to evaluate the biochemical composition of cartilage. Arrangement of collagen and hydration state of the cartilage will be measured using T2 relaxation time mapping. Furthermore, knee cartilage morphology as a regional cartilage thickness will be analysed from the weight bearing area of tibiofemoral and patellofemoral cartilages. The dGEMRIC method has been validated in several in vitro studies [60][61][62] and it had been applied in several in vivo studies [43,52,[63][64][65][66][67][68][69][70][71]. Also, T2 relaxation time method has been histologically validated in vitro [72], and it has been applied in several human studies to assess chondral repair [69,[73][74][75][76]. MRI protocols Prior to imaging, the subject will be advised to restrain from any strenuous physical activity during the 48 hours prior to the measurements to minimise possible transient changes in knee cartilage volume and composition. Subjects will be imaged at the same time of the day to avoid possible diurnal variation at the follow-up measurements. The participants will be imaged lying supine with knee to be imaged in slight flexion, stabilized in a leg holder and a custom made inflatable cushion. The cushion has been specifically designed to stabilize the patella without causing any compression of the patellofemoral joint. The imaging session will last in total 3 hours and will include initially a standard clinical MRI series and T2 relaxation time followed by a dGEMRIC series. T2 mapping will be performed using a sagittal multislice multi-echo fast spin echo sequence (field of view (FOV) 140 mm, acquisition matrix 256 x 256, repetition time (TR) 2090 ms, eight echo times (TE) between 13 and 104 ms, echo train length (ETL) 8, slice thickness 3 mm). The slices will be positioned perpendicular to a line tangential to the posterior femoral condyles in the axial scout view. Two slices, each covering the central region of the medial and lateral condyles, will be analysed. For the dGEMRIC series, immediately after the clinical and T2 imaging a double dose of Gd-DTPA 2-(Magnevist, Schering, Berlin) will be administered intravenously i.e., 0.4 ml/kg (0.2 mM/kg). At baseline, post intervention and 16 month follow up the amount of contrast administered will be corrected for body weight. It is felt this is appropriate because of the expected changes in body composition as a result of the intensive exercise intervention. In order to enhance the delivery of contrast agent into the knee cartilage, following administration of Gd-DTPA 2the subject will be instructed to perform 5 minutes of flexionextension exercises in a sitting position without resistance, 5 minutes of walking on a flat surface and 10 gentle deep squats. Exactly ninety minutes after the injection, T1 mapping in the presence of Gd-DTPA 2-(dGEMRIC) will be performed in the sagittal plane using a single slice inversion recovery fast-spin echo sequence (FOV = 14 cm, matrix 256 x 256, TR = 1800 ms, TE = 13 ms, six inversion times (TI) between 50 and 1600 ms, slice thickness 3 mm). The slice positioning will be copied from the T2 relaxation time mapping sequence, and the number of the slices in the correct orientation is reduced to one. The remaining slice is then positioned at the centre of the medial and lateral condyles as viewed on the axial scout image. 
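The T2 map acquired with the multi-echo sequence above is conventionally obtained by fitting a mono-exponential decay, S(TE) = S0 * exp(-TE/T2), to the eight echoes in each cartilage voxel. The snippet below is a generic log-linear least-squares illustration of that fit; it is not the in-house or vendor pipeline actually used in the study.

```python
import numpy as np

def fit_t2(signals, echo_times_ms):
    """Voxel-wise mono-exponential T2 fit, S(TE) = S0 * exp(-TE / T2).

    signals:       array of shape (n_echoes, ...) -- one value per echo time
    echo_times_ms: the echo times, e.g. eight TEs between 13 and 104 ms
    Returns a T2 map (in ms) with shape signals.shape[1:].
    """
    te = np.asarray(echo_times_ms, dtype=float)
    s = np.clip(np.asarray(signals, dtype=float), 1e-6, None)   # avoid log(0)
    logs = np.log(s).reshape(len(te), -1)
    # Linear model: ln S = ln S0 - TE / T2, unknowns are [ln S0, 1/T2]
    design = np.column_stack([np.ones_like(te), -te])
    coeffs, *_ = np.linalg.lstsq(design, logs, rcond=None)
    inv_t2 = coeffs[1]
    with np.errstate(divide="ignore"):
        t2 = np.where(inv_t2 > 0, 1.0 / inv_t2, np.nan)
    return t2.reshape(np.asarray(signals).shape[1:])

# Synthetic check: a single voxel with a true T2 of 45 ms
tes = np.linspace(13, 104, 8)
voxel = 1000 * np.exp(-tes / 45.0)
print(fit_t2(voxel[:, None], tes))   # approximately [45.]
```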
The subject will be positioned into an identical position as for the first MRI imaging. For both the MRI images and pQCT measurements the knee with highest degree OA, as measured by the radiographic Kellgren-Lawrence scale, will be imaged. In the cases were both knee have identical KL score the right knee will be imaged. Segmentation Weight bearing cartilage regions of interest (ROIs) from single sagittal slices at the centre of the medial and lateral tibial and femoral condyles will be segmented using a semi-automated in-house MATLAB application (Mathworks, Inc. Natick, MA, USA). dGEMRIC indices will be corrected for BMI [77]. In this research team the in vivo precision of dGEMRIC for full thickness cartilage in different ROIs ranges from 5% to 7% [78]. The interobserver precision of T2 in different locations is on average 5% [79]. For quality assurance purposes, a set of phantom samples containing certain concentrations of agarose and nickel nitrate to modulate their T1 and T2 relaxation times will be imaged following the study protocol prior to baseline and follow-up measurement sessions to assess possible drift. Secondary outcomes Properties of bone and body composition Peripheral quantitative computed tomography (pQCT) The bone properties of the distal radius and mid and distal tibia will be measured using a pQCT device (XCT-2000; Stratec Medizintechnik, Pforzhem, Germany). A 2-mm-thick single tomographic slice with pixel size 0.59 mm in plane resolution will be taken at 5% and 55% of the length of the tibia proximal to the distal end of the tibia. Lower leg length is defined as the distance between the medial condyle of tibia and medial malleolus. Selection of lower limb to be imaged will be based on the same principles as the MRI scan. The forearm slice will be taken at 4% of ulna length proximal to the distal endplate of ulna. Length of ulna is defined as the distance between olecranon process and the midline of lateral aspect of distal ulna. In all cases right upper limb will be scanned except when subjects had suffered from fracture of either right ulna or radius. The analysis of the pQCT images will be performed with the density distribution plug-in [80] of the BoneJ (http://bonej.org/ densitydistribution) [81] ImageJ (http://rsbweb.nih.gov/ij/ download.html) plug-in. Compressive bone strength index (BSI d , g 2 /cm 4 ), bone mineral content (BMC), total and trabecular density (ToD and TrD, mg/cm 3 ) and total and trabecular area (ToA and TrA, mm 2 ) will be analysed from the shaft slices. The pQCT device is calibrated daily using a standard phantom provided by the manufacturer and coefficient of variation (CV) for these protocols in our laboratory has been measured to range between 1.5-3.4% for the reported variables [82]. Dual-energy X-ray absorptiometry (DXA) DXA (Lunar Prodigy; GE Lunar Healthcare, Madison, WI, USA) will be used to assess body composition and bone traits. Body composition analyses will be carried out using enCORE software (ENcore 2011, version 13.60.033). Using manufacturers software and protocols total body fat and lean body mass will be measured. In vivo precision of these measurements has been reported to be CV 1.3-2,2% [83]. Both proximal femur and Lumbar spine (L2-4) areal bone mineral density (aBMD, g/cm 2 ) and bone mineral content (BMC, g) will be scanned. Cross sectional geometry of the femoral neck will be analysed using advanced hip structure analysis (AHA) as per manufacturer's software. 
This will include femoral neck hip axis length (HAL, mm), cross sectional area (CSA, mm 2 ), cross sectional moment of inertia (CSMI, mm 4 ) and femoral neck strength index (FSI, mm 3 ) [84][85][86]. In vivo repeatability, CV, of these methods has been reported as 2.3% for CSA [87]. Questionnaires Health status General health and habitual physical activity at baseline will be assessed by a questionnaire devised by the research group. This health questionnaire addresses medical conditions, current medications, years of menopausal hormone therapy, history of fractures and current leisure time physical activity. Throughout the entire follow up period all subjects will be asked to report their daily amount of analgesia taken to manage their knee pain. Space will be provided in the physical activity diary for ease of recording. Impact of osteoarthritis of the knee Self-assessed impact of osteoarthritis on functioning will be measured using two questionnaires, the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) [88] and the knee injury and osteoarthritis outcome score (KOOS) [89]. The visual analogue version (VAS) of the WOMAC (0-100 mm) will be used with a range of scores of 0-2400. This questionnaire has 24 questions and is divided into three domains; pain (score ranging from 0-500), stiffness (0-200) and function (0-1700). A higher score indicates more disability. The internal consistence (Cronbach's alpha) for the VAS version is 0.7-0.91 and test-retest (ICC) coefficient is 0.95 for pain, 0.90 for stiffness and 0.92 for function ( [90]). A likert version of the KOOS will be used with each response being scored 0-4. The questionnaire has 5 domains: pain (9 questions), other symptoms (7 questions), activities of daily living (16 questions), sport and recreation (5 questions) and knee related quality of life (4 questions). Score for global and domains scores are transformed into a score 0-100 with a score of 0 indicating extreme knee problems and 100 no knee problems. The internal consistency for the KOOS is 0.86-0.96 and test-rest (ICC) is (0.67-0.95) [91]. Reliability of the Finnish language version of both WOMAC and KOOS has been shown to be similar to that of the English language version [92]. Quality of life Self-assessed quality of life will be measured using the RAND-36-Item short form healthy survey instrument [93] this questionnaire is identical in wording to the short form 36 questionnaire (SF-36) but summation of final scores is different. It contains 8 domains: physical functioning (10 items), role limitations due to physical health problems (4 items), role limitations due to emotional problems (3 items), energy/fatigue (4 items), emotional well-being (5 items), social functioning (2 items), pain (2 items), and general health (5 items). Global and individual domains will be re-scored and given values of 0-100 with higher scores indicating a more favourable health state. The scores will also be divided into two summary measure: the physical component summary score (PCS) and the mental component summary score (MCS). The dimensions physical functioning, role limitation due to physical health problems, body pain and general health form the PCS and mental health, energy/ fatigue, social functioning and role limitations due to emotional problems form the MCS. In a Finnish standardization population sample aged 18-79 years the homogeneity, i.e., the mean of the item intercorrelations of the Scale, was 0.63 and Cronbach alpha 0.94 [94]. 
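To illustrate the questionnaire scoring conventions summarised above, the sketch below computes one KOOS subscale (Likert items 0-4, transformed so that 100 indicates no knee problems) and one WOMAC VAS domain (a simple sum of 0-100 mm item scores). The handling of missing items shown here (using the mean of the answered items) is a common convention but should be checked against the official scoring guides rather than read from this protocol.

```python
def koos_subscale(item_scores):
    """KOOS subscale: Likert items 0-4, transformed to 0-100,
    where 0 = extreme knee problems and 100 = no knee problems."""
    observed = [s for s in item_scores if s is not None]
    if not observed:
        return None
    return 100 - (sum(observed) / len(observed)) * 100 / 4

def womac_vas_domain(item_scores_mm):
    """WOMAC VAS domain: sum of 0-100 mm items (higher = more disability),
    e.g. pain = 5 items (0-500), stiffness = 2 (0-200), function = 17 (0-1700)."""
    return sum(item_scores_mm)

print(koos_subscale([1, 0, 2, 1, 0, 1, 2, 0, 1]))   # KOOS pain subscale, 9 items
print(womac_vas_domain([35, 40, 20, 55, 10]))       # WOMAC pain domain, 5 items
```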
Physical performance measures Muscle strength Maximal isometric knee flexion and extension strength of both legs, as well as grip strength of dominant hand, will be measured using an adjustable dynamometer chair (Good strength; Metitur Ltd, Jyväskylä Finland). The best result from 3 contractions will be used and recorded in newtons (N). In our laboratory, the precision of the test is 6% for knee extension and 9% for knee flexion [95]. Muscle power Single leg extension power will be measured using Nottingham power rig (University of Nottingham Medical School, Nottingham, UK) which has been tested for reliability and has a test retest co-efficient of variation (CV of 9.4%) [96] and in our laboratory the CV is 8% [97]. In addition lower limb power function will be determined by a maximal counter movement jump (CMJ) measured using a custom made force plate (University of Jyväskylä, Finland). This test is a measure of neuromuscular function. Jumping force, vertical ground reaction forces, power, impulse and jump height will be calculated. Data is collected at a sampling frequency of 500 Hz [98]. Aerobic fitness Maximal aerobic power VO 2 max will be estimated using the UKK 2 km walk test (UKK Institute, Tampere, Finland). This test requires the subject to walk 2 km as quickly as possible with a target of 80% maximal heart rate [99]. VO 2 max is estimated using walking time, body mass index (BMI), age and heart rate at end of test. The heart rate will be measured by a portable heart-rate monitor (Polar F6, Polar Electro Ltd, Kempele, Finland). It is a feasible test for estimating V02 max [100] and sensitive to changes [101]. Its validity has also been tested with correlation coefficient of 0.69-0.77 [102]. Static balance Static balance ability will be assessed using a force platform device (Goodbalance, Metitur Ltd, Jyväskylä Finland) which is validated and reliable method measuring body sway in different standing positions [103,104]. Balance will be measured in feet side-by-side eyes open and eyes closed and single leg stance [105]. Agility Agility will be assessed with a standardised figure-of -eight running test consisting of two laps around two cones placed 10 meters apart in a figure of eight [106][107][108]. Time (in seconds) taken to complete the task will be measured using a photocell. This test has shown to be effective at detecting decreased motor performance (area under curve 0.86) additionally it has been shown to be a very sensitive (73.5%) and specific (86.1%) tool for measuring agility [109]. Gait Spatial and temporal parameters of gait will be measured using the GAITRite W walkway (CIR systems, inc. Clifton, NJ 070872) [110]. This consists of a 577 cm long and 88.5 cm wide matt with 13,824 sensors placed on 1.27 cm in a grid. The collection frequency of the matt is 80Hz. The data is transferred by lead to a computer and is analysed using GAITRite 3.6b software. This technique has been validated with different populations [111,112] and found to be a reliable [112,113] instrument to measure spatial and temporal parameters of gait. Daily physical activity During both the intervention period (0-4 months) and the follow up period (5-16 months) daily physical activity of every subject (excluding pool training) will be recorded using a leisure time physical activity diary. The diary is completed daily and each activity, duration and intensity (1 = low, 2 = moderate or 3 = hard) is recorded. From this data MET-hours per week will be calculated [114,115]. 
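The conversion of the leisure-time diary into MET-hours per week can be illustrated as follows. The MET values attached to the three diary intensity codes are placeholders chosen for the example; the study derives its values from the cited compendium [114,115], which are not reproduced here.

```python
# Placeholder MET values per diary intensity code (1 = low, 2 = moderate, 3 = hard);
# the study assigns the real values from the physical activity compendium it cites.
MET_BY_INTENSITY = {1: 2.5, 2: 4.0, 3: 7.0}

def met_hours_per_week(diary_entries):
    """diary_entries: iterable of (duration_minutes, intensity_code) for one week."""
    return sum(MET_BY_INTENSITY[code] * minutes / 60.0
               for minutes, code in diary_entries)

week = [(30, 1), (45, 2), (60, 2), (20, 3)]   # example diary, not real data
print(round(met_hours_per_week(week), 1))      # 10.6 MET-hours for this example
```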
In addition, during the intervention period each subject's daily activity will be measured for 3 days using a heart rate monitor (F6 Polar, Polar Oy, Finland), accelerometers (Hookie AM 20, Traxmeet, Finland) and hourly physical activity diaries. Intervention Those subjects randomised into the intervention group will participate in 1 hour of aquatic resistance training, three times a week for 4 months, for a total of 48 training sessions. The intervention will be completed in small groups of 6-8 subjects in a pool heated to 32 degrees with depths of 1.3-1.5 m. Aquatic steps will be used to ensure that all subjects complete the standing exercises at a depth level approximately at their xiphoid bone ±5 cm, ensuring weight bearing on the supporting leg of 25-50% of own body weight [41]. Each training session will last approximately 1 hour. The session will consist of three distinct parts: the warm-up (15 minutes), the lower limb strengthening program (35 minutes) and the cool down (10 minutes); a full description of the exercises can be found in Table 1 (Description of exercises included in the intervention) and Figures 2, 3, 4, 5 and 6. The warm-up and cool down were planned by a physiotherapist with over 10 years of aquatic therapy experience with patients suffering from musculoskeletal problems (BW); the same therapist will ensure that quality of movement and intensity of the intervention are maintained throughout the training by reviewing the heart rate and perceived exertion on the BORG 6-20 scale [116], which are collected after every training session immediately after the main set, before the cool down. All sessions will be supervised by 2 experienced physiotherapists, who have been trained to instruct these aquatic programmes and accredited for lifesaving before the trial began. The warm-up consists of 10 different movements to increase active ROM of all joints and enhance neuromuscular activation. Each movement will be completed for 1 minute (30 seconds per leg when alternating legs) with a 15 second rest period. The order of movements will be altered randomly for each session to maximise neuromuscular stimulation and prevent staleness, as well as to maintain subjects' interest. The strength training section consists of 5 exercises which have been thoroughly researched for both their effect on muscle activation [117,118] and their effect on muscle strength and physical functioning [119][120][121]. Focus will be on performing each movement as fast as possible through full ROM. During all standing exercises emphasis will be placed on maintaining the lumbar spine in a neutral position, thus avoiding excessive loading of the spine and encouraging activation of the trunk muscles during the exercises. The progression of the exercise program will be ensured by using resistance boots of different sizes and by varying the duration of sets. Table 2 shows the different durations of each set and the targeted number of repetitions per set for each stage of the intervention. Each leg will be trained before resting, e.g. 45 seconds left leg, 45 seconds right leg and 30 seconds rest. Weeks 1-2 are an introductory period to allow subjects to become familiar with the movements, with sets of 45 seconds duration per leg per set with no resistance, i.e. barefoot. Weeks 3-5 will consist of alternating sessions of 30 or 45 seconds with small fins (THERABAND PRODUCTS, The Hygienic Corporation, Akron, OH 44310 USA).
Weeks 6-8 will be a 3-week period with 45 seconds of work, alternating sessions with small aquafins and large resistance boots (Hydro-Tone hydro-boots, Hydro-Tone Fitness Systems, Inc., Orange, CA 92865-2760, USA). Weeks 9-11 and 13-16 will consist of alternating sessions with work of 30 and 45 seconds with large boots. Week 12 will consist of one session barefooted, one with small fins and one with large boots; work duration will be 45 seconds per set. The frontal area of the aquafin resistance fins is 0.0181 m2 and that of the large resistance boots 0.075 m2. In a previous study the drag experienced during seated aquatic knee flexion/extension exercises in healthy women was triple with the large boots compared to the barefoot condition. Additionally, a significant increase in EMG activity was seen with the large boots compared to no boots [117,122]. Intensity of training of every session will be monitored using Polar heart rate monitors (F6 or RCX5, Polar Oy, Finland) and perceived rate of exertion (BORG 6-20) [116]. The target training zone will be 60-80% of maximum heart rate according to the Karvonen formula, e.g. 60% training limit = (220 - age) x 0.6 and 80% training limit = (220 - age) x 0.8. Blood lactate levels will also be measured so as to obtain quantitative measures of training intensity and to ensure all training groups have trained at similar intensities. Samples will be taken during week 12, before training after 15 minutes of rest and 3 minutes after cessation of the main strength training session. These will be recorded for each different intensity level of training (barefoot, small and large resistance boots, 45 seconds work per leg). Fingertip blood samples will be taken using a safety lancet, normal 21 G with penetration depth 1.8 mm (Sarstedt AG & Co, Germany), and collected into 20 μL capillary tubes which are placed in 1 mL of hemolyzing solution. Care will be taken to clean the skin to avoid contamination from chlorinated pool water. Samples will be analysed using an automatic system (EKF diagnostic, Biosen, Germany) after training. Control group The control group will be asked to maintain normal physical activity during the intervention period. They will be offered two sham contact sessions consisting of 1 hour of light stretching and relaxation during the 4-month period. Follow up period After the post-intervention measurements all participants will be advised to continue spontaneous physical activity; no other specific instruction will be given to the subjects. Ethical considerations The study was given ethical consent on 30 November 2011 (Dnro 19U/2011) by the Ethics Committee of the Central Finland Health Care District. Written informed consent will be obtained from all subjects before their participation in the study. All subjects included have the right to withdraw from the study at any time without needing to provide a reason for withdrawal. The study will be conducted according to good clinical and scientific guidelines and the Declaration of Helsinki (2000). Assessment of side effects Adverse effects or health problems attributable to the testing protocol or the intervention exercise protocol will be documented and reported. Following each individual measurement and training session, self-reported knee pain will be assessed using a visual analogue scale (VAS 0-100 mm), along with any other physical symptoms such as pain elsewhere than the knee, stiffness and general fatigue.
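The target-zone arithmetic referred to above, 60% and 80% limits computed from the age-predicted maximum heart rate (220 - age), is illustrated below. Note that the classical Karvonen method additionally uses the resting heart rate (HRrest + fraction x (HRmax - HRrest)); the simpler percent-of-maximum version is what the protocol text describes, and the resting-rate variant is included only for comparison.

```python
def hr_max(age):
    """Age-predicted maximal heart rate (220 - age), as used in the protocol."""
    return 220 - age

def percent_of_max_zone(age, lower=0.60, upper=0.80):
    """Target zone as a straight percentage of HRmax, as written in the protocol."""
    return hr_max(age) * lower, hr_max(age) * upper

def karvonen_zone(age, hr_rest, lower=0.60, upper=0.80):
    """Classical Karvonen (heart-rate reserve) zone, shown for comparison only."""
    reserve = hr_max(age) - hr_rest
    return hr_rest + lower * reserve, hr_rest + upper * reserve

age = 64                                   # mid-range of the 60-68 year cohort
print(percent_of_max_zone(age))            # (93.6, 124.8) beats/min
print(karvonen_zone(age, hr_rest=70))      # (121.6, 138.8) beats/min
```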
All subjects will have medical insurance and have access to the attending medical physician free of charge throughout the 4 month intervention and 12 month follow up period. Statistical analysis All analyses will be based on both intention-to-treat and dose related principles. Statistical analyses will be performed using statistical software (Stata, release 12.1, StataCorp, College Station, Texas and SPSS Version 19, IBM Corporation). Discussion This paper describes the rationale and design of a randomised control trial investigating the effect a progressive aquatic resistance training program will have on patellofemoral and tibiofemoral cartilage, properties of bone and body composition and physical function in post-menopausal women with mild knee osteoarthritis. Exercise is one of the main non-pharmaceutical treatments recommended in the management of lower limb OA [24][25][26]28,29]. It is presumed that training in an aquatic environment has benefits for persons suffering from lower limb OA, however exact content and intensity of optimal training remain unclear [22]. For persons with knee and/or hip OA there is strong evidence to suggest aquatic exercise can cause a small but significant reduction in pain [30,33,34,36,38,[123][124][125][126][127], improves self-assessed and measured function with a small to moderate effect size [33][34][35][36]38,[123][124][125][127][128][129][130]. In addition, there is moderate evidence to show that aquatic exercise can cause a small but significant improvement in aerobic fitness [33,35,127,129,131]. Further there is limited data to suggest aquatic exercise can increase lower limb strength [33][34][35][36][37][38][39] and improve balance and decrease risk of falling [36,39,40]. Intensities of interventions in previously studies may not have been high enough to produce large changes in muscle strength and cardiovascular fitness but reporting of exercise programs used are in most cases incomplete. There are few studies investigating the effect of a progressive resistance program using specifically designed resistance equipment to manage symptoms associated with knee OA even though there is accumulating evidence to suggest it can be effective in improving neuromuscular function [117,118,[120][121][122]. Also, there is some evidence to suggest water based exercise can either maintain [132] or slightly improve the properties of bone as measured with DXA [133]. However these are of low quality evidence and further research is required to validate the findings. Both dGEMRIC [63,134] and T2 relaxation MRI [7,72,135] can distinguish between normal and OA cartilage. These techniques have been shown to be sensitive enough to demonstrate acute changes in human cartilage dGEMRIC [42,67] and T2-relaxation times [42,136]. These methods are therefore suitable for use in our study, and it is known that correct biomechanical loading of cartilage is important in maintaining cartilage health whereas obesity and trauma are risk factors for the development of OA [1]. Although there is evidence to show that biochemical characteristics of cartilage can be negatively affected with changes after periods of joint immobilization [137,138] and non-weight bearing [136]. No evidence exists to show the impact of an intensive non-impact exercise on cartilage. As far as we know there have been no publications investigating the effect of aquatic exercise on cartilage and properties of bone in persons with knee OA. 
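The analysis plan above names the software but not the statistical model. Purely as an illustration of how an intention-to-treat, repeated-measures comparison of the two arms could be set up (and not as the authors' specified analysis), a linear mixed model with a group-by-time interaction might look as follows; the variable names and simulated data are placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Placeholder long-format data: one row per subject per measurement visit
n_per_arm, visits = 35, ["baseline", "4 months", "16 months"]
df = pd.DataFrame([
    {"subject": f"{arm[:3]}{i:02d}", "group": arm, "visit": v}
    for arm in ("aquatic", "control") for i in range(n_per_arm) for v in visits
])
df["dgemric_ms"] = 400 + rng.normal(0, 40, len(df))    # invented outcome values

# Mixed model: fixed effects for group, visit and their interaction,
# random intercept per subject; all subjects analysed as randomised (ITT)
model = smf.mixedlm("dgemric_ms ~ group * visit", data=df, groups=df["subject"])
print(model.fit().summary())
```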
The aim of this study is to use repetitive aquatic resistance program with high intensity and repetition to discover what effects non-impact training has on knee cartilage, properties of bone and physical function. The information gained will help improve our understanding of the effects of exercise on the biochemical properties of cartilage and improve prescription of aquatic exercises in the management of OA. Competing interests All authors declare that they have no competing interests. Authors' contributions All authors were involved in the conception of the study plan and design as well as critically revising the draft manuscript for important intellectual content. All authors approved the final version to be published. BW, MM, JM and AHeinonen drafted the manuscript.
v3-fos-license
2018-06-02T04:25:07.531Z
2017-01-01T00:00:00.000
44190910
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.avensonline.org/wp-content/uploads/JCMCR-2332-4120-04-0028.pdf", "pdf_hash": "9f3a5ea3cf4ea574802ef86fb0c42016efb8a127", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44005", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "sha1": "9f3a5ea3cf4ea574802ef86fb0c42016efb8a127", "year": 2017 }
pes2o/s2orc
A Case Report: A 15-Year-Old Female with Elevated C-Reactive Protein, Major Depression and Maltreatment in Early Childhood

Ms. S is a 15 year old female with refractory major depressive disorder. She has a history of early childhood maltreatment and currently has an elevated C-reactive protein (CRP) concentration. Many studies have found CRP to be elevated in patients with major depressive disorder. A recent study found that having a CRP > 3 mg has a strong connection with symptoms of major depressive disorder [1]. Another study of patients with a prior history of maltreatment before eight years of age showed elevated CRP levels once they had reached adolescence [2]. Drugs targeting inflammation, such as Infliximab and Acetylsalicylic Acid (ASA), have shown early promise in the treatment of depression. Infliximab outperformed placebo in patients with treatment-resistant depression and an elevated CRP [3]. ASA in combination with an SSRI showed a 52.4% response rate in non-responding patients [4]. Therefore, further studies of anti-inflammatory drugs in the treatment of depression are needed, especially in combination with standard treatments.

Ms. S was hospitalized for suicidal ideation with intent to harm by cutting her wrists. She was later discharged from the hospital on May 5, 2015 and, about a week later, was re-admitted to an intensive outpatient program (IOP). During this time, the IOP provided medication management and high intensity counseling and support, which included group therapy, art therapy, and individual and family therapy. Through therapy she continued to have thoughts of self-harm with depressed mood. After a week in the IOP program, the patient had her first appointment with the Psychiatrist. At this time, she was taking Duloxetine 30 mg, but was reporting experiencing gory visions. Ms. S had previously taken SSRIs in the past, but those were quickly discontinued after the patient reported having visions of "dead people", "bloody images", and hearing "screaming". During this appointment, the Psychiatrist continued the Duloxetine but added Aripiprazole 5 mg to the patient's regimen. Five weeks after starting Aripiprazole 5 mg and Duloxetine 30 mg daily, the patient reported to the Psychiatrist that her mood had improved from "depressed to neutral". However, she continued to have thoughts of harming herself, passive suicidal ideation, uncomfortable flashbacks to her childhood abuse, as well as flashes of gory images in her head of "knives and murder" of people she has never seen before. At this point, her Psychiatrist increased her Duloxetine from 30 mg to 60 mg. Shortly after this appointment, Ms. S reported to her mother that, due to the stress of a close friend moving away, she had started self-harming again by cutting "up and down her arms", as well as on her chest, stomach and shoulder with razor blades. At this point, her mother and the IOP team developed a crisis plan. With the support of the IOP program and her outpatient therapist, Ms. S's mother actively sought outpatient DBT treatment. After 21 days in IOP treatment, Ms.
S was discharged and referred to Dialectical Behavioral Therapy and follow-up appointments with the Psychiatrist.The patient's DBT was oriented around her self-injurious behavior and her previous abuse.During another appointment with the Psychiatrist, a Center for Epidemiological Studies Depression Scale for Children (CES-DC) was done.The patient's total assessment score was 43, which correlates with a diagnosis of moderate depression.She describes having neutral moods that would quickly switch to depressed mood with selfinjurious thoughts and behavior.By this time, Ms. S reported feeling this way 1-2 times per week despite compliant attendances to DBT and family therapy. Then, approximately a month later, the patient had her next appointment with the Psychiatrist to whom she reported having multiple panic attacks during the first week at school, as well as selfinjuring three days prior.A CES-DC was redone at the appointment, and her scale from a 43, a month prior, increasing to 46.The patient's Duloxetine was increased from 60 mg to 80 mg and Aripiprazole 5 mg was kept the same.For months following this appointment, the patient continued to have self-injurious behavior and depressed mood, and the Duloxetine was increased to 90 mg. By winter of the following year, the patient reported self-injurious behavior 24 hours prior, which resulted in 10 to 12 cuts with a thumbtack on her ankles.Over the last few weeks, the patient also reported feeling "dissociated" and "dazed".Her mother was advised to monitor the patient closely at home and take the patient to the emergency room or call 911 if she was unsafe.Soon after, the patient was readmitted to the IOP program that winter for self-injurious behavior, anxiety, and depression. Due to lack of a positive response to her current treatment regimen, the Psychiatrist recommended a genetic study to determine the appropriate treatment plan for the patient based on her biology.This genetic study is a psychogenetic combinatorial approach that allows physicians and patients to understand their body and the ability of their body to metabolize medication appropriately.Ms. S' results showed that she is a poor metabolizer of the CYP2D6*4 allele enzyme with reduction in both allelic enzymes, CYP2C9*2 and CYP2D6*9, making it difficult for her metabolize specific medications.In addition to these allelic reductions, Ms. S also showed that she carries the T allele C677T polymorphism in the MTHFR gene.Patients carrying this gene experience difficulty in metabolizing folic acid.As a result, the patient has a decreased amount of homocysteine and folate levels in the body.These metabolites are essential building blocks for the formation of mood neurotransmitters. Based off these results, her medications were changed at her following appointment with the program Psychiatrist, and L-methylfolate 15 mg, a medical food, was added to the patient's treatment regimen for depression with a plan to taper off Duloxetine and add Desvenlafaxine.Her Aripiprazole was discontinued and Lurasidone 20 mg was started.The patient reported another selfharming episode that resulted in cuts on her inner thighs, and complained of having frequent, violent, suicidal thoughts.Her medication regimen was then changed to Duloxetine 50 mg, Lurasidone 20 mg, Desvenlafaxine 25 mg, and L-methylfolate 15 mg.All of these medication adjustments were made due to the recommendation of the genetic results and the patient's physiologic response to the medications. By the spring, Ms. 
S complained of experiencing frequently fluctuating high and low moods.During her elevated moods, Ms. S describes herself as being uncharacteristically talkative and impulsive with continuous racing thoughts, as well as experiencing insomnia during this period.Ms. S's Lurasidone 20 mg was increased to 40 mg and her Duloxetine 50 mg was decreased to 30 mg with a plan to taper to 20 mg.Later, her Lurasidone was increased to 60 mg and her Duloxetine was tapered to discontinue.The patient then reported in June 2016 of having another hyper-elevated mood episode, upon which her Lurasidone was then increased from 60 mg to 80 mg.Given Ms. S' refractory depressive symptoms, the treating psychiatrist ordered a typical metabolic panel which included tests for C-reactive protein.The C-reactive protein was added to the panel in order to seek alternative treatment options based off of current literature.Her results were received and indicated some metabolic abnormalities, including an elevated CRP of 5.8 (normal limit of 0.4-4.9).This result correlates with recent studies which have suggested that elevated inflammatory cytokines is correlated to an increase in risk of depression.The lab results, in respect to the elevated CRP, were addressed by the psychiatrist and discussed with Ms. S and her treating pediatrician. Ms. S claims that she is doing well and able to cope with life stresses.Her treatment regimen hasn't changed.She is soon graduating from DBT and seeking an outpatient therapist.She denies any recent selfharm or suicidal ideation, to date. Discussion Studies are currently showing the relationship between C-reactive protein (CRP) and depression as well as the effects of CRP on neurotransmitter pathway and the basal ganglia.Our goal for this paper is to (1) discuss the pathway of C-reactive proteins effects on neurotransmitter synthesis and metabolism; (2) explore the relationship between C-reactive protein and depression in a case setting; (3) consider the connection between child maltreatment and elevated C-reactive protein; (4) treatment options for elevated C-reactive protein in refractory depression. C-reactive protein (CRP) is an inflammatory cytokine that is stimulated in response to illness or trauma.This inflammatory cytokine is under the control of Interleukin-6 that stimulates the hepatocytes in the liver to release CRP during an acute phase reaction [5].Current research being conducted is showing major support for the relationship between the immune system, increasing CRP, and the association with depression [5].Inflammatory cytokines impact the tryptophan pathway causing an increase in glutamate and manipulation of pertinent mood neurotransmitters like dopamine, norepinephrine and serotonin [1,6]. 
Inflammatory cytokines interrupt indoleamine 2,3-dioxygenase; this enzyme is required for the breakdown of tryptophan, the primary building block for serotonin, and its conversion into kynurenine. In the brain, kynurenine is then converted by microglia and macrophages to quinolinic acid. Quinolinic acid then affects glutamate in the astrocytes by binding to the N-methyl-D-aspartate receptors. By binding to these receptors, it no longer allows astrocyte reuptake of glutamate. Research has found decreased levels of quinolinic acid to be associated with depression-like symptoms, specifically anhedonia [7]. Cytokines have also been associated with the re-uptake pathway of monoamines. The mediated effects of the cytokines under the mitogen-activated protein kinase pathways can influence the increase in activity of the serotonin, norepinephrine and dopamine membrane transporters [6].

Not only do inflammatory cytokines interfere with important neurotransmitters, but they also cause damage to neural tissue. Inflammatory cytokines stimulate microglia and astrocytes to release reactive oxygen and nitrogen species. In combination with quinolinic acid, these free radicals can cause significant oxidative damage, leading to disruption of the lipid membranes and resulting in damage to neuronal cells and serotonin transmission [6,7]. A decrease in these neurotransmitters causes the chemical imbalance that is associated with depression and mood disorders.

Inflammation and the inflammatory response can also be caused by maltreatment-associated trauma, increasing the risk for depression. This activation of the inflammatory pathway occurs both peripherally and in the brain. These stressors, like early life childhood maltreatment, can cause an increase in peripheral CRP levels [6]. Examples of maltreatment used in studies are as follows: taken into foster care, physically hurt by someone, sexually abused, separated from mother, and/or separated from father [2].

A retrospective analysis was conducted by Slopen and her research team, who analyzed children ranging from 1.5 to 8 years of age who had experienced maltreatment. Simultaneously, their mothers were asked whether their child had experienced any of the above-named maltreatments, and to what severity on a four-point scale (0 = no experience; 4 = very upset by the event). The sum was totaled across the seven encounters for each of the five events listed. The information was then placed in a z-score. The results were as follows: taken into foster care: 3-5; physically hurt: 122-181; sexually abused: 1-8; separated from mother: 125-383; separated from father: 286-876 [2]. "At age 10, CRP and IL6 were significantly correlated with each other (r=0.46, p<0.0001); and each was also significantly correlated with CRP at age 15" [2]. The C-reactive protein serum levels were then recorded from these patients once they reached the ages of 10 and 15 years old.
In patients ranging from ages 1.5 through 6 years of age showed no correlation between IL6 and CRP.When examining patients in the 7 and 8 year old age group, they showed elevated IL6 and CRP; IL6 (B=0.07,p=0.002) and (B=0.05,p= <0.001) respectively; and CRP (B=0.06,p=0.002) and (B=0.04,p=0.03).Ages 2.5, 3.5, 4.5, 6, and 7 years, were not strongly associated with elevated CRP at age 15.However, at 8 years old, CRP was significantly elevated at age 15 (B=0.05,p= 0.02) with cumulative event scores of (B=0.05,p=0.04) [2].In this model explored by Slopen and her team, it was concluded that there was an association between depression and elevated CRP in patients 15 years of age.There was a marked increase in inflammatory cytokines by the age of 10 in patients who were exposed to maltreatment before the age of 8.This inflammatory process continued from childhood into adolescence [2]. In another study, patient's plasma and CSF CRP were used to determine whether the elevation of CRP was connected to major depression.The experiment examined the relationship between plasma CRP and CSF CRP on basal ganglia concentrations of glutamate.Patients were divided into three groups -Low CRP (<1 mg), medium CRP (1-3 mg), and high CRP (> 3 mg).Patients who fell into the "high CRP" category resulted in increased basal ganglia glutamate concentrations when compared to the low CRP group [1].This study further showed that increased concentrations of glutamate is connected to elevated CRP and the symptom of anhedonia. A meta-analysis was conducted by extrapolating data from multiple papers regarding inflammatory markers and major depression.The material was then analyzed according to each inflammatory marker and its impact on major depression in that patient profile.Articles that analyzed patients with a diagnosis not based on the DSM criteria, minor depression, bipolar disorder, or co-morbidities were excluded from the material [8].After proper exclusion was made, there were 58 articles left that were analyzed and 20 of those papers included C-reactive protein.In these papers, "there was a medium association with CRP and major depressive disorder (MDD) (N=20, combined d=0.47; 95% CI =0.28-0.65;total MDD=746).Statistical significance (p<0.00001) was achieved after 14 studies, and the association did not change after six more studies were conducted" [8].Patients were asked to stop the use of antidepressants before their blood samplings.These samples revealed a strong association between major depression and elevated inflammatory markers. It has been shown that CRP and other cytokines present in the peripheral blood are associated with inflammation and depression. Targeting inflammation could open new doors in the treatment of those with depression and increased inflammatory markers especially in patients who are refractory to traditional antidepressant treatments.There are several anti-inflammatory approaches that can be made to assist in the treatment of depression.One method is to inhibit TNF-α by using a cyclooxygenase 2 inhibitor.This method has been shown to decrease depression in patients in this trial [3]. In a study done by Miller et al. that used infliximab (a TNF inhibitor) on treatment-resistant depressed individuals, it was found that infliximab outperformed placebo and had similar effect size as standard antidepressants in patients with a CRP concentration ≥ 5 mg [3].There was also "separation from placebo" in the group with a CRP concentration>3 [3]. 
Another study used acetylsalicylic acid (ASA) in combination with a selective serotonin reuptake inhibitor (SSRI) in non-responder depressed patients. ASA at 160 mg/day was added to the patients' current antidepressant regimen and resulted in a 52.4% response rate. "Remission was achieved in 43% of the total sample and 82% of the responder sample. In the responder group, a significant improvement was observed within week 1 (mean Hamilton Depression Rating Scale-21 items at day 0=29.3±4.5, at day 7=4.0±4.1; P<0.0001) and remained sustained until day 28" [4]. The ASA-SSRI combination therefore had an "accelerating effect" on the treatment [4]. This could be clinically useful, since the effects of SSRIs often are not seen until three weeks after the initiation of treatment. It would also be useful to study the ASA-SSRI combination in patients with elevated CRP. A pitfall of combining ASA and an SSRI is that both medications increase the risk of bleeding. Further research is needed on the use of ASA with SSRIs in the pediatric population and on the associated bleeding risks.

Conclusion

There are currently many theories being studied in the clinical psychiatric community, some having to do with neurotransmitters and inflammatory cytokines. Studies show that there is a correlation between elevated CRP levels and depression. It has also been shown that maltreatment in early childhood relates to elevated CRP in adolescence. Ms. S, a 15-year-old female with a CRP of 5.8 and a history of sexual abuse, falls into both of these categories. Ms. S also experienced multiple changes to treatment options and therapy, while consistently remaining refractory to changes in regimens. Further studies could analyze the potential benefits of targeting refractory depression with a combination of anti-inflammatory medications and a typical antidepressant regimen. Additionally, given our patient's episode of hyper-elevated mood, the connection between inflammatory cytokines and bipolar or unipolar disorders should also be explored.

Informed Consent

The patient and legal guardian have been made aware of this research paper and the use of personal medical information. All terms and agreements have been discussed with the patient and guardian, and both agree to these terms.
Exogenous shocks and citizens’ satisfaction with governmental policies: can empirical evidence from the 2008 financial crisis help us understand better the effects of the COVID-19 pandemic? I examine to what extend the financial crisis of 2008 affected levels of individual satisfaction with governments in general and three policy areas in particular; the economy, health services and education. I use data from the European Social Survey (9 rounds, 2002-2018, 14 countries, approx.195000 observations). Running Interrupted Time Series regressions I find that, on aggregate, there was a decrease of satisfaction with the government and the economy immediately after the crisis, but an increase for health and educational services. Longer term, satisfaction gradually increased for all the four indicators examined. In separate regressions for each country, a consistent pattern of behavior emerges. Where the short-term effect on satisfaction was negative, the long-term effect was positive, and vice versa. The switch, from short-term negative to long-term positive effect, could be attributed to the successful efforts of governments to correct the immediate adverse effects of the crisis. On the contrary, some individuals seeing the problems other countries faced, applauded their own government’s short term performance in handling the crisis. With the passing of time however, they gradually became more critical. The COVID-19 pandemic has forced governments to implement policies reviving the economy and improving services in health and the education sectors, amongst others. Results of this study may be used when measuring and evaluating the effects of the current pandemic. Introduction The idea for this study matured quickly. Since the beginning of March 2020, because of the coronavirus pandemic, the university where I work, shut down completely. The same happened with schools, restaurants, churches, bars, and gyms. All gatherings with more than 10 people were forbidden. In my case, all university personnel, whether with strictly academic duties, administrative or both, were forced to work from home using all the available web resources. Literally, from one day to the next many things that were taken for granted changed. Our lives were affected, directly and indirectly, in many ways. How did all of this begin? At the end of 2019 news agencies around the world started reporting that a new flu-type virus called COVID-19 appeared to be infecting residents at the city of Wuhan, capital of the Hubei province in China at an ever alarming rate. Although the Chinese government attempted to downplay the news, the thousands of infected cases and the death toll soon made this impossible. New infections and casualties were reported daily. In order to contain the epidemic, at the end of January 2020, the Chinese government finally took swift action and imposed a lockdown on more than 50 million people living in that city, as well as in other provinces around the country. By that time however, the epidemic had spread in many other countries as well. Soon from Italy came news of acute health conditions, of thousands of newly infected and deaths. Iran and Spain followed. Gradually other countries, thought to be immune, reported infections and deaths as well. This was the wake-up call for many governments to start informing their citizens of the perils of this epidemic and what needed to be done in terms of hygiene and social distancing. The United States was one of the last countries to act on this situation. 
The World Health Organization (WHO) albeit hesitant initially, finally classified the phenomenon a pandemic and announced harsh warnings on what would happen if measures were not implemented by all. Because of the abnormally large amount of infected individuals, the biggest fear was that health services would not cope; there would not be enough specialized equipment in intensive care units to administer adequate treatment to those in most need. Thus, most countries imposed restrictive rules on movements of individuals and ordered lockdowns of whole sectors in the economy. Sporting events, championship series and concerts were cancelled or postponed until further notice. Naturally, the immediate effects were felt in the tourism, transportation, restaurant and entertainment sectors, and where big individual financial commitments are made such as the real estate and the car industries. Economic activity came to an abrupt stop because everything closed within a few days. That in turn caused thousands of layoffs and an upsurge in unemployment claims. Governments and central banks hurried to support the collapsing system by promising subsidies, loans with better terms and occasionally free money to those immediately affected firms and individuals alike. Work from home suddenly became the norm. Schools and universities were closed. Despite initial difficulties, teaching was soon conducted via the web. To the apprehension of many parents, the responsibility for supervising their children's school performance fell on them. The financial markets which had already enjoyed one of the longest bull-run in history started feeling the effects of the uncertainty caused by the havoc in everyday life. At the beginning 2020, three of the best-known indexes, the Dow Jones Industrial Average, the SandP 500 and the Nasdaq recorded all-time highs. During the second week of March all collapsed with sharp declines of more than 20%. 1 Although all three have since rebounded, the extreme volatility has generated billions of dollars in losses to institutional as well as private investors worldwide. Oil prices fell sharply; at one point prices for the West Texas Intermediate were quoted with a minus sign. 2 . Governments that have managed to curtail the spread of the virus better seem to enjoy popularity from their constituents whereas other governments are blamed for not taking harsher measures early enough. People started perceiving themselves as "experts" with the R 0 indicator and the pandemic curve. They have familiarized themselves at what point then curve is estimated to reach its apex and start diminishing. The dilemma of social distancing and personal hygiene versus herd immunity became the forefront much debate. Many countries grappled with the questions of how long whole areas or even whole countries can remain under lockdown without generating irrevocable damage to the economy. How much liquidity is needed to be injected into the economy so that people and enterprises survive the first shock? For how long can this support continue? Should central banks reduce the reserve requirement ratios for commercial banks, should they reduce discount rates, should they buy bonds held by commercial banks or should they even print new money 3 ? Must countries with exceedingly high debt to GDP ratios nonetheless attempt to borrow from the financial markets to revive their economies? 4 The COVID-19 pandemic is still an ongoing event that no one knows how it will actually develop in the future. 
The second wave observed during the autumn of 2020 struck with greater intensity than many had estimated. Some acknowledge that a third wave might be on its way in the spring of 2021. Regardless of the virus's potency, however, its repercussions on billions of people will be felt for years to come. The sudden and extreme measures imposed by governments to fight the pandemic, combined with the fear of contracting the lethal virus itself, have generated considerable anxiety and mental illness amongst the general population (Fofana et al. 2020). Vaccines to immunize against the virus have been produced and have recently received licensing. Nonetheless, even if these vaccinations are implemented in sufficient numbers for the general population, it will take a considerable amount of time for socioeconomic conditions to return to pre-virus levels.

It is thus interesting from a sociological perspective to investigate how individuals reacted, in the short and long term, during another calamity of the recent past. The financial crisis of 2008 was not health-related. It was the subprime problem in the US that burst, bankrupted Lehman Brothers and created a tsunami of financial collapses initially and fiscal problems in many countries later (Ervasti et al. 2019, p. 1210). The two crises are indeed not completely comparable. A significant difference is the mindset of the people affected by each. In 2008 many were suddenly faced with a financial collapse and the uncertainty of potential unemployment ahead. With the COVID-19 pandemic the negative financial repercussions are only one aspect: the restrictions on movement due to lockdowns and, as discussed above, the fear of contracting the lethal virus have created even harsher living conditions for millions. And yet, the two crises are similar at a more macro level. They were both sudden crises, were generated by exogenous events, and they affected, directly or indirectly, all countries around the globe.

In examining the 2008 crisis we have the opportunity to investigate how different countries coped with the sudden exogenous shock, both short and long term. In particular, we can examine the opinions of citizens about their governments and how they evaluate the policies on the economy, education and health services. This might provide a gauge in the future for comparing it to the current COVID-19 crisis. The European Social Survey (ESS) has suitable data for this. Respondents are asked how satisfied they are with their government, the economy, education and the health services. All four indicators provide a reasonable assessment of how citizens evaluate their government.

Footnotes: (1) https://www.bbc.com/news/business-52113841. (2) https://www.bbc.com/news/business-52350082. (3) For example, on June 4, 2020, the European Central Bank (ECB) announced among other measures that "…the pandemic emergency purchase program (PEPP) will be increased by 600 billion to a total of 1,350 billion euro" (https://www.ecb.europa.eu/press/pr/date/2020/html/ecb.mp200604~a307d3429c.en.html). (4) Additional borrowing from the financial markets increases a country's sovereign debt. As discussed above, due to the crisis and the lockdown, GDP will decrease; hence the debt-to-GDP ratio will become even higher.
Although the majority of the literature is focusing on the level of trust citizens have in political and impartial institutions, examining the satisfaction level with respect to specific policies is a more direct and practical approach in measuring the success or failure of government activities concretely. Morgeson (2014, p. 7) asserts that the satisfaction of citizens refers to the "…individual citizens' (in the aggregate) happiness or contentment (or what another author called "fulfillment response") with an experience or experiences with the services (or goods, or processes, or programs) provided by the government bureaucracies and administrative institutions." Surveys measuring satisfaction of citizens with governments in general but also with different policies in particular, have been conducted since the 1970s. They have been popular especially with regard to services provided at local and municipal level (Stipak 1979). Such surveys have been conducted with increasing frequency in recent years (Dutil et al. 2010, p.31;Howard 2010, p. 66). A few theoretical considerations. Why measure the satisfaction of citizens? The basic logic behind these feedback mechanisms stems from the hypothesis that in a democratic institutional framework governments must be responsive to the preferences of their constituents regarding public policies. As Rosset et al. (2017, p. 796) note, the "democratic rule" implies that there is a connection between the citizens and the state and that citizens' preferences are considered by the political institutions that govern the country. Huber and Powell (1994, p. 293) even assert that what constituents prefer, is best represented by the preferences of the median voter. This feedback approach also has a theoretical framework similar to that discussed in Campbell (2012); it resembles the responsive approach due to the democratic obligations of elected governments. It says that policies obviously influence citizens' behavior, which then gives feedback (sometimes "feed forward" to describe the temporal ordering of things) and thus influences future policy formulation and implementation (Ziller 2019, p. 287). In classifying this feedback mechanism under an even larger theoretical framework we may need to go back to the 1980s in New Zealand and the early 1990s in the UK, when the first ideas regarding the New Public Management (NPM) paradigm appeared (Hood 1991, cited in McLaughlin andOsborne 2002, p.1). The basic goal of the NPM is to increase and improve the efficiency and the effectiveness of services provided by the public sector. According to Ariely (2011, p.999) the premise guiding these changes is obvious: better public institutional performance leads to satisfied citizens, which in turn generates positive evaluations of governments as a whole. The emphasis on performance reflects among others the ever-growing problem of fiscal austerity that must be implemented due to the economic deficits that most governments face (Van Ryzin 2015, p. 426). In measuring citizens' satisfaction, the other major theoretical approach is based on the assumption that planners and implementers of policies need "evidence" when making key decisions (Nutley et al. 2007). Dutil et al (2010, p. 
31) elaborate on this further and at the same time justify the use of satisfaction surveys as follows: "Proponents of evidence-based decision-making persuasively argue that managers require reliable and impartial data on policy and administrative problems, to effectively navigate the complexities and constraints of the contemporary policy environment. Citizens satisfaction surveys are thus attractive, because they promise simultaneously to enhance public input into government and the methodological rigor of the evidence used in official decision making". The feedback process is not geared solely towards the government's performance, in general. There are instances where feedback focuses on specific policies and measures. For example citizens are concerned about education and health care policies, among others (Stecker and Tausendpfund 2016, p. 496). Moreso, voters are eager to evaluate their governments' economic performance because depending on its success, it can affect directly their wellbeing. The evidence on the link between economic performance and citizens' satisfaction goes back many decades and remains topical. (see for example, Fiorina (1978) Furthermore, satisfaction with government policies is closely correlated with another very popular theme in social and political sciences, trust. Most authors would agree that satisfaction is a prerequisite for trust. Van de Walle and Bouchaert (2003, p. 892) state that many accept the implicit assumption that "…better performing public services will lead to increased satisfaction among their users, and this, in turn, will lead to more trust in government." This has shown to be true both regarding trust with the government in general, but also with specific policies in particular. For example Van Ryzin et al (2004, p.332) have used the American Customer Satisfaction Index (ACSI) model to capture the process through which residents of New York evaluate the different services provided by the city. They clearly show that satisfaction with the government comes temporarily before trust (ibid, p. 333, Fig. 2). Within his own theoretical model, Vigoda-Gadot (2007, p. 290, Fig. 1) depicts a similar relationship. Finally, satisfaction with particular services such as health care is also mentioned as a prerequisite for trust in the government (Christensen and Laegreid 2005). Goals and hypotheses The main goal of the study is to examine and compare the potential impact of the 2008 financial crisis on the individual levels of satisfaction in different European countries. I hypothesize that, on average, the 2008 crisis has contributed in diminishing the levels of satisfaction short-term and perhaps long-term as well. The analysis is thus based on these 2 chronological horizons. In both cases, I first attempt to measure potential associations between levels of satisfaction and the financial crisis. I then rank the intensity of the relationship -if any -by country and evaluate the result as positive, negative or non-significant. 3 The paper proceeds as follows. Below I describe the data and the variables based on which I conduct the empirical analysis. In the methods section, I discuss certain theoretical consideration regarding the regression models applied. Then, I commend on the reported results. In the final section, I summarize and discuss the findings which I also link to the current COVID-19 crisis. 
In addition, I list several caveats, briefly mention other interpretations that could also be used in explaining the empirical findings and propose certain areas for future research. Dependent variables For the analysis I use four dependent variables. One measures the overall satisfaction of the respondents with the activities of the government in general and three measure the level of satisfaction with particular government policies: the economy, health services and education. All their values ranged from 0 to 10. The exact wording of the ESS questions based on which the responses were recorded were as follows: Independent variable of interest The independent variable of interest is the ESS round. With 9 different surveys conducted between 2002 and 2018, I treat the round in the models as continuous with values ranging from 1 to 9. I run several Interrupted Times Series (ITS) regressions (see below). With an ITS specification, I can estimate the rate of growth of satisfaction (its slope) throughout the period under scrutiny, both before and after the crisis and determine the long term effects of the crisis, if any. In addition, I can compare the levels of satisfaction between in years 2008 and 2010 (rounds 4 and 5), thus measure the immediate effects of the 2008 crisis on satisfaction. Control variables People's evaluations of governments measured through levels of satisfaction and trust are found to differ based on age, gender or socioeconomic background (see e.g. Lyons et al. 1992;Van Ryzin et al. 2004;Christensen and Laegreid 2005;Van de Walle 2007;Van Ryzin 2015). Van Ryzin (ibid, p.435) in particular, notes that education "…helps control the differences across respondents in the knowledge of and experience with (local) government". The correlation between satisfaction and trust in the government is positive and in general rather high. For instance, Vigoda-Gadot (2007, p.297, Table 2) reports a figure of 0.67. Thus, the same controls have been used not only in models measuring satisfaction but trust as well, where research is even more extensive. (see, for example Brewer et al.( 2004, p.102), Fridberg and Kangas (2008, pp.79-82), Arpino and Obydenkova ( 2020, p.409 , Table 3), Torrente et. al. (2019, pp.646-647), and Daskalopoulou (2019, pp.290-291)). Specifically, Rudolph and Evans (2005, p.661) find that at the individual level, political trust matters more to conservatives than to liberals. Jost et al. (2009) also find that political ideology is a factor that is accepted as important when measuring people's attitudes towards officials who govern. Also Haugsgjerd (2018, p.628) uses household income, educational background, employment status, age and gender as controls in his regressions measuring trust. Finally Currie et al. (2015) investigated how the great recession affected mothers' health. They found that mothers with less education, unmarried and from minorities experienced a greater deterioration in their health compared to those who were white, married and had gone to college. Following the aforementioned empirical research, in my models I use the following individual-level control variables: Gender, (level of) Religiosity, Age, Age squared, Happiness Level, Social Activity, Political Orientation, Years of Education, (level of) Subjective Health, Subjective Financial Status and Country (Dummy). (Table 1). Methods Policy impacts are best measured by conducting so called social experiments. 
The focus is on the population of interest or a sample of its units, which is randomly divided into two groups: one that is exposed to the policy (the treatment) of interest and one that is not. The latter group is often used to depict the counterfactual situation: what would have happened to the policy indicator of interest had the experimental group not been exposed to the treatment. The random assignment to the two groups ensures that whatever different characteristics exist between them cancel out, so that the differences that remain on the indicator are the impact of the policy intervention. In standard notation: U = the universe of all units of interest; u = one unit in that universe; Y(u) = an outcome (impact) related to each u; T = the causes of Y(u), which can be i = treatment/intervention (policy, program, measure, condition of interest, exogenous event) or c = control (any other treatment or no treatment, the most usual case). The effect of the intervention on the units of interest is always based on a relative comparison. It is the average difference of the impact on the units given ("|") they were treated, less the impact on the units given they were not treated, or

Effect = E[Y(u) | i] − E[Y(u) | c].

As discussed, through randomization we attempt to make the two groups as identical as possible, because we cannot observe simultaneously the same units under two different treatment regimes. For a non-experimental ex-post evaluation, as is the case here, the best case scenario is having two populations with measurements on the indicator of interest: one that is exposed to the treatment and the other that is not exposed. Preferably the observations cover the periods before and after the intervention, for some time. However, since the classification is not done at random, it becomes problematic because we do not have two groups of data that differ by just that exposure to the intervention, on average. This, in turn, implies that their measured differences in the indicator might not be solely due to this intervention (Holland 1986; Imbens and Wooldridge 2009). An even weaker scenario is where the intervention covers all the available population. As I show later, this compels us to approximate the counterfactual population using a very weak assumption on its behavior: that it will continue to behave the same post-treatment as during pre-treatment. This in fact is the main methodological constraint with the current data and the research questions. On the other hand, measurements on the indicators of interest are present, both pre- and post-intervention. Hence, we can perhaps identify trends in growth, positive or negative, before and after 2008, the seminal year during which the crisis erupted.

In brief, we may think of the financial crisis as the policy treatment. By definition the crisis is an exogenous event that produces a global shock, and thus covers the whole population of the countries examined. The objective is to establish whether the crisis has had an impact on the levels of satisfaction of the people surveyed. In my analysis I use an Ordinary Least Squares (OLS) fixed effects regression specification called Interrupted Time Series.

Considerations when comparing two periods

Here I discuss what should be considered when measuring the levels of individual satisfaction between two periods. I compare how the four types of satisfaction grew between 2008 and 2010. Following the classification suggested by Langbein and Felbinger (2006, p. 118), the data structure is of the "One group before-after design" type. I utilize this design because the program is applied horizontally.
That is, all units are potentially exposed to the treatment. In our case, the treatment is the financial crisis which has potentially affected all individuals in the countries surveyed. The impact of the financial crisis on satisfaction is measured as the difference in the mean levels of satisfaction of individuals before and after the crisis, controlling for certain variables. The ESS survey conducted in each country is using a random representative sample of individuals during each round. In other words the individuals interviewed in one round are not the same ones the round that follows. Thus we are not able to reduce our estimates' bias by eliminating all fixed individual characteristics, which could be achieved when analysing panel data. The other weakness in the design is that the surveys in these two rounds occur at different times. Hence all exogenous factors during one period that might influence ones' behaviour are different the next. Therefore it is natural to expect a "temporal heterogeneity" of conditions, and consequently of behaviour. The difference of the 2 rounds calculation performed here assumes nonetheless a "temporal homogeneity" amongst the individuals surveyed, which is a weak assumption as discussed earlier. On the other hand, the random selection of the respondents in each round helps perhaps to cancel out the potential differences in individual characteristics, on average. Longer term analysis In contrast to the previous short term analysis, here I analyze more than 2 periods of collected responses concurrently; four before and five after the financial crisis. Again, following the classifications by Langbein and Felbinger (2006, p.120) and Shadish et al. (2002, p.175), 9 the data structure I examine is once more of the "One group before-after design" type with the element of an Interrupted Time-Series added to it. It is depicted as follows: As discussed previously, the treatment (the financial crisis) is applied horizontally. All individuals surveyed after 2008 were potentially exposed to the consequences of the financial crisis. This again assumes temporal homogeneity meaning that, in the absence of the crisis, the development of satisfaction levels would have continued to evolve as in the precrisis period, linearly. 10 As there are now observations in more than two time periods, both before and after the intervention, this design can reveal changes in the growth of satisfaction that could perhaps be attributed to the financial crisis. It measures dynamically how satisfaction preferences evolve throughout the period under scrutiny and how the financial crisis may have contributed to this growth, be it positive or negative. Interrupted time series models One method, of establishing perhaps a causal relationship between the 2008 crisis and the levels of satisfaction, is by comparing growth trends of satisfaction before and after the crisis. In cases where we have a treatment covering the entire population at hand, we estimate the slopes of the dependent variable of interest before and after the intervention and compare the two. In a nutshell, if the difference is statistically significant with a plus sign, this indicates a positive effect; if the sign is negative, it indicates the opposite. 11 Wagner et al. (2002, p. 301) and Lopez-Bernal et al. (2017) and apply an Interrupted Time Series regression or segmented regression model. Penfold and Zhang (2013, p.S38) assert that analysis based on ITS "…is arguably the strongest quasi-experimental research design". 
Zhang et al. (2009, p. 143) concur: "ITS designs, especially when they involve analysis of comparison series, are the strongest observational designs to evaluate changes caused by interventions because they can account for the pre-intervention level and trend of the outcome measures". A limitation of the method is that it requires a minimum of 8 observational instances in time before and after the intervention (Penfold and Zhang, 2013, p. S43). I only have about half such instances in my data (4 rounds before and 5 rounds after). Although estimations can be calculated, they result in coefficient confidence intervals that are too narrow, consequently exaggerating precision. In such cases it is recommended that one runs the models using repeated samples via bootstrapping (Efron and Tibshirani 1993; cited in Zhang et al. 2009, p. 146). I have therefore applied bootstrapping (400 repetitions) in all ITS models reported in the paper; additionally, instead of the standard 95%, a 99.99% level is used in all models when calculating confidence intervals, so the statistical significance reported is on the conservative side.

Abiding by the standard rules of ITS analysis, the following segmented regression model is used:

Y = b0 + b1T + b2D + b3P + controls + error,

where Y depicts levels of satisfaction. T is the time variable (here the round); it is continuous, with values ranging from 1 to 9. Its coefficient b1 is interpreted as the slope of satisfaction for the period before the crisis (rounds 1-4). D is a dummy variable depicting the two periods before and after the crisis, with values 0 if the round is one of 1-4 (2002-2008) and 1 if the round is one of 5-9 (2010-2018). Its coefficient b2 denotes the change (difference) in the mean value of satisfaction immediately after the crisis, that is, between round 4 (2008) and round 5 (2010). P is another time variable; it represents the period following the 2008 crisis. The coefficient b3 denotes the difference in slope (growth) after the crisis versus the slope before. To find the slope for the period after the crisis we simply add b1 + b3. Notwithstanding the limitations of the evaluation design discussed earlier, this impact indicator takes into consideration the growth trends of satisfaction dynamically. Table 2 shows the relevant ITS estimations for each of the four dependent variables using the individual controls of Table 1.

Models using all country data together

The individual controls are statistically significant in almost all cases, ceteris paribus. Compared to women, men seem to be on average more satisfied with all four indicators under scrutiny. The more religious activity you pursue, the more satisfied you are with the government and with its three policy areas. The same positive correlation is found between satisfaction and political orientation: the more conservative you are on the left-right political axis, the more positively you evaluate the government's policies, but also the government in general. Age has a curvilinear relationship with satisfaction. At a younger age you tend to be more critical, but as you grow older your opinions on how governments perform become perhaps more pragmatic and realistic. The happier you are with your own life in general, the more satisfied you are with the government and its policies. The opposite is reported when it comes to health.
The worse subjective health level you report, the less satisfied you are. The same applies to personal finances. The more difficulties you have making ends meet, the less satisfied you are with the ones who govern you and the policies they implement. In 2 individual controls the results are not as uniform however. Social activities correlate negatively with government satisfaction, negatively with economic policies, positively with health and seem to have no associations with education in a statistically significant way. In addition, those that are highly educated have a critical view on health services and education, but are satisfied with the government in general and with its economic policies. Coefficients of the independent variables of interest T, D and P To reiterate, in the absence of the crisis, the basic assumption is that satisfaction would have continued to grow between 2010 and 2018 (after the crisis) at the same pace as during the pre-crisis period, that is, between 2002 and 2008. Lopez-Bernal et al. (2017, pp.350-351, Fig. 2) present six different ITS impact models based on the immediate level of the dependent variable just after an intervention, as well as its slopes before and after the intervention. 13 As discussed above, b 1 depicts the growth of satisfaction from 2002 up until before the crisis. In all four models it comes out statistically significant and is gradually increasing across all 14 countries in our database, on average. However, the effects after the crisis are mixed. Immediately after, the satisfaction with the government and with its economic policies-depicted by b 2 dropped; it increased for policies regarding health and education. Thereafter the situation again reversed somewhat. The growth in satisfaction (b 3 ) after the crisis both regarding the government in general and with its economic policies -was on average higher during the period of 2010-2018 compared to 2002-2008. For health and education policies satisfaction still grew, but at a lower pace compared to the pre-crisis rate. Individual country analysis Although the previous analysis provides on aggregate some insight on the potential effects of the 2008 crisis on satisfaction in these 14 countries, we still do not have a clearer picture of the dynamic process that has taken place within each country. Note that the country dummy coefficients do not always have the same sign across the 4 models and their mean satisfaction differences come out statistically significant. Following Venetoklis (2019, pp. 3042-3043), for each of the four dependent variables of interest, I run 14 separate Interrupted Time Series OLS regressions. With these I depict the slope (growth rate) before the crisis, the change -if any -in the levels of satisfaction during the two-year period immediately after the crisis and the difference in slopes (growth rates) between the period before and after the crisis. In essence, I interact the country dummy with the control variables in each of the Models 1-4. Tables 3, 4, 5 and 6 show the results of the models for each dependent variable of interest (satisfaction with government, the economy, health services and education). Each table contains 14 models, one for each country examined. I list the 3 coefficients of interest, b 1 , b 2 and b 3 with their statistical significance. For b 2 I also calculate the respective elasticity, that is, how much satisfaction changed percentage wise from 2008 to 2010. 
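To make the estimation procedure concrete, the following minimal Python sketch illustrates how the segmented regression and the elasticity of the immediate effect could be computed from ESS-style data. It is an illustration only, not the author's original Stata code; the column names (stfgov for satisfaction with the government, round, cntry, and a handful of the controls from Table 1) and the bootstrap settings are assumptions made for the example.

```python
# Minimal sketch of the ITS (segmented regression) specification
# Y = b0 + b1*T + b2*D + b3*P + controls + error, fitted on pooled or per-country data.
# Column names and controls are hypothetical stand-ins for the ESS variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_its(df: pd.DataFrame, outcome: str = "stfgov") -> pd.Series:
    """Fit the segmented regression and return the coefficients of interest."""
    d = df.copy()
    d["T"] = d["round"]                      # ESS round, 1-9
    d["D"] = (d["round"] >= 5).astype(int)   # post-crisis dummy (rounds 5-9, 2010-2018)
    d["P"] = np.maximum(d["round"] - 4, 0)   # rounds elapsed since the crisis
    controls = "+ gender + age + I(age ** 2) + yrs_education + happiness"
    fit = smf.ols(f"{outcome} ~ T + D + P {controls}", data=d).fit()
    b1, b2, b3 = fit.params["T"], fit.params["D"], fit.params["P"]
    level_2008 = d.loc[d["round"] == 4, outcome].mean()
    return pd.Series({
        "b1_pre_slope": b1,                # growth of satisfaction before the crisis
        "b2_level_change": b2,             # immediate change between 2008 and 2010
        "b2_elasticity": b2 / level_2008,  # change relative to the 2008 level
        "b3_slope_change": b3,
        "post_slope": b1 + b3,             # growth of satisfaction after the crisis
    })

def bootstrap_its(df: pd.DataFrame, outcome: str = "stfgov",
                  reps: int = 400, seed: int = 1) -> pd.DataFrame:
    """Resample respondents with replacement to obtain conservative confidence intervals."""
    rng = np.random.default_rng(seed)
    draws = [fit_its(df.sample(n=len(df), replace=True,
                               random_state=int(rng.integers(2**31 - 1))), outcome)
             for _ in range(reps)]
    return pd.DataFrame(draws).quantile([0.00005, 0.99995])  # 99.99% interval

# Per-country models (one regression per country, as in Tables 3-6) could then be run with
# df.groupby("cntry").apply(fit_its, outcome="stfgov").
```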
14 Immediate/short-term effects To compare the immediate effect of the crisis on satisfaction I first divided the results in Tables 3, 4, 5 and 6 based on whether b 2 was statistically non-significant (no effect) or significant (effect). I then ranked both groups based on the b 2 value, from the smallest to the largest. This way I could classify those countries where the crisis had had no immediate effect, those in which the effect was negative and those where the crisis had increased levels of satisfaction, all in order of magnitude. Figures 1-4 show these country rankings. I ranked them based not on the absolute value of the potential effect (b 2 ), but on the percentage change (elasticity) to account for the variability in the levels of satisfaction amongst the 14 countries in 2008. 15 Note also that, for all three coefficients, I use the same scale on the Y axis so that the effect is comparable not only between countries but also between satisfaction types. The immediate effects of the crisis on the four indicators are evident. In only 8 instances out of the potential 56, 16 do we get a non-significant result for b 2 and for its respective elasticity. As far concerns for the significant effect go, the results are mixed. There are more negative responses in relation to satisfaction with the government in general and the economy in particular. The Portuguese and Spaniards were the most dissatisfied, with the Slovenians coming closely second. On the positive side, the Swedes and the Norwegians were the most satisfied, both with their government's operations and with its handling of the economy during the first two years after the crisis. The Germans were also quite content ( Figs. 1 and 2). Regarding satisfaction with health services and educational policies, governments seemed to have performed better immediately after the crisis. In most countries respondents' satisfaction jumped higher in 2010 compared to 2008. The highest growth in satisfaction was observed in the UK, although for education the growth was more moderate (Figs. 3 and 4). Long-term/dynamic effects The next 4 Figures show the long-term dynamic effect of the crisis on the 4 indicators of interest. As with Figs. 1 to 4, Figs. 5, 6, 7 and 8 use the same scale on the Y-axis and have been divided into two groups based on whether the differences (b 3 ) are statistically significant (those to the right) or not (left). Within the two groups, they have then been ranked based on the magnitude of the slope after the crisis (b 1 + b 3 ) per country, from the smallest to the largest (dark column). To capture the dynamic effect of the crisis I depict the growth in satisfaction both before (b 1 -light column) and after the crisis (b 1 + b 3 -dark column). The columns are shown next to each other to emphasize the difference in growth from one period to the other. Overall, after the crisis, satisfaction seems to have grown positively for the majority of cases. The slope of satisfaction with the government comes out positive in 9 countries and with the economy in 11. With health services and education, positive growth is reported in fewer countries (7 and 4 respectively) and, in addition, the magnitude of growth is much smaller. On aggregate, we have in 8 instances out of 56 (14,28%) similar (non-significant) growth before and after the crisis, in 31 instances (55,36%) statistically significant positive growth and in 17 (30,3%) statistically significant negative growth. 
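A rough sketch of the grouping and ranking step described above is given below. It assumes a hypothetical data frame of per-country ITS results (columns for b1, b2, b3, their p-values and the b2 elasticity); the significance threshold simply mirrors the conservative 99.99% confidence level used in the models.

```python
# Illustrative post-processing of per-country ITS results into the orderings
# behind Figures 1-8; column names are hypothetical.
import pandas as pd

ALPHA = 0.0001  # consistent with 99.99% confidence intervals

def short_term_order(results: pd.DataFrame) -> pd.DataFrame:
    """Split countries by significance of b2, then rank by the elasticity of the effect."""
    out = results.copy()
    out["b2_significant"] = out["b2_pvalue"] < ALPHA
    # non-significant group first, then significant countries from smallest to largest elasticity
    return out.sort_values(["b2_significant", "b2_elasticity"])

def long_term_order(results: pd.DataFrame) -> pd.DataFrame:
    """Split countries by significance of b3, then rank by the post-crisis slope b1 + b3."""
    out = results.copy()
    out["b3_significant"] = out["b3_pvalue"] < ALPHA
    out["post_slope"] = out["b1"] + out["b3"]
    return out.sort_values(["b3_significant", "post_slope"])
```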
By individually ranking the countries we can see that after the 2008 crisis, satisfaction growth with the government was found to be highest in Portugal and lowest in Sweden, with the economy it was highest in Slovenia and lowest in Norway, with health services it was highest in Germany and lowest in Slovenia, and finally, with educational services, the highest satisfaction growth was recorded in Norway and the lowest in the UK. In addition for each country, a line under the X axis evaluates the overall impact of the exogenous 2008 shock. If the country belongs to the non-significant difference group (on the left), there is no effect observed (O). If the difference between the pre-and post-crisis period slopes is statistically significant, the evaluation of the effect can be Negative (N) or Positive (P) based on the sign of the b 3 coefficient. This means that even if the sign of the slope after the crisis is positive the evaluation can still be negative. This is the case with 4 countries (UK, FR, NL, DE) in Fig. 5 depicting satisfaction with the government, with three countries (CH, DE, PL) in Fig. 6 depicting satisfaction with the economy, with three countries (FR, CH, PT) in Fig. 7 depicting satisfaction with health services, and with one country ( PL) in Fig. 8 depicting satisfaction with education. Hence, although the slope of satisfaction is mostly positive after the 2008 crisis overall, the net effect seems to be slightly less favorable. For the government a positive effect (P) is reported in 8 countries and the same goes for the economy. On the other hand, for health services as well as for education a positive effect (P) is found only in 4 countries. Summary and discussion In this study I examined the individual level of satisfaction in 14 European countries, for a 16-year period, from 2002 to 2018. The data analyzed was gathered from the European Social Survey. The indicators of satisfaction were linked to the performance of government overall and in relation to three specific policy areas; the economy, health services and education. During the period under scrutiny the exogenous shock generated by the global financial crisis of 2008 compelled many governments to implement different fiscal, monetary and reform policies to combat the severe recession that followed. With the available time series data, I measured how the crisis was associated with these indicators of satisfaction, both short-and long-term. The motivation for the analysis is twofold. First it is due to the COVID-19 global crisis which resembles somewhat the events of 2008, but on a much wider and severer scale. Governments are now under immense pressure to solve several problems, such as economic, health and education-related without delays. The losses in human life have easily surpassed those in many battle fields of the twentieth century and that, in a much shorter time span. Businesses-especially small and medium sized (SMEs) -have closed or are in the brink of closing, while unemployment has risen sharply. Schools and universities have had to lock their facilities while distance teaching has been enforced on a wide scale. Governments and public sector officials are constantly being evaluated on how they are handling the unprecedented calamity generated by this exogenous shock. 
Thus, examining not only the overall performance of governments during the 2008 crisis but also how they fared particularly in these three vital policy areas -the economy, health services and education-may provide us with a baseline to match and compare with the respective levels generated by the current COVID-19 crisis. Second, at a more theoretical level, policies influence people's wellbeing and behavior. Economic policies in particular play a vital role, especially in the light of exogenous events (Gahner-Larsen et al. 2019, p. 234). Governments need feedback mechanisms (e.g. through satisfaction surveys) from citizens on their merits and faults. This in turn influences future policy planning and implementation. Hence and irrespective of the COVID-19 or the 2008 crisis, examining how governments perform is of academic interest in itself. On aggregate, the results indicate that during the period immediately after the crisis, from 2008 up until 2010, individual satisfaction with governments in general and the economy in particular, dropped considerably; for health and education related policies satisfaction grew. After the crisis, the slopes of all indicators came out statistically significant with Table 7 Short (STE) and long-term (LTE) effects of the 2008 crisis on satisfaction a positive sign. Comparing the pre-and post-crisis slopes, satisfaction with the government in general and with the economy in particular grew faster after the crisis. The opposite was the case for health services and education policies. Their slopes grew as well, but at a pace slower than before the crisis. Applying the same analysis for each country separately, the results were mixed. There was no indication of any country with consistently low or high coefficient values for any of the four satisfaction indicators. In the four pairs of Figures for each type of satisfaction (Figs. 1 and 5, 2 and 6, 3 and 7, 4 and 8) we see interesting behavior patterns within each country. In the majority of cases (20 + 23 = 43 out of 56) if the short-term effect (STE) is negative (N), the long-term effect (LTE) is positive (P) and viceversa. Table 7 depicts these effects for each country and each type of measured satisfaction. Overall, short-and long-term effects of the 2008 crisis seem to have balanced out. The switch from short-term negative to long-term positive satisfaction could be interpreted as reflecting the successful efforts of governments to correct the immediate dissatisfaction felt by many due to the 2008 crisis. In contrast, the long-term reduction in satisfaction compared to the short-term positivity is not as easily explained. It could be that some individuals seeing the problems other countries faced applauded their own government's short-term performance in handling the crisis. Long-term however, they became more critical, both overall and regarding the three individual policies under scrutiny. There are some caveats in the study. They stem mainly from the limitation of the data utilized and the implemented analytical method. First, external validity is not achieved. We cannot generalize the findings because of the low number of countries in the data. Nonetheless, by choosing only those countries that participated in all the survey rounds, I created a longer time series cross-sectional data set, and thus was able to run the ITS regression models. Furthermore, the ITS design requires at least 8 observational instances for the two periods, before and after the exogenous intervention. 
Since in my dataset I had only 9 in total (4 + 5), I bootstrapped all the models to ensure that the confidence intervals of the variable coefficients were of the correct size. Consequently this made the statistical significance of the coefficients more robust. Lastly, there is the school of thought which argues that, regardless of survey results, people respond to their evaluation of governments based not on facts, but on pre-existing beliefs (Hvidman 2019, p. 265). I did not test this assertion. This may indeed be a topic for future research wherein one combines evaluative responses on specific government policies with general perceptions on the functions and ethics of public sector officials and politicians. At the beginning of August 2020 there were almost 20 million people in 213 countries that were infected with the virus; approximately 725 thousand had perished. In less than four months later, by the end of November 2020 the respective number of infections globally had increased threefold to 60 million, whereas the number of deaths had almost doubled, to a shocking 1.4 million. In the 14 countries I examine in this study, there were more than 9.3 million people infected and 225 thousand dead. The lockdown is going to shrink global economic growth dramatically, with the GDP in many countries estimated to drop by an average of 6% to 8% during 2020 (IMF, 2020, p.v). In its latest report, the European Commission (2020) forecasts that the European economy in particular will contract by an average of 8.75% in 2020, before recovering at an annual growth rate of 6% in 2021. 17 In Table 8 the mortality rate per million inhabitants ranges between 58 in Norway to 1373 in Belgium. The same variability is observed in the GDP growth. During the summer of 2020, it ranged between -4.6% in Poland and -10.9% in Spain. In its forecasts for 2021, the European Commission predicts that all the economies of the 14 countries will recover, at different paces. The net GDP loss during the next 2 years will be-3.8% for Spain and Portugal, but only-0.3% for Poland and-0.5% for Switzerland. Based on these indicators we can safely assume that that peoples' satisfaction levels will vary from country to country, both during and after the coronavirus pandemic. In countries such as Finland and Norway, where the death toll per million is relatively low, satisfaction with health services could end up being much higher compared for example to Belgium, Spain, the UK or France. Further, satisfaction with the economy will most probably be inversely correlated to the magnitude of the net GDP loss during 2020 and 2021. This is because reduction in GDP usually means revenue losses, bankruptcies, higher unemployment and fewer investments. In addition, both the COVID-19 number of victims and the GDP figures may also affect satisfaction with the government, in general. Finally, when it comes to the satisfaction with education, it is difficult to evaluate in advance how it will be affected by the COVID-19 pandemic. It will perhaps depend on the operational instructions given by each country's ministry of education, on the technological infrastructure already in place for distance learning in each country, on whether employers encourage work from home or other arrangements. The European Social Survey announced that during the fall of 2020 it will begin conducting its 10 th round of surveys with at least 26 countries participating. 
Although we cannot have a complete time series data for which we could examine past trends in their behavior, a simple pre-and post-crisis data set with responses from the 2018 and the 2020 surveys will be available from most of the participating countries. This will provide us with sufficient information to measure the short term effects of the pandemic on the topics investigated in this study so that a comparison of results can be made. 18 Funding Open Access funding provided by University of Turku (UTU) including Turku University Central Hospital. Data availability All data and code for all regression models (in Stata 14.2) and Figures (Excel) in the manuscript are available upon request. Code availability As above. Compliance with ethical standards Conflict of interest The author declare there is no competing interest. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creat iveco mmons .org/licen ses/by/4.0/.
Transmission Path Tracking of Maritime COVID-19 Pandemic via Ship Sailing Pattern Mining : Since the spread of the coronavirus disease 2019 (COVID-19) pandemic, the transportation of cargo by ship has been seriously impacted. In order to prevent and control maritime COVID-19 transmission, it is of great significance to track and predict ship sailing behavior. As the nodes of cargo ship transportation networks, ports of call can reflect the sailing behavior of the cargo ship. Accurate hierarchical division of ports of call can help to clarify the navigation law of ships with different ship types and scales. For typical cargo ships, ships with deadweight over 10,000 tonnages account for 95.77% of total deadweight, and 592,244 berthing ships’ records were mined from automatic identification system (AIS) from January to October 2020. Considering ship type and ship scale, port hierarchy classification models are constructed to divide these ports into three kinds of specialized ports, including bulk, container, and tanker ports. For all types of specialized ports (considering ship scale), port call probability for corresponding ship type is higher than other ships, positively correlated with the ship deadweight if port scale is bigger than ship scale, and negatively correlated with the ship deadweight if port scale is smaller than ship scale. Moreover, port call probability for its corresponding ship type is positively correlated with ship deadweight, while port call probability for other ship types is negatively correlated with ship deadweight. Results indicate that a specialized port hierarchical clustering algorithm can divide the hierarchical structure of typical cargo ship calling ports, and is an effective method to track the maritime transmission path of the COVID-19 pandemic. Introduction With the popularity of ship-borne automatic identification systems, the data of ship trajectory have increased exponentially, which provide data support for the analysis of ship sailing patterns. A lot of studies in the literature have performed cluster analysis on ship trajectory data in a certain area, and clarified the ship sailing pattern in this area. Based on existing ship sailing patterns, the real-time trajectory of the ship is predicted to realize ship tracking [1,2]. However, few studies in the literature [3] analyze the ship behavior mode from the perspective of ship berthing port. As the node of the shipping network, the port of call is an important factor to predict the navigation behavior of ships. Since the spread of the COVID-19 pandemic, the volume of maritime cargo transportation has shrunk significantly [4][5][6] in order to effectively prevent the spread of the pandemic at sea. However, affected by the internal demand of economic recovery in various countries, sea freight volume has steadily increased in the second half of 2020. In order to continuously and effectively monitor the spread of marine epidemics, it is very important to use the big data of ship automatic identification systems to monitor ship navigation behavior. In order to effectively monitor the ship's navigation behavior, the information of the ship's berthing port is mined according to the ship's speed and position, based on the typical cargo ship trajectory data provided by the ship-borne automatic identification system. 
According to classification of ship types and sizes, the classification model of specialized terminals (ports) is established, and the contribution of each ship type to port capacity (i.e., ship type importance) and the contribution of each ship scale to portspecific ship type capacity (i.e., ship scale importance) are calculated. The probability distribution of ships berthing at corresponding ports (ship type and size) can accurately reflect the behavior pattern of ships, which could provide an effective way to track the marine transmission path of the COVID-19 pandemic. This paper is organized as follows: a literature review is presented in Section 2. A classification model of ports with special purpose terminals are established in Section 3. Then, a simulation is employed and the results are discussed in Section 4. Finally, this paper ends with a conclusion including suggestions for future works. Construction of Cargo Transportation Network Considering Ship Type Concerning maritime transportation of bulk cargo and container cargo, cargo throughput (that is, the total amount of cargo loading and unloading at a port during a period of time) is mainly used as a measure of port importance. Based on the coal transportation data of China's coastal ports from 1973 to 2013, Ref. [7] construct a port space agglomeration evaluation model to evaluate the spatial agglomeration level and evolution law of coal transportation for ports. Based on the proportion of GDP in the area along the Maritime Silk Road in 2010-2013 and the proportion of regional port container throughput, Ref. [8] excavate hot spot ports along the Silk Road. Based on the GDP and container throughput of China's major coastal ports in 2015, Ref. [9] make use of complex network theory to construct a network evolution model under the dual factors of port attraction and interport maritime distance. Based on the container throughput data of important ports along the Maritime Silk Road in 1995, 2005, and 2015, Ref. [10] analyze the evolution process of the international shipping network of China by using the hub degree model, complex network method, and Hirschman-Herfindahl Index (HHI). Based on berthing data of global Ro-Ro ships in 2012-2014, Ref. [11] excavate the important domestic and international ro-ro terminals. Ref. [12] use statistics of the frequency of port calls in the maritime network based on the container shipping schedule of COSCO Container Lines and the Maersk Line in 2014, and assess the status of Asian ports in the maritime network by using the node importance research method in complex network theory. Ref. [13] make use of a clustering algorithm to identify abnormal berthing outside the port and anchorage based on the container ship mooring data of Shanghai Waigaoqiao Port in 2016. Ref. [14] construct a container shipping network in the Asian region based on the liner route, schedule, and capacity data, and analyze its structural characteristics and evolution model. Ref. [15,16] build a container shipping network based on shipping data of important liner companies around the world, divide the network level, and analyze its anti-jamming capability. Ref. [17,18] build a container shipping network, sort the network hierarchy based on the shipping data of the world's major container liner shipping companies, and analyze the impact of the navigation of the Arctic routes on the network. Based on global container liner shipping company route data from 2015 to 2016, Ref. 
[19] establish a global container shipping network to evaluate port importance. Ref. [20] build a global container shipping network based on the data of the world's major container liner shipping companies in 2004 and 2014, and analyze its vulnerability. Ref. [21,22] analyze the central extent of the world's important container ports in the shipping network topology, based on the world's major container liner shipping company route data. Ref. [23] build a container shipping network based on the main route data of global container shipping in 2002-2014, and have measured the joint strength of the node space. Ref. [24] build a global container shipping network based on the cargo throughput of 25 major container ports around the world in 2010. The above literature only counts the distribution of important ports of a ship type and establishes a regional or global maritime network, without considering the importance of the ship type in relation to the port. Hierarchical Clustering of Ports Considering Ship Type Based on AIS data, Ref. [25] count the top 20 ports of the major cargo ships in the world in 2005 (including seven types of tankers, such as tankers, container ships, and bulk carriers) with a total capacity of 10,000 tons or more. Ref. [26] construct a global shipping network (including port transit information) of tankers, container ships, and bulk carriers, and evaluate the importance of ports in the entire marine transportation network according to node degree and intermediary centrality. Based on a shipping company's 1977-2008 ship (including container ships, dry bulk cargo, liquid bulk cargo, and other six types of ship) capacity, port of call, and route data, Ref. [27] build a variety of cargo ship shipping networks, and port throughput is taken as the importance of network nodes in sorting the network hierarchy. Based on the AIS data of the Maritime Silk Road, the BRICS countries, and the important economic development areas of the United States, Japan, and South Korea from 2013 to 2016, Ref. [28] construct a shipping network of tankers, container ships, and bulk carriers in the region, and analyze the evolution of time and space. Based on AIS data of global tankers, container ships, and bulk carriers in 2007, Ref. [29] build a shipping network and comparatively analyze the characteristics of various types of typical cargo ships. Ref. [30] build a global container ship, bulk carrier, and tanker shipping network based on AIS data of global cargo ship in 2015, and analyze the network's anti-interference ability. The above literature shares statistics on the distribution of important ports for a variety of cargo ships, but it does not compare the contribution of various ship types to port throughput, and ignores the contribution of ship scale to port throughput. The established shipping network cannot accurately reflect the importance of ship type and ship scale to the port. With the popularity of ship borne AIS equipment, full coverage of AIS base stations, and maturity of data management technologies, AIS data can accurately reflect global shipping port records and could be used to construct cargo transportation network. Based on global shipping port records, this paper comprehensively considers category type and size of port, builds classification model of specialized port and frequent port, and analyzes ship sailing pattern of typical cargo ships. Method The data in this paper are derived from on-board AIS equipment, transmitted by VHF, satellite, or network. 
In order to accurately describe the berthing ships, the set of ships, called S, is defined as follows: In Formula (1), m means the max number of ship types and {S i } is the set of i type ships. According to Table 1, it is defined as follows: In Formula (2), n means the max number of ship scales; S ij is the set of i type j scale ships, which is defined in Formula (3): In Formula (3), p means the max number of ship calls, and S ijk is the set of records of the i type j scale k ship, which is defined in Formula (4): In Formula (4), t means the time of port call for the ship, m ijkt is the maritime mobile identification code of the i type j scale k ship, c ijkt is the ship type, d ijkt is the total deadweight of the ship, and p ijkt is the port name. According to the statistical model of berthing ships in the port [31], we obtained records of typical global cargo ships with a deadweight of more than 10,000 tons in 2020, including 155,227 bulk carrier records, 240,944 container ship records, and 196,073 tanker records, as seen in Table 2. Classification Model of Ports with Special-Purpose Terminals In order to mine ship sailing patterns, referring to the classification of ship types and scales, the port capacity of the corresponding ships is calculated based on the ships' AIS data. Comprehensively considering the proportion of transport capacity of the major maritime cargo merchant fleet and the port capacity ratio for each ship type and size reflects, to a certain extent, the contribution of each ship type and size to port capacity, that is, the importance degree of ship type and size, which is an important dimension of the port hierarchy. n ijkl is defined as the arrival frequency at port l of the i type j scale k ship, which means the number of times the i type j scale k ship berths at port l within a certain period of time. d ijkl stands for the capacity at port l of the i type j scale k ship, the calculation method of which is shown in Formula (5). d ijl is defined as the capacity of port l for i type j scale ships, as seen in Formula (6); d il is defined as the capacity of port l for i type ships, as seen in Formula (7); d l is defined as the capacity of port l for all ships, as seen in Formula (8). Taking into account the proportion of transport capacity of the j scale cargo merchant fleet for i type ships, I ijl is defined as the importance degree of i type j scale ships for port l, which means the capacity ratio of i type j scale ships for port l, the calculation of which is shown in Formula (9). d il /d l is defined as the capacity ratio of i type ships at port l; taking into account the proportion of transport capacity of the i type cargo merchant fleet, I il is defined as the importance degree of i type ships for port l, which means the capacity ratio of i type ships for port l, the calculation of which is shown in Formula (10). Hierarchical Clustering of Ports of Call The set of ports of call P is as follows: According to Formula (11), the overall number of ports equals q, and the overall number of ship types equals m. The distance between two ports can be calculated by Formula (12), which stands for port similarity based on the importance degree of i type ships for port l. The set of special purpose ports of call P i is as follows: In Formula (13), the overall number of special purpose ports for i type ships equals q i , and the overall number of ship scales for i type ships equals n.
The distance between two ports can be calculated by Formula (14), which stands for port similarity based on the importance degree of i type j scale ships for port l. The algorithm flow is as follows: Input: the set of ports of call P (or P i ), the cluster distance measure function s (or s i ), and the cluster number k. Process: (1) assume that each sample point is a cluster, so that the cluster classes are C = C 1 , C 2 , . . . , C q and the number of clusters is q. (2) The distance between two clusters is calculated by mean (average) linkage; for instance, the distance between C j and C k is the average of the distances between all pairs of samples drawn from C j and C k . (3) C j and C k are merged into the same cluster if the distance between them is the smallest; steps (2) and (3) are repeated until the number of clusters equals k. Classification Model of Port Arrival Frequency Degree for Ships with Different Type and Scale n ijl is defined as the arrival frequency at port l of i type j scale ships, as seen in Formula (15). ∑ l n ijl is the total arrival frequency over all ports called at by i type j scale ships, and f ijl is defined as the arrival frequency degree of port l for i type j scale ships, the calculation of which is shown in Formula (16). Number Distribution of Ports with Special Purpose Terminals According to the statistics of typical cargo ship calls from January to October 2020, 3022 ports of call were obtained. According to the classification model of ports with special purpose terminals, the importance degree of i type j scale ships and the importance degree of i type ships for each port l are calculated by Formulas (9) and (10), as shown in Table 3. According to the model of special purpose port hierarchical clustering, these ports were classified into special purpose ports for bulk carriers, container ships, and tankers, the numbers of which are 1125, 684, and 1213, respectively. Meanwhile, according to the hierarchical clustering of the ports of call, bulk ports are divided into handy size, canal size, and cape size, the numbers of which are 642, 338, and 145, respectively. Container ports are divided into 1st to 3rd generation, 4th to 5th generation, and 6th generation, the numbers of which are 447, 149, and 88, respectively. Tanker ports are divided into handy size, canal size, and VLCC size, the numbers of which are 634, 416, and 163, respectively, as shown in Figures 1-4 and Table 4. In Figure 1, the hierarchical clustering of all ports is shown; they are divided into three categories: bulk ports, container ports, and tanker ports. In Figure 2, the hierarchical clustering of bulk ports is shown; they are divided into three categories: handy size bulk ports, canal size bulk ports, and cape size bulk ports. In Figure 3, the hierarchical clustering of container ports is shown; they are divided into three categories: 1st to 3rd generation container ports, 4th to 5th generation container ports, and 6th generation container ports. In Figure 4, the hierarchical clustering of tanker ports is shown; they are divided into three categories: handy size tanker ports, canal size tanker ports, and VLCC tanker ports. Arrival Frequency Degree Distribution for Ports with Special-Purpose Terminals According to Formula (16), we calculated the arrival frequency degree of the specialized ports of each scale and drew the distribution figures of the specialized ports' arrival frequency degree, as shown in Figures 5-7.
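As a rough illustration of the computations behind Formulas (5)-(14), the sketch below derives per-port capacities and importance degrees from berthing records and then applies average-linkage ("mean link") hierarchical clustering to the resulting importance vectors. The record values are invented placeholders, the importance degree is approximated here as a simple capacity ratio (the fleet-capacity weighting mentioned above is omitted), and scipy's standard implementation stands in for the clustering procedure described in the algorithm flow.

# Illustrative sketch only; not the authors' dataset or exact formulas.
from collections import defaultdict
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Each berthing record: (ship_type, ship_scale, deadweight, port)
records = [
    ("bulk", "cape", 180_000, "Port Hedland"),
    ("bulk", "handy", 35_000, "Montreal"),
    ("container", "gen6", 150_000, "Shanghai"),
    ("container", "gen1_3", 30_000, "Kaohsiung"),
    ("tanker", "vlcc", 300_000, "Ras Tanura"),
]

ship_types = ["bulk", "container", "tanker"]

# d_il: capacity of port l for type-i ships; d_l: total capacity of port l.
d_il = defaultdict(float)
d_l = defaultdict(float)
for stype, scale, dwt, port in records:
    d_il[(stype, port)] += dwt
    d_l[port] += dwt

ports = sorted(d_l)
# Importance degree of type-i ships for port l, approximated as the capacity
# ratio d_il / d_l (analogue of Formula (10), fleet weighting omitted).
importance = np.array([[d_il[(t, p)] / d_l[p] for t in ship_types] for p in ports])

# Average-linkage clustering of ports on their importance vectors, then cut the
# dendrogram into k = 3 groups (bulk, container and tanker specialized ports).
Z = linkage(importance, method="average")
labels = fcluster(Z, t=3, criterion="maxclust")
for port, lab in zip(ports, labels):
    print(port, "-> cluster", lab)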
The ordinate indicates the arrival frequency degree of the specialized port, and the abscissa indicates the serial number of the specialized port. The specialized port numbers are arranged in descending order according to the port's frequency degree. The figure shows that a few ports are frequently called by ships, which belong to important specialized ports. In Figures 5-7, different colored legends stand for different ports, and the size of the histogram indicates the arrival frequency proportion for a certain port. In Figure 5a, for handy size bulk ports, the top 22 ports accounted for 44% of the frequency degree, the 23rd to 44th 17%, and the 45th to 66th accounted for 10%. In Figure 5b, for canal size bulk ports, the top 22 ports accounted for 57.5%, the 23rd to 44th 14%, and the 45th to 66th accounted for 9%. In Figure 5c, for cape size bulk ports, the top 22 ports accounted for 74%, the 23rd to 44th 14%, and the 45th to 66th accounted for 6%. In Figure 6a, for 1st-3rd generation container ports, the top 22 ports accounted for 42.5%, the 23rd to 44th 19%, and the 45th to 66th accounted for 10%. In Figure 6b, for 4th-5th generation container ports, the top 22 ports accounted for 62%, the 23rd to 44th 20%, and the 45th to 66th accounted for 10%. In Figure 6c, for 6th generation container ports, the top 22 ports accounted for 81%, the 23rd to 44th 15%, and the 45th to 66th accounted for 3%. In Figure 7a, for handy size tanker ports, the top 22 ports accounted for 46% of the total frequency degree, the 23rd to 44th 17%, and the 45th to 66th accounted for 8.5%. In Figure 7b, for canal size tanker ports, the top 22 ports accounted for 53.2%, the 23rd to 44th 14.5%, and the 45th to 66th accounted for 8.5%. In Figure 7c, for VLCC tanker ports, the top 22 ports accounted for 75%, the 23rd to 44th 12%, and the 45th to 66th accounted for 6.5%. According to the specialized port frequency distribution map of each scale, set th f ij equals to 0.005. According to the model of the frequent port calls division, 391 frequent ports of call for typical cargo ships are selected. The number of frequent bulk ports for handy size, canal size, and cape size is 51, 46, and 40, respectively. The number of frequent container ports for 1st to 3rd generation, 4th to 5th generation, and 6th generation is 51, 50, and 38, respectively. The number of frequent tanker ports for handy size, canal size, and VLCC size is 44, 40, and 31, respectively. The arrival frequency degree distribution for ports with special purpose terminals show that for a specialized terminal (port), ships frequently call at some ports, which are destination ports for the corresponding ship type (size), and are closely related to the prediction of ship navigation behavior. Geographical Distribution of Frequent Ports of Call for Typical Cargo Ships According to the location information of the ports in the frequent specialized port collections, the geographical distribution maps of frequent specialized ports are drawn. The frequency of port calls are differentiated by symbol size, as shown in Figures 8-11. In Figure 9, twenty-three of handy size bulk cargo ports are located in Asia, thirteen located in Europe, seven located in North America, five located in Africa, and three are located in South America. The figure shows that location advantage of Asian and North American ports for handy size bulk carriers are obvious, especially for port of Montreal, Chittagong, Yingkou, Thorold, Sault Ste. 
Marie, Port Colborne, Huanghua, Changzhou, Chiba, and Gresik. Seventeen of the canal size bulk cargo ports are located in Asia, nine located in South America, eight located in North America, six located in Australia, three located in Europe, and three are located in Africa. The figure shows that Spain, Australia, Brazil, South Africa, and China have significant location advantage for canal size bulk carriers, especially for port of Gibraltar, Newcastle, Guangzhou, Santos, Qinhuangdao, Richards Bay, Yantai, Zhuhai, Gladstone, and Las Palmas. Twenty-one of the cape size bulk cargo ports are located in Asia, eight located in South America, five located in Australia, three located in Africa, two located in North America, and one is located in Europe. The figure shows that Australian and Chinese ports have significant location advantage for cape szie bulk carriers, especially for port of Hedland, Tangshan, Port Walcott, Suzhou, Dampier, Lianyungang, and Rizhao. In Figure 10, twenty-nine of 1st to 3rd generation container ports are located in Asia, ten located in Europe, five located in Africa, five located in South America, and two are located in Australia. The figure shows that location advantage of East Asian and Southeast Asian ports for 1st to 3rd generation container ships are obvious, especially for port of Kaohsiung, Port Kelang, Gwangyang, Jakarta, Kobe, Ben Nghe, Manila, Laem Chabang, Keelung, and Yokohama. Twelve of 4th to 5th generation container ports are located in North America, nineteen located in South America, eight located in Asia, five located in Europe, four located in Australia, and two located in Africa. The figure shows that Asian, Panamanian, and US ports have significant location advantage for 4th to 5th generation container ships, especially for port of Hong Kong, Cristobal, Jeddah, Rodman Pier, Busan, Taboguilla Terminal, Oakland, Kill van Kull, and Coco Solo North. Fourteen of 6th generation container ports are located in Asia, twenty located in Europe, and four are located in Africa. The figure shows that China, the Netherlands, and Malaysia have obvious location advantage for 6th generation container ships, especially for port of Ningbo-Zhoushan, Qingdao, Shenzhen, Shanghai, Rotterdam, Tianjin, Xiamen, Tanjung Pelepas, and Dalian. In Figure 11, fourteen of handy size tanker ports are located in Europe, twelve located in Asia, seven located in South America, six located in North America, and five are located in Africa. The figure shows that Gulf of Guinea, East Asia, North America, and Europe have significant location advantage for handy size tankers, especially for port of Lagos, Lome, Al-Khair Terminal, Mailiao, Incheon, Galveston, Baytown, Nieuwport, Ijmuiden, Vlaardingen, and Gothenburg. Eighteen of canal size tanker ports are located in Europe, nine located in South America, eight located in Asia, four located in Africa, and one is located in North America. The figure shows Black Sea and America have obvious location advantage for canal size tankers, especially for port of Novorossiysk, Freeport, Ambarli, Istanbul, Sint Michielsbay, Haydarpasa, Icdas Port, Bullenbaai Terminal, and Fuikbay. Nineteen of VLCC size tanker ports are located in Asia, six located in Africa, four located in South America, one located in North America, and one is located in Europe. 
The figure shows that Persian Gulf, East Asia, and South Africa have obvious location advantage for VLCC size tankers, especially for port of Singapore, Fujairah, Ras Tanura, Khor Fakkan, Durban, Das Island, Shuaiba, Cape Town, and Ju'aymah. Port Call Probability Distribution for Typical Cargo Ships According to ship type and scale list in Table 1, specialized port call probability distributions for typical cargo ships are exhibited in Figures 12 and 13. In Figure 12a, bulk port call probability for bulk carriers increases with deadweight growth. However, container port call probability for bulk carriers decreases with deadweight growth, and tanker port call probability for bulk carriers decreases with deadweight growth. In Figure 12b, container port call probability for containers increases with deadweight growth. However, bulk port call probability for the container ship decreases with deadweight growth, and tanker port call probability for the container ship decreases with deadweight growth. In Figure 12c, tanker port call probability for tankers increases with deadweight growth. However, bulk port call probability for tankers decreases with deadweight growth, and container port call probability for tankers decreases with deadweight growth. For all types of specialized ports (regardless of ship scale), port call probability for the corresponding type ship is higher than for other type ships. Moreover, port call probability for corresponding type ships are positively correlated with ship's deadweight, while port call probability for other type ships are negatively correlated with ship's deadweight. In Figure 13a, bulk handy size port call probability for bulk carriers decreases with deadweight growth. However, bulk canal size port call probability for bulk carriers increases with the deadweight growth, and bulk cape size port call probability for bulk carriers increases with deadweight growth. In Figure 13b, container 1st-3rd port call probability for container ships decreases with deadweight growth. However, container 4th-5th port call probability for container ships increases with deadweight growth, and container 6th port call probability for container ships increases with deadweight growth. In Figure 13c, tanker handy size port call probability for tankers decreases with deadweight growth. However, tanker canal size port call probability for tankers increases with deadweight growth, and tanker VLCC port call probability for tankers increases with deadweight growth. For certain types of special ports (considering ship scale), port call probability for the corresponding ship scale is higher than for other ships. Moreover, port call probability is positively correlated with ships' deadweight if port scale is bigger than ship scale. Otherwise, port call probability is negatively correlated with ships' deadweight. Discussions Based on the specialized port classification model, berthing ports for typical cargo ships are divided into specialized bulk, container, and tanker ports. The number of each specialized port are 1125, 684, and 1213, respectively. What's more, bulk ports are divided into handy size, canal size, and cape size ports, the number of which are 642, 338, and 145 respectively; container ports are divided into 1st to 3rd generation, 4th to 5th generation, and 6th generation container ports, the numbers of which are 447, 149, and 88 respectively; tanker ports are divided into handy size, canal size, and VLCC size, the number of which are 634, 416, and 163 respectively. 
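The port-call probability statistic summarized in Figures 12 and 13 is, in essence, the conditional share of a ship class's calls that fall at each category of specialized port. A minimal sketch of that calculation follows, using made-up call counts rather than the paper's data.

# Illustrative computation of P(port category | ship type, deadweight class).
from collections import Counter, defaultdict

# One tuple per observed call: (ship_type, deadweight_class, specialized_port_category)
calls = [
    ("bulk", "cape", "bulk_port"), ("bulk", "cape", "bulk_port"),
    ("bulk", "handy", "bulk_port"), ("bulk", "handy", "container_port"),
    ("container", "gen6", "container_port"), ("container", "gen6", "container_port"),
    ("tanker", "vlcc", "tanker_port"), ("tanker", "handy", "bulk_port"),
]

def call_probability(calls):
    """Share of each port category among the calls of every (type, scale) class."""
    totals = Counter((t, s) for t, s, _ in calls)
    joint = Counter(calls)
    prob = defaultdict(dict)
    for (t, s, cat), n in joint.items():
        prob[(t, s)][cat] = n / totals[(t, s)]
    return prob

for key, dist in call_probability(calls).items():
    print(key, dist)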
Calculating the arrival frequency degree of port for each specialized port, the results indicate that top 66 handy size bulk ports account for 71%, canal size bulk ports account for 80.5%, and cape size bulk ports account for 94%. Top 66 1st to 3rd generation container ports account for 71.5%, 4th to 5th generation container ports account for 92%, and 6th generation container ports account for 99%. Top 66 handy size tanker ports account for 71.5%, canal size tanker ports account for 76.2%, and VLCC tanker ports account for 93.5%. Based on the model of frequency of port of call, top 391 important ports of call for a typical global cargo ship in 2020 are mined by setting arrival frequency degree threshold, which account for 80% of the total deadweight tons of all ports. The number of frequent bulk ports for handy size, canal size, and cape size are 51, 46, and 40 respectively; the number of frequent container ports for 1st to 3rd, 4th to 5th, and 6th generations are 51, 50, and 38 respectively; the number of specialized ports for handy size, canal size, and VLCC size are 44, 40, and 31 respectively. Conclusions For all types of specialized ports (regardless of ship scale), port call probability for corresponding ship type is higher than other ships. Moreover, port call probability for the corresponding ship type is positively correlated with the ship deadweight, while port call probability for the other type ship is negatively correlated with ship deadweight. For certain types of special type ports (considering ship scale), port call probability for the corresponding ship scale is higher than other ships. Moreover, port call probability is positively correlated with ship deadweight if port scale is bigger than ship scale. Otherwise, port call probability is negatively correlated with ship deadweight. According to port call probability distribution of typical cargo ships, all possible destination ports' geographical distribution for specific ship types and ship scales can be clearly shown, which provides an effective way for tracking maritime transmission path of the COVID-19 pandemic. In future research, we will collect the cases of marine COVID-19 pandemic transmission and verify the real effect of this model in the tracking of marine pandemic transmission paths through simulation experiments. Conflicts of Interest: The authors declare no conflict of interest.
v3-fos-license
2020-01-02T21:11:31.586Z
2019-12-18T00:00:00.000
240920752
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://www.researchsquare.com/article/rs-9732/latest.pdf", "pdf_hash": "5c41042bbcab21a175dc2ad3c04dad0bae414239", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44012", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Environmental Science" ], "sha1": "89a00b3fa7b03b113e3340dc63395a89e5300eb9", "year": 2020 }
pes2o/s2orc
On-farm management and participatory evaluation of pigeonpea (Cajanus cajan [L.] Millspaugh) diversity across the agro-ecological zones of Benin Republic Background: Pigeonpea is a multipurpose food legume that contributes to food security in Benin. However, its production has declined and some landraces are threatened with disappearance. Previous investigations on pigeonpea in Benin have been restricted to South and Central Benin; therefore, pigeonpea diversity in northern Benin is still unknown. This study aimed to provide a better knowledge of pigeonpea genetic diversity for its promotion and valorization. Methods: 500 producers of pigeonpea belonging to thirteen sociolinguistic groups were selected across 50 villages. Data were collected using methods and tools of participatory research appraisal. Folk nomenclature, taxonomy of pigeonpea and the seed system were investigated. The distribution and extent of pigeonpea landraces were evaluated using the Four Square Analysis method. A comparative analysis of pigeonpea use categories, production systems, production constraints, farmers' preference criteria and participatory evaluation of existing landraces across agro-ecological zones was carried out. Result: Folk nomenclature and taxonomy were mainly based on seed coat colour and seed size. Seven pigeonpea use categories were recorded, including sacrifice, grain processing and soil fertilization. The results showed that the pigeonpea seed system is informal. Based on seed characteristics, fifteen landraces were recorded, of which seven were new. A high rate of landraces threatened with disappearance was observed across the ecological zones. Ten constraints affecting pigeonpea production in Benin were identified, with pests and diseases among the most important in all agro-ecological zones. This study revealed that pigeonpea cultivation is increasing in the Sudanian zone. Landraces to be produced should be selected on the basis of 11 farmers' preference criteria, among which precocity and resistance to pests and diseases were the most important in the three ecological zones, together with adaptability to any type of soil in the Sudanian zone. The participatory evaluation revealed the existence of a few performing landraces. Pigeonpea is a grain legume widely cultivated in Asia, Africa and Latin America [1]. Pigeonpea is an excellent source of protein (21.7 g/100 g), dietary fibres (15.5 g/100 g), soluble vitamins, minerals and essential amino acids [2,3]. Moreover, it is also used in traditional medicine: leaves, flowers, roots and seeds are used for the cure of bronchitis, sores and respiratory ailments, and also act as an alexeritic, anthelmintic, expectorant, sedative, and vulnerary [4,3]. In Benin, pigeonpea is highly consumed in the South-East through the Adja cultural area and contributes to the improvement of household incomes [5]. The plant is used for soil conservation and weed management in the fields [6,7,5]. Despite the importance of pigeonpea [5], very little research effort has been undertaken to improve the production of the species. As a result, the potential yield of pigeonpea is estimated at 2,500 kg/ha, while the yield obtained in farmers' fields is estimated at 620 kg/ha in Benin [8]. This low yield could be due to the lack of improved varieties in Beninese agriculture [9]. Therefore, an exhaustive collection of the pigeonpea diversity cultivated at the country level is the basis for the development of any varietal improvement program and the implementation of a conservation strategy. Several studies have been carried out on pigeonpea diversity in Benin [9, 10, 11, 12].
However, all previous investigations on pigeonpea in Benin have been restricted to South and Central Benin [9, 10, 11, 12]. Therefore the pigeonpea diversity in northern Benin is still unknown. In addition, no comparative studies on pigeonpea production constraints across different ecological zones in Benin are not yet documented, varietal diversity as well as farmers' varietal preference criteria and their variation throughout ecological zones and sociolinguistic groups have been very little documented. While it is known that understanding the genetic diversity, uses, and distribution of orphan crops is essential in determining what to conserve and where to conserve, for sustainable utilisation [13,14,15]. Thus, there is important to dispose a comprehensive collection of pigeonpea genetic resources of Benin and to document all associated ethnobotanical knowledge by extensive survey [9,11]. Seeds are the lifeblood and foundation of a successful farming and a crucial element in the lives of agricultural communities [16]. The procedures, through which a cultivar is bred, produced, certified, stored, marketed and used which includes all the channels through which farmers acquire genetic materials and in interaction with the commercial seed industry is known as seed system [17]. Thus, the success of crop varieties introduction is tightly linked to the uses, biophysical conditions, the cropping systems in which the crop is integrated which vary across growing areas [10]. In developing countries where agriculture is the spearheading of the economy, improved varieties must be developed or simply searched for within the existing diversity. In both cases, a good knowledge of the existing varietal diversity and the agronomic performances of varieties are necessary [18,19]. Thus, farmers' participation in varietal selection process is determinant of variety adoption [20]. Moreover, documentation and identification of high-performance landraces based on farmer's varietal preference criteria will provide strategies to overcome constraints affecting pigeonpea production in Benin. Hence, it is important to evaluate the performance of pigeonpea existing landraces under participatory approach to enhance pigeonpea production and productivity contributing thereby to attain food security and reduce poverty. This study on pigeonpea aimed to: (1) document the different pigeonpea landraces in Beninese agriculture (2) compare seeds management and conservation systems of pigeonpea genetic resources and use categories across different ecological zones, (3) compare constraints associated with pigeonpea production and varietal preference criteria across different ecological zones and sociolinguistic groups and (4) evaluate in participatory way the performances of different landraces in relation to agronomic and culinary traits. Study area The study was carried out in Benin. With a population size of 10 008 749 habitants [21] the Benin is located in the inter tropical zone between parallels 6° 30 ' North and 12° 30' North latitude, and meridians 1° East and 30° 40' East longitude [22]. With an area of 114,763 km², Benin is limited to the north by the Niger River in the northwest by Burkina Faso, to the west by Togo, the south by the Atlantic Ocean and to the east by the Nigeria (Figure 1). 
The Republic of Benin is divided into three ecological zones: the Guinean zone in the South (6° 25' North latitude and 7° 30' North longitude), the Sudano-Guinean zone (7° 30' North latitude and 9° 45' North longitude) in the Central and the Sudanian zone (9° 45' North latitude and 12° 25 North longitude) in the north [23]. The Guinean and Sudano-Guinean zones are both located in moist agro ecological zone characterized by a subequatorial bimodal climate with two dry seasons and two rainy seasons. The Guinean zone is characterized by an annual rainfall varying between 1200 and 1500 mm/year. The temperature ranges from 24 to 30 °C. The Sudano-Guinean zone annual rainfall varies from 1100 to 1300 mm/year ( Table 1). The temperature in this zone varies between 25 and 34 °C. The Sudanian zone is located in the semi-moist agro ecological zone characterized by a unimodal climate pattern with one rainy season and one dry season. The annual rainfall varies between 900 and 1100 mm/year while the temperature ranges from 21 to 35 °C [23] (Table 1). After exploratory study in agricultural research institutions, visits to local and urban markets, discussion with farmers and sellers, the villages surveyed were selected based on the pigeonpea production, their accessibility and the manner to cover all sociolinguistic groups. A total of 50 villages were prospected ( Figure 1). Data collection The surveys were done using methods (group discussions, individual interviews and field visits) and tools (questionnaires) of participatory research appraisal following Dansi et al. [24]. Focus groups In each village, a group of 15 to 28 pigeonpea farmers has been identified and brought together with the help of administrative and/or local authorities (village chief, farmers' associations, etc.). Interviews were conducted with the help of local translators to facilitate discussions [25]. Prior to the meeting, farmers were requested in advance to bring samples of pigeonpea landraces they cultivate or knew about. After a brief presentation of the objectives to the farmers, they were asked to list all the pigeonpea landraces in cultivation, in the village. The distribution and extent of these landraces were evaluated using the participatory method of Four Square Analysis described by Brush [26]. This method permit to classify at village-level existing landraces into four groups (produced by many households on large areas, produced by many households on small areas, produced by small households on large areas and produced by few households on small areas). In agreement with the farmers, we agreed that a cultivar cultivated by few households is that grown by no more than 20% of farmers in the context of the village; and that cultivated on a small area is that cultivated on not more than 0.25 ha. The participatory evaluation of identified landraces for agronomic and culinary parameters was carried out according to Gbaguidi et al. [27]. The considered parameters were the productivity, vegetative cycle, cooking, sensitivity to pests and disease and sensitivity to storage insects. The two-level evaluation method described by Loko et al. [28] was used. In this approach and for a given trait, a landrace is scored 1 when it is performing and 0 when it is not. After that, local nomenclature, folk taxonomy and the vegetative cycle of landraces have been documented. According to Dansi et al. [29], farmers were asked to list all the constraints related to pigeonpea production. 
The identified constraints were prioritized in groups by identifying and gradually eliminating the most severe constraint. In the first step, farmers were asked to identify, among the constraints they had listed, the most critical one. The constraint thus identified was ranked first and eliminated from the list. The same procedure was repeated until the last constraint was ranked. Secondly, farmers were asked to list all the traits that could interest them and motivate them to continue growing pigeonpea. Using the same approach (gradual elimination of the most important criterion), the identified criteria were then prioritized. The discussions were free, open-ended and without a time limit being set. Household surveys After the group discussion, ten households per village were identified for individual interviews. In each household, the person interviewed was chosen by common agreement of the host couple, according to Christinck et al. [30]. Socioeconomic data (gender, educational level, age, years of experience in pigeonpea cultivation and household size); the biophysical resources, cultural practices and seed system (number of cultivated landraces, sowing time, crop type, cropping system, perception of the evolution of pigeonpea cultivation, fertilization, sources of labour, level of intervention in the production chain, pest and disease incidence and its management, and pigeonpea cropping areas for 2015, 2016 and 2017); the reasons for pigeonpea production; the different pigeonpea use categories; and pest incidence and its management were documented. According to its pattern, pest incidence was categorized by farmers as negligible (none), low, high or very high. Incidence was categorized as negligible when pests appeared in very low numbers, as low when the infestation caused growth retardation, as high when the infestation damaged flowers or pods, and as very high when the infestation caused the death of the plant. Data analysis Descriptive statistics were used to analyze the data. To avoid overestimation of pigeonpea diversity in each ecological zone, correspondences between vernacular names were established on the basis of seed characteristics (seed colour, colour pattern, colour pigmentation and seed eye colour) according to Mohammed et al. Thus, the importance of a constraint is determined by the formula described by Dansi et al. [28]: IMC = (NTV + MAC + PCO)/3, where NTV is the total number of villages in which the constraint was cited, MAC the number of villages where it was ranked as the major constraint, and PCO the number of villages where it was classified among the principal (first five) constraints. The same approach was used to rank farmers' varietal preference criteria. To compare socio-economic data, biophysical resources, cultural practices and the seed system from one ecological zone to another, analysis of variance (ANOVA) and the Tukey test were used for quantitative variables with Minitab 16 software, while the bilateral Z test was used for qualitative variables with Statistica 7.1 software. Before ANOVA, data were log-transformed (log(x + 1)) to homogenize variances. In order to determine a potential significant change in the cropping area from 2015 to 2017, analysis of variance was conducted. Socio-demographic characteristics of respondents In total, 500 pigeonpea-producing households were surveyed, including 190 in the Guinean zone, 200 in the Sudano-Guinean zone and 110 in the Sudanian zone. The age of the surveyed pigeonpea farmers ranged from 21 to 76 years, with an average of 45.9 ± 9.2 years. The majority (62.4%) of farmers were men.
The majority of pigeonpea farmers were found to be illiterate (43.4%), while 31.6% and 25% were found to have primary and secondary levels of education, respectively. The average household size was 6.4±2.1 members (ranging from 3 to 11 members). The experience year old was 15±8 years, on average ( Table 2). Significant differences in age of the surveyed pigeonpea farmers were observed across ecological zones. On average, farmers in the Guinean zone are older (48.7 years against 44 years) and more experienced than those in the Sudano-Guinean zone (18.4 years of experience against 16.5). The number of farmers with none, primary and secondary level of education varied between ecological zones. Local nomenclature Across the thirteen sociolinguistic groups surveyed in the study area, 50 different pigonpea local names were recorded in the local dialects. Referring to the various vernacular names identified, the generic name of pigeonpea varied according to sociolinguistic group and ecological zones (Table 3). In the Guinean and Sudano-Guinean zones, pigeonpea is called Hounkoun, Kloué or Klouékoun referring to Cowpea by farmers belonging to Fon and Mahi sociolinguistic groups while in the Guinean and Sudanian zones, pigeonpea is called Otili in reference to a pod-producing tree by farmers belonging to Nago and Dendi sociolinguistic groups. However, Bariba and Peulh sociolinguistic groups designated pigeonpea by Wotiri in reference to a pod-producing erected tree. Moreover, in Guinean zone, farmers belonging to Holly and Yoruba sociolinguistic groups call pigeonpea Otini. Pigeonpea is called Ekloui or Kloui by Adja sociolinguistic group. In Sudano-Guinean zone, pigeonpea is called Colo (meaning is unknown to farmers) by Idaasha sociolinguistic group while pigeonpea is called Tissi Tounan and Itoun by Biali and Somba sociolinguistic groups respectively, referring to a cowpea. Folk taxonomy In the study area, 5 criteria were used by surveyed farmers to identify pigeonpea landraces. The great majority of names (90.7%) given to pigeonpea had a meaning. Diversity of cultivated pigeonpea landraces Based on seed characteristics, fifteen pigeonpea landraces were idesntified in the study area ( Figure 2). At village level, the number of pigeonpea landrace ranges from 1 to 5 with an average of 2.7 ± 1. The highest number of landrace (5) Distribution and extent of pigeonpea landraces Within each ecological zone, the production was limited to specific districts and departments. In the Guinean zone, the production was restricted to the districts of (Table 12). Reasons for pigeonpea production and uses category Our study revealed that pigeonpea is produced for three main reasons depending on the ecological zones (Table 6). In the Guinean and Sudano-Guinean zones, nutritional value is the main source of motivation while in the Sudanian zone land fertilizing power is the main source of motivation. The third reason is the market value. The different pigeonpea uses categories were mainly concentrated on grains. Based on their fidelity level, pigeonpea is more used in medicine in the Guinean (FL = 19.5%) and Sudanian (FL = 23.9%) zones. According to famers, boiled leaves are used by oral route to treat malaria. Also, the decoctate of the leaves is used in bath to treat measles and is also used as an antibiotic to treat mouth's sores or tooth decay. The roots, when chewed, prevent the rise of snake venom, in the case of snake bite. 
The use of pigeonpea grains as an offering for food or symbolic purposes and in sacrifice to divinity was specific to the Sudano-Guinean and restricted to Holly and Nago sociolinguistic groups. While grain processing into donuts is specific to Guinean (FL = 4.3%) and Sudano-Guinean (FL = 2%) zones and restricted to Holly and Adja sociolinguistic groups. In these zones, pigeonpea are roasted and reduced to flour to sprinkle sauces as nutritional supplement by the first one or to make donuts by the second one. Consumption, weed control and land fertilization are common to all three ecological zones (Table 6). Cultural practices Pigeonpea was considered as an annual plant by most of surveyed farmers (93.2%). Only 6.2% of farmers considered this legume as a perennial plant. For the last one, plant is left in the field and is harvested the following year. The main pigeonpea farming activities included: ploughing, sowing, weed control, pod harvest, pod plugging and winnowing. Seeding and weed control were practiced by all the farmers. Pigeonpea is sown between April, May, June (73.6%) in intercropping with other seasonal crops (82.8%) or in pure stand (17.2%). Three sources of labour were observed. For farming activities, 13.2% of farmers used family labour, 73% combined family and friends labour while 13.8% used a combination of family, friends and jobber labour (Table 7). Land fertilization was not reported while only 14% of farmers included in this study used pesticide. The average grain yield in farmers' fields was estimated at 553.4±36.3 kg/ha. According to farmers, during the three last years, Sudano-Guinean zone were the largest cropping area followed by the Guinean zone while farmers in the Sudanian zone produced pigeonpea on a small cropping area (Table 7). Sowing was more realized between April, May and June in the Guinean and Sudano-Guinean zones (97.9% and 91% respectively) whereas it was more realized between June, July and August in the Sudanian zone (68.2%). Intercropping with other seasonal crops such as maize and millet was specific to Guinean (100%) and Sudano-Guinean (98.5%) zones while pigeonpea was more cultivated in pure stand in Sudanian zone (75.4%). Family and the friends was the main source of labour for various farming activities in the Guinean and Sudano-Guinean zones (87.9% and 61.5% respectively) while it was family (49.3%) in the Sudanian zone. Our results revealed that the average pigeonpea yield in the Sudanian zone is lower (522.3 ± 44kg/ha) compared to the Guinean and Sudano-Guinean zones (557.5 ± 15.9 kg/ha and 566.6 ± 35.8 kg/ha respectively). Seed system Three sources of seeds were observed. Farmers used seeds from previous harvest (60.2%) or friends (22%) or local market (17.8%). After each harvest, 67.8% of farmers stored seeds until scarcity at market while 32.2% of them sell seeds in local markets. Comparing seed system between ecological zones, previous harvest was the main source of seed in the Guinean and Sudano-Guinean zones (70% and 62.9% respectively) while friends was the main source (50.4%) in Sudanian zone and after each harvest, farmers stored more grains in the Guinean and Sudano-Guinean zones (70% and 84% respectively) while they were more immediately sold in Sudanian zone (65.5%) ( Table 7). Pigeonpea production constraints In total, 10 constraints were identified as affecting pigeonpea production. The long vegetative cycle was ranked as the major constraint in Benin following by pests and diseases and rainfall irregularity (Table 8). 
According to farmers' descriptions, pigeonpea production faces numerous constraints. Low productivity ranked seventh among these constraints, followed by the sensitivity to storage insects. All constraints were reported in the three ecological zones; however, their relative importance varied from one zone to another. The most important constraint in the Guinean and Sudano-Guinean zones is the long vegetative cycle, followed by the sensitivity to pests and diseases, while in the Sudanian zone pests and diseases followed by soil poverty were the most important constraints (Table 8). Incidence of pests on pigeonpea yield and control methods According to farmers, the impact of pests and diseases on pigeonpea production in farmers' fields varied from one zone to another (Table 9). The impact is low in the Guinean and Sudano-Guinean zones (52.6% and 42.5%, respectively) while it is high in the Sudanian zone (81.8%). Pest control was reported only in the Sudanian zone (63.7%). Three reasons were given by farmers for not controlling pests: the high price of pesticides (49.6%), the risk of intoxication (29.6%) and the lack of sprayers (20.8%). Evolution of pigeonpea production in Benin Overall, the majority of farmers (69.4%) reported a decrease in pigeonpea production in Benin. According to farmers, this downward trend mainly concerned the Guinean and Sudano-Guinean zones (75.79% and 85.5%, respectively). In these zones, the decrease in cropping area is highly significant (p < 0.001); indeed, the average cropping area was approximately 0.9 ha (Table 7). In contrast, farmers in the Sudanian zone reported an increase in pigeonpea cultivation; according to them, the reasons for this increase are the fertilizing power of the plant (89.1%) and weed control (10.9%). Farmers' preference criteria of pigeonpea Across the study area, 11 criteria, varying with the ecological zones and the different sociolinguistic groups, underlay the choice of pigeonpea landraces to be cultivated by the farmers. The most important criteria were precocity, resistance to pests and diseases, rapid cooking, adaptability to any type of soil, good taste and high productivity (Table 10). In the Guinean and Sudano-Guinean zones, precocity was the most important criterion, followed by resistance to pests and diseases, while in the Sudanian zone resistance to pests and diseases was ranked first, followed by adaptability to any type of soil (Table 10). Precocity came first among the criteria of all sociolinguistic groups except the Nago sociolinguistic group, for whom adaptability to any type of soil was the first criterion. Moreover, precocity, resistance to pests and diseases, rapid cooking, adaptability to any type of soil and good taste were the choice criteria of farmers belonging to the Bariba sociolinguistic group (Table 11). In addition to the choice criteria of the Bariba sociolinguistic group, farmers belonging to the Boo sociolinguistic group had a strong preference for varieties cultivable at any time of the year and resistant to storage insects, while those belonging to the Dendi sociolinguistic group preferred varieties with high productivity that are cultivable at any time of the year, and those belonging to the Peuhl sociolinguistic group preferred highly productive varieties resistant to storage insects. Lastly, precocity, resistance to pests and diseases, rapidity of cooking and adaptability to any type of soil were the criteria of farmers belonging to the Yoruba sociolinguistic group.
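For reference, the importance score used in the data analysis to rank both the production constraints (Table 8) and the preference criteria above, IMC = (NTV + MAC + PCO)/3, can be reproduced as in the short sketch below. The village rankings shown are invented for illustration only; they are not survey data.

# Illustrative computation of the importance score IMC = (NTV + MAC + PCO) / 3,
# where, for each criterion: NTV = number of villages citing it, MAC = number of
# villages ranking it first, PCO = number of villages placing it among the first five.
village_rankings = {
    "Village A": ["precocity", "pest resistance", "cooking time", "taste"],
    "Village B": ["pest resistance", "precocity", "soil adaptability"],
    "Village C": ["precocity", "soil adaptability", "productivity"],
}

def importance_scores(rankings, principal_cutoff=5):
    criteria = {c for ranked in rankings.values() for c in ranked}
    scores = {}
    for c in criteria:
        ntv = sum(c in ranked for ranked in rankings.values())
        mac = sum(ranked[0] == c for ranked in rankings.values() if ranked)
        pco = sum(c in ranked[:principal_cutoff] for ranked in rankings.values())
        scores[c] = (ntv + mac + pco) / 3
    # Highest score first, i.e. the most important criterion (or constraint).
    return dict(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))

print(importance_scores(village_rankings))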
Participatory evaluation of pigeonpea landrace grown in Benin The results revealed that for landraces identified simultaneously in the three ecological zones, none of them were performing for a given character simultaneously in the three ecological zones (Table 12). Moreover, none landrace were performing simultaneously for all 5 evaluated characters. Nevertheless, the Carder ekloui (Adja sociolinguistic group) only identified in Guinean zone combined 4 good performances (high productivity, rapid for cooking, resistant to pests and diseases, resistant to storage insects). Moreover, Carder ekloui (Adja sociolinguistic group) and Otili founfoun kékélé (Idaasha sociolinguistic group) showed high productivity in Guinean and Sudano-Guinean zone but showed low productivity in the Sudanian zone. These two landraces, however, showed resistance to pests and diseases. In addition, Klouékoun vôvô (Fon and Mahi sociolinguistic groups) showed high productivity, rapid for cooking, resistant to pests and diseases, resistant to storage insects and short vegetative cycle in Guinean and Sudano-Guinean zone, however, showed low productivity and susceptible to pests and diseases in the Sudanian zone (Table 12). Discussion Our study showed that pigeonpea generic names varied according to sociolinguistic group and ecological zones. Our findings are similar to those of Ayenan et al. [11] who distinguished respectively 2 and 3 infra-specific pigeonpea taxa. However, local names do not necessarily reflect the genetic history of landraces of crops because different names may be given to identical seeds of landraces or a single name may apply to heterogeneous crops [37]. Such a situation may contribute to under or over-estimate the diversity within a species [38,23,39,9]. So to avoid redundancies and optimizing the efficient conservation and sustainable use of pigeonpea, it is important to conduct morphological and molecular characterizations to avoid redundancies and establish equivalence between the local names [26,40,41]. Farmers use morphological aspect of seeds (coat colour, seed eyes colour, and seed size), plant type, seed origin and vegetative cycle for identification of folk varieties. These criteria of pigeonpea classification and identification are among the descriptors of C. cajan recommended by IBPGR and ICRISAT [42] and used by many authors in morphological characterization of this legume. Our study revealed that morphological aspect of seeds (in particular the seed coat colour) was the predominant criterion used by farmers to classify and identify pigeonpea landraces. The main reason is that seed coat colour is unique to each landrace while other traits may be commonly shared [15]. Our finding is contrary to those of Manyasa et Thus, given that the previous studies did not take into account the entire production area as insignificant as it may seem, a part of the existing pigeonpea landraces in Benin was left out. This finding suggests that extent of the study area affect varieties richness [45,15]. Thus a study that better reflects the existing diversity of cultivated species should not be restricted to the major production areas of the species. Our results also revealed that the on-farm level diversity of pigeonpea was specific to ecological zones. 
In fact, the same landraces haven't the same distribution and The fertilizing power of pigeonpea as main reasons for producing this legume reported in present study is not surprising because pigeonpea has significant position in dry land farming systems especially adopted by small and marginal farmers in many parts of world by fixing nitrogen, flexibility for mixed cropping or inter crop [46,47]. The use of pigeonpea leaves to treat various diseases such as malaria corroborates the observations made by Ayenan et al. [9] and Zavinon et al. [11] in Benin and those of Aiyeloja and Bello [48] and Oladunmoye et al. [49] in Nigeria. Also, the use of pigeonpea as weeds control has been reported by several authors in Benin [50,51,5]. However, the use of pigeonpea roots to prevent the rise of snake venom, in the case of snake bite reported in the current study has not been reported elsewhere. Also, grain processing into donuts identified in the current study has not yet reported by previous studies. Unfortunately, this technological ability of pigeonpea is weakened by its retention of oil. Thus, this possibility of transformation of pigeonpea must be explored and improved, like the soybean's transformation in cheese, in Benin. This action will help to reduce malnutrition in rural populations and improve in situ conservation of the existing pigeonpea diversity. Moreover, the use of pigeonpea grains as an offering for food or symbolic purposes and in sacrifice to divinity has not yet reported by previous work. All these findings are found to be sociolinguistic groups and ecological zones-dependent and suggest that pigeonpea farmers in Benin do not have the same knowledge on the use of pigeonpea. However, specific knowledge related to the plant part uses might be kept and transmitted within communities in some areas as a result of vertical knowledge transmission [52,15]. Knowing that integrating cultural practices of local communities permit an efficient on farm conservation [53,36], this specific use category of pigeonpea genetic resource show the potentiality of cultural approach for the conservation of this legume in Benin. Our study reveals that in the Sudanian zone, pigeonpea cultivation is increasing while it is in decreasing in the Guinean and Sudano-Guinean zone. In fact, the productivity of the smallholder farming system in this zone is under threat due to soil fertility decline [54]. Research in many parts of Africa including Benin has shown that legumes have the potential to sustain soil fertility in smallholder farming systems [55,56,47]. Thus, thanks to the project "Protection and Rehabilitation of The results showed that pigeonpea seed system in Benin is informal. Similar observations were made on pigeonpea in Tanzania [57] and India [16]. This informal seed system has the advantage to facility seed exchanges among farmers and among villages [17]. However, marketed seeds must deserve attention. In fact, seed acquisition from market does not guarantee genetic purity [58]. It is so important to make available to farmers good quality seeds in order to increase productivity of pigeonpea [58,10]. The association of pigeonpea with other crops has been reported in others countries such as Uganda [43] and Kenya [59]. After each harvest, the great majority of farmers stored seeds until scarcity at market before selling them. However, farmers in Sudanian zone sold immediately their seeds that help resolving urgent problems such as children education. 
These findings justify the fact that pigeonpea is an essential source of household income which can contribute to poverty reduction in Benin as reported by Dansi et al. [5]. In Benin, many factors negatively affect pigeonpea production. Long vegetative cycle followed by pests and diseases were the main constraints affecting pigeonpea production. Indeed, african pigeonpea were characterized by the late maturity [60,12]. According to farmers theses genotypes cultivation in sole crop occupies land that should be used for other crops. The pests and diseases ranked first in the Sudanian zone are not surprising. In fact, in this zone pigeonpea is more cultivated on pure land, which facilitates pests' attraction. Our findings confirm the observations made by Sarkar et al. [47] which revealed that intercropping system minimizes the attack of pest and diseases as compared to intercropping system. Farmers in this zone have limited access to pesticides and are suffering most from this production loss. Although the impact of pests and diseases was found to be low in the study area, their presence is the key indicator of the urgent need to develop strategies against these pests. Instead of the use of pesticides, an integrated pest management is recommended, through the combination of biological control based on the use of natural enemies of these pests and genetic control based on the use of tolerant or resistant cultivars [61,62,27]. As in the majority of leguminous where the attack of storage insects is a major constraint [63], surveyed farmers reported that seeds are sometimes subject to attack by storage insects. Farmers use some toxic products to protect their seeds. A sensitization of farmers or consumers for a purely biological conservation, as the use of small peppers, is highly recommended, as it is the case of Kersting's groundnut [64]. Curiously, low productivity ranked seventh among constraints. This suggests that low productivity represents only a small portion of the constraints mentioned by famers. Thus, low productivity is the direct consequence of the negative effects of the other constraints [59]. In view of the observed, the lack of improved varieties as a challenge to pigeonpea production. The availability of improved varieties and their distribution across the different ecological zones according to their specific needs can alleviate the constraints affecting pigeonpea production in Benin. Therefore, government should encourage small-scale enterprises to provide farmers with improved seeds. Farmer preference criteria take an important place in breeding program and facilitate the adoption of improved varieties [38,11]. Our study reveals that famers perceived precocity, resistance to pests and diseases, good taste and rapidity for cooking as the most important preferred traits. Similar observation was observed, on pigeonpea, by Mergeai et al. [59] in Kenya, Shiferaw et al. [65] in Tanzania, Changaya [66] in Malawi, Ogbe and Bamidele [67] in Nigeria and Ayenan et al. [9] in Southern Benin. All these preference criteria are correlated with identified constraints. This suggests a veritable link between these two parameters as reported by Odjo et al. [68] on Rice. The precocity as criterion is important for famers because short vegetative cycle varieties should certainly encourage them to produce pigeonpea. 
Indeed, in the global context of noticeable climate change, early varieties will give producers the guarantee that pigeonpea plants reach a sufficient level of vegetative development before the rains stop. High productivity as a criterion of varietal choice is also not surprising, as it is the trait most desired by breeders and farmers alike [69,28]. Our findings are, however, contrary to those of Zavinon et al. [11], for whom high market value is the main farmers' preference criterion. In fact, high market value cannot rank first among preference criteria, since it is only the result of the adoption of an improved variety chosen for one or another of the other criteria. Our study revealed that preference criteria varied across sociolinguistic groups; however, convergence in preference criteria between certain sociolinguistic groups was observed. This could be explained by cultural links and the intensive exchange of knowledge between these sociolinguistic groups, or by their common origin. For a given character, the same landrace does not show the same performance from one ecological zone to another. For instance, the landrace called Otili founfoun kékélé (Idaasha sociolinguistic group), which is perceived by farmers as highly productive in the Guinean and Sudano-Guinean zones, showed low productivity in the Sudanian zone. This may be due to the variability in soil types, fertility and organic matter turnover, soil nutrient dynamics [70], and water regime [71] across these ecological zones. In addition, the landrace called Klouékoun vôvô (Fon and Mahi sociolinguistic groups), which showed high productivity in the Guinean and Sudano-Guinean zones but low productivity in the Sudanian zone, reinforces the view that variability in soil types, fertility and organic matter turnover, soil nutrient dynamics or water regime explains these agronomic differences. The landrace Carder ekloui (Adja sociolinguistic group), identified only in the Guinean zone, deserves particular attention. According to farmers, this landrace combines four good performances (high productivity, rapid cooking, resistance to pests and diseases, resistance to storage insects) and appears to be a promising landrace. Unfortunately, it is threatened with disappearance. There is an urgent need to undertake ex situ as well as in situ conservation to preserve this landrace, as well as all those threatened with disappearance in Benin. All landraces identified in the current study must, however, be tested for the performances attributed to them by farmers. Morphological and molecular characterization is therefore highly recommended to help select suitable varieties for breeding programmes. Thereafter, association mapping of candidate genes/QTLs for desirable traits can be carried out and used in future marker-assisted breeding programs. Furthermore, breeding pigeonpea adapted to any type of soil and resistant to pests and diseases would be of dual interest to farmers in the Sudanian zone: it would enhance the value chain of this legume and also help to restore the fertilizing power of impoverished lands. In the meantime, taking farmers' preference criteria into account, the few well-performing landraces identified can be used in varietal exchange programs to enhance pigeonpea production in Benin.
Conclusions
Our study area showed a great varietal diversity of pigeonpea, with fifteen landraces identified based on seed characteristics.
Seven new landraces were found and some were specific to an agro-ecological zone. A highly significant decrease in cropping areas occurred in the Guinean and Sudano-Guinean zones. Some of the landraces are threatened with disappearance due to the several factors that constrain pigeonpea production, and they need to be considered under a specific conservation strategy to avoid diversity loss. The few well-performing landraces identified through participatory evaluation can be used in varietal exchange programs in order to mitigate the effects of these constraints. The development of new varieties based on farmers' criteria is important to enhance pigeonpea production in Benin. To develop new varieties, morphological and molecular characterization of the identified landraces is highly recommended to help select suitable varieties for breeding programs. In situ and ex situ conservation strategies on the one hand, and the preservation of traditional knowledge associated with pigeonpea on the other, are important to preserve landraces threatened with disappearance and to conserve pigeonpea diversity in Benin.
Funding Not applicable.
Acknowledgments We express our sincere gratitude to all farmers, chiefs of village, and leaders of farmer groups for their contributions to the success of this study. We would like to acknowledge the technical support of Falil BANI OROU KOUMA during the prospection and collection of landraces.
Availability of data and materials Raw and treated data generated during the study are available from the corresponding author on reasonable request.
Authors' contributions GK designed the study, collected and analyzed data and drafted the manuscript. AGF participated in the interview work. GD, LEYL, AD, CA and AD supervised data analysis and revised the manuscript. All authors read and approved the final manuscript.
Consent for publication Not applicable.
Competing interests The authors declare that they have no competing interests.
Table 8 Comparative table of ... Values that have no common letters are statistically different (p<0.05); ***p<0.001; ns: non-significant difference at the 5% level. GZ: Guinean zone; SGZ: Sudano-Guinean Zone; SZ: Sudanian Zone; TNV: Total Number of Villages in which the criterion is cited; MCR: Number of villages where the criterion is the major one or ranked first; PCr: number of villages in which the criterion was classified among the principal criteria, i.e., among the first five; Imp: Importance.
Different pigeonpea landraces cultivated across ecological zones of Benin.
v3-fos-license
2020-09-03T09:12:47.440Z
2020-08-31T00:00:00.000
224820510
{ "extfieldsofstudy": [ "Computer Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1099-4300/22/9/969/pdf", "pdf_hash": "0e62b85abd1a65822971c6faab7ce398eeea23eb", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44013", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "52a8ea9ef50badd01f86659b0b2a4dc98bbdf2e1", "year": 2020 }
pes2o/s2orc
On-The-Fly Synthesizer Programming with Fuzzy Rule Learning
This manuscript explores fuzzy rule learning for sound synthesizer programming within the performative practice known as live coding. In this practice, sound synthesis algorithms are programmed in real time by means of source code. To facilitate this, one possibility is to automatically create variations out of a few synthesizer presets. However, the need for real-time feedback makes existent synthesizer programmers unfeasible to use. In addition, sometimes presets are created mid-performance and as such no benchmarks exist. Inductive rule learning has been shown to be effective for creating real-time variations in such a scenario. However, logical IF-THEN rules do not cover the whole feature space. Here, we present an algorithm that extends IF-THEN rules to hyperrectangles, which are used as the cores of membership functions to create a map of the input space. To generalize the rules, the contradictions are solved by a maximum volume heuristic. The user controls the novelty-consistency balance with respect to the input data using the algorithm parameters. The algorithm was evaluated in live performances and by cross-validation using extrinsic benchmarks and a dataset collected during user tests. The model's accuracy achieves state-of-the-art results. This, together with the positive criticism received from live coders that tested our methodology, suggests that this is a promising approach.
Introduction
This manuscript explores fuzzy rule models for automatic programming of sound synthesis algorithms in the context of the performative artistic practice known as live coding [1,2]. Live coding is the act of writing source code in an improvised way to create music or visuals, arising from the computers' processing capacities that allowed for real-time sound synthesis around the new millennium. Therefore, the phrase "live coding" implies programming sound synthesis algorithms in real time. To do this, one possibility is to have an algorithm that automatically creates variations out of a few presets. A preset is a configuration of a synthesis algorithm together with a label, selected by the user, describing the resulting sound [3]. However, the need for real-time feedback and the small size of the data sets, which can even be collected mid-performance, act as constraints that make existent automatic synthesizer programmers and other learning algorithms unfeasible to use. Furthermore, the design of such algorithms is not oriented to creating variations of a sound, but rather to finding the synthesizer parameters that match a given one. State-of-the-art automatic synthesizer programmers apply optimization algorithms that receive a target sound together with a sound synthesis algorithm and conduct a search approaching the target. For example, in [4], the "sound matching" performance of a hill climber, a genetic algorithm, and three deep neural networks (including a long short-term memory network) are compared. At the beginning of the new millennium, diverse systems using interactive evolution were developed [5,6]. These systems represent the settings in genomes, which are then evolved by genetic algorithms that use human selection as the fitness function. Although they provide great capabilities, the selection of the sounds, as they have to be listened to, is time consuming; as such, their use in live coding is hard to manage. Timbre is the set of properties that allow us to distinguish between two instruments playing the same note with the same amplitude.
Some new approaches to timbre in sound synthesis [7] focus on models of instruments with "static" sound. Therefore, these approaches do not consider some elements of synthesizers, such as low frequency oscillators, which produce dynamically changing sounds over time (sometimes over several minutes). In [8], a methodology is presented that relates the spaces of parameters and audio capabilities of a synthesizer in such a way that the mapping relating those spaces is invertible, which encourages high-level interactions with the synth. The system allows intuitive audio-based preset exploration. The mapping is built so that "exploring the neighborhood of a preset encoded in the audio space yields similarly sounding patches, yet with largely different parameters." As the mapping is invertible, the parameters of a sound found in the audio space are available to create a new preset. The system works using a modification of variational auto-encoders (VAE) [9] to structure the information and create the mapping. By using VAE, parametric neural networks can be used to model the encoding and decoding distributions. Moreover, they do not need large datasets to be trained. This system works effectively as an exploratory tool in a similar sense to interactive-evolution-based approaches. However, its interface is still oriented to sound matching and exploring rather than to automatically producing variations (it might be an interesting feature though). Furthermore, the resulting encodings are difficult to interpret from a human (especially non-expert) perspective. A deep-learning-based system that allows for interpolation and extrapolation between the timbres of multiple sounds is presented in [10]. Deep-learning systems are a promising path for sound synthesis applications, although their training times still do not allow for real-time feedback. An algorithm, designed for live coding performance, that receives a set of labeled presets and creates real-time variations out of them is proposed in [3]. It also allows for the addition of new input presets in real time and starts working with only two presets. The algorithm searches for regularities in the input data, from which it induces a set of IF-THEN rules that generalize it. However, these rules only describe points that do not cover the whole feature space, providing little insight into how the preset labels are distributed. Here, we present an algorithm able to extend IF-THEN rules to hyperrectangles, which in turn are used as the cores of membership functions to create a map of the input feature space. For such a pursuit, the algorithm generalizes the logical rules, solving the contradictions by following a maximum volume heuristic. The user controls the induction process through the parameters of the algorithm, designed to provide the affordances to control the balance between novelty and consistency with respect to the input data. The algorithm was evaluated both in live performances and by means of a classifier using cross-validation. In the latter case, as there are no datasets, we used a dataset collected during user tests and extrinsic standard benchmarks. The latter, although they do not provide musical information, do provide general validation of the algorithm. Even though this is a purely aesthetic pursuit that seeks to create aesthetically engaging artifacts, it is surprising that the accuracy of the models reaches state-of-the-art results.
This, together with the positive criticism that the performances and recordings received, suggests that rule learning is a promising approach, able to build models from few observations of complex systems. In addition, to the best of the authors' knowledge, inductive rule learning has not been explored beyond our work [3,11], either for automatic synthesizer programming or within live coding. The rest of this manuscript is structured as follows: Section 2 introduces rule learning for synthesizer programming; Section 3 presents the algorithm that extends IF-THEN rules; Section 4 discusses user tests, cross-validation tests and the reception of the live performances and recordings; finally, Section 5 contains the conclusions.
Inductive Rule Learning for Automatic Synthesizer Programming
RuLer is an inductive rule learning algorithm designed in the context of live coding for automatic synthesizer programming [3]. It takes as input a set of labeled presets, from which a set of IF-THEN rules generalizing them is obtained. Examples of labels could be: "intro", if the preset is intended to be used during the intro of a piece, or "harsh", which could be the linguistic label describing the produced sound. The generalization process is based on the patterns found through the iterative comparison of the presets. To compare the presets, a dissimilarity function receives a pair of them and returns True whenever they are similar enough according to the specific form of the function and a given threshold. The dissimilarity threshold (d ∈ N) is established by the user. The algorithm iterates as follows, until no new rules can be created:
1. Take the first rule from the rule set (list).
2. Compare the selected rule with the other rules using the dissimilarity function (Section 2.1). If a pattern is found, i.e., the rules have the same class and the dissimilarity between them is less than or equal to the threshold d established by the user, create a new rule using the create_rule function (Section 2.2).
3. Eliminate the redundant rules from the current set. A rule r1 is redundant with respect to a rule r2 (of the same class) if, for all i ∈ {0, ..., N−1}, r1[i] is contained in r2[i].
4. Add the created rules at the end of the rule set.
Dissimilarity Function
The dissimilarity function receives two rules (r1, r2) together with a threshold d ∈ N and returns True if the rules have the same category and dissimilarity(r1, r2) ≤ d. It returns False otherwise. The parameter d is an input parameter of the algorithm. The dissimilarity function currently implemented in the RuLer algorithm counts the number of empty intersections between the sets at the corresponding entries of the rules.
Create_Rule Function
This function receives pairs of rules r1, r2 satisfying dissimilarity(r1, r2) ≤ d and produces candidate rules, which are accepted if: 1. No contradictions (i.e., rules with the same parameter values but a different label) are created during the generalization process. 2. From all the presets contained in the candidate rule, the percentage contained in the original data is greater than or equal to a ratio ∈ [0,1]. This number is also an input parameter of the algorithm defined by the user. For instance, ratio = 1 implies that 100% of the instances contained in a candidate rule have to be present in the input data for the rule to be accepted; ratio = 0.5 needs 50% of the instances, etc.
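To make the two functions concrete, the following is a minimal Python sketch of the dissimilarity count, the similarity test against the threshold d, and the redundancy test just described. The rule representation (a list of sets plus a label) and the function names are illustrative assumptions, not the published implementation.

```python
# Minimal sketch of the dissimilarity and redundancy checks described above.
# A rule is assumed to be a pair (antecedent, label), where the antecedent is
# a list of sets of admitted parameter values, e.g. ([{1, 2}, {4}], "intro").

def dissimilarity(r1, r2):
    """Count corresponding antecedent entries with an empty intersection."""
    return sum(1 for s1, s2 in zip(r1[0], r2[0]) if not (s1 & s2))

def similar(r1, r2, d):
    """True if both rules share the label and differ in at most d entries."""
    return r1[1] == r2[1] and dissimilarity(r1, r2) <= d

def redundant(r1, r2):
    """r1 is redundant w.r.t. r2 if every entry of r1 is contained in r2."""
    return r1[1] == r2[1] and all(s1 <= s2 for s1, s2 in zip(r1[0], r2[0]))

# Example: two presets with the same label that differ only in the first parameter.
r1 = ([{2}, {2}], "intro")
r2 = ([{3}, {2}], "intro")
print(dissimilarity(r1, r2))   # -> 1, so they match for any threshold d >= 1
print(similar(r1, r2, d=1))    # -> True
```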
Domain Specific Functions
Note that the dissimilarity and create_rule functions can be changed according to the objects being compared and the desired generalization. For example, for harmonic objects, we probably want to use a dissimilarity that looks at the harmonic content. For rhythms, temporal factors need to be addressed. See, for example, [12] for a comparison of rhythmic similarity measures.
RuLer Characteristics
The RuLer algorithm is designed to return all the existing patterns, expressing as rules all pairs of instances satisfying dissimilarity(r1, r2) ≤ d, as its main intention is to offer all possibilities for creating new instances. Therefore, it is possible for a single instance, let us call it r2, to be included in more than one valid rule, if r1, r2, and r3 are single rules satisfying dissimilarity(r1, r2) ≤ d and dissimilarity(r2, r3) ≤ d. To illustrate this case, consider the dataset of Table 1 (Table 1. Dataset to illustrate instances that appear in more than one rule; columns: Rule, Parameter 1, Parameter 2, Class). Notice that the combination [{2},{2},'intro'] is present in both rules. As mentioned, if this were not the case, one of the patterns might fail to be returned to the user. To illustrate this, consider the same dataset and let us use the Hamming distance (d = 1) as the similarity function. Then, suppose that the create_rule function, whenever a pattern is found, creates a rule taking the unions of the parameters of the respective rules and eliminates the component rules after producing the new one. With these conditions, comparing r1 and r2 produces the rule r1,2 = [{2,3},{2},'intro']. This rule will not produce another rule when compared with the remaining data. To avoid this, the create_rule and the dissimilarity function were conceived to return all the patterns found in the data. Regarding how d and ratio work, consider the simple set of individual rules presented in Table 2 (Table 2. Dataset to illustrate the effect of d and ratio). If d = 2 and ratio = 1/4, the single rule that models the dataset is the one at the mid part of Table 2. The number of allowed empty intersections among the single rules at the top of the table is two; then, every pair of rules can be compacted into a new rule during the process. As the ratio of single rules that have to be contained in the original data for any created rule is 1/4, the rule at the mid part can be created, as it contains all the instances in the original data, which are one third of the number of single instances covered by the rule (nine). Note that this is true if all seen values are: for the first attribute 1, 2, and 3; for the second attribute 4, 5, and 6; for the third attribute 6. If d = 2 and ratio = 1/2, the rule model extracted by the algorithm is the one presented at the bottom of Table 2. Here, the ratio of single instances contained in any rule that have to be in the original data is 1/2; therefore, the rule at the middle of Table 2 cannot be created. The parameter ratio is constant because it defines the level of generalization that the user of the algorithm wants to explore. The ratio allows for the extension of the knowledge base to cases that have not been previously used to build the model. If the user is more conservative, the ratio should be closer to 1. If the goal is to be more exploratory, lower ratios are needed. Finally, although no comparisons of computational time were carried out, the algorithm complexity serves to estimate its performance. If m is the size of the input data, the algorithm complexity is O(m * (m − 1)).
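The following Python sketch illustrates how a candidate rule formed by entry-wise union could be accepted or rejected under the ratio parameter. The way coverage is counted over the candidate's presets follows the textual description above; the concrete data handling and the function names are assumptions made for the example, not the authors' code.

```python
from itertools import product

def create_rule(r1, r2, data, ratio):
    """Merge two similar rules entry-wise and accept the result only if a
    sufficient fraction of the presets it covers occurs in the input data."""
    merged = [s1 | s2 for s1, s2 in zip(r1[0], r2[0])]
    covered = list(product(*merged))                    # every preset inside the candidate
    observed = {tuple(next(iter(s)) for s in ant)       # single-valued input presets of this class
                for ant, lab in data
                if lab == r1[1] and all(len(s) == 1 for s in ant)}
    hits = sum(1 for p in covered if p in observed)
    if hits / len(covered) >= ratio:
        return (merged, r1[1])
    return None                                         # candidate rejected

data = [([{1}, {4}], "intro"), ([{2}, {4}], "intro"), ([{1}, {5}], "intro")]
print(create_rule(data[0], data[1], data, ratio=0.5))   # -> ([{1, 2}, {4}], 'intro')
print(create_rule(data[0], data[2], data, ratio=1.0))   # -> ([{1}, {4, 5}], 'intro')
```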
This complexity considers the dissimilarity and create_rule functions described. It is better than the complexity of a previous version of the algorithm, O(2^m − 1), presented in [11].
FuzzyRuLer Algorithm
The FuzzyRuLer algorithm constructs a fuzzy rule set of trapezoidal membership functions out of logical IF-THEN rules. For that, it builds hyperrectangles (Section 3.1), which are the cores of the trapezoidal membership functions and, in turn, are used to fit the supports (Section 3.2).
Building Cores
To build the cores, the algorithm extends the sets contained at the entries of the logical IF-THEN rules to intervals between their respective minimum and maximum values. For example, [{1,4}, {3,5}, intro] becomes [[1,4], [3,5], intro], including all the values in between 1 and 4 as well as between 3 and 5. Then, instead of four values, we have a region to choose from! Next, the contradictions that might appear between the created intervals are resolved. A contradiction appears when two rules with different labels or classes intersect each other. Two rules r1 and r2 intersect if, for all i (i.e., the parameter placed at position i in the antecedent of the rule), there exists x in r1[i] such that y1 ≤ x ≤ y2 with y1, y2 ∈ r2[i]. If two rules with different classes intersect, it is enough to "break" one parameter to resolve the contradiction. For example, the contradiction between the rules r1 and r2 (at the top of Table 3 and depicted in Figure 2) can be resolved either as shown on the left or on the right of Figure 3 (Table 3. The contradiction between r1 and r2 can be resolved by "breaking" one parameter. Figure 2. Rule [[2,3], [1,5], harsh] intersects rule [[1,5], [2,4], calm]; harsh is represented by an "x" and calm by a "." in the plot). To select the partition, the Measure of each set of rules is calculated and the one with maximum value is selected. The set with maximum Measure value is selected as it is the one that covers a wider region of the feature space. While the inductive process of the RuLer algorithm is intended to create new points, the generalization process of the FuzzyRuLer covers the entire observed space; therefore, maximum coverage is the goal. The Measure of a single rule has two components, Extension (E) and dimension, defined in Equation (1). The Measure of a set of rules collects the individual measures of the rules, adding those that have the same dimension. It is expressed as an array containing the extension for each dimension. When two measures are compared, the greatest dimension wins. For example, (Extension = 1, dimension = 2) > (Extension = 4, dimension = 1). In the same way, (Extension = 1, dimension = 3) > (Extension = 100, dimension = 2; Extension = 100, dimension = 1). Table 4 presents an example.
Fuzzy Rule Supports
Once the cores are known, there are many possibilities for building the supports of the trapezoidal membership functions. Here, as the algorithm is designed for real performance, we construct the supports using the minimum and maximum values observed for each variable. In this way, the slopes of each trapezoidal membership function are defined automatically by how close the core is to the respective minimum and maximum. Thus, each rule covers the whole observed space and the supports are defined automatically by the cores, avoiding costly procedures that iteratively adjust the supports while the information is processed.
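As a rough illustration of the core-building step, the sketch below widens each rule entry to an interval, checks two hyperrectangles for intersection, and ranks alternatives by a (dimension, extension) pair. The exact Measure of Equation (1) is not reproduced here; the ranking used below is a simplified stand-in for it, and all names and data are illustrative.

```python
# Sketch of core construction for the FuzzyRuLer: entries become intervals,
# and overlapping cores with different labels signal a contradiction.

def to_core(rule):
    """Extend every set of values to the interval [min, max]."""
    return ([(min(s), max(s)) for s in rule[0]], rule[1])

def intersects(c1, c2):
    """Hyperrectangles intersect iff their intervals overlap in every dimension."""
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(c1[0], c2[0]))

def measure(core):
    """Simplified (dimension, extension): count and total length of non-degenerate intervals."""
    lengths = [hi - lo for lo, hi in core[0] if hi > lo]
    return (len(lengths), sum(lengths))

core_intro = to_core(([{1, 4}, {3, 5}], "intro"))   # ([(1, 4), (3, 5)], 'intro')
core_harsh = to_core(([{2, 3}, {1, 5}], "harsh"))
print(intersects(core_intro, core_harsh))           # -> True: a contradiction to resolve
print(measure(core_intro))                          # -> (2, 5)
```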
This is done in the following way: for each parameter, the minimum and maximum values observed are calculated. If the parameter values are normalized, these values are 0 and 1. Then, the algorithm connects the extremes of each core with the respective minimum and maximum values of each parameter. See Figure 4 for an example.
Evaluation
Evaluation of automatic synthesizer programmers has followed two main approaches: user tests, in which expert musicians are interviewed after using the algorithm; and similarity measures in sound-matching tasks, in which a candidate sound is compared with the target. Let us consider the unsupervised software synthesis programmer "SynthBot" [13], which uses a genetic algorithm to search for a target sound. The search is guided by measuring the similarity of the current candidate and the target, using the sum of squared errors between their MFCCs. The system was evaluated "technically to establish its ability to effectively search the space of possible parameter settings". Then, musicians competed with SynthBot to see who was the most competent sound synthesizer programmer. The sounds proposed by SynthBot and the musicians were compared with the target by using sound similarity measures. In [4], a hill climber, a genetic algorithm, and three deep neural networks are used for sound matching. The results are evaluated by calculating the error score associated with the Euclidean distance between the MFCCs of the proposed sound and the MFCCs of the target. In our case, the evaluation includes: 1. The analysis of how the model generalizes a user test dataset. This evaluation is reinforced by other extrinsic benchmarks (Section 4.2). 2. The evaluation of the performances where the project has been presented and the lists in which the compositions made with the algorithms have been included (Section 4.4). As one of the objectives of the FuzzyRuLer algorithm is to provide new presets classified with the same labels as the input data, the generalization using the user-labeled data is evaluated by cross-validation. The classifier used for that purpose is presented next. When the rules are used to classify new instances, the classifier assigns to them the label that it would assign to the same combinations if the model were used to produce new presets (data). In addition, cross-validation allows for the assessment of the performance of the algorithm using benchmarks in a task for which datasets might not exist.
Fuzzy Classifier
To classify a new preset P = (v1, . . . , vN−1), proceed as follows. For each rule rk, calculate the membership of each feature value, i.e., µk,i(vi). Then, calculate its firing strength τk(P), which measures the degree to which the rule matches the input parameters. It is defined as the minimum of all the membership values obtained for the parameters, i.e., τk(P) = min over i of µk,i(vi) (Equation (2)). Once the firing strength has been calculated for all rules, the assigned class will be equal to the class of the rule with maximum firing strength (Equation (3)). An example of the classification process for a hypothetical system with two rules, each with two parameters, is shown in Figure 5. For the first rule the minimum of the memberships is e; for the second rule µ(v1) = f, µ(v2) = g and min(f, g) = g. Finally, max(e, g) = e and therefore the class assigned to the instance is Class i. A minimal code sketch of this classification step is given below.
Cross-Validation
To test how the algorithm models the feature space of a synthesis algorithm, we used the data set described in [11].
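The following Python sketch illustrates the classifier just described: trapezoidal membership functions whose supports run from the observed minimum to the observed maximum of each parameter, a firing strength taken as the minimum membership over parameters (Equation (2)), and a winner-takes-all assignment over rules (Equation (3)). The rule structure, numerical values and labels are illustrative assumptions.

```python
# Sketch of the fuzzy classifier: each membership function is given as
# (support_lo, core_lo, core_hi, support_hi) on normalized parameters.

def trapezoid(x, support_lo, core_lo, core_hi, support_hi):
    """Trapezoidal membership with the given support and core."""
    if core_lo <= x <= core_hi:
        return 1.0
    if x <= support_lo or x >= support_hi:
        return 0.0
    if x < core_lo:
        return (x - support_lo) / (core_lo - support_lo)
    return (support_hi - x) / (support_hi - core_hi)

def firing_strength(preset, rule):
    """Minimum membership over all parameters (Equation (2))."""
    return min(trapezoid(v, *mf) for v, mf in zip(preset, rule["mfs"]))

def classify(preset, rules):
    """Class of the rule with maximum firing strength (Equation (3))."""
    return max(rules, key=lambda r: firing_strength(preset, r))["label"]

rules = [
    {"label": "rhythmic",  "mfs": [(0.0, 0.0, 0.05, 1.0), (0.0, 0.0, 1.0, 1.0)]},
    {"label": "pure tone", "mfs": [(0.0, 0.6, 1.0, 1.0),  (0.0, 0.0, 0.1, 1.0)]},
]
print(classify([0.03, 0.4], rules))   # -> 'rhythmic'
```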
This dataset was generated by user tests, in which different configurations of a Band Limited Impulse Oscillator [14] were programmed by users and tagged either as rhythmic, rough or pure tone. For this, the users tweaked the device parameters of the synthesis algorithm: Fundamental Frequency and Number of Upper Harmonics (which are added to the fundamental frequency). Then, the parameter combinations that produced any of the searched categories were saved together with the corresponding label. The data set is shown in Figure 6. In addition, four datasets from the UCI repository [15] were selected. As they belong to diverse domains and have different degrees of class imbalance, they provide a general idea of how the algorithm behaves. The results of the fuzzy classifier of Section 4.1 were compared with K-Nearest Neighbours, Support Vector Machine (with linear, polynomial degree 2 and rbf kernels), and Random Forest classifiers. K-Nearest Neighbours does not require a training period (these types of algorithms are known as instance-based learners). It stores the training data and learns from it (analyzes the data) as it performs real-time predictions. While this has some disadvantages (for example, it is sensitive to outliers), it also makes the algorithm much faster than those that require training, such as SVM. By assigning the classes only by looking at the neighbors, new data can be added with little impact on its accuracy. These characteristics make KNN very easy to implement and to interpret (only two parameters are required: the value of K and the distance function). The Support Vector Machine (SVM) is an algorithm with good generalization capabilities and nonlinear data handling using the kernel trick. In addition, small changes in the data do not affect its hyperplane. However, choosing an appropriate kernel function is difficult, and the algorithmic complexity and memory requirements are very high. As a consequence, it has long training times. In addition, the resulting model is difficult to interpret. The Random Forest is based on the bagging algorithm and uses an ensemble learning technique. It creates many trees and combines their outputs. In this way, it reduces the overfitting problem of decision trees and reduces the variance, improving the accuracy. It handles nonlinear parameters efficiently. However, as it creates lots of trees, it requires computational power and resources. Using the Random Forest for comparison is interesting because these algorithms are normally considered the alternative to rule learning. However, while a random forest might indeed perform as easily and as fast as the FuzzyRuLer, its only parameter, the number of trees, is not as expressive and interpretable for the user as the parameters d and ratio for controlling the induction process. Together, these algorithms provide a spectrum against which to compare the classifier. For each dataset, the model parameters producing the highest 10-fold (70% training and 30% test) cross-validation accuracy were selected. For the SVM, the tested parameter values for C and gamma were, respectively, [0.01, 0.1, 1, 10, 100, 1000] and [1, 0.1, 0.01, 0.001, 0.00001, 0.000001, 10]. For KNN, the tested N values were [1,2,3,4,5,6,7,8,9,10], and for the Random Forest [1, 10, 100, 500, 1000] trees were considered. In the case of the FuzzyRuLer, d was explored from 1 to half the number of features in the dataset and ratio over the values [0.9, 0.8, 0.7, 0.6, 0.5]. Table 5 presents, for each model, the parameter selected and the accuracy obtained.
Table 5. Data sets Wine, Wine-quality-red, Glass and Ionosphere, selected from the UCI repository [15]. The Blip data set was obtained from [11]. The accuracy was calculated using 10-fold cross-validation.
Cross-Validation Results
Table 5 shows the cross-validation mean accuracy results obtained for each classifier and dataset. Table 6 presents the general mean and standard deviation for each classifier. These results show that the FuzzyRuLer yields similar results to those achieved by state-of-the-art classification algorithms. There exists abundant literature applying different machine learning algorithms to the UCI datasets; see, for instance, [16]. However, the algorithms are used for a variety of purposes and under different conditions. For example, their evaluations use different partition schemes, or are sometimes performed using techniques that trade execution time to gain accuracy, e.g., leave-one-out. Here, some references intended to frame the obtained results are presented; however, the reader has to keep in mind that these experiments are not completely comparable. For the Wine dataset, according to [15], the classes are separable, though only RDA has achieved 100% correct classification. The reported results are RDA 100%, QDA 99.4%, LDA 98.9%, 1NN 96.1% (z-transformed data); in all cases, the results were obtained using the leave-one-out technique. In [17], using the Wine-quality-red dataset with a tolerance of 0.5 between the predicted and the actual class, the best SVM accuracies for this dataset were around 57.7% to 67.5%. Finally, for the Ionosphere dataset, in [18], Deep Extreme Learning Machines (DELM) were used for classification. According to the report, the multilayer extreme learning machine reaches an average test accuracy of 0.9447 ± 0.0216, while the DELM reaches an average test accuracy of 0.9474 ± 0.0292. In [16], the following results are reported: KNN 0.8, SVM 0.8286, LMNN 0.9971. To check whether the mean accuracies are significantly different between algorithms, we performed a statistical test. As the predictor variables are categorical and their outcomes are quantitative, we performed a comparison of means test. As there are more than two groups being compared, but there is only one outcome variable, the statistical test is the one-way ANOVA. Table 7 shows that the p-value of the one-way analysis of variance is greater than the significance level 0.05, from which we conclude that there are no significant differences between the groups. The Tukey multiple comparisons of means yields a 95% family-wise confidence level. Together, these results suggest that the fuzzy model could be used to generate new instances.
Extracted Rules
Figure 7 shows the fuzzy rules obtained for the three categories of the "Blip" data set (shown in Figure 6) by using the FuzzyRuLer algorithm. Although the Blip is a simple data set, it provides insight into the algorithm's capacity for identifying the underlying structures that codify the categories. In Figure 7, it can be seen that the frequency ranges that separate the categories are consistent with the perception thresholds described in [19]. These are: from 0 Hz to approximately 20 Hz the category is rhythmic, no matter the number of harmonics added; from 20 Hz, depending on the number of harmonics added, the sensation is rough until approximately 250 Hz; if the frequency is greater than 20 Hz and there are no harmonics added, or if the frequency is greater than approximately 250 Hz, the sensation is pure tone.
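For readers who want to reproduce this style of comparison, the sketch below runs 10-fold cross-validation for a few reference classifiers and applies a one-way ANOVA to the per-fold accuracies. It uses a built-in scikit-learn dataset as a stand-in for the Blip and UCI data, and the hyperparameter values are placeholders rather than the ones reported in Table 5.

```python
# Cross-validation comparison plus one-way ANOVA on the per-fold scores.
from scipy.stats import f_oneway
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)          # stand-in for the Blip / UCI datasets
models = {
    "KNN": KNeighborsClassifier(n_neighbors=3),
    "SVM-rbf": SVC(C=10, gamma=0.01),
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
}
scores = {name: cross_val_score(m, X, y, cv=10) for name, m in models.items()}
for name, s in scores.items():
    print(f"{name}: {s.mean():.3f} +/- {s.std():.3f}")

# p > 0.05 would mean no evidence of a difference in mean accuracy.
print(f_oneway(*scores.values()))
```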
Live Performances and Recordings
A series of live coding performances and recordings have accompanied the design and testing of the algorithm. These have been developed in different contexts and venues, including universities, artistic research centers, theatres, online streaming, and smoky bars. They allow for the evaluation of: 1. The algorithm's affordances and capacity to produce "interesting variations" over the input data during the performance. 2. How the community receives the music generated using the algorithms. The live performance presented during the live coding => music; seminar [20], held at the Instituto Nacional de Matemática Pura e Aplicada (National Institute for Pure and Applied Mathematics) of Rio de Janeiro, is presented in [21]. The online performance presented during the EulerRoom Equinox 2020, which featured 72 h of live coding performances around the world (20–22), can be found in [22]. The EP studio album Visions of Space [23], featured by the Berlin record label Bohemian Drips, applied IF-THEN rules to generate the sections of tracks 4 and 5. Although this is a subjective appreciation, the algorithm has shown an effective capacity to produce interesting new material on the fly. The current version allows for the preloading of data before the performance and/or the saving of new instances as they are found. If all the instances are captured in real time, the space exploration process becomes part of the performance. The current implementation does not overwrite the input data with the extracted model, so the performer can extract different sets using different combinations of d and ratio while conducting the piece. In 2018, the Bandcamp Daily featured the album Visions of Space, together with nine other albums released during 2017, in the list Meet the Artists Using Coding, AI, and Machine Language to Make Music [24].
Finally, the model was also evaluated using cross-validation, comparing its results with those obtained by KNN, SVM (linear, polynomial degree 2 and rbf), and Random Forest classifiers. The one-way analysis of variance shows that there exist no significant differences among the algorithms. Together, these results suggest that the algorithm is a promising approach for use in contexts such as live coding, where the focus is not necessarily placed on model accuracy but, for example, on having real-time feedback of the algorithmic process. Funding: This work has not received financial support.
v3-fos-license
2024-06-16T05:08:51.978Z
2024-06-14T00:00:00.000
270512879
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": null, "oa_url": null, "pdf_hash": "4a947809d103fe2a6df1c787abaf58e3ee0e8ab3", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44014", "s2fieldsofstudy": [ "Medicine" ], "sha1": "4a947809d103fe2a6df1c787abaf58e3ee0e8ab3", "year": 2024 }
pes2o/s2orc
Development of an outcome indicator framework for a universal health visiting programme using routinely collected data Background Universal health visiting has been a cornerstone of preventative healthcare for children in the United Kingdom (UK) for over 100 years. In 2016, Scotland introduced a new Universal Health Visiting Pathway (UHVP), involving a greater number of contacts with a particular emphasis on the first year, visits within the home setting, and rigorous developmental assessment conducted by a qualified Health Visitor. To evaluate the UHVP, an outcome indicator framework was developed using routine administrative data. This paper sets out the development of these indicators. Methods A logic model was produced with stakeholders to define the group of outcomes, before further refining and aligning of the measures through discussions with stakeholders and inspection of data. Power calculations were carried out and initial data described for the chosen indicators. Results Eighteen indicators were selected across eight outcome areas: parental smoking, breastfeeding, immunisations, dental health, developmental concerns, obesity, accidents and injuries, and child protection interventions. Data quality was mixed. Coverage of reviews was high; over 90% of children received key reviews. Individual item completion was more variable: 92.2% had breastfeeding data at 6–8 weeks, whilst 63.2% had BMI recorded at 27–30 months. Prevalence also varied greatly, from 1.3% of children’s names being on the Child Protection register for over six months by age three, to 93.6% having received all immunisations by age two. Conclusions Home visiting services play a key role in ensuring children and families have the right support to enable the best start in life. As these programmes evolve, it is crucial to understand whether changes lead to improvements in child outcomes. This paper describes a set of indicators using routinely-collected data, lessening additional burden on participants, and reducing response bias which may be apparent in other forms of evaluation. Further research is needed to explore the transferability of this indicator framework to other settings. Supplementary Information The online version contains supplementary material available at 10.1186/s12913-024-11178-7. 
Introduction
A healthy childhood sets the stage for health across the life course [1]. Universal health visiting provision has been a cornerstone of preventative healthcare for children in the United Kingdom (UK) for more than 100 years [2]. In that time many changes have taken place, although the foundations of a home visiting programme covering a wide array of child and parental health assessment and support have continued. Health visiting, as with all health policy in Scotland, has been devolved from UK policy since 1999, leading to changes which are specific to Scotland. In 2003, Health for All Children, version 4 (Hall 4) was published, with specific recommendations that a core number of contacts should be established for all children (the 'Core' group), with additional visits beyond the 6-8 week check being provided for those deemed to be in need of additional support. The majority of visits/contacts were to be carried out by a range of health professionals [3]. Research on this approach demonstrated that the identification of additional support needs at six to eight weeks missed substantial proportions of children who later displayed language problems (43% of those with language difficulties at 30 months being in the 'Core' group at 6-8 weeks), or social, emotional and behavioural difficulties (64% having previously been allocated to the 'Core' group) [4]. This led to a revision of the Hall 4 policy in Scotland, which reintroduced a universal review of the child at 24-30 months [5]. Over the subsequent years, further evidence indicated inconsistencies in the delivery of health visiting [6], whilst the introduction of a Named Person service for young people, where a named person is the single point of contact if a child or their parents want information or advice, led to the health visitor role being increasingly important as the named person for children prior to starting school, usually at age 5 years [7]. In 2015, this led to the publication of the new Universal Health Visiting Pathway (UHVP) policy [8], which reemphasised both the role of the health visitor in home visiting and the importance of regular universal home visits throughout the preschool years, with a particular focus on the first year of life (Fig. 1). Additionally, the UHVP placed greater importance on visits being conducted in the home setting, involving both parents where applicable, and using robust developmental assessments [8]. The UHVP was intended to be implemented across Scotland for children born on or after 1 April 2016. In reality, however, implementation varied across health boards, with the earliest point of implementation occurring for children born in April 2015, and the latest for children born in October 2019.
Monitoring child health outcomes is of primary concern to policymakers in many high-income countries, including Scotland. This has led to a wealth of administrative data being collected about child health, particularly in the early years. Among administrative datasets available, Child Health Surveillance Programmes (CHSP) aim 'to prevent disease, detect physical and developmental abnormalities, and promote optimum health and development' [9]. In many high-income countries these surveillance programmes have now been running for many years, making them an ideal source of data for capturing trends over time, and providing opportunities to use quasi-experimental techniques to explore the impacts of national policy interventions. The proposed indicator framework makes use of the Scottish CHSP, collected primarily by health visitors, supplemented by other routine data sources where necessary. Although other indicator frameworks of child health have been developed [10], these frameworks contain limited application for the Scottish context, due both to cultural differences in service delivery, and, more importantly, a tendency to focus on programme-based data collection, rather than universal health surveillance-based indicators. Ben-Arieh [11] notes that administrative data are likely to be the best option for developing new sets of children's wellbeing indicators, due to both the expense of alternative approaches, such as surveys, and the abundant availability of administrative data. This is particularly true when looking at outcomes across a whole population, as is the case in this current study, which uses administrative data collected as part of the CHSP, alongside other routine data sources such as hospital admissions, to explore a range of outcomes which can potentially be attributed to the actions of health visitors. The UHVP represents a significant financial commitment from the Scottish Government of £40 million to increase health visiting staff numbers to deliver the pathway. Public bodies require evidence that changes being made as the result of such investment are benefitting the target population (in this case improving outcomes for children and families) in order to justify continuing expenditure. To evaluate the impact of the UHVP on a range of outcomes, the Scottish Government therefore commissioned a research consortium to robustly evaluate the implementation and outcomes for the programme. The aim of this paper is to outline the process of developing an outcome indicator framework, specifically including outcomes that could be measured using routinely-collected data as part of this UHVP evaluation. The paper will document the process and decisions that led to the creation of a set of administrative data indicators, what those indicators comprise, and the baseline data from those indicators.
Setting
This study was undertaken in Scotland, where there are approximately 55,000 births per year. As part of the National Health Service, health visiting is a free at point of use service provided to all new parents in the UK. As part of this, everyone is given a universal identifier (Community Health Index (CHI) number) which allows their health data to be linked over time. A set of data is collected as part of the health visitor reviews each time they visit. A subset of these data are collated at a national level and can be made available to research. This evaluation sought to utilise these routine data at an aggregate level. Administrative data should, in theory, therefore be available for all children born in Scotland, or who subsequently moved to Scotland and registered with a GP. In reality, not all parents will take up the offer of an appointment, and/or may refuse to answer individual questions: for this reason, not all children will have data available for every review/variable. Missingness is discussed in the results section.
Methods
Before beginning the evaluation of the UHVP, an evaluability assessment was conducted [12]. As part of the assessment a theory of change was produced [13]: this involved a series of workshops with stakeholders to explore the UHVP implementation processes and to define their anticipated outcomes. The criteria for outcomes were: (1) outcomes that relate to the child and family, and (2) outcomes which could feasibly be influenced by Health Visitors through the pathway. This formed the basis of an initial logic model [12] to visually explain the pathways from the activities (e.g. home visits) to the selected outcomes (e.g. healthy childhood development and early identification of problems). Following the implementation of the UHVP, this logic model was revisited in a further series of workshops, bringing together the research team with 31 health visitors and other health professionals and managers, policy-makers, and third-sector organisations, resulting in the final logic model for the evaluation of the UHVP [14]. The research team then mapped outcomes identified within the logic model to the most appropriate methods for assessing them (sometimes comprising more than one method). This resulted in four distinct workstreams: qualitative evaluation, case notes review, surveys and routine data. The full methodology can be found in the UHVP protocol paper [15]. Four broad outcome groupings from the logic model were allocated to the routine data analysis stream of the evaluation (Table 1). This paper focuses on the range of outcomes within these groupings, which were: i) improved health behaviours within families; ii) improved child development and school readiness; iii) improved health outcomes for children; iv) improved child safety and protection. Administrative data were then sought which were nationally available at both the pre- and post-UHVP implementation stages, and which addressed the outcome groupings. To confirm face validity and feasibility, a provisional list of outcome indicators was shared with wider members of the UHVP evaluation team (which included academics from health visiting, community paediatrics and social work, and a senior Scottish Government analyst working on child protection data) for comment.
Further discussion on this provisional list was undertaken with Public Health Scotland and Scottish Government analysts to obtain indicative background information on the quality of the data source to be used for each measure and on how common the chosen outcomes were. This led to certain provisional measures being refined or dropped. Further review by the Research Advisory Group for the project (which included third-sector colleagues, health professionals and policymakers) led to the inclusion of two additional measures on looked after children, supplementing those already included on children on the child protection register. Preliminary aggregate data (by Health Board) were sought to assess the quality of data sources, and power calculations were performed. This is important for any future evaluation to ascertain whether any change could reasonably be identified in the data. This information was used to estimate the statistical power to detect a 5%, 10%, and 20% relative change in each of the outcomes of interest in the Scottish context (Supplementary Table 1). As there was no clear a priori information available on the expected impact of the UHVP on the chosen outcome indicators, estimating the power to detect a 5%, 10%, and 20% relative change gave a useful indication of the power to detect relatively modest, but potentially feasible, impacts that would represent meaningful improvements at the population level. All methods carried out in the study were performed in accordance with relevant guidelines and regulations, such as the Public Health Scotland Statistical Disclosure Protocol [16]. All data analysed were secondary data held by the Scottish Government and Public Health Scotland, respectively. These aggregate data were made available to the research team by the respective organisations through Scottish Government and Public Health Scotland colleagues working with us as part of the evaluation. Details on how others can access these data are available at the end of the paper. The evaluation received ethical approval from the School of Health in Social Science Research Ethics Committee, University of Edinburgh.
Results
Within the broad outcome groupings, the final outcome indicator framework comprised 18 indicators across eight core outcome areas: parental smoking, breastfeeding, immunisations, dental health, developmental concerns, obesity, accidents and injuries, and child protection interventions (Table 1). One element ('school readiness') was not able to be assessed within the routinely-collected data.
Data quality was mixed: coverage of health visitor reviews, in which data are collected for the CHSP, was high, with over 90% of children receiving their 6-8 week review, and over 90% receiving their 27-30 month review. Individual item completion within reviews was far more variable, ranging from 63.2% for child BMI (used to calculate overweight and obesity) at 27-30 months, to 92.2% for breastfeeding outcomes at 6-8 weeks. Completeness of other (non-CHSP) sources was assumed, e.g. hospital data, whereby a lack of a record of hospital admission is assumed to mean there was no admission. (Table 1, child protection interventions: 4a. Placed on child protection register at any point between birth and third birthday; 4b. Placed on child protection register for ≥ 6 months between birth and third birthday; 4c. 'Looked After Child' status at any point between birth and third birthday; 4d. 'Looked After Child' status for ≥ 6 months between birth and third birthday. BMI, body mass index; SIMD, Scottish Index of Multiple Deprivation.) Accuracy of data recording is difficult to quantify for routinely-collected data; however, analysis from Public Health Scotland indicates that the accuracy of diagnostic coding in Hospital Admissions data, for example, is high [17]. Alongside coverage, the utility of the data also depends on the prevalence within the population. Although the majority of indicators chosen fall in the middle range of prevalence, eight indicators, relating to three outcomes, demonstrated extremely high or low prevalence. Overall, 93.6% of children had received all immunisations by age 2 years. Conversely, levels of accidents and injuries were found to be very low, with 3.4% having any hospital admission for unintentional injury by age three, and child protection indicators were equally low, from 1.3% of children having been on the Child Protection register for more than six months between birth and the child's third birthday, to 2.7% being placed on the child protection register during the same period (Supplemental Table 2). Corresponding power calculations, which were undertaken based on indicative data prior to the evaluation commencing, demonstrate the impact of these prevalence rates on the power to detect change. Although there is adequate statistical power to detect modest levels of impact (20% relative change or less) of the UHVP on the majority of outcome measures, this is not necessarily the case for indicators with very low prevalence, i.e. unintentional injuries and child protection measures. These also require follow-up to a child's 3rd birthday, reducing the number of children that can be included in the exposed group in these data. This means that our power to detect modest impact on these outcomes is relatively low. Consequently, only more substantial impacts/differences between unexposed and exposed groups will be identified as statistically 'significant'. For example, we have an estimated 61% power to detect a 20% change in the proportion of children admitted for unintentional poisoning, burns or scalds by their 3rd birthday as statistically significant (at the 1% significance level). This is still a feasible level of impact; hence all agreed outcome indicators are likely to be informative to some degree. A sketch of this type of power calculation is given below.
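To illustrate the kind of power calculation described above, the following Python sketch computes the approximate power of a two-sided two-proportion z-test to detect a 5%, 10% or 20% relative change in an outcome proportion between an unexposed and an exposed cohort. The cohort sizes and the baseline prevalence used here are placeholders, not the figures underlying Supplementary Table 1.

```python
# Normal-approximation power for detecting a relative change in a proportion.
from math import sqrt
from scipy.stats import norm

def power_two_proportions(p1, relative_change, n1, n2, alpha=0.01):
    """Power to detect p2 = p1 * (1 + relative_change) with a two-sided z-test."""
    p2 = p1 * (1 + relative_change)
    pbar = (n1 * p1 + n2 * p2) / (n1 + n2)
    se0 = sqrt(pbar * (1 - pbar) * (1 / n1 + 1 / n2))      # SE under the null
    se1 = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)    # SE under the alternative
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf((abs(p2 - p1) - z_crit * se0) / se1)

# e.g. a rare outcome (3.4% baseline prevalence) in two cohorts of 50,000 births
for change in (0.05, 0.10, 0.20):
    print(change, round(power_two_proportions(0.034, change, 50_000, 50_000), 2))
```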
Discussion
The final indicator framework comprised eighteen indicators across eight core outcome areas and four broad groupings. The eight outcome areas were: parental smoking, breastfeeding, immunisations, dental health, developmental concerns, obesity, accidents and injuries, and child protection interventions. These were felt to be key to child health, as well as being outcomes that health visitors were able to influence. Many of these are central to policy priorities, not only in Scotland but across the world, as they form risk indicators for health outcomes across the life course: these included exposure to second-hand smoke, being breastfed, receiving childhood immunisations, and being overweight or obese [18][19][20][21]. Dental attendance was included as a pathway to improved dental health: health visitors discuss dental health, registration and attendance with parents in infancy and beyond. Dental health among children in Scotland is particularly poor, especially among children living in more deprived areas and those with Looked After status [22]. This is an area where health visitors have the potential to improve outcomes through encouraging toothbrushing and dentist attendance. Developmental data are important for two reasons: first, health visitors work with parents to encourage activities that aid child development, e.g. reading, singing and play; and second, health visitors are key to identifying delayed development in early childhood, as well as advising parents of and referring to appropriate services, whether that be diagnostic services, speech and language therapy, or early access to free preschool places [4]. Accident and injury data attempt to capture accidents and injuries in the early years which are largely preventable, such as burns, scalds and head injuries, the majority of which happen in the home setting [23]. Health visitors work with parents to put in place preventative measures such as locks on cupboards and stairgates, as well as discussing supervision of children. Finally, health visitors play a key role in identifying, alongside social workers, where families are struggling to cope. For this reason a range of child protection indicators were included in the framework. On the advice of specialists we consulted in this field, including academic social workers, measures of the number of child protection registrations and of children with Looked After Child status, as well as the length of time spent either on the register or in care, were included. This is because it was felt that, through seeing children more regularly in the early years, health visitors might make more referrals to social work with regard to child protection concerns, resulting in increases in these figures; if intervention occurred at an earlier stage, children and families should receive appropriate support in a timely manner, and thus may spend less time with their names on the child protection register or in the care of a Local Authority.
It is notable, though, that indicators were only captured for fields in which national administrative data were collected. This meant that some outcomes which were deemed important to measure, such as the quality of the parent-child relationship or parental efficacy, could not be captured due to a lack of available quantitative data. Attachment behaviour is not currently captured in national data, and the extent to which it could be quantitatively measured at this scale is debatable. Attachment is robustly measured in an experimental setting through either observation of parent and child or a story-play task, usually conducted by a specially trained psychologist [24]. This is not practical at a population level. This is in contrast to previous child health indicator frameworks, which used a wider range of data collected specifically to evaluate the programme and were thus able to capture some of these 'softer' measures [25]. As part of a national health service, resources are limited, and focus is therefore, understandably, directed towards provision of care rather than data collection per se. Whilst Public Health Scotland frequently revisits which data are collected, it is not currently possible to collect and collate qualitative data at a national level.

Although the framework has been defined by Scottish health and social care professionals, policy-makers and researchers, based on available Scottish administrative data, it has the potential to be adapted and implemented for the evaluation of child health service provision, and home visiting programmes, internationally. Previous research has demonstrated similar indicators of interest; however, the challenge to date has been around access to consistent and high-quality data [26]. As countries develop increasingly sophisticated data infrastructure programmes, the ability to monitor and assess progress towards enhancing children's experiences and outcomes will only improve.

In addition, whilst the suggested framework covers many key components of child health and development, some of these factors, such as accidental injuries and child protection interventions, are (thankfully) relatively rare. Even in a population of more than 50,000 births per year, this causes problems in the power to demonstrate smaller changes following the implementation of an intervention such as the UHVP. By contrast, a ceiling effect may be present for high-prevalence outcomes such as childhood immunisation, where very few children do not receive all immunisations. Of course, this has the potential to change if the framework is used in different cultural contexts, which highlights the importance of indicators being assessed in relation to the population in question before use.
Although data coverage was high on the whole, this was not the case for BMI measurements, which varied substantially by health board. Levels of overweight and obesity were high: 40.3% of children who were measured were overweight or obese at 27-30 months, compared with 22.8% at age 4-5 years [27]. The researchers hypothesise that this may be related to selective weighing of apparently heavier children, as well as the use of the WHO growth standard, which is based on 'optimal' breastfed babies (i.e. mothers who were non-smokers; no health, environmental or economic constraints on growth; absence of significant morbidity; gestational age 259-294 days; and single term birth), rather than the UK90 standard [28,29]. There is therefore a risk that any improvement in coverage of this measurement will result in artificial improvements in this indicator, and this needs to be closely monitored.

Strengths and limitations

The strengths of this study are that it comprises a robust approach to the development of a suite of measures which (a) reflect important aspects of the health, development, and wellbeing of pre-school children, (b) may be influenced by child health/home visiting programmes, and (c) are likely to be measurable using routinely available administrative data in high-income countries. The indicators were informed by a wide range of discussions with health professionals, policy-makers, academics and third-sector organisations, and cover a wide range of domains deemed to be important to early child health. Overall, data quality was high, with the exception of height and weight data, where a large proportion of data were missing, and the high levels of overweight/obesity in the available data indicate that this might not be at random.

Indicators were further limited by the availability of national data, resulting in some indicators of interest, e.g. attachment, not being able to be included in the indicator framework. Additionally, some outcomes had extremely low or high prevalence, which may limit their use. The indicator framework would need to be assessed for cultural appropriateness before being transferred to other settings.

Conclusions

Good health in childhood is associated with more positive outcomes in adulthood [1]. In high-income countries, such as Scotland, home visiting services, such as those undertaken by health visitors, play a key role in ensuring that children and their families have the right support to enable the best start in life. As home visiting programmes evolve, it is crucial to understand whether changes lead to improvements in child outcomes. This paper sets out the process for developing a set of indicators using routinely-collected data, lessening the additional burden on participants and reducing the response bias which may be apparent in other forms of evaluation. The resultant framework contained 18 indicators across 8 key outcomes and four broad groupings, allowing robust identification of trends and changes across time. Further research is needed to explore the transferability of this indicator framework to other settings.

Fig. 1 Universal Health Visiting Pathway timeline (produced and published by the Scottish Government [8])

Table 1 Final routine data outcomes alongside the logic model item
A pilot survey on the quality of life in respiratory rehabilitation carried out in COPD patients with severe respiratory failure: preliminary data of a novel Inpatient Respiratory Rehabilitation Questionnaire (IRRQ)

Background Measuring the state of health is a method for quantifying the impact of an illness on the day-to-day life, health and wellbeing of a patient, providing a quantitative measure of an individual's quality of life (QoL). QoL expresses the patient's point of view through a subjective dimension and can express the results of medical intervention. Pulmonary rehabilitation is an essential component in the management of COPD patients, and measuring QoL has become a central focus in the study of this disease. Although several questionnaires for measuring QoL in COPD patients are available nowadays, there are no questionnaires specifically developed for evaluating QoL in COPD patients undergoing respiratory rehabilitation. The aim of this study was to develop a novel questionnaire for quantifying QoL in COPD patients undergoing an in-patient pulmonary rehabilitation program. Methods The questionnaire, administered to COPD patients undergoing long-term oxygen therapy in a respiratory rehabilitation ward, was developed with a simple graphic layout so that it could be administered to elderly patients. It included one form for admission and another for discharge. It included only items related to the subjective components of QoL that would be relevant to the patient, although likely not strictly related to respiratory function. A descriptive analysis was performed for the socio-demographic characteristics, and both the non-parametric Wilcoxon T-test and Cronbach's alpha index were calculated to evaluate the sensitivity of the questionnaire to the effects of respiratory rehabilitation and to assess its consistency. Results The physical and psychological condition of the 34 COPD patients improved after the rehabilitative treatment, and this finding was detected by the questionnaire (overall improvement: 14.2±2.5%), as confirmed by the non-parametric Wilcoxon test (p<0.01). The consistency detected by Cronbach's alpha was good for the questionnaire both at admission and at discharge (0.789±0.084 and 0.784±0.145, respectively), although some items did not adequately measure the intended outcome. Conclusions The proposed questionnaire represents a substantial innovation compared to previous methods for evaluating QoL, since it has been specifically designed for hospitalized COPD patients with serious respiratory deficiency undergoing respiratory rehabilitation, allowing QoL to be determined effectively in these patients.

Background

Measuring the state of health is a method for quantifying, in a standardized and objective manner, the impact of an illness on the day-to-day life, health and wellbeing of a patient. This process is very similar to gathering a well-structured clinical history but, instead of the collection of simple clinical findings, it provides a quantitative measure of the individual's quality of life (QoL), which can be used for scientific purposes. At present, chronic obstructive pulmonary disease (COPD) is recognized to be one of the major causes of death in industrialized countries, and it represents a multisystemic pathology that induces disabilities and handicaps [1].
Pulmonary rehabilitation is an essential component in the management of COPD patients, and its success is mainly obtained through improvements in exercise capacity, dyspnoea and QoL [2-8]. Measuring QoL has thus become a central focus in the study of COPD. Traditionally, results in the fields of health/medicine and rehabilitation have almost always been measured through objective medical evaluations. On the other hand, there has recently been an ever-increasing focus on the patients' perspective [9]. Therefore, the evaluation of patient-focused outcomes and the measurement of the wellbeing perceived by the individual in the physical, psychological, social and material areas [10,11] have led to the development of a new concept of QoL [11,12] and of patient satisfaction. Indeed, the evaluation of the state of health and of the influence of therapeutic intervention should incorporate not only changes in the gravity of the illness, but also the impact upon the state of wellbeing. The World Health Organization (WHO) defines QoL as "an individual's perception of their position in life in the context of the culture and value systems in which they live and in relation to their goals, expectations, standards and concerns. It is a broad ranging concept, affected by the person's physical health, psychological state, level of independence, social relationships, and to salient features of their environment" [11]. Nevertheless, different studies and reports on QoL have used different definitions of QoL [10-12]. However, among all definitions there is agreement on the concept that QoL expresses the patient's point of view through a subjective dimension and that QoL can express the results of medical intervention [2,10-12]. Therefore, the methods for evaluating QoL should be practical and simple to use, both for clinical research and for evaluating the results of medical intervention. Two main kinds of questionnaires for evaluating QoL exist, generic and illness-specific. The former group includes the Sickness Impact Profile (SIP), the Nottingham Impact Profile (NIP) and the Short Form 36 (SF-36) [13-15]. The latter allows patients suffering from different and specific pathologies to be compared and investigates the characteristic aspects of an illness, including questions concerning symptoms linked to the illness itself. Effectively, a fair number of instruments/tools specific for COPD are at our disposal, including the Chronic Respiratory Questionnaire (CRQ), the St George Respiratory Questionnaire (SGRQ), the Maugeri Foundation Respiratory Failure Questionnaire, the Airways Questionnaire (AQ 30/20), the Breathing Problems Questionnaire (BPQ), the Pulmonary Functional Status & Dyspnea Questionnaire (PFSDQ) and the Pulmonary Functional Status & Dyspnea Questionnaire-Modified (PFSDQ-M) [16-22]. Nowadays, outcome measures are necessary for describing individual improvements and the efficacy of a rehabilitation program for COPD patients, and consequently the CRQ and SGRQ represent the most widely used questionnaires for these patients, having been demonstrated to be responsive to respiratory rehabilitation [2,16,23,24]. Furthermore, because of the often incurable and relentlessly progressive respiratory deficiency caused by COPD, the specific questionnaire named MRF-26 has recently been developed [18,21,22].
Therefore, the aim of the present study was to begin the development of a novel questionnaire for the evaluation of QoL in patients suffering from respiratory failure due to COPD undergoing an in-patient pulmonary rehabilitation program.

Ethical approval and Consent

This study was carried out in compliance with the Helsinki Declaration and it received the implied approval of the Ethical Committee of the Local Italian Health Authority "Azienda Sanitaria Locale" (ASL, reference number RM/H-05/2010). Furthermore, written informed consent was obtained from the patients for publication of this report and any accompanying images.

Questionnaire characteristics

The questionnaire, built with a simple graphic layout so that it could be administered to elderly patients, included one form for admission and another for discharge. Since matters that are irrelevant to patients and topics that are not a source of satisfaction would probably never influence the patient's QoL, the questionnaire included only items related to the subjective components of QoL that would be relevant to the patient, although likely not strictly related to respiratory function.

Selection of items and score calculation

After a detailed comparison of all available questionnaires for respiratory diseases, items from the MRF-26 were chosen. Further items were chosen according to clinical experience of patient responsiveness to respiratory rehabilitation. Topics were then modified in order to adhere to the core aims of our questionnaire and arranged for easy compilation. Both versions, the Admission Inpatient Respiratory Rehabilitation Questionnaire (Admission IRRQ) and the Discharge IRRQ, included 5 sections titled as follows: 1) "What symptoms have you got?", a series of statements generally used by people with respiratory disorders, indicating the frequency of the listed symptoms in the last month; 2) "How do you live?", a series of statements related to normal activities, reporting how much the respiratory disorder(s) had limited those activities in the last month; 3) "My mood", a series of statements which describe the person's mood, reporting how often in the last month the patient felt the feelings described in the statements; 4) "How important is this to you?", a list of sentences concerning different areas of life and how important each specific area was; 5) "What I think about the treatment I am having", a list of statements concerning the pharmacotherapy and the treatment during the last month.

Questionnaire administration and study population

The IRRQ was administered to COPD patients admitted under an ordinary admission regimen for respiratory failure and undergoing long-term oxygen therapy (LTOT) in the Respiratory Rehabilitation ward of IRCCS San Raffaele Pisana and San Raffaele Velletri from January 2011 to December 2011. The Admission and Discharge IRRQ were administered at time 0 (the day before the start of the rehabilitation cycle) and at time 1 (the day of discharge or the day before), respectively.

Rehabilitation program

The program consisted of two daily sessions, 90 minutes each, five days a week for four weeks of inpatient care, and included: respiratory muscle training, strengthening of the abdominal wall and of both the upper and lower limbs, physical exercise training via the use of a cycle ergometer, treadmill and arm ergometer, bronchial clearing techniques such as PEP-MASK, relaxation techniques, and psychological and educational support.

Statistical analysis

A descriptive analysis was performed for the socio-demographic characteristics.
In particular, continuous variables were summarized with the median, while for nominal variables both the absolute and percentage frequencies were reported. The two IRRQ questionnaires were summarized with the mode, expressed with the label and its frequency percentage, and with the count and corresponding frequency percentage of unanswered questions. For the 5th section, patients answered only the questions pertinent to the type of treatment they were undergoing (oxygen therapy, ventilator, tracheostomy cannula). In order to verify whether the questionnaire was sensitive to the effects of respiratory rehabilitation and, therefore, to assess the improvement or deterioration of the physical or psychological condition of patients after rehabilitative treatment, the non-parametric Wilcoxon T-test was conducted on the total scores obtained from the first 4 sections of the IRRQs. The internal consistency of the questionnaire (sense/construction/meaning) was evaluated by calculating Cronbach's alpha index for each of the 5 sections. In addition, for each section, the likelihood of non-homogeneity of each item was evaluated with respect to all other items of that section. This analysis was performed by calculating the item-total correlation and the item-total correlation with each item removed in turn, for all items of a single section. It was not possible to carry out a combined analysis of the internal validity of the 5th section since the patients responded only to the questions which applied to them personally. For this same reason, Cronbach's alpha was calculated for two of the three subsections, because only two patients had both oxygen therapy and a tracheostomy tube. One patient undergoing all three therapies was included in the group of patients who underwent both oxygen therapy and mechanical ventilation. Bland and Altman [25] were taken as the reference for the degree of accuracy of Cronbach's alpha index. They proposed that, for scales used as research tools to compare groups, alpha values may be lower than in the clinical situation, when the value of the scale for an individual is of interest. To compare groups, values of 0.7 to 0.8 are regarded as satisfactory. On the other hand, for clinical application much higher values of the alpha index are needed: the minimum is 0.90, and values >0.95 are desirable. Finally, floor or ceiling effects were considered to be present if more than 15% of participants achieved the lowest or highest possible score, respectively [26]. The statistical analysis was carried out separately for the questionnaire administered at admission and for that administered at discharge, excluding from the analysis, one by one, all those patients who did not respond to at least one question. All analyses were carried out using the SPSS 12.00 for Windows statistics software package.

Patient characteristics

The COPD patients undergoing LTOT who voluntarily agreed to fill in the IRRQ included 26 males and 8 females (total patients: 34), and the average hospital stay was 28±4 days. All patients were older than 65 years and had a Mini-Mental State Examination (MMSE) score higher than 18. About 68% of patients were married and retired, and 62% needed assistance in completing the questionnaire. There were no patients with neuromuscular diseases and/or unstable hemodynamic conditions (recent myocardial infarction or unstable angina).
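Before turning to the detailed responses, the internal-consistency analysis described in the Statistical analysis section can be illustrated with a minimal sketch; this is not the authors' SPSS procedure. Assuming an item-score matrix with respondents as rows and items as columns, the Python code below computes Cronbach's alpha for a section, the alpha obtained when each item is removed in turn, the corrected item-total correlations, and a Wilcoxon signed-rank test on admission versus discharge section totals. The small score matrices are invented purely for illustration.

```python
import numpy as np
from scipy.stats import wilcoxon

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the section total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def item_analysis(items: np.ndarray):
    """Alpha-if-item-deleted and corrected item-total correlation for each item."""
    results = []
    for j in range(items.shape[1]):
        rest = np.delete(items, j, axis=1)
        alpha_without = cronbach_alpha(rest)
        # correlation of the item with the total of the remaining items
        r_item_total = np.corrcoef(items[:, j], rest.sum(axis=1))[0, 1]
        results.append((j, alpha_without, r_item_total))
    return results

# Invented example: 8 patients x 4 items scored 0-3, at admission and at discharge
admission = np.array([[2, 3, 2, 3], [1, 2, 2, 2], [3, 3, 3, 2], [2, 2, 1, 2],
                      [3, 2, 3, 3], [1, 1, 2, 1], [2, 3, 2, 2], [3, 3, 3, 3]])
discharge = admission - np.array([[1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1],
                                  [1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 1], [1, 0, 1, 0]])

print("alpha (admission):", round(cronbach_alpha(admission), 3))
for j, a, r in item_analysis(admission):
    print(f"item {j + 1}: alpha if deleted = {a:.3f}, item-total r = {r:.3f}")

# Wilcoxon signed-rank test on the section totals (lower totals = fewer symptoms here)
stat, p = wilcoxon(admission.sum(axis=1), discharge.sum(axis=1))
print("Wilcoxon signed-rank:", stat, "p =", round(p, 4))
```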
At the moment of admission, more than 50% of the subjects declared that they had cough, morning dyspnoea (one person did not respond to the question), respiratory problems necessitating the intervention of the patient's general practitioner (one person did not respond to the question), sleep disruption caused by cough or dyspnoea, and sleepiness during the day. The socio-demographic and clinical characteristics of the patients who participated in the survey are summarized in Tables 1 and 2.

General considerations on the questionnaire responses

Table 3 shows the most frequent category of response for each question of the questionnaire and the frequency of missing answers for each question. At admission, more than 50% of the patients declared that they had serious shortness of breath while walking uphill or climbing stairs, that their families always help them cope with their respiratory problems, that they did not feel oppressed by their families or by those around them, and that family support was very important to them. Finally, 67.6% of patients stated they were not bothered by receiving therapy in front of others. The physical and psychological condition of patients after rehabilitative treatment improved from admission to discharge. In particular, the percentage of patients with shortness of breath when walking uphill or climbing stairs or with breathlessness rated as "very important", the percentage of patients who were not hindered by those around them, the percentage of patients who sometimes do not sleep soundly, and the percentage of patients who do not mind taking medicines in front of others improved overall by 14.2±2.5%. On the other hand, the situation became worse with regard to being helped by family members in dealing with respiratory problems (−14.7%; 52.9% from admission to discharge). Comparing the scores obtained for the four questionnaire sections of the two versions (admission and discharge) with the non-parametric Wilcoxon test, a statistically significant improvement (p<0.01) was observed for the first three sections, while significance was not attained for section 4 (p=0.23) (Table 4). Age and gender did not have an effect on the questionnaire responses, and none of the patients had the lowest or the highest possible score on the questionnaire, indicating that there was neither a floor nor a ceiling effect at either admission or discharge.

Questionnaire at admission

The first section of the questionnaire presented 8 questions with the aim of evaluating patients' symptoms. In this section, data from 31 patients out of 34 were available because 3 (8.8%) patients did not answer at least one question. Cronbach's alpha demonstrated good internal consistency for the 8 questions (0.858). This index rose slightly (0.883) when the 5th and the last questions were eliminated ("I've been out of breath while walking uphill, while climbing stairs..." and "I have felt drowsy during the day"). Moreover, this last question had a low correlation with the other items of the section (0.293). In the second section of the questionnaire there were 10 questions, which had the aim of evaluating whether the respiratory deficiencies could limit the execution of some activities. In this section, data from 27 patients were considered because 7 (20.6%) did not answer at least one question. Cronbach's alpha showed sufficient internal consistency for the 10 questions (0.754).
The index, calculated on 27 patients, improved (0.845) when the last four questions were eliminated ("My family helps me deal with my respiratory problems", "The people around me help me deal with my respiratory problems", "My family hinders me/oppresses me", "The people around me hinder me/oppress me"). The third section of the questionnaire included 11 questions, which had the objective of evaluating the mood of the person. In this section, 30 patients answered all the questions, whereas 4 (11.8%) did not give at least one response. Cronbach's alpha showed good internal consistency for the 11 questions (0.857). This index rose slightly (0.867) when the last two questions were eliminated ("I have had difficulty concentrating, thinking, making decisions", "I have wished I could die"). The fourth section of the questionnaire presented 14 questions with the objective of evaluating how important each area of life was to each patient. In this section, full data were available for 27 patients because 7 subjects (20.6%) did not answer at least one question. Cronbach's alpha did not show good internal consistency for the 14 questions (0.687). The alpha value increased to indicate a good consistency (0.824) when a few questions were eliminated: "Doing work around the house", "Going out", "My sexual activity", "The support from my family", "The support from people around me", "The food I eat", and "My body image". In the 5th section of the questionnaire there were 10 questions whose objective was to evaluate what patients think of the treatment they were having. Patients, however, responded only to the questions which applied to them, based on the type of treatment they were undergoing (oxygen therapy, ventilator with or without tracheostomy tube), with the exception of the first question, to which everyone answered. Each patient underwent treatment with oxygen or with oxygen and a ventilator, either with or without a tracheostomy tube. As it was not possible, therefore, to carry out a combined analysis of internal consistency for this section, Cronbach's alpha was calculated for two of the three subsections, as only two patients were undergoing oxygen therapy via cannula. There was one further patient who carried out oxygen therapy with both a ventilator and a cannula, but he was included amongst the patients who practiced oxygen therapy and ventilator. For the 21 patients who practiced only oxygen therapy (1 subject was excluded as he/she did not respond to all questions), Cronbach's alpha (0.125) showed that the 4 items did not adequately measure the intended outcome. For the 10 patients who underwent both oxygen therapy and ventilator support (one subject was eliminated for not responding to all the questions), the alpha value suggested an almost good consistency (0.809). The index reached a good level (0.892) when the following items were eliminated: "My oxygen limits my day-to-day activities", "Oxygen is of little use to me", "I find it embarrassing to be amongst people with oxygen" and "My ventilator is of little use to me".

Questionnaire at discharge

The questions of this questionnaire were exactly the same as those administered at admission, except for Section 4, in which the first question regarding the importance of "doing work around the house" was substituted with the importance of "reading". In the first section, the number of patients considered in the study was 32, as 2 (5.9%) did not answer at least one question.
Cronbach's alpha demonstrated good internal consistency for the 8 questions (0.829), even though the value was slightly lower than that found in the same section at admission. This index rose slightly (0.852) when the 6th question, "Because of my respiratory problems I have called my doctor", was eliminated, considering the 33 patients who responded to all questions of the section as well as the remaining 7 items. In the second section, 29 patients were analyzed, as 5 (14.7%) did not respond to at least one question. Cronbach's alpha showed a low internal consistency for the 10 questions (0.569). This index rose slightly (0.647), and remained within the level of acceptability, when the 8th question was eliminated: "The people around me help me deal with my respiratory problems". The third section, taking into account the 28 patients (82.4%) who responded to all questions in the questionnaire, presented a Cronbach's alpha of 0.857. The index rose very slightly (0.869) when two questions were eliminated: "I have been dissatisfied with myself, in what I do and how I behave" and "I have had difficulty concentrating, thinking, making decisions". In the fourth section, considering only 24 subjects (70.6%), Cronbach's alpha was relatively high (0.881). The value became optimal when only 4 of the 14 initial questions were considered: "Shortness of breath in the morning", "Shortness of breath while resting", "Shortness of breath while walking uphill or climbing stairs...", "Not being able to sleep because of shortness of breath". For the 5th section of the questionnaire, Cronbach's alpha was calculated only for the oxygen-therapy subsection, as no patient had a tracheostomy tube and only 6 subjects used a ventilator; therefore, data were available on 24 patients. The low Cronbach's alpha (0.457) showed that the 4 items did not adequately measure the intended outcome. The index reached a value of 0.600 when the following items were eliminated: "My oxygen limits my day-to-day activities", "Oxygen is of little use to me", "I find it embarrassing to be amongst people with oxygen". Nevertheless, such a value does not suggest good internal consistency.

Discussion and Conclusions

Outcome measures are necessary for describing the individual improvement needed to confirm the efficacy of a therapeutic program, both pharmacological and rehabilitative. In COPD, which is by definition an incurable and progressive pathology, the measurement of the state of health, which correlates with disease status, is fundamental. In recent years, different questionnaires have been developed with the aim of measuring QoL in these patients, and some of these questionnaires are highly responsive to rehabilitation [2-8,21-23,25]. Nevertheless, the available questionnaires are limited, as they are predominantly focused on COPD patients in a stable phase, scarcely taking into consideration inpatients, patients with serious respiratory deficiency, and patients who are mechanically ventilated or tracheotomized. Indeed, it is difficult to detect appreciable improvements in functional areas for this cluster of COPD patients. On the basis of these findings, our novel questionnaire represents a substantial innovation, as it has been designed for hospitalized patients with serious respiratory deficiencies undergoing oxygen therapy and/or mechanical ventilation, including via invasive measures.
The 44 items identified for the proposed questionnaire were selected from the MRF-26, although further items were chosen according to clinical experience of the responsiveness to respiratory rehabilitation of COPD patients with severe respiratory failure. Indeed, no questionnaire available in the literature has proved sensitive enough to reveal modifications of QoL in serious COPD patients, even those mechanically ventilated and/or tracheostomized. Our novel questionnaire is organized in two versions, one specific for admission and the other for discharge. These versions differ from each other in that the questions administered at discharge refer explicitly to the period of hospitalization. In addition, compared to the most widely used questionnaires, our questionnaire dedicates much more attention to mood disorders, which are often present in COPD patients and frequently correlated with hypoxia. Furthermore, in accordance with an "approach based on necessity", the new 4th section included in this questionnaire also proposed questions on activities truly important to the patients and their QoL. Nevertheless, the novelty introduced with the 4th section was associated with a low internal consistency, particularly at admission. However, the quality of the 4th section improved significantly when a few questions were eliminated, including the "importance of doing work around the house" at admission and the "importance of reading" at discharge. Therefore, our novel questionnaire, and particularly the 4th section, will undergo a deep revision during the next validation study, in which the proposed IRRQ will be tested on a validation population represented by the same cluster of COPD patients, but with a larger sample size. A further novel characteristic of the proposed IRRQ is represented by the 5th section, which refers exclusively to the specific therapy administered to each patient. Since the items of this section, "What I think about the treatment I am having", allowed the feelings of different subjects receiving different treatments to be compared, we believe that this approach permitted the comparison of subjects undergoing a variety of treatment regimens. Although the preliminary data suggest that this novel questionnaire fitted the study population well, it remains essential to compare its effectiveness, and its putative superiority, with the gold-standard questionnaires currently in use. In any case, the comparison with other existing questionnaires will be complex, mainly because of the innovative characteristics of our questionnaire, which make it significantly different from the others. Indeed, the differences and innovations are noteworthy, as we administered the questionnaire both at admission and at discharge to an extremely specific target population represented by hospitalized, seriously ill COPD patients undergoing respiratory rehabilitation, and, in addition, there is currently no gold-standard questionnaire that selectively evaluates QoL in the population cluster enrolled in our study. However, it might be of interest to challenge our novel questionnaire against gold standards in the same field in order to assay its efficacy in respiratory diseases other than COPD, such as neuromuscular pathologies.
Finally, although the preliminary data of this study represent only the initial development of a survey on QoL in respiratory rehabilitation carried out in COPD patients with severe respiratory failure, our findings are promising and suggest that, after a validation study, the proposed IRRQ might effectively capture the QoL of patients suffering from respiratory failure due to COPD undergoing an in-patient pulmonary rehabilitation program.
A rare morphology of the cardiac fibroma in a child: a case report

Here we report a rare morphology of a cardiac fibroma in a child. A 2-year and 8-month-old toddler presented for "chronic constipation" and was found to have a heart murmur on cardiac auscultation. Further transthoracic echocardiography suggested "a strong echogenic mass in the left ventricular wall, with part of it shaped like 'a string of beads' extending into the left ventricular outflow tract", which was atypical for a tumor, thrombus or vegetation. The child underwent resection of the mass and mitral valvuloplasty. Pathological examination confirmed the mass as a cardiac fibroma.

Introduction

Primary cardiac fibromas in children are exceedingly rare and predominantly occur in infants and young children under the age of 2 (1, 2). These fibromas are typically solitary and mainly located in the left ventricle, with the right ventricle and ventricular septum being less common sites of occurrence (3). Cases may present with evident clinical symptoms and signs, though some remain asymptomatic (4). Physical examination often reveals a heart murmur in symptomatic children. Echocardiography can detect homogeneous echogenic masses within the cardiac chambers (Figure 1E), while computed tomography or magnetic resonance scans can provide a more precise assessment of the tumor's location, size, number, and hemodynamic alterations (5, 6). Here we report a case of cardiac fibroma with an atypical morphology. Surgical excision and pathology confirmed it as a cardiac fibroma.

Case report

A 2-year-and-8-month-old toddler was admitted to the hospital for evaluation of chronic constipation attributed to a long-standing low-fiber diet and the poor therapeutic effect of prolonged lactulose use on softening stools. A heart murmur was noted on physical examination. The child had no relevant medical history or signs of infection, trauma, cold, or any other predisposing factors. Preoperative transthoracic echocardiography revealed a hyperechoic, approximately 32 mm by 16 mm mass in the posterior wall of the left ventricle (Figure 1A). In addition, an echogenic "string of beads" was observed wiggling in the left ventricular outflow tract, with one end connected to the posterior part of the left ventricle and the other end appearing to be connected to the left coronary sinus of the aorta (Figure 1B). The sizes of the four chambers were considered normal, and the ejection fraction was 63%. Interestingly, this child showed no clinical symptoms, and the results of coagulation assays, neutrophil count, C-reactive protein, complete antinuclear antibody, antineutrophil cytoplasmic antibodies, and antistreptolysin O titer tests were all unremarkable. A diagnosis of tumor, thrombus or vegetation was yet to be made.
Though the child was generally doing well and hemodynamically stable, there remained a concerning risk that the string-like part might break off and result in embolism. After careful consideration and discussion in a multidisciplinary team, surgical excision was planned. During the procedure, an incision was made in the posterior leaflet of the mitral valve to expose the mass. Part of the mass was like a string of beads (Figure 2A), while the other part was embedded in the posterior left ventricular wall, close to the posterolateral papillary muscle (Figure 2B). The mass was predominantly white, with an intact capsule and a tough texture. After successful removal of the mass, a water injection test showed significant regurgitation of the mitral valve from anterior leaflet prolapse. Mitral valve repair was performed. An ascending aortotomy was performed to exclude any residual mass in the aorta, though no residual mass was found. Postoperative transesophageal echocardiography showed complete removal of the mass (Figure 1C) and a competent mitral valve (Figure 1D). Subsequent histopathological analysis confirmed the mass as a cardiac fibroma with myxoid degeneration (Figure 3). The results of the immunohistochemical analysis of the heart tumor specimen were as follows: Ki-67 (10%+), DES (+), SMA (+), CR (focal +), CD34 (vascular +), Vim (+), EMA (−), CK (−), CD163 (+), ALK (−). The recovery was uneventful and the patient was discharged on postoperative day 10. Upon follow-up, investigations including chest radiography and electrocardiography revealed no significant abnormalities. Transthoracic echocardiography demonstrated mild hyperechogenicity of the left ventricular papillary muscles, potentially related to postoperative changes. There was trivial regurgitation of the mitral valve (Figure 1F). Left ventricular systolic function was preserved. The patient's family reported no issues with daily activities or exercise tolerance.

Discussion

Cardiac fibromas are extremely rare heart tumors, particularly in children, and typically occur between a few months and a few years of age. The clinical presentation of cardiac fibromas in children can vary greatly. Some patients may not exhibit noticeable symptoms, while others may experience symptoms such as heart murmurs, shortness of breath, and arrhythmia (7). In this particular case, the (8). Echocardiography is the most frequently employed diagnostic tool, as it allows visualization of the location, size, and characteristics of a cardiac tumor (9, 10).
In the present case, however, as the morphological characteristics were not typical of a fibroma and a thrombus or vegetation could not be excluded, it was difficult to make a definitive diagnosis before surgical excision and pathology. Rhabdomyoma is a hamartoma formed during the development of heart muscle cells and accounts for more than 60% of primary heart tumors in children. It is most likely to occur in the left or right ventricular wall or the septum (11). Fibroma, which is derived from connective tissue fibroblasts, is the second most common benign primary cardiac tumor in children and is more common in infants under 1 year of age. The most common locations are the ventricular septum and the free wall of the ventricle, and rarely the atrium (12, 13). Cardiac myxoma, another common type of cardiac tumor, consists of large numbers of stellate or polygonal myxoma cells within a myxoid stroma. It is the most common cardiac tumor in adults, but rare in children. It is often found in the left atrium and rarely in the heart valves and ventricles (14). Malignant cardiac tumors in children are rare, accounting for about 10% of all cardiac tumors in children. Most of these are metastatic malignancies, the incidence of which is 10-20 times that of primary malignancies. For thrombus, echocardiography often shows sessile masses, enlarged atria, and low cardiac output; clinical signs include congestion and a response to thrombolytic therapy. For vegetations, echocardiography often reveals masses with irregular mobility that are adherent to valves, findings that are highly associated with infective endocarditis (15-17).

In any case, management is typically determined by the severity of symptoms, the size and location of the mass, and the overall health condition of the patient. Surgical resection is required for cases involving severe symptoms or masses that impede heart function (18-22). The aim of surgical removal is to excise the mass completely while preserving normal heart tissue. Care must be taken to preserve the adjacent cardiac structures and their function. The tumor may affect the valvular apparatus, making valve repair necessary. However, this can be achieved with a satisfactory outcome, as shown in this case.
FIGURE 1 FIGURE 1 Preoperative transthoracic echocardiography reveals a slightly hyperechoic, approximately 32 mm by 16 mm mass in the posterior wall of the left ventricle (A) echogenic "string of beads" is observed wiggling in the left ventricular outflow tract, with one end connected to the posterior part of the left ventricle and the other end appearing to be connected to the left coronary sinus of the aorta (B) postoperative transesophageal echocardiogram shows complete removal of the mass (C) postoperative transesophageal echocardiogram shows competent mitral valve (D) preoperative transthoracic echocardiogram shows competent mitral valve (E) postoperative transthoracic echocardiogram mitral valve shows trivial regurgitation (F). FIGURE 2 FIGURE 2 It shows that part of the mass resembles a string of beads (A) the other part is embedded in the posterior left ventricular wall close to the posterolateral papillary muscle (B). FIGURE 3 FIGURE 3 Tian et al. 10.3389/fcvm.2024.1357747Frontiers in Cardiovascular Medicine 03 frontiersin.orgchildren and contributes to the overall body of knowledge and promoting additional research in this area.
PERCUTANEOUS RADIOFREQUENCY ASSISTED LIVER PARTITION WITH PORTAL VEIN EMBOLIZATION FOR STAGED HEPATECTOMY (PRALPPS)

ABSTRACT Background: When a major hepatic resection is necessary, the future liver remnant is sometimes not enough to maintain sufficient liver function, and patients are more likely to develop liver failure after surgery. Aim: To test the hypothesis that performing a percutaneous radiofrequency liver partition plus percutaneous portal vein embolization (PRALPPS) for staged hepatectomy in pigs is feasible. Methods: Four pigs (Sus scrofa domesticus) of both sexes, weighing between 25 and 35 kg, underwent percutaneous portal vein embolization of the left portal vein with coils. By contrast-enhanced CT, the difference between the liver parenchyma corresponding to the embolized zone and the normal parenchyma was identified. Immediately afterwards, using the fusion of ultrasound and CT images as a guide, radiofrequency needles were placed percutaneously and ablation was performed until the liver partition was complete. Finally, the hepatectomy was completed with a laparoscopic approach. Results: All animals survived the procedures, with no reported complications. Successful portal embolization was confirmed both by portography and by CT. In the macroscopic analysis of the specimens, the depth of the ablation was analyzed. The hepatic hilum was respected. In addition, the correct position of the embolization material in the left portal vein could be observed. Conclusion: Percutaneous radiofrequency assisted liver partition with portal vein embolization (PRALPPS) is a feasible procedure.

INTRODUCTION

When a major hepatic resection is necessary, sometimes the future liver remnant (FLR) is not enough to maintain sufficient liver function and patients are more likely to develop liver failure after surgery 9,10. In order to avoid that undesirable situation, in patients with normal liver function and with an FLR of less than 20-30%, percutaneous portal vein embolization (PVE) has been the gold standard for achieving its hypertrophy. Although it is a good approach and a technique with a high success rate, it takes four to six weeks to achieve the goal of hypertrophy, and meanwhile the tumors could keep growing or, even worse, new lesions could appear 8. To improve on that, Schnitzbauer et al 21 introduced a novel technique called associating liver partition and portal vein ligation for staged hepatectomy (ALPPS). It is a two-step procedure. The first step consists of open surgery in which the portal branches feeding the side to be resected are ligated and a liver partition is performed. The second step is the hepatectomy. This technique was proven to increase the FLR by 40-80% in volume in less than 10 days by avoiding the formation of collateral vessels 26. It was a promising approach except for high morbidity and mortality rates, which rise to more than 70% and 10%, respectively 8. For that reason, many variants of this technique have been developed 26. Among them, mini-ALPPS was described by De Santibañes et al 6. Despite being a less complex procedure, it still remains a two-stage open surgery with non-negligible morbidity 8. Then, Jiao et al 16 introduced the splitting of the liver parenchyma assisted by radiofrequency performed laparoscopically, and named it radiofrequency assisted liver partition with portal vein ligation (RALPP).
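As a hedged aside, the FLR thresholds and volume growth quoted above are usually expressed as simple ratios of CT volumetry measurements. The sketch below shows one common way of computing an FLR percentage and the relative volume growth between stages; all volumes are invented for illustration, and the formulas are generic conventions rather than values or methods taken from this study.

```python
def flr_percent(flr_volume_ml: float, total_liver_ml: float, tumor_ml: float = 0.0) -> float:
    """FLR as a percentage of the total functional (tumor-free) liver volume."""
    functional_liver_ml = total_liver_ml - tumor_ml
    return 100.0 * flr_volume_ml / functional_liver_ml

def percent_growth(flr_before_ml: float, flr_after_ml: float) -> float:
    """Relative volume growth of the FLR between the two stages."""
    return 100.0 * (flr_after_ml - flr_before_ml) / flr_before_ml

# Invented CT volumetry values (ml); for simplicity the post-stage percentage is
# computed against the same baseline functional volume.
total_liver, tumor = 1500.0, 150.0
flr_before, flr_after = 300.0, 480.0

print(f"FLR before: {flr_percent(flr_before, total_liver, tumor):.1f}%")  # ~22%, in the 20-30% range discussed above
print(f"FLR after:  {flr_percent(flr_after, total_liver, tumor):.1f}%")
print(f"Volume growth: {percent_growth(flr_before, flr_after):.0f}%")     # 60%, within the reported 40-80% range
```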
Also, other sources of energy have been used in animals 19 and in humans, such as the technique that Gringeri et al 12 called "laparoscopic microwave ablation and portal vein ligation for staged hepatectomy (LAPS)". They all have something in common: a less invasive approach in order to reduce morbidity and mortality. Therefore, to continue this evolution, in this study we present a novel technique called "percutaneous radiofrequency assisted liver partition with portal vein embolization" (PRALPPS) and demonstrate its feasibility.

Animals and protocol

The present study is a prospective, experimental study in animals approved by the Ethics Committee of the IHU. It was held at the IHU Strasbourg, France, in conjunction with the DAICIM Foundation from December 2016 to July 2017. The 3R ethical principles (refinement, replacement and reduction) were strictly adhered to 6,16. Four pigs (Sus scrofa domesticus) of both sexes, weighing between 25 and 35 kg, were used. The animals were housed in individual cages, respecting the circadian light-dark cycle, with constant humidity and temperature. The environment was enriched by the presence of toys. The day before surgery, the experimental subjects were fasted for 24 h, but with free access to water. Anxiety related to moving the cage to the operating room and/or imaging platform was controlled by an intramuscular injection of ketamine (20 mg/kg) + azaperone (2 mg/kg, Stresnil; Janssen-Cilag, Belgium) 1 h before the procedure. Induction was performed with intravenous injection of propofol (3 mg/kg) + pancuronium (0.2 mg/kg). Anesthesia was maintained with 2% isoflurane. Pigs were sacrificed under general anesthesia by injection of a lethal dose of potassium chloride. The study protocol consisted of the intervention (PVE plus radiofrequency liver partition), euthanasia and liver explantation in two pigs, and a second intervention (laparoscopic hepatectomy) followed by euthanasia in the remaining two pigs.

Technique of PVE and percutaneous radiofrequency liver partition

The procedure begins with the percutaneous embolization of the left portal vein. For this, an abdominal ultrasound (US) was performed (Acuson S 3000 - Siemens), locating the liver 9. A branch of the right portal vein was identified. Under US guidance, the vein was accessed using a Chiba 21 G (Cook) needle. The position was confirmed by injecting contrast through the needle under fluoroscopic control (Artis Zeego - Siemens). A portography was done. Once inside the vein, a guide (Guidewire 0.018" - Cook) was introduced. The needle was replaced by an introducer (Neff Introducer Set - Cook) using a Seldinger technique. Through the introducer, a catheter (Boston Scientific Bern 4 Fr Catheter) was placed in the left branch of the portal vein over accessory guides (Guidewire 0.035" Roadrunner - Cook; Guidewire 0.035" - Amplatz). The embolization was performed using coils of different sizes (Nester Embolization Coils - Cook), including 14x20 mm, 10x20 mm, 8x14 mm, 6x14 mm and 4x14 mm. Correct embolization was confirmed with a final portography 1. Then, the intrahepatic path was embolized (Veriset Haemostatic Patch - Medtronic, Figures 1A and B). Afterwards, a computed tomography (CT, Somatom Definition AS Plus - Siemens) with IV contrast (Ioméron 400 mg/ml - Bracco) was obtained, with venous, arterial and portal phases (Figure 1C). A subtle difference was identified between the embolized area and the normal liver parenchyma.
Three radiofrequency ablation (RFA) needles (Radiofrequency Cool Trip System Needle - Medtronic) were set in place simultaneously, using the fusion of US and CT images as a guide (Figure 2). They were separated from each other by approximately 2 cm (Radiofrequency Cool Trip Ablation System Equipment - Medtronic). Subsequently, ablation was performed for 6 min with each needle. The ablation area of each needle was approximately 3 cm in diameter. At the end of each ablation period, the needles were removed and repositioned in the same manner, repeating the procedure until the partition was complete along the anterior face of the liver. The border between the parenchyma corresponding to the embolized portal sector and the normal parenchyma served as a reference, as did the right hepatic vein. The depth of the partition was approximately 4.5 cm. A new CT scan was then repeated, with the same protocol as described above. The liver partition area could be identified, thus confirming the feasibility of the procedure performed so far (Figure 3).

RESULTS

The animals were operated on 2 h after the radiofrequency liver partition. In two pigs, a total hepatectomy was performed after their euthanasia (Figure 4), with the sole objective of comparing the CT image of the liver after ablation with the final operative specimen. In the remaining two animals, a right hepatectomy was performed by laparoscopy (Karl Storz, Figure 5). The reference for the approach to the liver was the ablation line. Its depth was assessed using translaparoscopic ultrasound (Siemens Acuson P300 LP323 Transducer). To complete the parenchymal partition, we used energy devices (Sonicision Cordless Ultrasonic Dissector 5 mm x 39 cm - Covidien) and staplers for the vascular and biliary parts (Stapler Endo GIA - Covidien; Stapler Endo GIA Articulating Reload with Tri-Staple Technology 45 mm Vascular/Medium - Covidien). We used six reloads in one surgery and five in the other. Finally, the specimen was removed through a medial incision. All animals survived the procedures. There were no bleeding complications. The two pigs that underwent laparoscopic resection were sacrificed at the end of it. There were no complications during the ablation period directly related to this procedure. However, during laparoscopic surgery, small areas of ablation were observed outside the desired area, such as on the spleen of the pig and a small area of the gallbladder (without perforation). It was not necessary to suspend the procedure or take any action. CT with IV contrast after laparoscopic liver resection showed a well-vascularized liver remnant (Figure 6A). In the macroscopic analysis of the specimens, the depth of the ablation was analyzed (Figure 6B). The hepatic hilum was respected. In addition, the correct position of the embolization material in the left portal vein could be observed (Figure 6B).

DISCUSSION

In most major liver resections, percutaneous PVE is the gold standard for achieving hypertrophy of the FLR. Although it has a high success rate, it takes too much time to achieve hypertrophy; meanwhile, the tumors could increase in size 8, and if the FLR does not increase its volume enough, patients lose precious time. A complementary procedure to PVE is the embolization of the ipsilateral hepatic vein. This can be done simultaneously with the PVE or sequentially.
The former has the disadvantage of being an expensive procedure with severe potential complications, performed in many patients who would have achieved hypertrophy even without hepatic vein embolization. The latter has the same time issue as PVE alone 18. In this scenario, the introduction of the ALPPS technique proved to be a major change 8,12,21. It has allowed hepatectomies of greater parenchymal volume to be performed without postoperative hepatic insufficiency and in much less time than with PVE 4,7,20. Its disadvantage is that it is a major surgery in two stages, with a high percentage of associated morbidity and mortality. In order to reduce them, the original technique was modified by the development of the new mini-ALPPS technique 26, which was later also performed by laparoscopy. On the other hand, radiofrequency ablation has shown impressive progress in terms of equipment, allowing liver partitioning to be performed in major liver surgery 5,6 as well as in the laparoscopic approach to ALPPS 11,13. On this path, it seems that the development of a new procedure that could increase the FLR more quickly, with morbidity and mortality similar to those of PVE, would be the highest goal. In the present study, it was demonstrated that it is possible to perform not only the liver partition percutaneously but also the liver resection laparoscopically: together, these make up the PRALPPS technique. This brand-new procedure has two potential benefits: it would reduce the time needed to achieve FLR hypertrophy, because it uses the same concept as the ALPPS technique, and it would also reduce morbidity and mortality rates, based on the evidence that percutaneous procedures produce less inflammatory response. With respect to the limitations of the present study, we should mention that it was carried out with a small sample size, sufficient to demonstrate feasibility but not to analyze safety. In this regard, we experienced two possible complications: the unwanted ablation of the spleen and of the gallbladder. It must be taken into account that most ALPPS procedures involve right, not left, liver resections, as in this study. In addition, the anatomical arrangement of the pig spleen is completely different from that of the human. Beyond these special considerations, the correct position of the needle within the hepatic parenchyma when initiating the ablation is very important to avoid these problems 22. Probably, the use of new needles with smaller ablation areas could be a potential solution in the future.

CONCLUSION

Percutaneous radiofrequency assisted liver partition with portal vein embolization (PRALPPS) is a feasible procedure. However, new studies to assess its safety should be carried out.
Hypocoristics in the Ammani-Jordanian context: A Construction Morphology perspective Abstract The current study explores the patterns of hypocoristics in Ammani-Jordanian Arabic in view of Construction Morphology. The most common hypocoristic patterns are addressed with reference to the social factors (gender and age) that may contribute to the templates and functions of the hypocoristic structure. This paper argues that Ammani-Jordanian Arabic speakers produce various hypocoristic patterns to signify a variety of functions. A questionnaire is designed to explore the formation of hypocoristic patterns among 51 Ammani Jordanians from three different age groups (children, young and elderly). The study shows that the most common hypocoristic patterns used by all participants include reduplication, truncation, affixation, and adding Ɂabu “father of” and ʔum “mother of” to male and female names, respectively. The study also reveals how these processes can be used to form hypocoristics of different name types (monosyllabic names, disyllabic names, nonce names, compound names, foreign names, etc.). We also show that the hypocoristic templates may vary according to the gender of the name. The current findings help foreign learners of Arabic to better comprehend the Jordanian culture, including the use of hypocoristics. Introduction Hypocoristic names (nicknames) refer to a shortened version of a name or a word used with terms of endearment, pet names, or fondling endings (Newman & Ahmad, 1992). They also reflect the affection of the speaker and the diminutive nature of the referent (Obeng, 1997). Cross-linguistically, a hypocoristic name is formed using different word-formation processes such as truncation/shortening as in (1a) and (1b), suffixation as in (1c), reduplication as in (1d), and truncation plus suffixation as in (1e) and (1f). Hypocoristization may involve more complex word-formation processes. In Akan, hypocoristics are formed by morphological and morphophonological processes including compounding and reduplication (along with prolonging the phonic and syllabic unit), deletion, tonal change, and vowel harmony, as in kúùkúlá (a feminine day-name for people born on Wednesday), which is formed "by reduplicating [ku], prolonging the [u] vowel of the prefix, and affixing the final syllable" (Obeng, 1997, p. 45). In Greek, multiple hypocoristic forms can be derived from one name; for example, the name Alexandros has the following hypocoristic forms: Alex, Alexis, Alekos, and Alekakos (Leibring, 2016). In Spanish, some hypocoristic names require further simplification and stem modification via truncation of the first consonant(s) and reduplication of the remaining consonant as in Adolpho/Rodolpho → /fofo/ (Lipski, 1995, p. 392). Cross-culturally, hypocoristics have various characteristics and perform many functions when used in particular contexts. Forming hypocoristics in Australia and New Zealand maintains relationships with people either when forming them as variations of slang, humour or pet names, as in boss-cockie for "a farmer, larger than a Cockatoo who employs other labour as well as working himself" (Bardsley & Simpson, 2009, p.
53), swaggie for swagman, and Tassy for Tasmania, respectively (Bardsley & Simpson, 2009). As part of the Australian ethos, Australian hypocoristics add the suffix -ie to a truncated form of the base word, such as mozzies "mosquitoes", sunnies "sunglasses", or mushies "mushrooms", which function differently from their typical role as diminutives. They reveal a love of informality, good humour rather than endearment, and jocular cynicism (Wierzbicka, 2009). In addition, Spanish hypocoristics are found in children's language and in adults' intentional simplifications when speaking to children (Lipski, 1995). 1 Among the seven types of hypocoristization in Hausa produced distinctly and for different purposes, one type of hypocorism is chiefly produced by females talking to children, and it expresses endearment and tenderness (Newman & Ahmad, 1992). 2 In another type of hypocoristics in Hausa, females use it to signify warmth, affection, diminutiveness, and respect (Newman & Ahmad, 1992). 3 Intriguingly, hypocoristics in Akan are also used in contexts other than between equals and close people. They are used in superior-to-subordinate contexts as well as subordinate-to-superior contexts. However, communicative rules in Akan are strictly enforced to mark age, gender, social class, etc. For instance, it is common for a social inferior to address a respectable superior as "Papa Koo Nimo" or "Agya Koo Gyasi". In this context, Papa "father" is used for someone you respect like your father. The same applies to Agya "father" for someone with the same status as the speaker's father (Obeng, 1997). Nevertheless, in case a hypocoristic form is not preceded by a polite word or expression, it would express disrespect towards the superior referent (Obeng, 1997). Given this overview, and due to their unpredictable morphophonological patterns, investigating hypocoristics has become a major area of interest within the field of linguistics. Theoretical background / hypocoristics in Semitic languages A considerable amount of literature has been published on Semitic hypocoristics (e.g., Abu-Mansour, 2010; Bat-El, 1994; Davis & Zawaydeh, 2001; Zawaydeh & Davis, 1999). Most of these studies have analyzed the formation of hypocoristics within two theories: Optimality Theory and word-based theories of morphology. This section considers three aspects of hypocoristics: the morphology of hypocoristics in Semitic languages, an overview of previous accounts of hypocoristics across Arabic dialects, and a brief introduction to Construction Morphology, a theory of morphology developed by Booij (2010, 2016) that is employed in the analysis of hypocoristics in the present study. Then, a discussion of previous accounts of Arabic hypocoristics is provided.
The morphology of hypocoristics in Semitic languages Morphologically, there are two viewpoints on the status of the consonantal root across Semitic languages. Some linguists argue for the morphemic status of the consonantal root (e.g., Prunet, 1998), while others working on Hebrew or Arabic deny this view (e.g., Bat-El, 1994; Ratcliffe, 1997, as cited in Davis and Zawaydeh, 2001). For instance, Bat-El (1994) and Ussishkin (1999) argued that Modern Hebrew denominal verb formation does not explicitly reference the consonantal root. Moreover, studies on Arabic hypocoristics that adopt a stem-based view, as analysed by Davis and Zawaydeh (2001), or a word-based view of morphology, such as Ratcliffe (1997) and Benmamoun (1999), give little weight to the special status of the consonantal root. Both accounts suggest that Arabic verbal morphology depends on words rather than consonantal roots. While Benmamoun speculates that some parts of Arabic morphology may necessitate the use of a consonantal root, Ratcliffe (1997) flatly opposes this idea. Ratcliffe (1997) claims that consonantal root operations can be phonologically described as sonority through operations. An overview of previous accounts of Arabic hypocoristics across dialects In a study on the formation of hypocoristics in colloquial Arabic depending on word-based theories of morphology, Davis and Zawaydeh (2001) present data to support their argument that Arabic hypocoristic formation "is an example of an output-to-output word-formation process that references the consonantal root" (Davis & Zawaydeh, 2001, p. 514). Data such as "dja:na: dajju:n" are provided to prove that it does not matter whether "the consonant in the full name is an onset, coda, singleton, or geminate," because the root consonant is the only critical element for its appearance in the hypocoristic. In this example, the hypocoristic formation makes a "reference to the consonantal root as it appears in the prosodified full name and thus reflects an output-to-output derivation" (Davis & Zawaydeh, 2001, p. 516). Abu-Mansour (2010) addresses the failure of names related to glide-medial and glide-final roots to form the hypocoristic pattern C1aC2C2u:C3 in Makkan Arabic. The issue is that the medial glides /w/ and /j/ have dual behaviour. For instance, if the medial glide of the root is /w/, the name fails to take the hypocoristic pattern C1aC2C2u:C3, but hypocoristic nouns with a medial glide /j/ form it successfully. It is argued that Makkan Arabic speakers produce one pattern of hypocoristic formation with two manifestations, pattern I /C1aC2C2u:C3/ and pattern II /C1aC2C2u/, in which an alternative explanation of the failure of glide-medial and glide-final roots to form /C1aC2C2u:C3/ hypocoristics is presented (Abu-Mansour, 2010). Depending on the OCP as a constraint in Arabic phonology, it is revealed that "the glide-medial roots continue to fail to form Pattern I hypocoristics", as in the name zakijja: its root is /zkj/, and *zakku:j fails as a hypocoristic, while Pattern II tends to avoid this violation of the OCP (Abu-Mansour, 2010, p. 31).
As far as Jordanian Arabic is concerned, Zawaydeh and Davis (1999) provided the first account of hypocoristic formation in this dialect. Their study examines only one disyllabic pattern of hypocoristics that has the shape CVC.CVVC(V), where the first vowel is /a/ and the second is long /u:/ (dja:na "Dyana [proper name]" /djn/: dajju:n(e)). The researchers mainly focus on problematic examples and explain their singularity as having different patterns or reflecting other constraints. Moreover, they argue that Arabic hypocoristics refer to an output root of the full name that is different from the input lexical root of the same name, and that "root consonants can be referred to in output-output constraints", proving that this phenomenon is essential for understanding the unusual language games and speech errors reported in Arabic (Zawaydeh & Davis, 1999, p. 136). Drawing on Davis and Zawaydeh's (1999) tenets, Farwaneh (2007) presents other hypocoristic examples to challenge the morphemic status of the root as well as the assumption of a hypocoristic morpheme. The hypocoristics considered in her study are derived from hollow (glide-medial) roots, weak (glide-final) roots, and reduplicated hypocoristics. The researcher offers a word-based analysis of hypocoristics within both Optimality Theory and Correspondence Theory, as the latter provides an "account for all types of hypocoristics considered: geminated wallu:d, affixed mannu:l partially reduplicated farfu:ħ, totally reduplicated zanzu:n or marked with final spreading dallu:l" (Farwaneh, 2007, p. 47). The researcher concludes that, as a morphological entity, the root is neither necessary nor sufficient, since it fails to set the aforementioned types of hypocoristics in one framework (Farwaneh, 2007). Construction Morphology Construction Morphology is a theory of morphology developed by Booij (2010, 2016) that is concerned with how new words of a certain type can be formed and illustrates the different phonological and morphological processes involved in word formation. The following schema (from Booij, 2017) represents the form-meaning correlation of the morphological construction for a verbal base like dance, walk or sing followed by the suffix -er, and shows its corresponding systematic meaning pattern "one who Vs", where V stands for the meaning of the verb. (2) <[[x]Vi er]Nj ↔ [one who SEMi]j> This way of accounting for morphological patterns has been developed in the theory of Construction Morphology as outlined in Booij (2010). In this schema, the angled brackets identify the constructional schema. The variable x represents the phonological content of the base word. The relationship between form and meaning is specified by the coindexation. The index i indicates that the meaning of the base word (SEM) recurs in that of the corresponding complex word. The index j is used to show that the meaning of the construction correlates with the form as a whole. The double arrow indicates the correlation between form and meaning.
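As a rough illustration of how such a schema pairs form with meaning, the pattern can be mimicked with a small data structure. The Python sketch below is only a toy rendering of the idea under our own simplifying assumptions (the function name, the stripped base forms and the mini-lexicon are invented for demonstration and are not part of Booij's formalism):

```python
# A toy rendering of the constructional schema <[[x]Vi er]Nj <-> [one who SEMi]j>.
# The slot "x" is filled by a verbal base, and the base's meaning (SEM) recurs
# inside the meaning of the whole derived noun.

def agent_noun_schema(base_form: str, base_meaning: str) -> dict:
    """Apply the -er agent-noun schema to a verbal base (illustrative only)."""
    return {
        "form": base_form + "er",               # [[x]V er]N
        "meaning": f"one who {base_meaning}s",  # [one who SEMi]j
    }

# Hypothetical mini-lexicon used only for demonstration.
for base, meaning in [("danc", "dance"), ("walk", "walk"), ("sing", "sing")]:
    print(agent_noun_schema(base, meaning))
# {'form': 'dancer', 'meaning': 'one who dances'}, and so on.
```

A hypocoristic schema would work in the same way, pairing a templatic form with a pragmatic meaning such as endearment, diminutiveness or respect.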
In addition to accounting for the different phonological and morphological processes involved in word formation, Construction Morphology can help to illustrate the mapping between form and meaning in different contexts by using constructional schemas, i.e., schematic representations of morphological constructions. Thus, it could reduce the degree of arbitrariness between form and meaning. According to this approach, once language speakers have been exposed to a large set of words that belong to a certain type (hypocoristics in our case), they figure out the abstract morphological patterns that they will later use to form new words of that type in different contexts. While previous accounts of hypocoristics have mainly focused on the formation of hypocoristics and left out meaning, Construction Morphology includes aspects of both form and meaning and could therefore be a complete theory for hypocoristics. This study The study of hypocoristics in the Jordanian Arabic-speaking context is important for many reasons. Awareness of hypocoristics in a language boosts a learner's overall performance in that language. Moreover, sociolinguistic aspects of hypocoristics are essential, especially for people from different socio-cultural/dialectal backgrounds and for foreign learners. Because attitudes towards the different uses of a language in different societies may be either positive or negative (Bayyurt, 2013), being aware of the sociolinguistic factors in cross-cultural communication is essential to achieve positive sociopragmatic knowledge. Otherwise, a foreign learner might use a hypocoristic in an inappropriate context, thus reflecting disrespect for others, especially when addressing elderly people or in the context of superiors' interaction with subordinates. A hypocoristic name also plays a significant role in determining an individual's identity and may even be more popular than their given name. Moreover, hypocoristics are markers of in-group identity, and their use enhances social bonds, so a foreign speaker should be aware of using hypocoristics in their appropriate contexts in order to share the cultural heritage between speakers. Kidd et al. (2016) demonstrated that "linguistic convergence is associated with the social closeness between speakers" (p. 727). They found that using hypocoristics within the same social group results in convergence, while using them with an outgroup person results in divergence, just as communication accommodation theory suggests (Ayoko et al., 2002). In the Jordanian Arabic context, each name can have one or more hypocoristic forms involving different morphological and phonological processes. Sociolinguistically, hypocoristics carry either a positive or a negative connotation when used in specific contexts. Given the above, the present study aims to explore the patterns of hypocoristics in Jordanian Arabic. 4 Specifically, the current study investigates the most common patterns of hypocoristics used by three different age groups (children, young and elderly). The study aims to contribute to previous literature by investigating hypocoristics in the framework of Construction Morphology (Booij, 2010, 2016), which may present a potentially new avenue for hypocoristic research since, as mentioned earlier, Construction Morphology includes aspects of form and meaning and could therefore be considered a complete theory for hypocoristics.
The layout of the paper is as follows: the next section addresses the method and procedure of data collection. Then, the paper presents data on the most common hypocoristic patterns used by three different age groups of Ammani Jordanians (children, young and elderly) and displays how these groups produce different hypocoristic patterns for the same name. Furthermore, the paper discusses how the produced hypocoristic patterns achieve certain purposes, such as showing intimacy, endearment, humour, or respect. The last section concludes with implications and some recommendations for future research. Participants The participants of this study were 51 native speakers of Ammani-Jordanian Arabic (23 males and 28 females) divided into three different age groups: 17 children aged 8-14; 18 young participants aged 16-58; and 16 elderly participants aged 60 and above. The participants had no history of hearing or speech disorders. The appendix includes a detailed table of the participants' demographic profiles. Participants were asked to take part in this study voluntarily, and they were assured that their names would be kept anonymous and that they had the choice to discontinue the oral questionnaire at any time. Materials The current study used an oral questionnaire designed by the researchers and confirmed by a specialist to ensure validity. The questionnaire measured and evaluated the formation of hypocoristic patterns among Ammani-Jordanian Arabic speakers. Since it was an oral questionnaire, it consisted of one main question asked of the participants after explaining that the aim of this research was to investigate the differences in hypocoristic formation. Seventeen names were selected for this paper and divided into the following types: 1. a monosyllabic names category that includes two common names and one nonce name; 2. a disyllabic names category that involves four common names and one nonce name; 3. compound names, comprising four names; and 4. a foreign names category that involves three names. The ratio of male names to female ones is 1.6:1 (8 male names and 5 female names).
Procedures After being self-selected, each participant was met individually in a quiet room to limit interventions and was told that the questionnaire concerned the hypocoristic patterns used in Ammani-Jordanian Arabic; each session took a maximum of 15 minutes. To obtain naturally occurring data, children were interviewed by their parents/legal guardians. They explained the idea of the research to their children with some examples, and then asked them to form one hypocoristic for each given name. Children's sessions lasted longer than the time determined (20-25 minutes on average), as their parents had to explain the process to them more than once. Strict confidentiality was assured, and instructions were given to the participants on how to answer the questionnaire orally. They were asked to give one hypocoristic form for each name. All responses were recorded using a smartphone microphone (Android, Samsung Galaxy A21s). Data were then stored in a compressed file on the same smartphone. The data were transcribed using IPA symbols by the researchers and double-checked by a phonetician. Statistical analysis was performed using IBM SPSS Statistics for Windows, version 26 (IBM Corp., Armonk, N.Y., USA). To guarantee accuracy, the researchers ran a descriptive analysis of frequencies to examine the differences in producing different hypocoristics of the same name by the different age groups. The data were discussed and interpreted in the light of the Construction Morphology schemas (Booij, 2010, 2016), which make it possible to express generalizations about hypocoristic forms and meaning. Results and discussion This paper focuses on five types of names, i.e., Arabic monosyllabic names, disyllabic names, nonce names, compound names, and foreign names. The resulting hypocoristic patterns are discussed accordingly. To start with the monosyllabic Arabic names, Tables 1 (a-c) present the hypocoristic patterns produced by the participants for the selected monosyllabic common names nu:r and ʒu:d and the nonce monosyllabic name nu:s by age group. When viewing the hypocoristic forms of monosyllabic names in Tables 1 (a & b), it can be noticed that the form most used by the children and young participants is reduplication of the first CV or CV: for both the common and the nonce monosyllabic names. For example, many participants tend to produce the hypocoristic form nu:nu: for nu:r (children: 52.9%, young: 77.7%) and ʒu:ʒu: for ʒu:d (children: 64.7%, young: 72.2%), which involves reduplicating the first CV: and omitting or truncating the final consonant; see Davis and Zawaydeh (1999) for a similar dataset. This process is usually the first option for producing the hypocoristic name because it is easier for an individual to produce than a more complex alternative (e.g., nannu:ʃ for nu:r). The participants in the current study reported that this form indicates diminutiveness, intimacy and endearment. This reduplication pattern can be represented by the following schema, in which the form is represented on the left-hand side of the schema and the meaning is specified on the right-hand side (schema 3).
As illustrated in Table 1 (c), the elderly group also uses the same process, but less than the other groups (nu:nu: 18.8%, ʒu:ʒu: 31.2%). Instead, the form most used by this group is adding the term "Ɂabu" to the male name ʒu:d and the term "ʔum" to the female name nu:r, in addition to adding the definite article el- to the beginning of each name, as in "ʔum el-nu:r" (43.8%) and "ʔabu el-ʒu:d" (43.8%). What is interesting is that this method is found to be used even with the nonce name nu:s. The results show that some elderly participants consider it a male name and add "Ɂabu" to it (31.2%), whereas others assume that it is a female name and hence add the term "ʔum" instead (25%). This hypocoristic form is related to teknonymy as used in Ammani-Jordanian Arabic, by which a person would be called the father/mother of his/her eldest male child out of respect. As a hypocoristic, this pattern is restricted to gender in that the term "Ɂabu" is associated with male names only, while the term "ʔum" is associated with female ones (for some details of the functions of such constructions as address terms in Jordanian contexts, see also Al-Khawaldeh et al., 2023). Using this pattern for hypocoristic formation indicates respect and appreciation for the receiver, as reported by the participants. The general schemas for these forms pair the Ɂabu/ʔum construction with the meaning [respect and appreciation]j. Another hypocoristic form illustrated in Tables 1 (a-c) and used by all the groups is the addition of the suffix -a (e.g., nu:ra), -u (e.g., ʒu:du), -i: (e.g., nu:si:), or -ti: (e.g., nu:rti and nu:sti:) after the full name. The -i and -ti suffixes are considered possessive suffixes when added to the full name (Prunet & Idrissi, 2014, p. 179) (as illustrated in 6). On the other hand, if they are added to a hypocoristic, as in nasnu:sti:, they express the diminutive, as represented in (7). For details on the formation of diminutives in Ammani-Jordanian Arabic, see Mashaqba et al. (2022b). This was confirmed by the participants of the present study. It can also be noticed that the suffix -i appears after the male name ʒu:d, whereas the suffix -ti appears after the female name nu:r. This pattern is also used with the nonce name nu:s by all the age groups, as some participants produce the form nu:si while others produce nu:sti:. On the other hand, the results of the current study show that the addition of the suffixes -a or -u to form hypocoristics in Ammani Arabic indicates tenderness and affection (see schema in 8). Table 1. (a) Number and percentage of hypocoristics of monosyllabic names produced by the children group (n = 17) Tables 2 (a-c) display the hypocoristic patterns formed by each age group for the disyllabic common names xali:l and ʔasˁi:l and the nonce name xami:l.
The results presented in Tables 2 (a-c) show that all the age groups opt to form the hypocoristic patterns xallu:l and ʔasˁsˁu:l, in which the second consonant is geminated and the long vowel /i:/ is changed into the long vowel /u:/ to form the hypocoristic pattern C1aC2C2u:C3, where the vowel melody is always /a/-/u:/, while the consonants coincide with those of the actual name. The rule for forming this pattern thus maps the three root consonants onto this fixed template. Another common hypocoristic pattern among the three groups involves adding the term "Ɂabu" before the name and changing the name itself morphologically, as in Ɂabu-l-xill for the name xali:l and Ɂabu-l-ʔasˁɑ:jil for the name ʔasˁi:l. As shown earlier (in schema 4), this type of hypocoristic pattern is also found in the responses for the monosyllabic names, but the difference is that the pattern with disyllabic names involves some morphological changes to the name itself (such as truncation, reduplication, or affixation). However, as the participants reported, the meaning connected with this form is still the same, i.e., respect and appreciation. Moreover, the results in Tables 2 (a-c) indicate that reduplication is not limited to monosyllabic Arabic names, since many participants tend to reduplicate some parts of the disyllabic names to form their hypocoristic forms, e.g., xu:xu:, xalxal, sˁu:sˁu:, sˁalsˁal. This also shows that the participants from all age groups tend to form the templatic shape [C1uC1u] for monosyllabic and disyllabic names. This indicates the high degree of productivity of this construction, as it is believed that patterns that apply to a high number of items also tend to be highly applicable to new items (Bybee, 1985). Although this template applies only to the first consonant, forming the hypocoristic based on the second or the third consonant would result in an inappropriate hypocoristic (Davis & Zawaydeh, 1999). For instance, lu:lu: does not seem to be an appropriate hypocoristic for the name xali:l, nor does mu:mu: for the nonce name xami:l. This explains why none of the participants from any age group produced such a hypocoristic. However, in the name ʔasˁi:l, the first consonant is a glottal stop, so in this case the second consonant is used to form the hypocoristic sˁu:sˁu: instead. This form is found in the children's and young groups' responses. This shows why none of the participants from any age group produced the hypocoristic *ʔuʔu for the name ʔasˁi:l. Another pattern used by all the groups is adding a suffix to the full name, as in xali:lo: for xali:l and ʔasˁi:lo: for ʔasˁi:l. For the disyllabic nonce name xami:l, the results reveal that the participants produce hypocoristic patterns similar to those of the common names. Thus, participants tend to reduplicate the first CV of the name, as in the hypocoristic xamxam, keep the name as it is and add a suffix at the end, as in xami:lu, or add the term "Ɂabu" at the beginning and change the name morphologically, as in ʔabu-l-xama:jil. In sum, the most common hypocoristic patterns of disyllabic Arabic names involve reduplication, affixation, and adding the term "Ɂabu" to male names while changing the name morphologically, in addition to geminating the second consonant and changing the long vowel /i:/ into the long vowel /u:/ to form the hypocoristic pattern [C1aC2C2u:C3], as in ʔasˁsˁu:l.
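Because these patterns are defined over a consonantal skeleton with a fixed /a/-/u:/ melody, they can be sketched procedurally. The short Python sketch below applies the C1aC2C2u:C3 gemination template and the C1u:C1u: reduplication template to consonants taken from the names discussed above; it is a simplified illustration under our own assumptions (the function names are invented, and the glide-medial roots, the glottal-stop restriction and other conditions discussed here are not modelled):

```python
def gemination_template(c1: str, c2: str, c3: str) -> str:
    """Map a triliteral root C1-C2-C3 onto the template C1aC2C2u:C3."""
    return f"{c1}a{c2}{c2}u:{c3}"

def reduplication_template(c1: str) -> str:
    """Reduplicate a single consonant as C1u:C1u:."""
    return f"{c1}u:{c1}u:"

# Illustrative applications using root consonants of the names discussed above.
print(gemination_template("x", "l", "l"))    # xallu:l   (for xali:l)
print(gemination_template("ʔ", "sˁ", "l"))   # ʔasˁsˁu:l (for ʔasˁi:l)
print(reduplication_template("n"))           # nu:nu:    (for nu:r)
print(reduplication_template("ʒ"))           # ʒu:ʒu:    (for ʒu:d)
```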
The present study also investigates the case of forming hypocoristics for names sharing the same syllable shape but used differently according to gender, as in ʕali (CVCV) and ʕulɑ (CVCV), in which the former is used for males and the latter for females. The two names differ only slightly phonologically: both start with the consonant /ʕ/, but in one it is followed by the short vowel /a/ and in the other by the short vowel /u/, and both names end with an open syllable. Table 3 presents the patterns produced by all the participants regardless of their age group, as the aim of this investigation is only to see whether the morphological similarity (in terms of root morpheme) between these names would result in similarity in hypocoristic formation processes. As illustrated in Table 3, the most used pattern for both names is ʕallu:ʃ, which involves C2 gemination (the /l/ sound in this example) and the suffixation of -u:ʃ to the open coda syllable (as illustrated in 10). This form was used for ʕali by 70.6% and for ʕula by 66.7%. This hypocoristic pattern is related to names that end with an open syllable. It involves the duplication/gemination of the second radical [C2] and the suffixation of -u:ʃ (as represented in 10), where the vowel [a] and the -u:ʃ suffix are invariable while the consonants coincide with those of the actual name. This indicates the similarity in the hypocoristic formation process between these names. Other forms produced by the participants include reduplication, as in ʕulʕul for ʕali (9.8%) and lu:lu: for ʕula (25.5%). Some participants (19.6%) add the suffix -a:wi to the name ʕali, and some add the term ʔum- to the name ʕula and change the name morphologically into ellu:l to form ʔum-ellu:l (7.8%). Therefore, one can argue that the morphological similarity between names can result in similar hypocoristic forms regardless of their gender. Furthermore, this study includes compound names to explore the form of hypocoristics that Ammani Jordanians use for such names. Table 4 presents the number and percentage of the hypocoristic forms for compound names produced by the participants. As represented in Table 4, the most common patterns for compound names include either omitting the whole second part of the name, as in the hypocoristic nu:r for nu:r il-hudɑ, or omitting the second part and simultaneously forming a hypocoristic from the first part using reduplication, such as nu:nu. A second way involves the deletion of the first part of the compound name, with some changes made to the second part, as in nada, nɑddu:ʃ, nu:nu, or nɑddu:ʃe for qɑtˁr in-nɑdɑ. Thus, the forms produced for compound names involve either changing the first part of the name and omitting the second one or deleting the first part of the name and changing the second one. In brief, one part of the compound name is deleted when forming a hypocoristic in Ammani-Jordanian Arabic (as in 11 & 12). Ammani speakers often use foreign names either for themselves or for naming their pets (for details on the positive attitude Ammani people have toward their prestigious spoken dialect, see Mashaqba et al., 2023). Accordingly, this paper includes three foreign names (Ɂiliːn, fredi, and pʰɔ:l) in the questionnaire to examine the hypocoristic formation process. Table 5 presents the most common hypocoristics Ammani Jordanians use for these names.
The results in Table 5 show that the traditional process of reduplicating some part of the name is also used in forming hypocoristics of foreign names. For instance, the most produced hypocoristic pattern of the name Ɂileːn is lu:lu (58.8%), followed by li:li (29.4%). Another pattern of the name Ɂileːn is found which encompasses preserving the first part of the name and omitting the final consonant, giving Ɂili (11.8%), which leads to a phonological change of the long vowel [e:] into the short vowel [i]. The same applies to the other foreign names; fredi: > fu:fu: (52.9%), fred (47.1%); pʰɔ:l > pupu (37.3%), pulpul (49%), pʰu: (13.7%). With regard to the social contexts, almost all the participants confirmed that hypocoristics in Ammani Arabic are used between equals and close people or in superior-to-subordinate contexts, and that in all cases, using hypocoristics indicates informality. Unlike some other languages (e.g., Akan, where hypocoristics can be used in subordinate-to-superior contexts), the Ammani context does not use this structure in that direction, as it would express disrespect towards the superior. Moreover, most of the participants reported that they generally use hypocoristics when they have a positive attitude towards the person referred to, in order to show intimacy, familiarity, and friendliness and to consolidate a social relationship. On the other hand, most of the participants reported that they would use the full name of a person rather than a hypocoristic if they have a negative attitude towards him/her. However, some participants stated that in specific contexts they may use hypocoristics with a negative connotation, and in this case, the hypocoristics would have a sarcastic and negative meaning. To summarise, the results of the present descriptive study indicate that the common hypocoristic patterns produced by the three age groups involve reduplication, affixation, and truncation. In reduplication, part of the base name is repeated to create the hypocoristic. Affixation involves the addition of a suffix (e.g., -i, -o) to the base name to form the hypocoristic. Truncation involves shortening the name by eliminating one of its parts. Among the interesting hypocoristic patterns is adding the term "Ɂabu" to male names and "ʔum" to female names, a process which indicates that some patterns are restricted to the gender associated with the name. It is important to note that the aforementioned processes can operate independently, as in ʒu:di, which involves affixation only, whereas in some cases more than one process can be used to form a hypocoristic, as in sˁu:sˁi:, which involves both reduplication and affixation. Further, the results show that the hypocoristic forms for the nonce monosyllabic and disyllabic names are formed through the same processes adopted for the common names. In the case of compound names, the results reveal that either the first part of the name is changed and the second part is omitted (nunu or nu:r for nu:r Ɂil-hudɑ), or the first part of the name is omitted and the second is changed, as in the hypocoristic forms of qɑtˁr in-nɑdɑ ~ nada or nɑddu:ʃ. Moreover, the findings demonstrate that morphological similarity between two names can result in similarity in their hypocoristic forms, regardless of the names' gender, as seen in ʕallu:ʃ, which is found to be the most common hypocoristic for the male name ʕali and the female name ʕula. Finally, regarding the foreign names, the paper finds that they follow the same patterns used for the native Arabic names.
Conclusion The current study offers a descriptive analysis to explore and describe the patterns of hypocoristics in the Ammani-Jordanian context according to the age of the participants and the gender of the name. The data used in the current study include common monosyllabic names, common disyllabic names, nonce names, compound names, and foreign names. The findings show that the hypocoristic formation processes used by Ammani Jordanians include reduplication, affixation, truncation, and the use of gender-specific terms like "Ɂabu" with male names and "ʔum" with female names. The examined age groups use all these hypocoristic processes; however, the frequency of their usage may differ from one group to the other. For instance, the children and young participants are found to prefer reduplication to form hypocoristics with monosyllabic names, whereas the elderly group shows a tendency towards adding the terms "Ɂabu" and "ʔum" to form a hypocoristic instead. As explained earlier, this process conveys the respect and the significant value intended for the receiver. This study contributes to our understanding of the common hypocoristic patterns produced by Ammani Jordanians from different age ranges and shows how many various forms of hypocoristics a single name can carry. Another key contribution of the present study is that it investigates hypocoristics in light of a new framework, namely Construction Morphology (Booij, 2010, 2016), which can be considered a complete theory of hypocoristics as it includes aspects of form and meaning at the same time. The study opens new directions for future research and concludes with pedagogical implications for learners of Arabic and English. The findings of the present study raise awareness of the pragmatic knowledge of hypocoristics needed by Arabic L2 learners in expressing tenderness, intimacy, politeness, endearment, or respect to achieve successful cross-cultural communication. Such studies would be of great help to L2 Arabic learners and would constitute an important step toward research-based teaching of cross-cultural communication. Future work can investigate other dialects that may have patterns that are not found in Ammani Jordanian Arabic. It would also be interesting for further research to assess the hypocoristic patterns from the perspective of prosodic morphology theory instead of only depending on the C-V level of representation. Future work is also suggested to examine changes in the productivity (or even linguistic loss) of hypocoristic patterns (if any). © 2023 The Author(s). This open access article is distributed under a Creative Commons Attribution (CC-BY) 4.0 license.
REVIEW OF INCIDENCE OF CARCINOID LUNG TUMORS IN IRAQ. Background: Carcinoid lung tumors are rare tumors which tend to be slow growing. They are one type of neuroendocrine tumor, and they have two subtypes, typical (low grade) and atypical (intermediate grade). 1.2% of primary lung tumors are carcinoid. The average age of people affected is 40-50 years for the typical subtype, while the atypical subtype has been reported in virtually every age group. Cough, dyspnea, hemoptysis and recurrent chest infection are the main presenting symptoms. Chest x-ray, CT scan of the chest and bronchoscopy are the main tools in the diagnosis of carcinoid lung tumor. Management was mainly surgical, with resection of the affected lobe or lung. Introduction:- Carcinoid tumors are rare tumors which tend to be slow growing. They may not cause any symptoms for several years. Most of these tumors occur in people over the age of 60 years. They are one type of tumor of the neuroendocrine system. Most carcinoid tumors are found in the digestive system, but they can also develop in the lung, pancreas, kidney, ovaries or testicles (1). Carcinoid tumors of lung:- They arise from Kulchitsky amine precursor uptake and decarboxylation (APUD) cells in the bronchial epithelium (2,3). Bronchopulmonary carcinoid tumors are classified on the basis of the WHO guidelines of 2004, which describe four subtypes of bronchopulmonary neuroendocrine tumor: typical (low grade), atypical (intermediate grade), large cell neuroendocrine carcinoma and small cell lung carcinoma (4). Both typical and atypical carcinoid tumors consist of small nests or interconnecting trabeculae of uniform cells separated by a prominent vascular stroma and numerous thin-walled blood vessels (2,3). The difference between typical and atypical bronchopulmonary carcinoid tumors is illustrated in the corresponding table. As neuroendocrine tumors, carcinoids are capable of producing a variety of biologically active peptides and hormones, including serotonin, adrenocorticotropin hormone (ACTH), antidiuretic hormone (ADH), melanocyte-stimulating hormone (MSH), and others (3,5,6). Excess serotonin production has been implicated in the development of carcinoid syndrome. This syndrome is characterized by a constellation of symptoms and manifests when vasoactive substances from the tumor enter the systemic circulation, escaping hepatic degradation. This is the case when carcinoid tumors metastasize to the liver or when they arise in the bronchus. These tumors release too much of the hormone serotonin and several other chemicals that cause the blood vessels to open (dilate) (2,7). Ectopic production of ACTH and Cushing syndrome have been reported in association with typical and atypical carcinoid tumors. Although less than 1% of pulmonary carcinoid tumors produce Cushing syndrome, it is the second most common neuroendocrine syndrome produced by these tumors. In addition, these tumors are responsible for the development of about 1% of cases of Cushing syndrome. When a patient is found to have an ectopic source of ACTH production, the lesion is generally a pulmonary neoplasm of some type (3,5,6). The syndrome of inappropriate AVP (arginine vasopressin) secretion, or syndrome of inappropriate secretion of ADH (SIADH), can be produced by pulmonary carcinoid tumors, although it is more commonly associated with small cell lung carcinoma. The production of excess circulating AVP creates hyponatremia secondary to water retention.
Patients present with weight gain, weakness, lethargy, and mental confusion and, in severe cases, can develop convulsions and coma (3,5,6). Imaging studies:- Chest radiography:- Changes associated with bronchial obstruction include persistent atelectasis, consolidation secondary to pneumonia, and bronchiectasis and hyperinflation changes (6). Computed tomography scan:- High-resolution CT scan is the best type of CT examination for evaluation of a pulmonary carcinoid tumor. It may reveal nodules or masses that are not well visualized on plain chest radiograph by virtue of their small size or their position, such as those located in a retrocardiac position. Magnetic resonance imaging:- MRI generally provides information similar to that of CT studies. Dynamic MRI may be a useful complementary examination in selected cases (13). Positron emission tomography:- Positron emission tomography (PET) studies utilize the fact that malignant cells possess a higher metabolic activity rate than do healthy cells. A tagged glucose molecule, FDG (2-[fluorine-18]fluoro-2-deoxy-D-glucose), is administered, and metabolic analysis of this substance within the cells of the imaged organ system or the whole body is conducted. PET scanning appears to have considerable sensitivity and specificity for the identification of malignant lesions (14). Diagnostic Procedures:- Bronchoscopy:- Because the majority of bronchopulmonary carcinoid tumors are centrally located, they are amenable to bronchoscopic evaluation. It is difficult to distinguish typical from atypical carcinoid tumors with the small biopsy typically obtained by flexible bronchoscopy (4). The characteristic bronchoscopic appearance is that of a cherry-red coloured, smooth, polypoid, vascular tumor that bleeds easily and profusely (2). Hemorrhage was previously the main complication, but it is now rare; the risk of hemorrhage can be reduced by administration of epinephrine solution before biopsy. In case of significant hemorrhage which is difficult to control, a neodymium:yttrium-aluminum-garnet (Nd:YAG) laser is helpful (4). Peripheral carcinoid lesions can be evaluated by CT-guided percutaneous transthoracic needle biopsy and video-assisted thoracoscopic biopsy (4). Management:- Surgical therapy:- Surgical resection is the primary mode of therapy for carcinoid tumors of the lung. A variety of forms of resection have been utilized successfully and with excellent long-term results. Endobronchial management:- A typical bronchopulmonary carcinoid tumor which is a strictly endoluminal lesion with no evidence of lymph node invasion can be treated with flexible bronchoscopy and cryotherapy. Endobronchial resection should be reserved for patients who are not amenable to surgical intervention. Bronchoscopic resection using an Nd:YAG laser, with or without photodynamic therapy, has also been utilized in selected cases. As yet, these forms of treatment have been reserved for pre-resection reduction of intrabronchial tumor mass or for palliative management of airway obstruction in cases in which the patient was considered otherwise inoperable (20,21).
Patients and Methods:- This is a retrospective and comparative study of 20 patients with pulmonary carcinoid tumors who were admitted and surgically treated at the Department of Thoracic Surgery of the Al-shaheed Ghazi Al-hariri Hospital, Medical City Teaching Complex, during an 18-year period (1996-2013). The medical records and surgical notes of these patients with the diagnosis of carcinoid lung tumor were reviewed, collecting information relevant to patient variables with regard to age, sex, presenting symptoms, radiographic findings (chest x-ray and computed tomography), typical bronchoscopic appearance, pre-operative preparation, operative findings, post-operative complications and results of histopathology of the resected specimen. Rigid bronchoscopy was used almost exclusively in all patients, and the typical bronchoscopic appearance of the tumor and its exact location were documented. All patients, after full pre-operative preparation including basic and specific investigations and blood preparation, underwent formal postero-lateral thoracotomy, and the collapsed or diseased lobe, lobes or lung was resected in the classical way in terms of ligation and division of the arterial supply and venous drainage of the affected area, followed by division and securing of the bronchus and finally testing for air leak, inserting two intercostal drains and closing the incision in layers. All of our patients had an uneventful post-operative course and were discharged home in good condition; no recurrence was observed in the following years in those patients attending regular follow-up, although recently only two patients were still being seen and followed. Results:- Of the 20 patients with carcinoid tumor who were admitted and surgically managed at the Department of Thoracic Surgery of the Al-shaheed Ghazi Al-hariri Hospital, eleven were female, constituting (55%), and the rest (nine patients) were male, constituting (45%). The youngest patient was a 25-year-old woman, and the oldest was a 58-year-old man. Most of the patients (8 patients (40%)) fell between 20 and 30 years of age. The distribution of our patients as regards their age and sex is shown in figure (4). Figure 4:- age and sex distribution. Fifteen patients presented with dry irritative cough, constituting (75%). Easy fatigability and shortness of breath were seen in eleven patients (55%). Recurrent chest infection was seen in six patients (30%) only. Hemoptysis was seen in five patients (25%) only. Carcinoid syndrome, presenting with attacks of flushing, sweating, diarrhea and fever, was so rare that it was seen in only one patient (5%). The distribution of the above-mentioned clinical presentations is shown in figure (5). Chest radiography was performed in all patients as the primary imaging modality, and the radiographic appearance of these patients is illustrated in the corresponding table. CT scan was done in all patients and showed an endobronchial mass localized to a lobe or lobes or main bronchus, which coincided with the radiographic findings. The typical bronchoscopic finding was documented in all cases, and the typical appearance of a cherry-red coloured, smooth, polypoid, vascular tumor that bleeds easily on touch was seen in almost all patients. Biopsy was done in the first two patients and was followed by severe bleeding that was controlled with difficulty. All these patients underwent formal resection of the affected lobe, lobes or lung after a full pre-operative assessment, bringing these patients to an optimum condition prior to surgery.
The resected specimens were sent for histopathological study. The modalities of the surgical resection are illustrated in the corresponding table. All of our patients had an uneventful post-operative course, with only wound infection in two patients, treated conservatively by culturing the wound discharge and giving antibiotics accordingly. The histopathological report obtained in 19 patients confirmed that the mass was a typical carcinoid tumor with tumor-free resection margins. In only one female patient, who had attacks of carcinoid syndrome prior to surgery, the histopathological report turned out to be an atypical carcinoid of intermediate grade; she was referred to the oncologist post-operatively for radiotherapy. The patient defaulted during the follow-up period. Two of our patients are currently being followed up, complaining of a wheezy chest during winter time only. Discussion:- Carcinoid tumors are a rare entity but are still encountered in the Thoracic Surgery Department. They are uncommon low-grade malignant tumors which are most commonly seen in the gastrointestinal tract, while the lung is the second most common site (25). Our small number of patients confirms that it is an uncommon condition, and this coincides with another study by Akiba et al. (26), who reported only 32 patients over a 24-year period (1965-1989). Female patients were affected more than male patients in our study (11:9, a ratio of approximately 1.2:1), and this coincides with another study (27). However, it is in contradiction to other studies reporting a higher incidence among male patients (26), whereas Hamid et al. reported an equal sex incidence. The majority of our patients, 13 patients (65%), were seen between 20 and 40 years of age, which is similar to other studies (28,29). Cough, dyspnea and hemoptysis were the most common presenting symptoms, and similar symptoms were reported in other studies (28,29).
Modelling of state support for biodiesel production Government support for the development of biofuel production is a relevant part of the system of budget regulation of agricultural production in the Russian Federation. Currently, there is no sound financing method for mechanisms of state regulation of biofuel production, which impedes impartial allocation of funds and makes this procedure non-transparent and insufficiently justified. In view of this situation, a mathematical economic model was developed that allows one to calculate the optimum level of government support for every type of biofuel considering the main areas of state support. We propose to consider three scenarios for the determination of the optimum level of public funding. The first one allows for optimization of the level of government support considering the sizes of agricultural production for the i-th crop needed to provide the farms of the region. The second scenario suggests the determination of the maximum profit from biofuel production through increased agriculturally used areas. Finally, the third one considers calculation of the minimum expenses for achieving the volume of production that provides the farm with raw materials. According to the first scenario, the optimum level of government support for the field should be 1163.6 million rubles. In the implementation of the second scenario in the Samara region, the agriculturally used area planted with oil crops should be increased by 47.1 thousand ha. Introduction Currently, increasing attention is paid to the use of alternative fuels, due to the reduction in the worldwide supply of biogenic energy carriers, tightened exhaust emission standards, and limitation of carbon monoxide emission [1][2][3][4]. As an alternative fuel, biodiesel is one of the best options among other sources due to its environmental friendliness and functional properties similar to diesel fuel. Biodiesel is nothing more than methyl ester, which has the properties of a combustible material and is obtained as a result of a chemical reaction from vegetable fats [5][6][7][8]. It is obtained from vegetable oils by transesterification: methanol is added to the vegetable oil in a ratio of approximately 9:1, together with a small amount of catalyst. From one ton of vegetable oil and 111 kg of alcohol (in the presence of 12 kg of catalyst), approximately 970 kg (1100 l) of biodiesel and 153 kg of primary glycerol are obtained. It is recommended to use potassium or sodium methoxides (methylates) as catalysts, after which the mixture is processed in a cavitation reactor [9][10][11][12][13][14]. Its chemical composition allows it to be used in diesel engines without other substances that stimulate ignition. The following useful properties of biodiesel should also be noted: biodiesel undergoes almost complete biological decay (in the soil or in water, microorganisms recycle 99% of biodiesel in 28 days); lower CO2 emissions; a low content of a number of components in exhaust gases, such as carbon monoxide (CO), unburned hydrocarbons, nitrogen oxides (NOx) and soot; low sulphur content; and good lubricating characteristics. An increase in the service life of the engine and fuel pump by an average of 60% is achieved [14][15][16][17][18]. Governments promote the development and use of biofuel. For example, in the USA, in accordance with the adopted program, the share of renewable fuels increased by 10% over the period from 2005 to 2017.
In the member states of the European Union, a Directive on the Promotion of the Use of Biofuels was adopted, under which it is required to achieve a share of at least 10% of biofuel in total fuel consumption by 2020 [19]. Biofuel production is also considered an important strategy to achieve the goals of the Paris Agreement [20]. Despite Russia being one of the largest oil exporters, many Russian scientific and manufacturing institutions have taken an active interest in the production and consumption of environmentally friendly bioenergy carriers produced from renewable biological feedstock [21][22][23]. The use of biodiesel in agricultural production is a prerequisite for reducing the cost price of manufactured goods and increasing production efficiency. Government support for the development of biofuel production is a relevant part of the system of budget regulation of agricultural production. Currently, there is no sound financing method for mechanisms of state regulation of biofuel production, which impedes impartial allocation of funds and makes this procedure non-transparent and insufficiently justified. Materials and methods The study proposes the development of a mathematical model for optimization of government support for biodiesel production. This economic-mathematical model is based on the principles of linear programming. Linear programming is a branch of mathematics concerned with maximizing and minimizing linear functions under constraints in the form of linear inequalities. The development of mathematical economic models for optimization includes several interrelated steps. First, there is a need to state the mathematical economic problem of rational use of land resources. When planning the state support for biodiesel production, we need to find a balance of crops that would be profitable. Their land size and the most profitable combinations should be economically feasible and organizationally viable in the long run. In other words, there is a need to consider both climate and economic factors and to determine a production structure of crops suited for biofuel production that would ensure the application of the objective function. Second, we calculate the technical and economic coefficients of production costs and yield of crops suitable for biofuel production. The third step of the mathematical economic model (matrix) development consists of its construction and solution using a computer. Finally, the last step of the development of the optimization model for state support of biodiesel is to analyze the obtained results. The mathematical model includes [24]: 1. The objective function, subject to maximization or minimization: Z = Σ_j c_j x_j → max (min) (1), where n is the total number of unknown variables; j is the sequential number of a variable (j = 1, …, n); c_j is the evaluation of the objective function per unit of j; x_j is an unknown variable. 2. The system of linear inequalities: Σ_j a_ij x_j ≤ b_i (2), where a_ij and b_i are given constants; i is the sequential number of a constraint (i = 1, …, m). 3. Non-negativity constraints on all variables included in the system: x_j ≥ 0, j = 1, …, n (3). The model should have the ability to be applied to individual households and household groups of different forms of business organization. The main sources of information for the development of the numerical mathematical economic model are the data obtained from annual reports and financial plans of agricultural organizations of the Samara region, scientifically grounded crop rotations with regard to the specific climatic conditions of the region, crop farming flow charts and regulations in agriculture.
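The general model (1)-(3) can be set up directly with any off-the-shelf linear-programming solver. The Python sketch below uses scipy.optimize.linprog on purely hypothetical coefficients (three notional oil crops, a feedstock-demand floor and a land limit); it illustrates only the structure of the model, not the actual Samara-region data or the Excel-based workflow used in the study.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance of model (1)-(3): minimize total support cost c_j * x_j
# subject to a minimum feedstock requirement and an upper limit on land.
c = np.array([12.0, 9.5, 14.2])      # c_j: support cost per hectare of crop j (invented)
yields = np.array([0.9, 0.7, 1.1])   # tonnes of oil feedstock per hectare (invented)
demand = 60.0                        # minimum feedstock needed, tonnes
land = 100.0                         # available area, hectares

# linprog expects "<=" constraints, so the demand floor is written as -yields @ x <= -demand.
A_ub = np.vstack([-yields, np.ones(3)])
b_ub = np.array([-demand, land])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
print("areas (ha):", res.x, "minimum cost:", res.fun)
```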
Results and discussion The results of the financial analysis of crop producers of the region show that no more than 15% of financially stable agricultural organizations producing oil crops receive government support for fuel and lubricant materials [25]. Table 1 shows a list of crops cultivated in the Samara region that can be used to produce biofuel. Currently, the most common biofuel is rapeseed methyl ester (RME), which is extensively used in Sweden, Germany, France and other countries. Up to 30% of it can be added to diesel fuel without additional engine modification. Western European countries have decided on mandatory addition of 5% of RME to diesel fuel, though in some countries (Sweden, for example), RME is used as a substitute for diesel. Thus, we expect that the production volume of methylated vegetable oils will increase and agritechnologies will improve, which will result in the reduction of their cost prices to an acceptable level [26][27][28][29][30][31][32][33][34][35][36][37][38]. Many scientific research institutions and universities, including the Samara State Agrarian University and the Povolzskaya machinery testing station, have conducted research on the use of RME biofuel and developed utility flow schemes and fuel supply systems for tractors adapted for the use of biofuel [25]. It was established that the reduction in engine power output when running on biofuel is insignificant, and fuel consumption increases by 5-8%. Engine life does not change. Biofuel also has promising lubricating properties. Soot emissions decrease by 50%, carbon dioxide emissions by 10-12%; the sulphur content is 0.05%, as compared with 0.2-0.5% for diesel fuel. Technology for the conversion of vegetable oils to biofuel has developed considerably over the past years, especially in Tatarstan. The resulting products (diesel fuel, forage pulp and glycerin) are in demand, and their joint production makes the process cost-effective. The simplicity of the technology and economic characteristics of the process make biofuel more appealing for agricultural producers, considering that diesel fuel is the main fuel in agriculture. The first organization to produce biofuel in the Samara region was "Biosam" in Krasnoyarsk Krai. On the basis of the laboratory of the Department of Tractors and Vehicles of the Samara State Agrarian University, the Biosam company tested biofuel samples produced by MIXER. The samples have demonstrated great anti-wear and anti-scuffing properties [25]. Results of bench testing of engines running on alternative fuel conducted by the Povolzskaya machinery testing station showed that:  engine power for blends of diesel and biofuel in different proportions is close enough to engine power for diesel fuel and is within tolerance limits, and differences are insignificant. A slight increase in engine power for the 50% biofuel blend is due to the high kinematic viscosity of blends, which allows reduced leakage in plunger pairs;  the fuel consumption rate for engines running on the blend is higher than for diesel fuel due to the lower calorific value of biofuel. We also calculated the comparative effectiveness of biofuel production. In accordance with this calculation, the cost price of 1 litre of own-produced biofuel is 30-50% lower than the wholesale price of diesel fuel [38]. The analysis of crop producers of the region and a survey of managers and experts reveal that their situation is compounded by price disparities, lack of technical equipment, high costs of fuel, and lack of outlets for crop products.
This has identified the need for the development of an economic-mathematical model that allows one to calculate the optimal level of government support for every type of biofuel considering the forms of government support. The developed methodology allows for fast calculation of the optimal level of government support for biofuel production based on three scenarios (for all forms of business organization) using Microsoft Excel. We developed a model for optimization of government support for the production of biofuel by households growing oil crops, considering the directions of financing. 1. X_1-X_8 - types of oil crops; 2. X_9-X_14 - main directions of government support; 3. X_15-X_17 - forms of business organization. The model uses the following notations: 1. Z - the level of state support for the cultivation of the i-th crop for biofuel production; 2. C_i - costs per unit for the i-th production technology; 3. X_i - lookup value of the i-th variable denoting the production technology and the level of support; 4. A_i - market output per unit for the i-th production technology; 5. a_ij - availability of the j-th resource per unit of the i-th variable; 6. b_ij - the amount of cash available per year; 7. N_i - minimum area under the i-th oil crop used for the production of biofuel, cultivation of which must be guaranteed; 8. K_i - limit on the i-th production; 9. Q - limitations on economic resources. Objective functions: 1. optimization of the level of government support considering the main types of oil crops used for biofuel production to provide the region with biofuel; 2. maximization of the area under oil crops used for the production of biofuel to increase the profit; 3. minimization of costs while achieving the production volume that provides the households with enough biofuel. The model allows us to obtain an optimal structure of production volume, operating costs, gross fuel price and fuel commodity price, and expected profit in the context of a form of business organization. Note that this model considers the optimal distribution of government support for all forms of business organizations (costs). The main objective of the developed methodology is to achieve the optimal production volume at minimal cost, which would allow for determination of the optimal level of government support for biofuel production to supply the households. Solving this problem would allow the development of an optimal and effective system for the government regulation of biofuel production. In accordance with the objective functions, we propose to consider three scenarios of optimal government support: 1. The first scenario allows for optimization of the level of government support considering the levels of agricultural production for the i-th crop to provide farms of the region; 2. The second scenario allows for the determination of the maximum profit from the biofuel production through increased agriculturally used areas; 3. The third scenario allows for the calculation of the minimum expenses of achieving the volume of production that provides the farm with raw materials. According to our calculations based on the first scenario, the optimum level of government support for the field should be 1163.6 million rubles.
In this case, financial resources should be distributed to the following targets:  preferential tax, loan and financing systems for producers - 465.5 million rubles;  creation of biofuel production facilities - 407.3 million rubles;  crop insurance - 174.5 million rubles;  development of the information consulting service and technical re-equipment - 91.9 million rubles;  other targets - 24.4 million rubles. In the implementation of the second scenario in the Samara region, the agriculturally used area planted with oil crops should be increased by 47.1 thousand ha. This would solve the problem of providing farms producing biofuel with raw materials, and improve the difficult financial situation of some producers. In accordance with the third scenario, budgetary expenditures for the achievement of planned standards of production will equal 32,567 thousand rubles. The difference between the gross cost of biofuel and the cost of diesel fuel per farm is 86.1 thousand rubles. The most beneficial option for agricultural organizations would be the reduction of tax payments to the federal and regional budgets. Conclusions Practical implementation of the proposed mathematical model includes the involvement of leading experts in the field of cultivation of oil crops, who can describe these functions on a quantitative basis. There is also a need for extensive testing of the model and its identification under incomplete and inaccurate data in order to estimate parameters and correct the expert functions. Since we need to develop a different system of government regulation of the field based on modern information technologies, we used methods of mathematical modelling. The developed methodology allows us to develop an optimal structure of biofuel production, estimate money and labour costs, the gross and commodity value of biofuel, and the expected margin for all organizational forms of production. Generally speaking, this will allow for optimization of the tools used in terms of effective use of budget funds and provision of the Samara region with biofuel. It is essential to point out that issues of biofuel production are not only related to government support. Measures of government support can only be effective if they are based on identification and implementation of internal growth reserves and effective production of biofuel, which will result in strong performance of farms that cultivate oil crops and produce biofuel and of the agrifood complex as a whole.
v3-fos-license
2022-09-15T15:05:32.206Z
2022-09-13T00:00:00.000
252234522
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://ijournalse.org/index.php/ESJ/article/download/1358/pdf", "pdf_hash": "88ce92a30713b9d06265537b33496d0c5ac11f0f", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44034", "s2fieldsofstudy": [ "Computer Science", "Political Science", "Sociology" ], "sha1": "100bf2cf749656b7c0580d87fd8af6473734bd86", "year": 2022 }
pes2o/s2orc
Public Perceptions on Application Areas and Adoption Challenges of AI in Urban Services Artificial intelligence (AI) deployment is exceedingly relevant to local governments, for example, in planning and delivering urban services. AI adoption in urban services, however, is an understudied area, particularly because there is limited knowledge and hence a research gap on the public's perceptions-users/receivers of these services. This study aims to examine people's behaviors and preferences regarding the most suited urban services for application of AI technology and the challenges for governments to adopt AI for urban service delivery. The methodological approach includes data collection through an online survey from Australia and Hong Kong and statistical analysis of the data through binary logistic regression modeling. The study finds that: (a) Attitudes toward AI applications and ease of use have significant effects on forming an opinion on AI; (b) initial thoughts regarding the meaning of AI have a significant impact on AI application areas and adoption challenges; (c) perception differences between the two countries in AI application areas are significant; and (d) perception differences between the two countries in government AI adoption challenges are minimal. The study consolidates our understanding of how the public perceives the application areas and adoption challenges of AI, particularly in urban services, which informs local authorities that deploy or plan to adopt AI in their urban services. A machine-learning system can learn from earlier events and afterwards redesign its functionality according to the data and resulting events [11]. One of the most widely used machine learning applications is targeting advertising campaigns and making personalized suggestions based on earlier behavior. These applications apply the principle whereby the algorithm learns from earlier cases and does not use predefined decision paths. This approach is most suitable for analyses based on discrete categories and continuous numerical variables [12]. Urban services generally aim to increase elements of "good" local governance defined by transparency and openness, reliability and trustworthiness, and inclusiveness [13]. The provision of traditional public services such as health and security (e.g., police and fire departments) is a traditional responsibility of local governments and their organizations. Thus, cities are responsible for maintaining and developing their vicinities, and for this purpose, there are a significant number of applications where AI can improve the efficiency and the reliability of public/urban services [14,15]. For example, traffic control applications and big data analytics are essential in developing road infrastructure and smart mobility services. In the future, autonomous vehicles will likely be in constant data exchange with the road infrastructure and traffic monitoring technologies, selecting optimal routes for each trip [16]. Our study focuses on public opinions of AI integration into the urban service provision structure to contribute to the efforts in bridging the knowledge and research gap on the topic. It aims to explain people's behaviors and preferences regarding the most suited urban services for the future application of AI technology and the critical challenges for governments to adopt AI for urban service delivery. The study applies an extensive survey for data collection from Australia and Hong Kong.
It presents an extensive array of questions and their answers in relation to the prospects of AI applications and currently considered challenges and obstacles that AI-based service development is likely to encounter from the user's point-of-view. The survey captures public perceptions of AI and related beliefs. The questionnaire includes claims related to public (local government) funding on AI, skills, and technical capabilities to adapt AI solutions in urban service palettes, as well as issues of transparency and trust in digitalization and AI in society. 2-Literature Background: Societal Issues and AI in Urban Services AI is one of the most disruptive technologies of our time with many powerful applications that cities around the globe have started to take advantage of [17]. According to Yigitcanlar et al. [18], "in the context of cities, AI is the engine of automated algorithmic decisions that generate various efficiencies in the complex and complicated local government services and operations. Managing city assets with structural health monitoring, energy infrastructure fault detection and diagnosis, accessible customer service with chatbots, and automated transportation with autonomous shuttle busses are among the many examples of how AI is being utilized in the local government context." Some of the leading cities in actively investing and exploring the capabilities of AI include, but not limited to, San Francisco, London, Montreal, Tel Aviv, Singapore, New York, Beijing, Bangalore, Paris, and Berlin [19]. Technology adoption, including AI, entails technological and societal challenges on which earlier research has identified several determinants [20]. The main technical features include operational reliability; fluent customer feedback between end-user and service or technology provider; and an easy-to-use end-user interface [21]. Thus, data security and provision responsibility are the foundation for e-service implementation, including AI [22]. Most of the contemporary urban e-services are essentially exchanges of information, forms and agreements, and their digitalization process is, in principle, straightforward. However, due to the common lack of collective designing of public sector information technology system architectures, the task has proven to be more difficult in practice than anticipated [23,24]. In terms of societal issues, local governments provide majority of the public services in cities. Hence, understanding user/public opinions regarding the limited local government financial resources and investment capabilities are important [25]. This way, it is possible to construct a general view of the beliefs and attitudes that the respondents have on the public sector possibilities to support AI in their e-service development [26,27]. Local government collaboration is one potential means to increase investment capabilities. In general, collaboration modes include different forms of partnerships (e.g., public-private-partnerships) and have become one of the main forms of developing service provision. Similarly, information and communication technology (ICT) user studies have highlighted the importance of the endusers' technology knowledge and skillsets [28]. To handle the extensive field of AI applications and their adoption challenges, we have identified the following topics (Table 1) in the literature [9, [29][30][31][32]. These identified broad societal categories are linked to empirically investigated application areas and challenges in this paper. 
These categories are elaborated on below. 2-1-Control Systems and Security (SC1) Table 1 starts from the assumption (depicted as SC1) that transparency and service quality are essential to the delivery of public sector services. These are questions of control and monitoring that are fundamentally intertwined with cybersecurity and trustworthy governance [26]. Machine learning and AI raise trust issues; therefore, they should also be looked at from the risk and problem perspective. McGraw et al. [33] provided an extensive taxonomy of machine learning and AI risks that current system architectures face. This taxonomy is the basis for constructing questions related to the challenges of this paper's empirical part. These risks include, but are not limited to, data manipulation and falsification, data confidentiality and trustworthiness, and transfer risks. The risk aspect is operationalized through ethical considerations of the respondents in the survey. Regulation plays a significant role here as the legislation related to decision-making and responsibilities, particularly concerning public officials when executing decisions, becomes blurred if AI decision-making is involved [34]. 2-2-Digital Divides, Social Structuring and Community (SC2) AI brings a new view on the debates concerning digital divides (SC2) [35][36][37]. The public perception of the digital divide is an important study subject because the AI concept is not yet firmly established; thus, its meaning varies when discussed in different contexts. AI also involves highly complex technological development processes, and the number of people having in-depth knowledge regarding AI in technological terms is minimal. Thus, survey studies on new and emerging technologies provide a general view of respondents' beliefs and attitudes [38][39][40][41]. These are also questions of social coherence and equality in a community [42,43]. As computerized systems become more complex, it is likely that those who can make use of, or even understand, these systems will be put in a better position than disadvantaged groups, deepening social differentiation. As the common technology sphere has become ubiquitous (e.g., internet-of-things), several traditional explanatory variables have lost their explanatory power (e.g., education and income). The differences and significances are most prominent in work and content-related issues. For example, highly educated people tend to appreciate news and fact-related services and information sources more than other educational groups. However, general questions such as the total time of daily technology use do not necessarily differentiate between the education or income groups. The future-oriented topics consider the short- and long-term impacts that AI might have. In this regard, AI development is one of the current megatrends that create societal differentiation and digital divides [44]. 2-3-Economy and Business (SC3) In the case of the economy and management (SC3), company size and investment power are significant factors in moulding public perceptions. Global technology giants (e.g., Alphabet, Meta) are most often associated with AI development through customer profiling [45]. Questions of trust and cybersecurity are most often associated with them [46]. Therefore, it is necessary to look at the actual AI services people commonly use [47]. In practice, urban e-services create immaterial (services) and material (product) flows.
These include urban data management, intelligent transport systems and services (e.g., traffic optimization and product deliveries). Reliability and trustworthiness are the most significant attributes defining the success of the service in question, and in critical societal domains such as social security and health technologies, the trust requirements are decisive [48]. In general, the impact of AI on urban e-services varies according to the particular provision logic: private service provision models, public in-house production models, and their combinations, i.e., public-private partnerships. Sectoral cooperation has been an important method of improving quality in online environments [49]. 2-4-Information Society and Know-how (SC4) Technologies have a significant impact on the future of the information society, work, and education-based know-how (SC4). Education is essential here, as highly educated people tend to recognize and are better equipped to critically assess, for example, the end-user agreements required by most e-services and applications. In the case of the future of work, AI enhances productivity and innovation to reduce costs and increase efficiency. Commonly, we can distinguish between simple automation tasks (machines substituting human work) and higher-level AI applications (e.g., automated decision-making) [50]. This connects machine learning algorithms to the adjustment of decisions (each decision affects the future decisions made by the AI). AI also minimizes errors and mistakes in repetitive tasks, thus decreasing human error. The efficiency claim also includes the idea that dangerous and hazardous tasks can be transferred to robots, increasing the safety levels in relevant occupations [51]. There is also another side: these issues may be looked at from the point of view of disadvantages. These include considerations of cost and benefit, as in some cases technology implementation costs (e.g., in robotics) may be too high for a feasible return on investment [52]. 2-5-Sustainability, Wellbeing, and Health (SC5) The final category (SC5) brings up important elements of sustainability (clean environment), health and wellbeing. Life quality issues are most often associated with e-health technologies and remote healthcare. AI-based health services possess significant future potential, particularly for commonly diagnosed health problems where the existing data volume is high enough to enable highly accurate diagnosis. The adoption and use of automated or AI-based health consulting or medical prescriptions depend on trust and on minimizing the possibility of error. As such, earlier research has shown that public attitudes towards technology (in general) depend on respondents' age, education, and occupation [53][54][55]. Health e-services are not an exception. The identified factors also include respondents' attitudes towards sustainability issues and their appreciation of cleaner production and sustainable consumption in urban spaces. The presented complex mix of SCs is the foundation of the empirical section detailing the current socioeconomic structuring of AI perceptions (see Section 4). While there are some studies on what urban managers think about the prospects and constraints of AI in the context of local government services [18], there is a knowledge gap in understanding what the public thinks about AI adoption in urban services-especially considering the societal differentiation and digital divides.
This study focuses on the socioeconomic variables (see Table 2) and locations (Australia and Hong Kong) to bridge this gap. We use two main domains for looking at public opinions: first, in terms of future applications and their impact on daily life, and second, in terms of the obstacles and challenges that AI may entail (see variables in Table 3). Three examples of the survey items and their response options (options separated by slashes) are: "The most beneficial function of AI technology is": to automate data collection, management, and analysis / to complete tasks otherwise requiring human input / to learn, evolve and improve decision-making over time / to monitor the environment, sense changes and adjust decision-making accordingly / other. "The most promising about the future use of AI is to": enhance productivity and innovation / reduce costs and increase resources / reduce errors and mistakes / increase free time for humans to complete other tasks / improve safety by completing dangerous tasks for humans / improve the functionality of basic services / optimise energy consumption and production / assist in the development for change and potential risks / reduce crime and monitor illegal behaviours / aid in disaster/emergency prediction, management, planning, and operations / provide support to citizens in need / other. "The biggest disadvantage of AI technology": AI will be highly costly / AI could make many tasks completed by humans obsolete / AI is only as good as the programming and data that it is given as input / there is a risk that some will abuse AI for their benefit / other. Future consideration of AI is a challenging topic for a survey study. However, it is likely that traditional explanatory variables of the social sciences (e.g., education and age) become strongly visible. Empirically, a large set of questions focuses on beliefs and opinions concerning AI and local government. The paper applies aspects of technological possibilities, economic feasibility, and social consequences entailed by AI development to systematize the approach. For example, in the AI future application section, questions related to inclusion are important indicators in assessing societal impacts. The empirical data concerns both positives (opportunities) and negatives (challenges). This enables numerous comparative alternatives for analysis and interpretation of the content questions. Due to the complexity of the presented theme and the blurred and varying use of the term AI, most of the questions are asked in dichotomous form (yes/no). These methodological decisions are detailed in the following section. 3-1-Data Collection The study adopted a case study approach to investigate public perceptions of AI in the context of urban services. The study selected two case studies-i.e., Australia and Hong Kong. Following an ethics approval (#2000000257) granted by Queensland University of Technology's Human Research Ethics Committee, an online survey was developed to collect data from the public in Australia (Sydney, Melbourne, Brisbane) and Hong Kong. The questionnaire was developed using the key AI and public perceptions literature. The survey focused on capturing participants' responses to explain people's behaviours and preferences regarding the most suited urban services for the future application of AI technology and key challenges for local governments to adopt AI for local service delivery. An online enterprise survey platform-i.e., Key Survey-was utilized to conduct the survey. The minimum number of participants (384 at confidence level 95% and margin of error 5%) was determined based on methods suggested in the literature [56].
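The minimum sample size quoted above is consistent with the standard formula for estimating a proportion. The short sketch below reproduces the calculation; the conservative assumed proportion p = 0.5 is an assumption on our part, since the paper only cites the method [56].

```python
# Minimum sample size for estimating a proportion, reproducing the ~384 figure
# quoted above. The worst-case p = 0.5 is an assumption; the paper only cites [56].
z = 1.96                 # critical value for a 95% confidence level
e = 0.05                 # margin of error
p = 0.5                  # assumed (worst-case) proportion

n = (z ** 2) * p * (1 - p) / e ** 2
print(round(n))          # -> 384 (385 if rounded up, under the stricter convention)
```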
Only adults (people over 18) were invited to participate. The survey was open between November 2020 and March 2021. A professional survey panel company and social media channels were used to recruit participants. In total, 851 valid responses were received (about a 23% response rate). The socioeconomic characteristics of the sample are given in Table 2. Table 2 shows descriptive statistics for geographic variables inside this study. Some of the independent categorical variables have a small sample size. For example, the survey includes 851 participants, but only five from gender "other" and 24 participants over 85. This survey includes respondents from Australia (n=604) and Hong Kong (n=247). Participants were asked to select all that apply to answer two questions related to AI services. These questions were: "Which of the following urban services are most suited for the future application of AI technology" and "Which of the following are the key challenges for local governments to adopt AI for local service delivery?". The answers to these questions were converted to 0 or 1. As mentioned previously, our objective is to gain knowledge about the participants' behaviour and get insight into public perceptions concerning the use of AI in local government and urban services. Variables used in our survey are listed in Table 3. 3-2-Research Method Participants were given various alternatives from which to choose, and they could answer questions by selecting 'all that apply'. Data and answers were analysed using descriptive statistics. One way to analyse discrete variables is by applying a binary logistic regression model for the dependent variables after converting them to dummy variables, which take values 0 or 1. The following equation shows a logistic regression model: $\ln\!\left(\frac{p}{1-p}\right) = \beta_0 + \beta_1 x_1 + \dots + \beta_n x_n$, where $p/(1-p)$ represents the odds ratio, $\{\beta_0, \dots, \beta_n\}$ are the model parameters, and $\{x_1, \dots, x_n\}$ denote the independent variables. We apply a stepwise regression approach to find the best candidate for the logistic regression model. The overall process of the research methodology is illustrated as a research flowchart in Figure 1. 4-Analysis and Results The study examines critical areas of public understanding, optimism, and concern on the societal application of AI technologies, based on a representative public opinion survey of Australians and Hongkongers. To explore respondents' views regarding the application and challenges of AI for the 'social good', the participants were asked to answer two questions. First, participants were asked, "Which of the following urban services are most suited for the future application of AI technology?". Their responses were then used to assist us in better understanding public opinions on AI's potential applications for social good. During the survey process, participants were instructed to choose all appropriate options from a provided list of urban service areas. The second survey question was: "Which of the following are the key challenges for local governments to adopt AI for local service delivery?". The purpose of this question is to explore respondents' opinions about potential challenges in adopting AI technology in public services.
The following is a list of possible responses to this question:  Limited local government financial resources and investment capabilities for AI projects;  Limited project coordination for the AI implementation between other neighbouring local governments and the state government;  Limited technical local government staff and know-how on AI projects;  Limited interest in AI-based services from the local community;  Limited trust of the local community to the AI technology;  Lack of transparency and community engagement of the AI-based decisions;  Heavy dependency on the AI technology companies/consultant for project/service delivery;  Ethical concerns on AI of the local community;  Lack of regulations on the AI utilization in the local government context;  Lack of clarity on if/how AI will be used for the common/social good of all community members;  Lack of clarity on how digital divide and technology disruption on the disadvantaged communities will be addressed;  Limited human oversight over AI decisions concerning the local community. The following sections discuss major trends identified from public perceptions in Australia and Hong Kong for applying AI in urban services and associated challenges among five societal categories (SC). Each SC encapsulates a grouping of application and challenge areas to reflect the broad spectrum of urban services (see Section 2). The findings of the binary logistic regression model are reported in this section. The results are presented in tables, which only include descriptions of model coefficients that are statistically significant (p-value below 0.05). Table 3 serves as a reference table detailing variables and their definitions. The coefficient estimate, standard error, odds ratio, and probability are reported as well as the corresponding p-value. 4-1-Applications and Challenges in Adopting AI in Control Systems and Security (SC1) Section 4.1.1. contains the results for the application area "Aged-care and disability". These results are given their own section as a worked example, providing a more extensive explanation to help the reader follow the analysis. Section 4.1.2. provides a general discussion of the participants' responses for all other application areas that belong to SC1, which comprises: Animal rescue and control (Table A1); Crime and security (Table A4); Disaster/emergency prediction and management (Table A6); Pandemic monitoring and control (Table A15); and Urban development control and monitoring (Table A17). The corresponding statistical models are presented in Appendix A. Aside from the various conceivable applications of AI technology in public services, challenges of AI adoption are also foreseen. As a result, the following sections are devoted to an analysis of the survey results that disclose interesting tendencies and a brief discussion of public concerns about the potential drawbacks of AI. Globally, there is still a lack of knowledge on how to harness the potential of AI and assure sustainability, justice, management of information asymmetry, and failure risk in these environments. The opinion survey conducted among Australian and Hong Kong citizens indicates differences in how they perceive and identify the significant challenges when implementing AI technology. Section 4.1.3. and subsections discuss in detail responses regarding "Limited local government financial resources and investment capabilities for AI projects" as an example, while Section 4.1.4.
offers a general discussion of the participant's responses to the other SC1 challenges. SC1 comprises two challenging areas: Lack of regulations on AI utilization in the local government context (Table B8), and; Limited human oversight over AI decisions concerning the local community (Table B11). The corresponding statistical models for these challenging areas are presented in Appendix B. Table 4 summarizes some of the findings from Hong Kong and Australia on the use of AI in aging and disability care. Table 4 contains the results for Australia and Hong Kong. Appendix A includes the results for the other potential applications areas of AI for social good. Table 4 reveals that Australians and Hongkongers have very different views regarding the application of AI for aged-care and disability. This result is expected and intuitive considering the sociodemographic and economic differences in the two countries. The only thing in common is their negative view regarding the application of AI to improve the functionality of essential services (e.g., healthcare, education). Further info on both country contexts is provided below. The Australian Context For the age ranges 55-65 (AGE5) and 75-84 (AGE7), the coefficients of 1.66 and 1.69, respectively, show that these respondents favour using AI for aged care and disability. According to the results, there is a high probability (84%) that people in these age categories believe that AI can be employed in aged-care and disability. Many older adults value independence and choose to live in their own homes with proper support rather than entering institutional care. Remote monitoring technology, such as video cameras that monitor people's actions at home can help seniors live independently. Therefore, people in these age groups who are most likely to benefit from the potential application of AI in elderly care and disability are more likely to consider how it can help them and their relatives. The use of AI in aged care and disabilities is supported by participants who generally identify AI with machine learning (coefficient 0.63). With a 63% probability, people who have this attribute are more likely to believe that AI can be used in aged care and disability. This outcome might be explained by individuals who had prior exposure to machine learning are more likely to have had favourable experiences with AI. Another trend arising from our survey is that unemployed participants (EMP2) favour using AI for aged-care and disability with a coefficient of 1.07. There is a high probability (74%) that unemployed people believe that AI can be applied for aged-care and disability. Unemployed people are more inclined to adopt AI since they are less likely to be aware of the obstacles connected with AI implementation. Ironically, those who do not comprehend the fundamental ideas of AI (UND4) demonstrate the highest support for its application to aged-care and disability (1.95 and 88%, respectively). The increased acceptance of this group in adopting AI technology despite having no prior knowledge may be explained because they associate their lack of understanding of AI with other technologies, they are familiar with but do not comprehend. The results showed that those who learnt about AI through university/courses (SOURCE10) and social media/internet (SOURCE8) are opposed to adopting AI for elderly and disabled care (coefficients -1.44 and -1.54, respectively). 
People with this background have the lowest probabilities (19% and 18%, respectively) of believing that AI can be used in aged-care and disability. People in such categories may have a lower level of acceptance since they are aware of the considerable challenges connected with this application and other areas where AI is likely to be adopted first. Similarly, participants who see a promising use of AI to improve the functionality of essential services (e.g., healthcare, education) (PROMI6) and who feel in general neutral about AI (FEEL2) disfavour the use of AI for aged care and disability, with coefficients of -1.26 and -0.78, respectively. Participants from these categories may have a low level of adoption because they see more potential in deploying AI technology in other areas, such as telemedicine. Nevertheless, respondents from those groups show probabilities of 22% and 31%, respectively, of believing that AI can be applied for aged-care and disability. The Hong Kong Context The coefficient of -1.47 for GEN2 indicates that female participants disfavour the use of AI for aged-care and disability, with only 19% believing in this application area. In contrast, age group 25-34 (AGE2) and participants who associated AI with advanced predictive analytics (PRED) favour the application of AI for aged-care and disability, with coefficients of 2.75 and 2.81, respectively. This result is intuitive as young people who appreciate AI's potential for advanced predictive analytics are expected to be optimistic about AI and its many applications. People from these groups believe that AI can be applied in aged-care and disability with a probability of 94%. Participants who feel unsure (PROSPECT2), neutral (PROSPECT4), enthusiastic (PROSPECT5), and excited (PROSPECT6) about an AI future favour the use of AI in aged-care and disability. People from those groups have a high probability, 94-98%, of believing in the application of AI for aged-care and disability. Participants who see the most beneficial functions of AI technology as: (a) to learn, evolve and improve decision-making over time (BENEFIT3); and (b) to monitor the environment, sense changes and adjust decision-making accordingly (BENEFIT4), support (coefficients of 2.11 and 2.48, respectively) its use for aged-care and disability. A high percentage of people from these groups, BENEFIT3 (89%) and BENEFIT4 (92%), believe that AI can be applied in aged-care and disability. On the other hand, participants who first identified AI with humanoid robots (ROB) were less likely (coefficient -1.49) to support the use of AI in aged-care and disability. There is a low probability (18%) that people from these groups believe in this application area for AI. People who hold this view could assume that robots will be used to take care of people's needs rather than considering the many other relevant ways AI can be used for this application. The results showed that those participants who anticipate a noticeable impact of AI on daily life in the next 5 to 10 years (IMPACT3) are unlikely (coefficient -1.65) to favour the use of AI for aged-care and disability. People who hold this viewpoint revealed a low probability (16%) of believing that AI can be used in aged-care and disability. Although people with this view agree that AI will likely have an impact in the near future, they do not see a potential use for this application. The coefficient of -2.64 for SOCIETY2 indicates that participants who see no impact of AI in society also disfavour the use of AI for aged-care and disability.
Only 7% of those respondents believe AI can help individuals who are elderly or disabled. In contrast, people who see AI as something negative for society (SOCIETY3) agree (coefficient 3.86) that it can be applied for aged care and disability. Even though their overall view of AI is negative, they still see some potential benefits. Individuals who share this viewpoint (98%) believe that AI could be used for aged-care and disability. Another tendency appears to be a relationship between having no prior experience with AI (EXP3) and support (coefficient 3.16) for the use of AI for aged-care and disability. The results indicate a high probability (96%) that these individuals feel AI can be used for aged-care and disability. Even though they have no experience with AI, they may be aware of its potential. Similarly, they could be making analogies with other technologies they do not understand but use regularly, such as the internet and cell phones. 4-1-2-Trends Regarding Other Application Areas of Adopting AI in SC1 Many advanced techniques, including security systems and devices, employ AI algorithms to improve their capabilities. This section explored respondents' perspectives on prospective areas where AI technology might improve control system security (SC1). As shown in Tables A1, A4, A6, A15 and A17, in Australia, older generations demonstrated far higher levels of trust in using AI technology to regulate service/security, implying that they are comfortable utilizing current workforce technologies. In general, the younger respondents (under 25) appear to be more cautious of implementing this technology than more senior respondents, which might be because the more youthful population perceives new technology as more threatening than older respondents do. In Hong Kong, however, age groupings had little impact on participants' decisions to use AI in control and security systems, unlike in Australia. Hong Kong shows mixed views, ranging from unsure to enthusiastic, which is consistent with limited knowledge on the topic. In terms of demographic characteristics, there is no discernible trend. In other words, no apparent trends emerged in terms of respondents' age, gender, socioeconomic level, and so on. Table 5 shows the regression analysis results for "limited local government financial resources and investment capabilities for AI projects" as a critical challenge for the adoption of AI by local governments from Australia and Hong Kong. The first part in Table 5 includes results for Australia, while the second provides those for Hong Kong. The first paragraphs discuss the findings from assessing what challenges the government may face when implementing AI in public goods based on replies from Australian respondents, followed by an analysis of responses from Hong Kong respondents. In sum, our study revealed that Australians' and Hongkongers' perceptions of AI are impacted inversely by their feelings and level of excitement about AI. In general, Australians appear more favourable than Hongkongers about their government's capacity to deploy AI, which might be due to Australia's lower financial demands and accompanying stress. Further info on both country contexts is provided below. The Australian Context Participants who may not feel comfortable living or working in a fully autonomous place (COMFORT2 and COMFORT3) think that financial resources are likely to be a key challenge (coefficients of 1.08 and 0.92, respectively) for local governments adopting AI.
There is a high probability (72-75 %) that people who hold these beliefs feel that financial resources can hamper AI adoption; this outcome is predictable and logical because they are already uncomfortable with automation, which is strongly connected with AI. In contrast, participants who have rarely or never interacted with AI applications (FREQ4 and FREQ5) do not think (coefficients -1.25 and -1.79, respectively) that financial resources can be a challenge for adopting AI by local governments. There is a low probability (14-24%) that people who have rarely or never interacted with AI believe that financial resources could be a challenge for local governments to adopt AI. This group's unfamiliarity with AI makes it difficult to understand the potential difficulties of adopting complicated technology. For example, those who have experience with smart speakers using AI technology are aware of how limited their capabilities currently are and how expensive and complex is likely to be the development of significantly superior technology. The coefficient for feeling neutral about AI in general (FEEL2) is -1.02, suggesting that individuals with this profile are unlikely to consider "Limited local government financial resources and investment capacities for AI projects" as a significant barrier to local governments embracing AI. There is a low probability (27%) that people who feel neutral about AI believe that financial resources can negatively affect it. Individuals who share these beliefs are unlikely to have strong feelings towards AI, possibly due to a lack of exposure or awareness on the matter. Similarly, people who feel enthusiastic and optimistic about a future with AI (PROSPECT5 and PROSPECT7) are unlikely (coefficients -1.03 and -1.29, respectively) to see financial resources as a significant challenge for adopting AI by local governments. The probability that respondents with these characteristics believe that a lack of financial resources would hinder the adoption of AI by local governments is low (26-27 %), which is understandable given that they are already excited and optimistic about the technology. The Hong Kong Context The following are Hong Kong residents' perceptions on the most significant challenges the government will encounter in implementing AI technology in public services. The coefficient for the unemployed group (EMP2) is 4.21, indicating that participants belonging to this group are likely to see "Limited local government financial resources and investment capabilities for AI projects" as a key challenge for local governments. There is a high probability (99%) that unemployed people believe that financial resources can challenge the adoption of AI by local governments. This result is expected and intuitive given that unemployed people, especially those living in expensive places like Hong Kong, are likely to be aware of the importance of financial resources to develop and deploy the technology. In contrast, participants with a high income (INC6) do not see (coefficient -2.88) financial resources as challenging for local governments to deploy AI. There is a low probability (5%) that people with high income believe that financial resources can be a significant challenge for local governments to deploy AI. These results are consistent with the previous and intuitive as people with high incomes are less likely to see limitations faced by others regarding financial resources, including local governments. 
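The way results are reported throughout this section, a coefficient followed by a probability, can be traced back to the logistic link in Section 3-2. The sketch below shows the conversion of a logit coefficient to an odds ratio and a probability; it assumes that the quoted probability is the logistic transform of the single coefficient taken as the linear predictor, which is consistent with the numbers quoted above (e.g., 1.08 → about 75%, -1.79 → about 14%), although the authors do not state this convention explicitly.

```python
# Converting binary-logit coefficients to odds ratios and probabilities, as reported
# in this section. Assumption: the quoted probability is logistic(coefficient), i.e.
# the coefficient alone is used as the linear predictor (consistent with the figures
# for COMFORT2/3 and FREQ4/5 above, but not stated explicitly by the authors).
import math

def odds_ratio(beta: float) -> float:
    return math.exp(beta)

def probability(beta: float) -> float:
    return 1.0 / (1.0 + math.exp(-beta))

for label, beta in [("COMFORT2", 1.08), ("COMFORT3", 0.92), ("FREQ4", -1.25), ("FREQ5", -1.79)]:
    print(f"{label}: coef={beta:+.2f}  OR={odds_ratio(beta):.2f}  p={probability(beta):.0%}")
# -> COMFORT2 ~75%, COMFORT3 ~72%, FREQ4 ~22%, FREQ5 ~14%
```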
Participants who learnt about AI using social media/internet (SOURCE6) view financial resources as a significant challenge for the deployment of AI by local governments. There is a high probability (98%) that people with this background believe that financial resources are a significant challenge for the implementation of AI. People who learned about AI through these channels will likely understand the expense and associated challenges when adopting AI technologies for the public good. Hence, they are likely to see financial resources as a significant issue for local governments. Another trend links participants who believe that the most promising future applications of AI are to "reduce costs and increase resources" (PROMI2), "reduce errors and mistakes" (PROMI3), and/or "increase free time for humans to complete other tasks" (PROMI4), and/or who see "a risk that someone will abuse AI" (DISAD4), and/or who feel neutral or negative about AI in general (FEEL2 and FEEL3), to considering financial resources as a key challenge for the implementation of AI by local governments (coefficients of 5.92, 6.27, 3.3, 3.14, 2.31, and 8.23, respectively). There is a high probability (91-100%) that people with these views find limited financial resources to be a major barrier to AI deployment. Respondents who link AI with humanoid robots (ROB) do not regard financial resources as a major barrier for local governments when deploying AI (coefficient -2.64). A small percentage (7%) of people with this view believe that limited financial resources can challenge the implementation of AI. In contrast, participants who first associate AI with a dystopian future controlled by computers (DYST) or who consider that AI's abilities are "solving problems using data and programmed reasoning", "learning from previous mistakes to inform future decision-making", and "analyzing its environment and making decisions based on this analysis" believe that financial resources are likely to challenge (coefficients of 3.64, 9.02, 7.98 and 7.83, respectively) the deployment of AI by local governments. There is a high probability (97-100%) that people with these views consider limited financial resources as a key challenge for the implementation of AI. That is, people who give significant value to the abilities of AI are likely to be aware of the corresponding cost behind its development and implementation. As shown in Table 5, participants who view the most beneficial role of AI as "to monitor the environment, sense changes and adjust decision-making accordingly" (BENEFIT4) do not see (coefficient -4.45) financial resources as a key challenge for the implementation of AI by local governments. There is a low probability (1%) that people with this view believe that the deployment of AI can be affected by limited financial resources. It could be that people with this view consider the benefits of AI so essential that financial constraints are not a primary concern. When looking at the outcomes related to participants who feel neutral about an AI future (PROSPECT4), it is noticed that individuals with this profile are unlikely (coefficient -3.61) to consider financial resources as a significant challenge for AI implementation. In contrast, those who feel enthusiastic (PROSPECT5) about an AI future are highly likely (probability 93%) to see financial resources as a significant challenge for the deployment of AI by local governments.
People who may feel comfortable with a fully autonomous place to live or work (COMFORT2) or who have never interacted with AI (FREQ5) are likely to consider financial resources as a key challenge (coefficient 2.75 and 8.04, respectively) for the implementation of AI by local governments. Our survey results show a high probability (94-100%) that this sort of participant places a high value on financial resources for local government AI adoption. In contrast, the results showed that financial resources are not considered a significant challenge for implementing AI by local governments (coefficient -5.81) for those who think that society will not change because of AI (SOCIETY2). As expected, Australians and Hongkongers have significantly different response behaviour regarding the role of limited local government financial resources and investment capabilities for AI projects. The only common factors were their "neutral feelings toward the AI in general (FEEL2)", "enthusiastic perspective towards the future of AI (PROSPECT5)", "limited interaction with previous applications of AI (FREQ5)", and "level of comfort with a fully autonomous place to live or work (COMFORT2)". COMFORT2 and FREQ5 had an equivalent impact on Australian and Hong Kong residents' perceptions of the importance of financial resources in the deployment of AI by local governments. That is, participants from Australia and Hong Kong who have never interacted with AI or who may feel comfortable with a fully automated place to work or live are unlikely to see limited financial resources are a major challenge for the deployment of AI. The limited experience of participants from those groups with the AI technology and potential bias regarding what is involved behind automation at home and work does not let them appreciate the potential role of financial resources. 4-1-4-Trends Regarding Other Challenging Areas of Adopting AI in SC1 According to our survey, Australian public opinion about the challenges in public sector adoption of AI in control systems and security is patterned by gender, with 61% of males associating the possible main challenges with a lack of regulations on AI usage in the local government context; and 67% who believe that limited human oversight over AI decisions affecting the local community will be the main challenge facing AI adoption (Tables B8 and B11). In contrast, participants who do not feel that the primary problems in implementing AI are related to control and security is represented by people who declared no understanding of the basic concepts of AI (UND4). In Hong Kong, there were no noticeable trends in terms of respondents' age, gender, or socioeconomic status. 4-2-Applications and Challenges of Adopting AI in Digital Divides, Social Structuring and Community (SC2) SC2 has three application areas: Community support and engagement (Table A5); Housing and homelessness (Table A13), and; Urban planning and development (Table A18). The analysis of the application areas of this social category focuses on understanding respondents' beliefs and attitudes towards highly complex technological development processes and on assessing social coherence and equity in Australia and Hong Kong. In Australia, aside from age once again having an essential impact on the participants' decision to use AI (with a more significant chance of supporting the use of AI with rising age), another notable tendency would be where the participants learned about AI. 
In general, participants reveal that they are opposed to employing AI for community support and engagement and housing and homelessness, regardless of the source from which the participant learned about AI (see Tables A5 and A13). According to the Hong Kong analysis, participants with a promising vision for the future use of AI (PROMI4, PROMI7, PROMI10, and PROMI 11) shows a high probability (100%) to believe that one of the most suited urban services to apply AI technology is in community support and engagement. In contrast, participants who think that the AI's abilities are related to the ability to replicate and respond to human speech (ABS2) and the ability to solve problems using data and programmed reasoning (ABS3) shown the lowest probabilities (0%) to believe in the application of AI technology in this area (see Table A5). When looking at the survey results for housing and homelessness in Hong Kong, for most of the socioeconomic variables used to characterise the sample, the coefficient was negative, suggesting that the participants tend not to believe in the future application of AI technology in this area (see Table A13). Advances in machine learning and AI techniques have enabled the application of learning algorithms from entertainment, commerce, healthcare to social problems, such as algorithms to inform policies that guide the delivery of homeless services. However, although many Hong Kong residents believe in the use of AI in community engagement, they do not appear to consider that this technology will be applied to housing and the homeless, which may imply that the application of AI in this area is unclear for the participants. It is also interesting to note that medium-high income Hong Kong residents are sceptical about the use of AI in housing and homelessness. This group's probability of believing that AI technology would be used in this area is merely 1%. As AI systems progress, they will be able to make judgments without the need for human input. However, one possible concern is that, while executing the tasks they were intended to accomplish, AI systems may unwittingly make judgments inconsistent with their human users' values, such as physically injuring humans. In this survey, four challenging areas were considered in SC2, and these are: Limited project coordination for the AI implementation between other neighbouring locals (Table B1); Limited interest in AI-based services from the local community (Table B3); Limited trust of the local community to the AI technology (Table B4), and; Ethical concerns on AI of the local community (Table B7). The biggest problems connected with using AI in public services, according to Australians aged 55 to over 85, appear to be tied to the local community's limited faith in AI technology and ethical concerns about AI (see Tables B4 and B7). In Hong Kong, there are no clear trends in respondents' socioeconomic status for the areas that belong to SC2. However, the binary logistic regression model shows high proportions of 'strong belief' that the local community's lack of trust in the technology will be a potential challenge associated with AI adoption in public services. The highest percentages relate to participants' employment status (93%), source of AI learning (92% for SOURCE 6 and 90% for SOURCE 8) and those who consider autonomous automobiles (82%) and dystopian future (90%) when thinking what AI implies (see Table B3). 
Furthermore, 97% of participants in Hong Kong with a higher level of education (postgraduate degree) consider that one of the central concerns encountered with AI adoption is connected to ethical issues (see Table B7). 4-3-Applications and Challenges of Adopting AI in Economy and Business (SC3) The capacity of AI to deal with massive amounts of incoming data is its primary advantage over humans. For example, to forecast future stock values, one may utilize data from the company's activities, reviews, news, Twitter mentions, and a variety of other sources. Four application areas connected to SC3 were explored in this study: Business development and assistance (Table A3); Economic development (Table A7); Infrastructure management (Table A11); and Transport management (Table A16). Notably, when we examined the results for these areas (Tables A7, A11, and A16), we observed that age once again influenced participants' decisions on using AI technology in urban services in Australia. The older generation's faith in the potential for AI to contribute to economic activity sectors may be due to their far higher confidence in themselves as employees; most of them may not feel a robot could do their jobs better than them. This might explain why more senior respondents appear to be more convinced of the technology's deployment as compared to younger respondents. In Hong Kong, education level appears to influence participants' beliefs regarding the application of AI in business support and development, economic development, and transport management. For example, participants who have completed Year 11 or equivalent (EDU2) agree that AI is unlikely to be used in the economy and industry, as demonstrated in Tables A3, A7 and A16. On the other hand, people in Hong Kong who know how AI may boost consumer involvement and help automate the most time-consuming tasks seem more likely to trust the application of AI in public goods (economic development and infrastructure management). This disparity can be explained by the fact that people with low levels of education may be unaware of the potential contribution that AI can make to the economy. In contrast, people with a higher understanding of AI technology are more likely to favour applying AI in economic and business areas. Although the use of AI for economic purposes might be advantageous, the deployment of AI in business can face several challenges. Therefore, we considered the following challenging area for SC3: Heavy dependency on AI technology companies/consultants for project/service delivery (Table B6). Respondents in both regions who are excited about AI adoption in public services do not feel that one of the concerns connected with employing this technology for economic objectives would be the high dependency created by relying on AI technology companies for project/service delivery. As shown in Appendix B, Table B6, in Australia only 25% of respondents with exciting prospects related to AI indicated that companies will come to depend heavily on AI technology, and only 11% expressed similar beliefs in Hong Kong. Presumably, because these participants are excited about using AI technology in public services, they have a more optimistic perspective of how AI may assist the economy compared with the drawbacks it brings, such as the risk of companies becoming overly dependent on the technology. 
Another pattern is that 96% of Hong Kong residents who believe AI will lead to a dystopian world believe corporations will depend heavily on AI technology for project/service delivery (Table B6). Artificial intelligence already has a significant impact on human economies and societies, and it will have an even more substantial impact in the future. As a result, it is to be expected that those who have a dystopian perspective on the future use of AI will believe that there will be heavy reliance on this technology. 4-4-Applications and Challenges of Adopting AI in the Information Society and Know-how (SC4) The following three application areas make up SC4: Arts and culture (Table A2); Education (Table A8); and Information and assistance (Table A10). In Australia, participants with postgraduate degrees believe AI will be used in arts and culture with an 82% probability of adoption. Participants in Hong Kong who admitted to utilizing AI technology on a regular basis (FREQ2), on the other hand, expressed lower levels of support for the use of AI to generate culture for popular consumption (28%). Because technologies such as virtual reality and 3D printing are currently in use in both regions, the results may reflect a difference between Australia and Hong Kong in perceptions of what AI entails. For example, in the film business, AI has assisted animators in mapping facial features and motions to their characters. Aside from that, anyone with a computer may utilize software capable of generating films, altering images, or drawing graphics. Another notable tendency that emerged in terms of the use of AI in education is that those with no degree and those with a higher degree are more inclined to reject its use in this sector in Australia (Table A8). Surprisingly, respondents in Australia (81%) and Hong Kong (99%) with a medium-high income believe that AI should not be employed in education. Aside from that, no significant tendencies emerged. This outcome is most likely due to the misconception that the purpose of using AI in education is to replace teachers rather than to assist them in recognizing each student's potential and limits. This is especially evident in Hong Kong, where participants who see a dystopian future in which robots "take on jobs" and/or "take over the globe" are convinced that AI technology would be employed in education (probability of 95%). AI is increasingly being utilized to make choices in the absence of people. Although AI can produce less biased sentencing and parole judgments than humans, algorithms trained on biased data may discriminate against specific groups. Furthermore, the AI employed in this application may lack transparency, such that human users do not comprehend what the algorithm is doing or why it reaches particular conclusions in specific instances. In that regard, the following two challenging areas make up SC4: Limited technical local government staff and know-how on AI projects (Table B2); and Lack of transparency and community engagement of the AI-based decisions (Table B5). Participants from Australia (65%) and Hong Kong (91%) who believe that one of AI's fundamental abilities is "to solve problems using data and programmed reasoning" consider that the main challenges for society when implementing AI in public services are a lack of transparency and community engagement in AI-based judgments (Table B5). Concerns about fairness and transparency in applying AI in control systems and security may indicate that human users may not comprehend what an algorithm is doing. 
In other words, they do not understand the outcome of an AI model, which makes sense given that it is often challenging to explain results from large, complex neural network-based systems. Besides that, curiously, respondents who reported often using technologies such as chatbots and Google Maps have varied beliefs depending on where they reside. People in Australia (78%) believe that a lack of transparency and community participation in AI-based judgments will be a major difficulty when implementing AI, but most Hong Kong residents (89%) do not believe this would be a major challenge (Table B5). Although regularly consuming distinct forms of AI technology, citizens of Australia and Hong Kong appear to have diverse concerns about AI implementation due to differing perceptions of what AI is. 4-5-Applications and Challenges of Adopting AI in Sustainability, Wellbeing, and Health (SC5) By detecting opportunities for emission reductions and CO2 removal, assisting in the development of greener transportation networks, monitoring deforestation, and anticipating extreme weather events, AI can enhance global efforts to safeguard the environment and conserve natural resources. Furthermore, AI can help healthcare workers better understand the day-to-day habits and requirements of the individuals they care for, allowing them to provide further feedback, advice, and support for remaining healthy. In this study, we grouped four application areas in SC5, consisting of Environmental conservation and heritage protection (Table A9); Healthcare (Table A12); Parks and recreation (Table A14); and Water management (Table A19). In both Australia and Hong Kong, people with medium-high income (INC5) do not favour adopting and using AI in environmental conservation, heritage protection, and water management (Tables A9 and A19, respectively). Furthermore, in Australia we observed medium to high levels of support for the use of AI to address environmental challenges, healthcare, and water management among Australians who link machine learning with AI (see Tables A9, A12, and A19). These findings may also hint at a distinction between individuals who truly comprehend how AI can be utilized and others who have only a vague concept of what AI is and how it may be used. The two areas of challenge in SC5 are a lack of clarification on "whether/how AI will be utilized for the common/social benefit of all community members" (Table B9) and a lack of clarity on how the digital gap and technological disruption on disadvantaged communities will be addressed (Table B10). In Australia and Hong Kong, 64% and 77% of respondents, respectively, who associate AI technology with machine learning believe that one of the most significant challenges associated with adopting this technology is a lack of clarity on whether AI will be used for the common/social good of all community members. These findings may indicate a lack of trust on the part of this group's members in the adoption of this technology by governments in this area. In Hong Kong, participants who believe that the most significant disadvantage of AI technology is that it will be highly costly (DISAD1) do not think that the challenge the government will face will be associated with a lack of clarity on if/how AI will be used for the common/social good of all community members (coefficient -2.13) (see Table B9). 
On the other hand, those participants from Hong Kong who believe that AI contributes by increasing free time for humans to complete different tasks (PROMI4) think that the biggest challenge in adopting AI technology will be associated with a "lack of clarity on if/how AI will be used for the common/social good of all community members" (probability of 94%) (Table B9). Furthermore, Hong Kong residents who believe AI should be used to assist citizens (PROMI11) believe that the main challenge the government will face when adopting AI for social good is a "lack of clarity on how the digital gap and technological disruption on disadvantaged communities will be addressed" (see Table B10). Because the technology industry has traditionally been hesitant to promote workplace equality, it is not unexpected that this group considers this area to be a significant challenge when applying AI to social goods. According to respondents in Australia who believe that society will become worse because of the use of AI (SOCIETY3), a challenge for the government in adopting AI for the public good would be the lack of clarity on how the digital divide and technological disruption will be managed in disadvantaged communities (probability of 70%) (see Table B10). It appears that those who believe that society will suffer unfavourable changes are more likely to think that when the government applies these new technologies, there will be a lack of equity and inclusion. 5-Findings and Discussion Partly due to rapid AI development, there have been more AI applications for urban services in recent years [57][58][59][60]. While successful AI applications are linked with people's perceptions, little is known regarding people's perceptions of integrating AI into urban services. To address this issue, this study explains people's behaviours and preferences regarding the most suitable urban services for the future application of AI technology and the key challenges for governments in adopting AI for urban services. The study considers the challenges and obstacles in AI-based services from a user's point of view. An empirical investigation of public perceptions from Australians and Hongkongers was conducted. The key findings of the study are as follows: • Attitudes toward AI applications and their ease of use have significant effects on forming an opinion on AI. For example, two-thirds of Australian participants and most participants from Hong Kong who consider that AI's fundamental ability is to solve problems using data and programmed reasoning believe the obstacles in implementing AI for public services are mainly due to a lack of transparency and community engagement in AI-based judgments. • Initial thoughts regarding AI's purpose seem to significantly affect the perception of application areas and the adoption challenges of AI. About 96% of participants without prior experience with AI, that is, those who may not know AI clearly, believe that AI can be applied to aged care and disability. Australians who lack an understanding of AI's fundamental concepts are more supportive of AI-based aged care and disability applications. In contrast, those who know more about AI are not that optimistic. • Perception differences between Australians and Hongkongers in AI application areas are significant. Australians are more optimistic about AI applications. A quarter of research participants from Australia with exciting prospects on AI agree that companies will heavily depend on AI, while only 11% express similar beliefs in Hong Kong. 
• Most Australians with postgraduate degrees trust that AI will be used in arts and culture, but many Hong Kong people are not optimistic. • Perception differences between Australians and Hongkongers in government AI adoption challenges are insignificant. Compared to Hongkongers, Australians are more optimistic regarding their government's ability to deploy AI. 78% of Australians believe that a lack of transparency and community participation in AI-based judgments will be a major hurdle in implementing AI, but most Hong Kong residents do not view it as a major challenge. Below, we elaborate on the factors that affect participants' perceptions of AI. In this way, the study findings will inform local authorities that deploy AI in urban services and offer directions for future research. 5-1-Factors Behind Different Perceptions on AI The digital divide has been intensified by the use of AI applications during the recent decade, as some people benefit more than others owing to differing access to AI technology [61]. For example, younger generations are familiar with digital tools like mobile phones and online games, while many older digital migrants are unfamiliar with such tools. Older adults who have retired may have fewer financial resources or lack the motivation to learn new technologies. Similar problems occur among those with disabilities. This study sheds light on the digital divide by examining the impact of individuals' knowledge, income, financial resources, and educational background on people's perceptions of various AI applications, such as aged care and disability, local government, and arts and culture. Some factors that impact public perceptions are discussed below. 5-1-1-Impact of Lack of Transparency on Perceptions of AI for Decision-making (SC1) About 65% of participants from Australia and 91% from Hong Kong consider that the obstacles to AI for public services are primarily due to a lack of transparency and community engagement. Humans may not understand what an algorithm does and cannot interpret the outcome of an AI model. While there are many white-box models at present, many people, including those who teach AI in higher education, may still believe that all AI models are black boxes. As AI has changed so fast, those who do not continually follow research and keep an eye on AI development may not notice that many AI models have already raised their level of transparency to that of a white box. 5-1-2-Digital Divide and the Impact of Knowledge on Perceptions of AI for Aged-care and Disability (SC2 and SC5) This study reveals that the more we know about AI, the less likely we are to believe AI can help aged care and disability. 18% of Hongkongers who have encountered AI humanoid robots before do not believe AI can be used for aged care and disability. In sharp contrast, 96% of participants without prior experience with AI believe that AI can be applied to aged care and disabilities. In Australia, respondents who learnt about AI from universities, courses, social media, and the internet do not believe AI can be applied to aged care and disability. 84% of people aged 55-65 and 75-84 and 74% of unemployed people believe that AI can be applied to aged care and disability. Taking care of older and disabled people requires strong AI equipped, like humans, with more than one capability area. Many people who do not know AI or lack AI knowledge may have overly fanciful expectations of AI. 
Some may have watched movies and TV series about AI humanoid robots and may think AI chatbots can communicate with us like humans, take care of the elderly like a maid, and water the plants in the garden. Nevertheless, most AI can mainly perform single, weak-AI tasks like predicting prices, classifying images, and sending reminders for the elderly to take pills. For example, the AI chatbots used in many shopping malls, elderly homes and schools cannot understand most human questions, as we may phrase them using many different collocations and words, and they cannot understand the hidden meanings behind what humans say. Such technology is unlikely to appear soon. 5-1-3-Impact of Financial Resources on Perceptions of AI Adoption by Local Governments (SC3) In Australia, 72-75% of participants who may not feel comfortable living or working in a fully autonomous place think that financial resources are likely to be a key challenge for adopting AI by local governments. In Hong Kong, 99% of unemployed people believe that financial resources can challenge the adoption of AI by local governments. We speculate that unemployed people with high living costs are more aware of the importance of financial resources in technology development and deployment, as they face financial problems in many different aspects of life. We also speculate that there is better financial protection when Australians are unemployed; financial pressures for unemployed people are higher in Hong Kong. While Australians and Hongkongers who have never interacted with AI are unlikely to see limited financial resources as a significant challenge for AI deployment, Australians are more optimistic than Hongkongers regarding their governments' ability to deploy AI. Again, this could be because Australia's financial pressures and associated stress are lower than Hong Kong's. Further qualitative studies could investigate the reasons for this. 5-1-4-Impact of Income on Perceptions of AI for Education and Local Context on Perceptions of AI for Arts and Culture (SC4) While AI applications often require huge expenses, we might expect that the application of AI in education would be less welcomed by low-income groups. It is therefore quite surprising that 79% of respondents from Australia and 99% from Hong Kong with a medium-high income believe that AI should not be used in education. We speculate that the reasons for this may not be linked with expense among these groups, as they are relatively wealthy. Nevertheless, many parents may have to spend extra time and money letting their kids join extra classes. Some parents may even learn AI by themselves to ensure they know enough about AI to help their kids and ensure their kids' competitiveness. These technology migrants, however, often find it challenging to learn, even as adults. As a result, they may consider incorporating AI in education inappropriate or troublesome. Participants with postgraduate degrees in Australia believe that AI will be used in arts and culture with an 82% probability of adoption. Participants in Hong Kong are not optimistic about this aspect, though. Compared to Australians, Hong Kong people are less keen on culture and arts activities. As Hong Kong is a city with exceptionally long working hours, many people prefer to rest when they do not need to work. Those with more leisure time may have many other activities to choose from, day to night, like shopping (shops are open seven days a week and until late), engaging in various types of sports, or watching movies online. 
Arts and culture are often not a top priority for many people; that is why Hong Kong has also been known as a 'cultural desert'. As many people do not have time to participate in arts- and culture-related activities, they are also unlikely to be aware of the relevant AI applications or to believe that AI will be used in arts and culture. Another reason is the relatively low business value of arts- and culture-related activities, which means that the development and application of AI to arts and culture activities are not popular in Hong Kong. 5-2-Practical Implications to Public, Planning and Policy Both investigated country contexts have different approaches to AI deployment. In Hong Kong, the deployment of AI in urban services is deliberately connected to a strategy of the local government meant to diversify the local economy and compete internationally [62]. In Australia, by contrast, the deployment of AI in urban services is less focused on the local economy and more centred on service efficiency and quality [63]. Therefore, in both countries, urban policy is one of the strongest drivers of public perception formation. Additionally, education is often considered a meaningful way to change people's thinking. Nevertheless, when many medium-high income groups believe that AI should not be used in education, it is high time to study the underlying reasons. On the other hand, misunderstandings, such as the belief that AI technologies only make black-box decisions, imply that continuous education and research are essential given the fast development of AI technologies. Unlike subjects such as English literature, the rapid development of AI implies the importance of continuous updates and lifelong education. But how can we properly educate, for example, urban policymakers, city managers, planners, and the public about AI, given that this technology is so hard to understand for anyone without a background in computer science or engineering? The recent rise of the explainable artificial intelligence (XAI) movement, along with the need for sound AI strategies, might help [64,65]. When governments provide most public services with AI, successful implementation and application require public support, so collecting public opinion regarding AI becomes necessary. An overview of beliefs and attitudes on AI can help ensure smooth AI implementation. This paper provides a general understanding of which types of people are more supportive of AI and of the challenges in AI implementation, which is helpful for urban services planning. Governments may raise fiscal expenditure on AI education to reduce public misunderstanding of AI capability, for example, misunderstanding stemming from a perceived lack of transparency in AI. More grants and funding can be provided for AI research to develop more robust AI, enhance its ability to assist the aging population, provide innovative solutions for arts, culture, and education activities, and improve AI transparency in decision-making. Relevant education, continuous education funds, and fiscal policies can help achieve these goals. Lastly, the study findings inform local authorities that deploy, or plan to adopt, AI in their urban services. Specifically, insights generated in this study help local governments identify the most appropriate urban planning and decision-making processes with the greatest potential to utilize AI services and applications, while taking public concerns into account. 
This sensitivity should also be preserved while developing and test-piloting AI-related urban services and applications, as paying special attention to equitable deployment of AI in urban services will generate opportunities for wider public acceptance and adoption/utilization. 5-3-Limitations of the Study The study generated invaluable insights into the public's perceptions of AI application areas and the adoption challenges of AI in urban services. However, the following limitations of the study should be noted: First, while the sample size is adequate for the survey, having more participants might have surfaced additional perspectives. Second, the study focused on two countries' contexts. While the statistical representation requirements were met, expanding the study to a larger number of countries might have provided extended insights. Third, there are some representation differences between the study participant characteristics and the actual resident characteristics of the case cities, which might have some impact on the results. Fourth, the study findings are only quantitatively assessed; the answers to the open-ended questions are not factored into this paper, as these data will be analysed thematically and reported in another paper. However, the authors have read and checked all qualitative responses to make sure they do not contradict the findings reported in this paper. Next, there might be unconscious bias in interpreting the study findings. Lastly, our prospective studies will consider tackling these issues. 6-Conclusion AI is highly popular across the public sector due to the efficiencies its applications generate in the delivery of government services. Among many existing and potential application areas in local governments, AI adoption in the planning and delivery of urban services stands out. Despite its increasing importance and potential, AI adoption in urban services is still an understudied area and, in particular, the understanding of what users/the public think about AI utilization in these services is limited. To bridge this research and knowledge gap, the study reported in this paper explored people's behaviours and preferences regarding the urban services most suited to the application of AI technology and the challenges for governments in adopting AI for urban service delivery. The analysis of the survey data collected from the public in Australia and Hong Kong revealed the following invaluable findings. First, it was found that attitudes toward AI applications and their ease of use have significant effects on the public's forming an opinion on AI. Second, the public's initial thoughts regarding the meaning of AI have a significant impact on perceived AI application areas and their adoption challenges. Third, not surprisingly, public perception differences in AI application areas between the case country contexts of Australia and Hong Kong are significant, highlighting the geopolitical context-driven nature of technology adoption. Lastly, however, the perception differences between the public in Australia and Hong Kong regarding government AI adoption challenges are minimal; that is, they affirm to some extent the universality of the adoption barriers. These findings, while shedding light on the issue, contributing to bridging the knowledge gap, and informing local authorities that deploy or plan to adopt AI in their urban services, indicate that further empirical research is needed to better understand user/public acceptance and adoption barriers of AI. 
Along with this, the challenges faced by local governments practicing responsible AI principles need to be investigated. Our prospective studies will concentrate on these two critical research topics. 7-2-Data Availability Statement Data sharing is not applicable to this article. 7-3-Funding This research was funded by the Australian Research Council Discovery Grant Scheme, grant number DP220101255. 7-4-Acknowledgements The authors thank the editor and two anonymous referees for their constructive comments. The authors are also grateful to all study participants for sharing their perspectives. 7-5-Ethical Approval Ethical approval was obtained from Queensland University of Technology's Human Research Ethics Committee (Approval No: 2000000257). Survey participants provided their consent to participate in the study and to the publication of their views by agreeing to a statement on that matter at the beginning of the questionnaire. 7-6-Conflicts of Interest The authors declare that there is no conflict of interest regarding the publication of this manuscript. In addition, the ethical issues, including plagiarism, informed consent, misconduct, data fabrication and/or falsification, double publication and/or submission, and redundancy, have been fully observed by the authors.
v3-fos-license
2021-09-01T15:16:29.082Z
2021-06-17T00:00:00.000
237822185
{ "extfieldsofstudy": [ "Biology" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://www.researchsquare.com/article/rs-613486/latest.pdf", "pdf_hash": "825c9bf073601d3ae5b846ca8e0b6f6911a7a50a", "pdf_src": "SpringerNature", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44035", "s2fieldsofstudy": [ "Biology" ], "sha1": "ca8b65784ad82caa733a99bb007aab52110458ed", "year": 2022 }
pes2o/s2orc
Allophlebia, a new genus to accomodate Phlebia ludoviciana (Agaricomycetes, Polyporales) Allophlebia is proposed as a new genus in Meruliaceae based on morphological characters and molecular data. The genus, so far monotypic, is typified by Peniophora ludoviciana and the new combination A. ludoviciana is proposed. The type species is characterized by a resupinate basidioma, a monomitic hyphal system with clamp connections, two types of cystidia (leptocystidia and metuloids), clavate basidia, and hyaline, thin-walled and ellipsoid basidiospores. A phylogeny for Allophlebia and related taxa was inferred from ITS and nLSU rDNA sequences and new information on the geographic distribution of A. ludoviciana is provided. Introduction Phlebia Fr. (Polyporales, Meruliaceae) was described by Fries in 1821 and intended for species with a hymenium composed of irregular veins and ridges. Fries (1828) pointed to P. radiata as the most typical member of his new genus and this species is now generally accepted as the type (Donk 1957). Species in Phlebia sensu lato usually have resupinate basidiomata that are ceraceous to subgelatinous in fresh specimens, and a membranous, firm ceraceous, corneous, or coriaceous consistency when dried. The hymenial surface varies from smooth, tuberculate, odontioid and merulioid to poroid. The hyphal system is monomitic, rarely dimitic, with hyphae clamped and embedded in a more or less evident gelatinous matrix. Cystidia can be present or absent; basidia are clavate, narrow, with a basal clamp and disposed in a dense palisade; and basidiospores are allantoid to ellipsoid, smooth, thin-walled, IKI−, and CB− (Eriksson et al. 1981; Bernicchia and Gorjón 2010). All species analyzed are saprobes on decaying wood (Nakasone 1990). The original concept of Phlebia was considerably broadened over the years (Donk 1931, 1957; Nakasone 1991, 1996, 1997, 2002; Nakasone and Burdsall 1984). However, this wide concept of Phlebia proved to be polyphyletic (Larsson et al. 2004; Binder et al. 2013; Floudas and Hibbett 2015; Justo et al. 2017). Several genera have been introduced or resurrected to accommodate different species of Phlebia, e.g., Cabalodontia Piątek, Crustodontia Hjortstam & Ryvarden, Cytidiella Pouzar, Hermanssonia Zmitr., Jacksonomyces Jülich, Mycoacia Donk, Mycoaciella J. Erikss. & Ryvarden, Phlebiopsis Jülich, Scopuloides Hjortstam & Ryvarden, and Stereophlebia Zmitr. Other Phlebia species have been moved to other genera, most notably to Crustoderma Parmasto and Skvortzovia Bononi & Hjortstam. After such removal and transfer of species and after adjustments for synonyms, the genus still holds around 100 species, many of which are based on names for which there is no modern interpretation (www.mycobank.org). According to molecular data, P. radiata together with many other Phlebia species belongs in Meruliaceae in Polyporales (Justo et al. 2017), while a few are recovered in Hymenochaetales (Larsson et al. 2006). During studies of corticioid fungi from northeast Brazil, specimens of Phlebia ludoviciana (Burt) Nakasone & Burds. were collected. Molecular phylogenetic analyses showed that this species could not be placed in any of the corticioid genera already described. Thus, the aims of this paper were to describe a new genus for P. ludoviciana and to discuss the geographical distribution of this species. 
Specimens were identified based on macro- (measurements, texture, consistency, shape, and color of the basidiomata) and micro-morphology, and sections of the basidiomata were checked with 3% potassium hydroxide solution (KOH) and stained with 1% aqueous phloxine. Melzer's reagent and Cotton Blue were used to analyze, respectively, dextrinoid and amyloid (IKI+/IKI−) and cyanophilous (CB+/CB−) reactions of the microstructures. Presence/absence of sterile structures and basidiospores was noted and measurements of at least 20 of them were taken, when possible (Hjortstam et al. 1987; Watling 1969). The material was deposited in the Herbarium Pe. Camille Torrend (URM), Departamento de Micologia (UFPE), and in the Herbarium of the University of Oslo (O). DNA extraction, PCR amplification, and sequencing Basidiomata fragments (30-50 mg) were removed, placed in 1.5-ml tubes, and stored at -20°C until DNA extraction. The method of DNA extraction followed Goés-Neto et al. (2005), and the reaction mix and parameters for PCR reactions of the ITS and LSU regions followed Smith and Sivasithamparam (2000), using the primer pairs ITS4-ITS5 and LR0R-LR5, respectively (White et al. 1990; Moncalvo et al. 2000; Lima-Júnior et al. 2014). The purification of PCR products was done with ExoSAP-IT™ PCR Product Cleanup Reagent (Thermo Fisher Scientific, USA), following the manufacturer's recommendations. The samples were sequenced at the Plataforma Tecnológica de Genômica e Expressão Gênica do Centro de Biociências (CB), UFPE, Brazil, or sent to Stab Vida Lda (Madan Parque, Caparica, Portugal). The cycle sequencing was carried out with the same primers used for the PCR reactions (Moncalvo et al. 2000). All new sequences were deposited in GenBank (National Center for Biotechnology Information, Bethesda, MD, USA). Phylogenetic analyses The Staden Package 2.0 software was used for analysis and editing of electropherograms (Bonfield et al. 1995). These sequences were subjected to a BLASTn search in NCBI to recover similar sequences from GenBank, which were used in the dataset to establish phylogenetic relationships (Table 1). Each gene region was aligned with the MAFFT v.7 online server using default settings (http://mafft.cbrc.jp/alignment/server/), then improved manually using MEGA 7.0 and combined to form the concatenated dataset (Kumar et al. 2016). The ITS and LSU regions were first analyzed independently (data not shown). Since no important topological differences were detected, the regions were combined into a single matrix for the final analyses. The models of evolution were obtained from MEGA 7.0 (Kumar et al. 2016) and confirmed in TOPALi v2.5 (Milne et al. 2008) for each dataset. Phylogenetic analyses and tree construction were performed using maximum likelihood (ML) and confirmed with a Bayesian analysis (BA). The ML analysis was performed using MEGA 7.0 (Kumar et al. 2016) with 5000 bootstrap replications and based on the GTR+G+I model. BA analyses were run in TOPALi v2.5 (Milne et al. 2008) with 5 × 10⁶ generations, also based on the GTR+G+I model. Statistical support for branches was considered informative with Bayesian posterior probabilities (BPP) ≥0.95 and bootstrap (BS) values ≥70%. The trees were visualized with FigTree (Rambaut 2014) and the final layout was made in Adobe Illustrator CS6. Results Five specimens were sequenced (URM 93082, URM 93251, URM 93329, O-F-110340, O-F-110341), generating five ITS and four LSU sequences (Table 1). These were combined with ITS and LSU sequences selected through BLAST searches against GenBank. 
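For readers wishing to reproduce the concatenation step described above (ITS and LSU aligned separately and then combined into a single matrix), the following minimal Python sketch shows one way to join two aligned FASTA files by specimen identifier. It assumes Biopython is installed and uses hypothetical file names; the authors performed the alignment on the MAFFT web server and assembled the matrix manually in MEGA 7.0, so this is only an illustrative scripted equivalent, not the procedure actually used.

from Bio import SeqIO  # assumes Biopython is available

def read_alignment(path):
    # Return {sequence_id: aligned_sequence} for one aligned FASTA file.
    return {record.id: str(record.seq) for record in SeqIO.parse(path, "fasta")}

its = read_alignment("its_aligned.fasta")   # hypothetical file name
lsu = read_alignment("lsu_aligned.fasta")   # hypothetical file name
its_length = len(next(iter(its.values())))
lsu_length = len(next(iter(lsu.values())))

with open("concatenated.fasta", "w") as out:
    for taxon in sorted(set(its) | set(lsu)):
        # Taxa lacking one region are padded with gaps so the matrix stays aligned.
        combined = its.get(taxon, "-" * its_length) + lsu.get(taxon, "-" * lsu_length)
        out.write(">" + taxon + "\n" + combined + "\n")

Gap-padding taxa that lack one of the two regions mirrors the situation described in the Results, where some key specimens are represented by ITS sequences only.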
No strongly supported topological conflict was detected among the datasets analyzed (ITS, LSU, and ITS+LSU). Thus, only the combined analysis is presented here, performed mainly with ITS sequences since only that region is available for some key specimens. The combined dataset included 174 sequences (116 ITS and 58 LSU) and comprised 2138 characters including gaps. Climacocystis borealis (Fr.) Kotl. & Pouzar and Junghuhnia nitida (Pers.) Ryvarden were selected as outgroup taxa. The results of the phylogenetic analyses generated from ML and BA showed similar tree topologies and small or insignificant differences in statistical support values. Thus, the ML tree with bootstrap support values (BS) and posterior probabilities (PP) from the BA analysis was used to show the results of this study (Fig. 1). The newly generated sequences were placed in a strongly supported clade (BS 99%, PP 0.99) with several samples of A. ludoviciana previously deposited in GenBank. Other GenBank sequences identified under different names also grouped in the same clade. The A. ludoviciana clade was phylogenetically separated from the clade representing Phlebia s.s. and from other described genera (Fig. 1). Discussion When transferring Peniophora ludoviciana to Phlebia, Nakasone et al. (1982) grouped this species with P. brevispora, P. subochracea, and P. subserialis in section Leptocystidiophlebia Parmasto based on morphology and culture characteristics. Our results show that P. ludoviciana is phylogenetically close to P. subochracea, while P. brevispora and P. subserialis are distantly related, both from each other and from P. ludoviciana and P. subochracea (Fig. 1). In Floudas and Hibbett (2015), sequences identified as P. subserialis were placed in three different clades, one corresponding to A. ludoviciana and sister to P. subochracea, one close to P. nothofagi and P. fuscoatra, currently belonging to Mycoacia, and the last one belonging to the Phanerochaete clade and provisionally identified as Phanerochaete 'krikophora'. Justo et al. (2017) recovered P. ludoviciana (FD-427, reported as Phlebia sp. in GenBank) in a clade with P. subochracea I (HHB8715, reported as Phlebia cf. subserialis in GenBank), both representing A. ludoviciana and sister to P. subochracea II. In our study, the Allophlebia clade is phylogenetically separated from the Phlebia s.s. clade (BS = 87%, PP = 0.96) as well as from other genera in Meruliaceae and from other sequenced species of Phlebia recovered outside Meruliaceae. It is strongly supported as a monophyletic group (BS = 99%) (Fig. 1), and its recognition as a distinct genus is in accordance with the recommendations by Vellinga et al. (2015). The new genus may also include Fungal sp. (TP2) from Thailand (Klomklieng et al. 2014) and P. ochraceofulva (FBCC295) from Sweden (Kuuskeri et al. 2015), but these represent isolates without vouchers, which prevents morphological studies. The five sequences of A. ludoviciana generated in our study clustered with two sequences from the USA and French Guiana. A taxon originally collected on Salix humboldtiana in Argentina is characterized by membranous basidiomata and one kind of cystidia, viz. strongly encrusted metuloids projecting beyond the hymenium (Rajchenberg and Wright 1987). The type of P. subserialis is from France, and sequences from there and other European countries, as well as one sequenced specimen from India (Table 1), are distantly placed in the phylogenetic tree (Fig. 1). 
Phlebia subserialis has narrower leptocystidia (3-4 μm), lacks encrusted cystidia, and has longer, suballantoid basidiospores [6-7(-8) × 2-2.5 μm] (Bernicchia and Gorjón 2010). It is unclear why this species has been confused with A. ludoviciana. One reason could be that some early mycologists established the opinion that the two cystidia types in A. ludoviciana are just a single type in different stages of development (Rogers and Jackson 1943). Specimens of P. subserialis reported in the Americas should be reevaluated (Nakasone et al. 1982). Grammothelopsis puiggarii is a species characterized by large, angular pores (1-2 per mm), large, dextrinoid, thick-walled basidiospores and dextrinoid skeletal hyphae (Rajchenberg and Wright 1987). This species is currently placed in Polyporaceae and cannot possibly be confused with A. ludoviciana. The sequences named G. puiggarii are most likely the result of contamination or sequencing mistakes. The specimens of A. ludoviciana studied by Nakasone et al. (1982) were all collected on dead wood of various angiosperm tree species. The specimens sequenced by us and by earlier studies were also all collected on decaying angiosperm wood. The environmental sequences of A. ludoviciana in GenBank were mostly generated from living tissue of angiosperm plants representing the genera Elaeis (oil palm), Hevea, Polylepis, Phragmites, Rubia, and Solanum. Sequences were also generated from the rhizosphere of Broussonetia, from Nyssa rail ties, dry grassland soil, air, and from nests of Atta and Cyphomyrmex ants (Fig. 1). This information adds to the growing body of evidence indicating that basidiomycetes with a saprophytic lifestyle may also serve other ecological functions (Pinruan et al. 2010; Martin et al. 2015).
v3-fos-license
2018-04-03T04:51:51.731Z
2015-02-13T00:00:00.000
15497365
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://europepmc.org/articles/pmc4441882?pdf=render", "pdf_hash": "1ec73b152f0e4bd2ef532a5abd15878b53132d9e", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44038", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "sha1": "7f6ed71313001c9a0e8a71d9867e6c2c35411fa3", "year": 2015 }
pes2o/s2orc
‘No matter what the cost’: A qualitative study of the financial costs faced by family and whānau caregivers within a palliative care context Background: There has been significant attention paid in recent years to the economic costs of health service provision for people with palliative care needs. However, little is known about the costs incurred by family caregivers who typically provide the bulk of care for people at the end of life. Aim: To explore the nature and range of financial costs incurred by family caregiving within a palliative care context. Design: In-depth qualitative interviews were conducted with 30 family/whānau caregivers who were currently caring for someone with a life-limiting illness or had done so within the preceding year. Narrative analysis was used to identify impacts and costs at the personal, interpersonal, sociocultural and structural levels. Setting: Auckland, New Zealand. Findings: Costs of caregiving were significant and, for some participants, resulted in debt or even bankruptcy. A range of direct (transport, food and medication) and indirect costs (related to employment, cultural needs and own health) were reported. A multi-level qualitative analysis revealed how costs operated at a number of levels (personal, interpersonal, sociocultural and structural). The palliative care context increased costs, as meeting needs was prioritised over cost. In addition, support from statutory service providers to access sources of financial support was limited. Conclusion: Families incur significant financial costs when caring for someone at the end of life. Research is now needed to quantify the financial contribution of family and whānau caregiving within a palliative care context, particularly given attempts in many countries to shift more palliative care provision into community settings. What is already known about the topic? • Family members provide the majority of care for people with palliative care needs. • Research conducted with older people indicates that the financial costs associated with family caregiving can be significant. • Little is known about the nature and extent of these costs within a palliative care context. What this paper adds? • This study identifies that family caregivers experience a range of direct and indirect costs associated with caregiving. • The palliative care context exacerbates many of these costs. • Support from statutory service providers to access sources of financial support was limited. Introduction A key policy priority in many developed countries is shifting the provision of palliative and end-of-life care from acute hospital settings into the community. 1,2 One perceived benefit of this approach is a reduction in acute hospital costs for people in the last year of life; it is argued that this money would be better spent providing community health services which could enable people to remain in their own home up until death. 3 However, one issue that has been neglected within this debate is that reducing hospital admissions would have implications not only for community service provision but also for family caregivers who, it has been estimated, already provide 75%-90% of home-based care for people who are near the end of life. 
4 A recent systematic review conducted to explore the financial costs incurred by family caregivers identified no previous studies focusing specifically upon the financial costs of family caregiving at the end of life, although there was some evidence that these costs were significant. 5 Indeed, studies were identified which demonstrated that economic costs associated with caregiving have negative implications for the well-being and health of family caregivers, 6,7 and financial strain is even associated with family preference for comfort care over life-extending treatment. 8 Since the review was conducted, one article has been published which does quantify the costs of caregiving for family caregivers in Canada. Drawing on data from the family caregivers of patients with cancer accessing a home-based palliative care programme, Chai et al. 9 concluded that 'unpaid care costs accounted for the largest portion of total palliative care costs, averaging 76.7% over the last year of life' (p. 34). This study therefore confirms that, within the Canadian context, family caregivers of people with cancer utilising specialist palliative care services are incurring a far greater proportion of caring costs at the end of life when compared with those incurred by health services. Research is now needed which extends these findings beyond cancer to other diagnostic groups and national contexts. In the absence of further research, there is a danger that economic evaluations conducted within a palliative care context will only capture those costs which can be easily measured, namely, those incurred by statutory service providers. For example, the recent UK palliative care funding review to inform policy making and commissioning decisions in relation to palliative care focused only upon those costs incurred by the state. 10 The failure to adopt a societal approach to costing by including family caregiving costs runs the risk of decision-making at a policy level which shifts even more of the costs of caregiving onto families, with significant implications for both their wellbeing and their capacity to care. Aim To explore family and whānau carers' experiences of the financial impact of caring within a palliative care context (whānau is most often translated as 'family', but its meaning also encompasses physical, emotional and spiritual dimensions (http://www.teara.govt.nz/en/whanau-maoriand-family/page-1)). Methods The choice of qualitative methods was guided by the research question 11 and the exploratory nature of the study. Our approach was iterative and multi-disciplinary, within the traditions of narrative inquiry. [12][13][14] Semi-structured interviews were held with 30 caregivers who were either currently caring for a person with palliative care needs or had done so in the past year. Following approval from the University of Auckland Research Ethics Committee, 17 participants (14 non-Māori and 3 Māori) were recruited through a tertiary hospital palliative care service. Clinical staff identified eligible carers and invited them to talk to a research nurse if interested in participating. Only three of the carers approached declined participation in the study, all for reasons unconnected with the research itself. A second recruitment strategy was adopted to ensure appropriate representation of Māori caregivers. A total of 13 participants (9 Māori and 4 non-Māori) were recruited via community newspaper articles about the research and Māori radio and TV coverage. 
Māori, the indigenous people of Aotearoa, New Zealand, tend to have poorer health than non-Māori and a higher level of unmet need for health care generally. 15 Māori who live in more deprived areas are more likely to have greater unmet primary health care needs and therefore face a potentially higher cost burden relative to more affluent carers, so were an important target group for this research. 15 Implications for practice, theory or policy • This study indicates an urgent need for policy makers to consider the financial costs of family caregiving, particularly within the context of drives to increase community-based palliative care provision. • Future research is needed to quantify these costs. • Culturally appropriate methodological approaches must be used to ensure the inclusion of research participants from indigenous and minority cultural and ethnic groups who are often disproportionately affected by the costs incurred. Previous research has indicated that targeting community media is a successful strategy for recruiting Māori, who are under-represented not only in research 16 but also as users of specialist palliative care services. 17 Sampling was purposive to include a mix of information-rich participants with diverse demographic characteristics (age, gender, ethnicity) and involved in various types of carer relationships. 18 Māori were over-sampled as they comprise 14.9% of the population but are overrepresented in health care inequity. Sufficient diversity in terms of these characteristics was believed to have been achieved when we had recruited 30 participants, which is also considered an adequate sample size for an in-depth, exploratory study of this type. 18 Interviews were conducted by T.M.M. (Māori participants) and R.A. (non-Māori participants). Participants were recruited and interviewed over 6 months from November 2012. All participants provided written informed consent prior to interview. Participant details are summarised in Table 1. Interviews were conducted at participants' homes or a relative's home (n = 25), by telephone (n = 2), at the hospital (n = 1) or in a café (n = 2), depending on participant preference. Interviews lasted between 30 and 90 min. All interviews conducted by the Māori researcher incorporated a Kaupapa Māori approach. Kanohi-ki-te-kanohi (face-to-face) interviews catered for the rangatira (chiefly status) of participants and Māori cultural research protocols were observed. 20 The interview guide was informed by relevant literature 5 and covered the following key areas: experiences of caring, financial costs in relation to both day-to-day care and emergency situations, who else was involved in caring and related costs and whether financial assistance had been received from elsewhere (family, loans, credit, insurance, state support). Views were also sought about appropriate methods for conducting research in this area; these data will be reported elsewhere. Interviews were digitally recorded with participants' consent and transcribed verbatim. Summaries of interview data were presented back to participants for their feedback to maximise methodological rigour and, in particular, the confirmability of findings. 21 Qualitative data software (NVivo 10) was used as the filing system for the initial categorising and overview of responses to interview questions, experience-centred narratives, 13 cost areas and cultural concerns. 
A more focused analysis was then conducted using the narrative gerontology framework of personal, interpersonal, sociocultural and structural dimensions of experience. 22 Analysis was led by T.M.M. (Māori) and R.A. (non-Māori); a selection of transcripts was independently reviewed by C.G. and coding was carried out by consensus to ensure rigour and trustworthiness. This framework supported analyses of financial costs which operate at all these levels. Verbatim quotes are presented with anonymised initials and the relationship to the person with life-limiting illness (e.g. partner, daughter). Findings Participants reported a wide range of experiences with relation to the costs of caring and described the significant implications of these costs. Participant data have been summarised under the following themes: the motivations for caring, the range and nature of costs, the impact of the 'end of life' phase on costs, the impact on wider family and the lack of support systems. These themes can best be understood within the context of a multi-level narrative gerontology model of personal, interpersonal, sociocultural and structural domains, as will be described below. While motivations and meanings of care were not the focus of this research, the sensitivities of talking about money 'at a time like this' (i.e. when someone had a lifelimiting illness) meant participants always framed their talk of costs with comments on the importance and meaning of care to them. As one participant reported, [Money] never came into the equation … It was a lot of hard work but when I look at it I'm glad I was there and not strangers. (YT, Māori partner) Māori participants' caregiving commitment was often informed by cultural values steeped in āroha (compassion) and manaakitanga (preservation of mana and dignity), which were prioritised over care costs. Many participants invoked notions of reciprocity in their discussions: The way I see it is a parent raises a child, their whole role is to look after the child until they become an adult and then I see it, once you're an adult we should repay that back. You know, because your parent's health and everything starts to fail as they get older. So my obligation is to them. (STA, Māori daughter) However, it was clear that the financial costs incurred by caregiving were significant for all participants, although the extent of the financial impact of caregiving was determined by participants' current financial situation and therefore varied markedly within our sample. Participants who reported the highest levels of financial resource considered themselves 'lucky' and reported only having to cut back on planned expenditure, such as a trip overseas to visit a new grandchild, in order to meet their caring costs. However, others with fewer financial resources reported a range of more serious repercussions. These included incurring significant debt, moving to a smaller house and, in the most extreme cases, going without food because they could not afford to buy enough for everyone in the household: Sometimes food, the children are fed and we adults just have the leftover, so we could make ends meet. But that's, but we always think of it's temporary, and whenever the day will come, it will end. Things will be back to what, to whatever we usually do. (CT, Tongan daughter) Range of costs incurred Participants reported that a broad range of direct and indirect costs were incurred by caregiving (see Table 2). Direct costs were those involving direct outlays of money. 
Those most frequently mentioned were parking and transport costs related to their family member's hospital appointments and admissions. Costs of clothing and bed linen were also mentioned by many participants; for example, some care recipients required new clothes following weight loss or required new bed linen because of increased frequency of bedding changes. Diverse costs associated with medical treatment were reported, including those related to paying for medication and general practitioner (GP) visits and paying for alternative or complementary therapies. All participants mentioned food costs as an expenditure that had risen significantly during their family member's illness. The final section of our results explores these costs in more detail by demonstrating how a multi-level analysis is useful in understanding the complex ways in which these costs operate. Finally, costs of funerals and tangihanga (Māori funeral customs) were discussed. Some participants' family members had insurance to cover at least part of the cost; others reported getting estimates of costs prior to their family member's death so that they could put financial plans in place, or reported using government-assisted funeral subsidies. Several Māori participants anticipated funeral expenses and had taken out funeral insurance cover. Government-assisted funeral subsidies managed by Work and Income New Zealand were also utilised. Lack of resources led to several whānau being unable to conduct traditional tangihanga at ancestral homes. Indirect costs were those incurred by participants as a result of their caregiving role. These included costs relating to lost employment opportunities, cultural obligations and personal costs related to carer health and well-being. Participants who were in paid work were often forced to fit in caring tasks around work, negotiating to work from home, using up annual leave and sick leave or taking unpaid leave. Some participants had to give up paid work to care; others had their benefits cut because they were unable to actively look for work because of caring responsibilities. Self-employment allowed some flexibility but also meant there were no paid leave provisions. One self-employed participant, for example, reported that his income halved when caring for his wife who had to leave her full-time job, leaving him with the 'balancing act' of trying to work, care for two children and care for her. After her death, he was forced to sell the family home and buy a smaller, more affordable property. For Māori, the cultural obligation and preference to return to ancestral homes before death and/or post death (tangihanga) incurred additional transport costs and other expenses associated with meeting these cultural end-of-life needs. In several cases, customary funeral traditions were interrupted due to a lack of resources. Some participants also reported that caring had negatively affected their own health and well-being. Costs incurred as a result included physiotherapy for back injuries resulting from lifting their family member. Anxiety, depression and insomnia associated with caregiving could also incur financial costs in terms of GP visits and medications, both at the time of caregiving and into bereavement. 
Palliative care context increases costs

While any type of caregiving can have financial costs associated with it, participants' responses highlighted how the end-of-life context intensified these costs. A New Zealand European son reported that his mother with bowel cancer was 'adamant' she wanted 'to die at home in her own bed', which meant that he used up all his sick and annual leave as he and his wife took on the responsibility of caring, including hourly toileting, and was 'completely shattered' when it was over. A man who had recently been made redundant moved in to help care for his sister, so her husband could keep working, even though he was then penalised for not actively job-seeking:

She prefers the family to care for her because obviously they know her better than any stranger from outside and it's much better for her, so she knows who's caring for her. Because she'd rather have family look after her than anybody else. (HH, NZ European brother)

Exemplar quotes on the range of costs incurred:

Products: Incontinence pads, I just, couldn't find out from anybody, the hospice gave us what they could. I went and bought quite a few … The ones at the supermarket weren't any good, I had to go to one of these [online companies], Nappies for Less I think it was. For the bigger ones, the larger pads. But then, once the district nurse came on board, after a few visits, I think I must have asked her, or somebody told me that they would supply them, they didn't offer them. (CW, NZ European husband in his 60s)

Funeral/tangihanga costs: I've contacted the funeral directors that we dealt with with Dad and my late brother, and they'll be the one that's going to look after Mum when her day comes, just to get a costing … then we'll be looking at ways of putting that cost together. And it's not that we want her to leave us, but it's just that we know that there'll be a time when it comes. If we prepare ourselves now then it will make life easier for us when the time comes. (CT, Tongan daughter)

Employment: The impact for me was that I went from a salary, obviously prior to looking after mum, and gave up my job and so from a normal above average salary I went to $230 a week, so that was the impact for me. (JA, NZ European daughter)

Own health: And I thought, I can't go through another night like this, I was just so tired and exhausted … The last four or five days are just so vivid, they were just so horrendously stressful and I was just a physical wreck. And then, of course, they die and then you're straight into all the funeral things and I was the only one here, my brothers had to come from Australia and you're dealing with phone calls and you've got deadlines for things like the paper and the funeral sheet and it just keeps going, it just keeps going.

Costs of caring affect the wider family/network

Financial costs also had an impact on the wider family or social network, including after the death of the patient. While in most cases the bulk of the costs were incurred by one primary carer, typically there were multiple family/whānau members involved in providing care and sharing costs. Several Māori whānau identified they had two or more income streams flowing into their home. However, this did not always result in money being equitably dispersed to help off-set care costs.
It is also important to recognise that some participants had multiple caring responsibilities, as exemplified in this daughter's account of caring for her older parents: Mum can't look after Dad and Mum tries to look after Dad and then Dad tries to look after Mum and then Dad worries if Mum fell he wouldn't be able to pick her up and so we've sort of taken those worries away from them both [by having Mum move in with us] … I cook meals at night for Dad and then I take them around at night for Dad and then [niece] will do everything else that needs to be doing, like the housework and the cleaning and things like that. (ED, NZ European daughter) The complex care requirements across family networks, both in terms of time and money, could have an enduring and compounding impact on carers that many felt was not sufficiently recognised by health or social service providers, employers or wider social networks. Cost and care-support systems are confusing and inadequate There were variable accounts as to what statutory (government) help was available in the context of 'being palliative'. New Zealand has an increasingly constrained, publicly funded health system with some provision for end-of-life care, but exactly what was available, and when, was often unclear to participants. Interviewees were appreciative of help and support they received but were frustrated at systems and bureaucracies, including the apparently random discovery of entitlements, as one participant explained, They don't come out and give you a brochure and say, 'Here you go, here's an application form, fill it in and you'll get some money'. No, you've got to ask so that can sometimes be extremely frustrating because you don't know how you're going to survive this financially. (EW, NZ European husband) There was not a widespread expectation that 'the state should provide' or that people were entitled to a lot from the health and social system. The key issue was that if there were agreed provisions, it should be easy to find out about them and to use them effectively. One participant suggested that there should be a 'palliative care needsbased assessment', where needs were assessed across the person and the carer network, in order to identify resource constraints and provide clear information on costs and entitlements. Costs operated at a number of levels Four dimensions of costs were identified in our analyses: personal, interpersonal, sociocultural and structural. 23 The way these dimensions interacted with one another in relation to financial costs is represented in Figure 1. Analyses identified that personal and interpersonal costs sat alongside each other and could also overlap. For example, one participant reported using up all her paid leave but being supported financially by her brothers, so she could care for their father; another, by contrast, stated that her brother offered no financial or other help but wanted money from her mother's house sale. The central relationships between carers and care-receivers were located within the context of the sociocultural realm, where community or wider social or financial resources may be available, and where cultural or gendered practices around care produced particular expectations and costs. As one participant explained, I'm the only girl, I've got three older brothers, they've all got families and other commitments. I, on the other hand (laughter) have not got any attachments at the moment, but of course being of Cook Island descent, it's family first. 
(MS, Cook Island/European daughter) Finally, the structural systems within which care occurs were found to have cost implications at all levels. For example, many participants appreciated New Zealand's publicly funded health emergency services but not the lack of ongoing or palliative care provision: I think our health system is fantastic in emergency situations and being very thorough, doing all the tests and the staff are wonderful, they are. But it's the follow up, it's what happens after the immediate emergency part has passed, it's the follow up care that is inadequate … (GB, NZ European daughter) An example of how costs operated across the four levels of the framework is outlined in Table 3 in relation to food costs. It was apparent in our analyses that food costs were related to the individual-level needs of the carereceiver to have particular foods, the interpersonal need for food to cater for visitors, the sociocultural demands for meaningful 'cultural' food and the structural costs of having to eat at expensive hospital cafes while remaining '24/7' at the bedside of the person with a life-limiting illness. Discussion This study is one of the first to explore the range, and impact, of financial costs borne by family caregivers within a palliative care context. Our findings are consistent with the limited evidence base in this area which indicates that these costs are substantial 5,9 and can have serious and long-lasting repercussions. While significant costs have also been reported in the wider caregiving literature, 23 it was clear that the context of a life-limiting illness was significant to the nature and extent of financial outlay. Our findings were interpreted within the context of a multi-dimensional model of the costs of caring. Within the personal and interpersonal domains, our findings indicate that a range of direct costs are incurred by family caregivers in many diverse areas. Many of these were consistent with the previous limited research in this area, for example, the work by Dumont et al., 24 as well as the wider caregiving literature. Within New Zealand, in line with other countries with a partly or fully privatised health sector, the costs of unsubsidised medications and GP home visits are acknowledged to place a significant burden upon patients and family caregivers. Similarly, the costs of travelling to health appointments and parking charges, both of which were frequently raised by our participants, have also been discussed within the wider literature. 25 Our findings confirm that large regional rather than local hospitals (advocated for economies of scale), poor public transport infrastructure (Auckland lacks good public transport) and privatised contracts for revenue-generating parking and food provision in hospitals all compounded our participants' costs. This highlights the interplay of structural and personal/interpersonal dimensions on financial costs. Our study also extends previous findings by identifying a range of direct costs that go beyond those related to a patient's medical care. Food was a frequently mentioned expense, and our multi-level analysis enabled insight into the different ways in which these costs were incurred. In addition, indirect costs were reported, including those related to present and future paid employment and a carer's own health and well-being. 
This finding is of particular concern given evidence which suggests that informal caregiving is often associated with poor physical and mental health, which may be exacerbated by the caring role. 26 A range of costs was also identified within the sociocultural domain. Costs were experienced differently across the sample, with participants with limited financial means disproportionately impacted. Low-income Māori families accrued debt at a time when cultural and familial imperatives to care were at their greatest. For Māori participants, caring took place against a backdrop of economic and social disadvantage associated with the context of colonialism and disenfranchisement. 27 The financial costs of family caregiving can therefore be seen as exacerbating pre-existing social inequities. The structural level can also be linked to the critique of neoliberal welfare and support systems where the rhetoric that 'family care is best' is underpinned by a less overt assertion that 'family care is cheapest' (to the state). 28,29 This echoes the use of the four dimensions in other narrative research investigating how individual narratives are inevitably intertwined with collective effects. 30 This study was an initial exploratory project in New Zealand's largest city, therefore findings may not be applicable to other contexts or settings. However, our study presents the first data of its kind and underlines the importance of further research to provide quantifiable data on the nature and extent of family caregiving costs. Conclusion The multi-level analysis of costs offers opportunities for multi-level actions to mitigate costs from personal and family financial planning to better cultural awareness around customary practices and associated costs. The presence of statutory support for those caring for people in a palliative care context was valued; the absence of information and transparent eligibility and access to such support was not. While our findings show that financial costs were significant, there was also evidence of resourceful financial problem-solving and a strong commitment to care. However, this commitment to care cannot be realised without further research which quantifies the nature and extent of costs, the development of innovative solutions for supporting family members caring for those at the end of life and policy commitment to supporting those family caregivers who are key to palliative care provision.
v3-fos-license
2019-04-16T13:27:39.516Z
2018-03-05T00:00:00.000
115362531
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://vestnik.susu.ru/ctcr/article/download/7199/6008", "pdf_hash": "f8e62a71550ae275e0e388e9eb70e1b5dbdfe759", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44039", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "671626d4d894020ac3253c59808f79b85cda64a2", "year": 2018 }
pes2o/s2orc
THE SYSTEM OF AUTOMATIC DETECTION OF PENETRATION THROUGH THE PROTECTED PERIMETER BASED ON FIBER OPTIC SENSORS AND NEURAL NETWORK

Introduction
At present, due to the growth of the scale of criminal and terrorist threats, measures are being taken to strengthen the protection of important and especially important sites. These measures are ultimately aimed at tightening the requirements for perimeter security systems, which are designed to ensure the security of the protected facility. Such systems are, as a rule, located along the perimeter of the protected facility and provide the "early" alarm signal generation necessary for the timely and effective response of the security forces to an intrusion [1].

Description of modules
The installation scheme for a perimeter security alarm based on optical fiber is shown in Fig. 1.
Fig. 1. Scheme of perimeter security alarm installation
An optical fiber attached to the mesh fence monitors cutting through the mesh, while a parallel optical fiber line along the top of the fence monitors climbing over it. Consider an algorithm for detecting an intruder, which includes the registration of data and their processing, with generation of an alarm in the event of an intruder entering the protected area. The signal from the sensor goes to the signal processing unit. Processing of the original signal in the general case consists of several stages. A typical scheme of a signal processing unit using a neural network analyzer is shown in Fig. 2.
Fig. 2. Scheme of the signal processing unit
The scheme, as a rule, includes an adaptive signal filter, a digital signal processor (DSP), and a neural network analyzer acting as the decision block (intrusion and its type).

Seismic signals and vibrational processes can be described by a universal model: the process is a combination of narrowband components additively mixed with broadband noise. The parameters of the components completely characterize the process. To isolate the narrowband signal components, the first stage uses the method of extreme filtering. It includes the extraction of signal extrema, the division into alternating components by an extreme filter, the calculation of the parameters of these components (for example, the mean frequency and dispersion in a sliding window), and the application of the procedure to the residues formed when the next alternating component is removed. The components and/or their parameters allow us to characterize the process, obtain estimates of spectral characteristics, isolate free and forced oscillations, form diagnostic features, and substantially simplify the parametric analysis and reduce its laboriousness by applying it not directly to the signal but to the selected components.
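The extreme-filtering idea can be illustrated with a rough numerical sketch. The Python fragment below is only an approximation of the procedure outlined above: the midpoint-based smoothing step, the number of peeled components, the amplitude estimate, and the frequency estimate from extrema spacing are assumptions made for illustration and are not the relations used in the paper.

```python
import numpy as np

def local_extrema(x):
    """Indices of local maxima and minima (plus the endpoints)."""
    d = np.diff(x)
    idx = np.where(np.sign(d[:-1]) * np.sign(d[1:]) < 0)[0] + 1
    return np.concatenate(([0], idx, [len(x) - 1]))

def extreme_filter_step(x):
    """One pass: split x into a smoothed residue and an alternating component.
    The residue is built by interpolating through midpoints of adjacent extrema."""
    idx = local_extrema(x)
    mids = 0.5 * (x[idx][:-1] + x[idx][1:])
    mid_pos = 0.5 * (idx[:-1] + idx[1:])
    smooth = np.interp(np.arange(len(x)), mid_pos, mids)
    return smooth, x - smooth

def decompose(x, n_components=3, fs=1000.0):
    """Iteratively peel off alternating components and estimate their parameters."""
    comps, params, residue = [], [], np.asarray(x, dtype=float)
    for _ in range(n_components):
        residue, alt = extreme_filter_step(residue)
        n_ext = len(local_extrema(alt))
        freq = fs * (n_ext - 1) / (2.0 * len(alt))   # each extrema pair spans ~half a period
        amp = np.sqrt(2.0) * alt.std()               # rough peak amplitude for a sinusoid-like component
        comps.append(alt)
        params.append({"freq_hz": freq, "amplitude": amp})
    return comps, params, residue
```

On a recorded sensor trace, decompose(signal, fs=...) would return per-component frequency and amplitude estimates of the kind that are tracked over time in the sliding-window analysis discussed below.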
Given the time constraints on extracting the informative components and making a decision, preference is given to the faster-acting method of extreme filtering. The signal is decomposed in a manner that corresponds to passing the data through a digital low-pass filter. The first, high-frequency component is determined from relation (1); it can be extracted directly from the signal extrema using relation (3). Transformations of the form (1) and (3) are then repeated on the residual component. The parameters of all p components (amplitudes A i and frequencies) are then calculated, which allows primary diagnostic features to be formed.

Fig. 3 shows the signal at the output of the vibration sensor, and Fig. 4 shows the extracted alternating components represented by their extrema for one of the analyzed areas. In the transition from seismic noise to the signal generated during an intrusion into the zone of responsibility, the frequency of the components and their amplitude (and, correspondingly, the power) vary significantly. This is illustrated in Fig. 5, where the upper graph shows the signal, and the second and third graphs show the frequencies and amplitudes A i of the selected components in a sliding window tied to the beginning of the analysis interval. Here i = 1…p, and p is the number of extracted components. It can be seen that, at detection, there is a decrease in frequency (high-frequency noise is masked by a more powerful signal) and an increase in amplitude.

It is known that the signal extrema carry information about the highest-frequency narrowband component. If this part is removed (filtered out) from the signal, a smoothed curve is obtained whose extrema carry information about the next narrowband component. The procedure can be repeated until a sequence with alternating extrema is obtained; this is the lowest-frequency narrowband component. Thus, an adaptive filtering algorithm is possible. To separate the signals created by an intruder from noise and interference, the third and final part of the processing in the signal processing unit performs data analysis based on the principle of the neural network. The use of a neural network provides high reliability of detection at a low level of false positives.

Training of the neural network
For the neural network to work, it must first be trained. The learning algorithm compares the output of the last layer of neurons with the training sample and, from the difference between the desired and actual outputs, determines how the weights feeding the last layer from the previous one should change. A similar operation is then performed for the neurons of the penultimate layer. As a result, a table of connection-weight corrections is built through the network from output to input. Training of the system thus reduces to running this weight-selection algorithm, which operates without the direct participation of the operator. Training involves recording the initial signals from the sensors installed on the perimeter. The training of the security system is performed as part of the overall configuration of the system, by adding to the database images of signals produced by noise factors and by the characteristic responses of a particular fence. In Fig. 6, for example, training is performed using a radial-basis network with zero error.
The first graph is the desired network output (detection); the second graph is the amplitude of the signal in the vibration protection system; the third graph shows the recorded violation of the perimeter of the protected object. This network was trained on the "mesh web" signal. Testing on another kind of impact ("climbing through the fence") showed correct operation of the detector. To create, train, and test the network, the Anfisedit editor of the MATLAB environment was used. Network structure: four inputs, one output, five membership functions per input, of the psigmf type. The inputs of the network are the parameters of the high-frequency component: the mean, minimum, and maximum frequencies, and the normalized amplitude range in the 3-second observation interval. For the data "car driving, group run, car driving", Fig. 6 shows the detection of transport; the output of the network is -1.

Conclusion
In security alarm systems, a neural network is a computing system whose problem-solving algorithm is represented as a network of threshold elements with dynamically tunable coefficients, with tuning algorithms that do not depend on the size of the network of threshold elements or on their input space. Introducing neural network structures into the algorithms of the signal processing unit makes it possible to move toward security systems with artificial intelligence and to increase the noise immunity of the perimeter security system as a whole. Both the mean time between false alarms and the probability of detection, with subsequent classification of the type of intruder, increase. A security system with artificial intelligence performs detection and recognition automatically, taking into account all the characteristics of the original signal during analysis. The processing is much faster and gives a more reliable result. The use of intelligent perimeter security systems does not require operator intervention to analyze alarms and decide whether an event is a real intrusion or a false alarm; the system itself decides whether a given signal represents a real alarm or interference. Forming a system of features, namely the parameters of the alternating components extracted from the observed signal by the extreme filter, makes it possible to solve the detection and classification problem with the help of neural networks.
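As a rough illustration of the detector interface described above (four inputs: mean, minimum and maximum component frequencies plus the normalized amplitude range over a 3-second window, and a single output near +1 or -1), the sketch below assembles that feature vector and trains a generic RBF-kernel classifier as a stand-in for the ANFIS/radial-basis network used in the paper. All numeric feature values and labels are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVC  # generic RBF-kernel stand-in for the ANFIS/radial-basis detector

def window_features(freqs, amps):
    """Four detector inputs over one 3-second observation window:
    mean, min and max component frequency plus the normalized amplitude range."""
    f = np.asarray(freqs, dtype=float)
    a = np.asarray(amps, dtype=float)
    amp_range = (a.max() - a.min()) / (np.abs(a).max() + 1e-12)
    return np.array([f.mean(), f.min(), f.max(), amp_range])

# Hypothetical training examples: label +1 = intrusion, -1 = noise/transport (values invented)
X_train = np.array([
    [45.0, 20.0, 80.0, 0.90],   # cutting the mesh
    [30.0, 15.0, 60.0, 0.80],   # climbing over the fence
    [90.0, 60.0, 140.0, 0.30],  # passing vehicle
    [85.0, 55.0, 130.0, 0.20],  # background seismic noise
])
y_train = np.array([1, 1, -1, -1])
detector = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

# New 3-second window: component frequency/amplitude tracks from the extreme filter
x_new = window_features(freqs=[35.0, 42.0, 18.0, 77.0, 40.0], amps=[0.2, 1.1, 0.4, 1.3, 0.9])
print(detector.predict([x_new]))  # should group with the intrusion-like examples (+1)
```

In practice the training set would be built from recorded fence responses and background events, as described in the training section above.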
v3-fos-license
2018-01-12T05:29:25.550Z
2017-12-29T00:00:00.000
38130592
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://opensportssciencesjournal.com/VOLUME/10/PAGE/263/PDF/", "pdf_hash": "f346736c4d37bc95d310ef57b1237e35097ad3c5", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44040", "s2fieldsofstudy": [ "Biology" ], "sha1": "f346736c4d37bc95d310ef57b1237e35097ad3c5", "year": 2017 }
pes2o/s2orc
Relationship Between Knee Extensors Power Output and Vastus Lateralis EMG Activation in Elderly Women : Influence of Mother Wavelet Selection RESEARCH ARTICLE Relationship Between Knee Extensors Power Output and Vastus Lateralis EMG Activation in Elderly Women: Influence of Mother Wavelet Selection João Pedro Pinho, Bruno Mezêncio, Desidério Cano Porras, Julio Cerca Serrão and Alberto Carlos Amadio Laboratory of Biomechanics, University of São Paulo, São Paulo, Brazil Department of Physical Therapy, School of Medicine, University of São Paulo, São Paulo, Brazil INTRODUCTION Muscle strength decrease as a sarcopenia consequence is strongly related with a prolonged healthcare assistance need and a reduced quality of life [1].This physiologic section area reduction is primarily due to the diminishing muscle fibers number as well as to their size, particularly the fast twitch muscle fibers [2].These changes seem to have severe qualitative consequences in this tissue functional performance.Greenlund and Nair [3] showed that between the third and the eighth decades the knee extensors' ability to generate peak torque suffers a significant decline indicating an inverse relationship between aging and muscle function.This particular characteristic has been related to decreased mobility [4] as well as identified as a risk factor for falls [5]. Indeed, all of these quantitative and qualitative changes in a senescent musculoskeletal system have even more profound functional outcomes that lead to a condition in which the daily tasks once effortless performed now seem to be challenging ones [6].Being the majority of these tasks of short duration and, because of that, intimately related with the capacity to produce force as quickly as possible it would be expected that muscle power plays a key role in older adult's functional capacity.In fact, Brunner et al. [2] refer that among the main qualitative muscle changes, muscle power loss is the one that early occurs suggesting it as the most striking muscle alteration in the senescent locomotor system.The authors suggest that this gradual inability to rapidly produce muscle force appears to be a more markedly characteristic in women as they have a priori less muscle mass.The functional capacity diminution multifactorial origin makes the ideal exercise protocol to increase this ability to perform everyday tasks unclear.Tschopp et al. [7] pinpoint muscle power as the main independence determinant in the elderly.The authors state that when compared with muscle maximum strength, this parameter presents a premature and abrupt decline making power training not only promising but unavoidable in a senescent life.They define this method as a moderate strength training in which the concentric phase of the exercises are executed as quickly as possible and the eccentric phase in a moderate velocity.When compared with strength training, power training has been found to be superior [7 -9].Indeed, this method allows enhancing older adult's functional capacity, as well as the conventional one, with significantly less training loads, reducing the training perceived effort. Trying to determine the training load that allows producing the highest power output Zbinden-Foncea et al. [10] found a U-shaped trend with maximal power for loads of approximately 60% of the maximal effort (1RM), both for upper and lower limbs in elderly women.However, Signorile et al. 
[11] stated that optimal loads are affected by the architecture of the joints being trained.They showed that joints associated with longer bones are prone to higher training speed than those associated with shorter ones.Nevertheless, until now there is no consensus regarding the optimal load condition to produce maximal peak power and the neuromuscular factors affecting knee extensors power production in older adults seems to have not been fully clarified yet. Merletti et al. [12] indicated that the selective atrophy of type II muscle fibers diminish the contractile properties lowering motor unit firing rates of the older muscle.The authors suggested that these are determinant factors of the maximal voluntary contraction torque as well as of the myoelectric manifestations of muscle fatigue.Indeed, when compared to older people, young people seem to have a greater rate of decrease of the power spectrum parameters in fatigue-type conditions [12,13].This supports the theory of elderly muscles shifting toward slow-twitch muscle fibers [14].Besides determining muscle fatigue, frequency spectrum analysis of the EMG signal has been related, with some controversy [15,16], to muscle fiber type [17]. Although Fast Fourier Transformation (FFT) is the most frequently used method to assess EMG frequency spectrum it is limited to a global frequency analysis, with no time resolution [18].A time-frequency resolution of the EMG signal could provide greater insight into the relationship between muscle power output and muscle frequency rate.Wavelet Transform is a powerful technique that allows extracting EMG frequency features since is consistent with the nonstationary nature of the EMG signal, providing an alternative to the classical Short-Time Fourier Transform (STFT).While the latter strategy uses a single analysis window, the Wavelet Transform uses short windows at high frequencies and long windows at lower frequencies which allows to shift as well as distend and contract (scaling) a prototype function (mother-wavelet) without losing time resolution [19].The selection of the proper mother wavelet applied to the biological signal to be analyzed imposes another challenge in order to understand the relationships between muscle frequency and mechanical information such as power output.Therefore, the main objective of this study was to compare frequency parameters produced by six mother wavelets pinpointing the most feasible to investigate quadriceps femoris EMG parameters while producing knee extension power.A second goal was to analyze the effect of load magnitude in the selection of the optimal mother wavelet and in knee extensors power output. MATERIALS AND METHODS Took part of the study 13 sedentary elderly women (69.3 ± 4.1 years) with similar body mass index (26.1 ± 2.5 kg/m 2 ).They had not started any physical activity program for at least 2 years and reported absence of cardiac and musculoskeletal problems as well as arterial hypertension.The participants were informed of all the operational procedures giving their Written Consent informing that their involvement in the study was voluntary.The local Ethics Committee approved all procedures (protocol number 2010/16).Baecke questionnaire [20] modified and validated for older adults [21] was used to assess their physical activity level. 
Instruments: A customized knee extensors machine was used for the experiment which allowed placing an electrode in the participants' hamstrings without contacting the seat.A digital video camera (Casio EX-ZR10) with a sample rate of 240Hz and a shutter speed of 1/2000s recorded the trials.Reflexive markers (20 millimeters diameter) were attached to the participants' lateral malleolus of the ankle and lateral condyle of the knee.Such markers allowed assessing knee angle in relation to the machine's initial position.These data were used to calculate knee concentric angular velocity and acceleration.An external trigger allowed synchronize the kinematic and electromyographic data.Lynx-EMG System 1000 (Lynx Electronic Technology, LTDA.) was used to acquire the EMG data from vastus lateralis and biceps femoris muscles.Vastus lateralis electrode was placed at 2/3 of the line from the anterior superior iliac spine to the lateral side of the patella and biceps femoris electrode was placed at 50% on the line between the ischial tuberosity and the lateral epicondyle of the tibia, according to SENIAM electrode placement guideline [22].Bipolar, pre-gelled Ag-Ag/Cl electrodes with a center electrode distance of 20 mm were used to detect the sEMG signals (Meditrace 200, Kendal).A reference electrode was place on the most prominent part of the patella.An ETHERNET network interface (10 Mbit/s) supported by AqDados 7.02 (Lynx Electronic Technology, LTDA.) was used (common mode rejection ratio >100 dB at 60 Hz, input impedance > 1 GΩ, gain level 1000, 1Hz 1 st order highpass Butterworth and 1kHz 2 nd order low-pass Butterworth).Sampled a 2kHz, a 4 th order Butterworth band-pass (10-400Hz) filter was applied to the electromyographic signals.In order to reduce electric impedance, before placing the electrodes, skin trichotomy, abrasion by fine sandpaper and asepsis was performed. Experimental procedures: With trunk and thigh immovable and knee axis aligned with the leg extensor machine axis, a sub-maximal test to estimate the participants' maximum knee extensors load was conducted.Brzycki equation [23] was used to estimate the maximum load effort (Equation 1).Where 1RM is the estimated maximum load (kilograms), m is the mass lifted in the trial (kilograms) and reps is the number of repetitions that the participant was able to lift correctly, i.e., with full extension of both knees. After calculating the maximum effort and a 10 minutes rest, they executed three trials with 6 repetitions with the concentric phase of the knee extension movement as quickly as possible: 30%, 50% and 70% of 1RM.Between each repetition 30 to 45 seconds rest were given and between sets a 10 minutes rest was given. Data reduction: All mathematical procedures were executed in MatLab v.R2010a (Mathworks, Inc.) SkillSpector 1.3.2(Video4Coach, Inc.) was used to digitize the two markers and an 8Hz low-pass Butterworth filter was applied to smooth the raw spatial coordinates. 
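For reference, the 1RM estimation step (Equation 1) described under Experimental procedures can be written as a one-line function, assuming the commonly cited form of the Brzycki equation; the example load, repetition count, and derived training loads below are invented for illustration.

```python
def brzycki_1rm(mass_kg: float, reps: int) -> float:
    """Brzycki estimate of the one-repetition maximum:
    1RM = m / (1.0278 - 0.0278 * reps), typically used for roughly 1-10 repetitions."""
    return mass_kg / (1.0278 - 0.0278 * reps)

# Example: a participant lifts 30 kg for 6 clean repetitions (full knee extension each time)
one_rm = brzycki_1rm(30.0, 6)
loads = {pct: round(pct * one_rm, 1) for pct in (0.30, 0.50, 0.70)}  # 30%, 50%, 70% of 1RM
print(round(one_rm, 1), loads)
```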
Knee extensors power: Kinematic data obtained by video analysis, an anthropometric model and Newtonian mechanics were used to calculate knee extensors' power.Being every other body parts fixed while performing the trials and maintaining knee axis aligned with leg extension machine axis, allowed to assume shank angular acceleration a knee extensors' torque component.Therefore, an equality condition between knee extension machine's torque and net knee muscle torque was assumed.Dempster's anthropometric model [24] was used to determine the information about the mass, center of mass location and moment of inertia, taking into account anthropometric differences between participants.The reader is guided elsewhere [25] for further information about these power output calculations. Electromyographic data: Biceps femoris and vastus lateralis Root Mean Square (RMS) until knee extensors peak power was calculated.These values were normalized by the entire signal mean.In order to obtain the EMG timefrequency curves for the agonist muscle (vastus lateralis), a continuous wavelet transform was applied using six different mother wavelets from four families: Morlet, Daubechie (4 th , 8 th and 44 th order), Coiflet (4 th order) and Symlet (5 th order).These mother wavelets were selected because of their similarity to biological signals [26].From the continuous wavelet transform the Median Frequency (MF) 200 milliseconds prior to the knee extensors peak power (MF200) was calculated.A cross correlation, normalized by the maximal autocorrelation at zero lag, was calculated in order to evaluate the similarity and the shift between the knee extensors peak power and frequency curves.The selected parameters for analysis was the maximal cross correlation value, the lag at which this value was estimated, and the correlation value at zero lag. Statistical procedures: All statistical procedures were executed in SigmaPlot 11.0 (Systat Software, Inc.).Kolmogorov-Smirnov was used to test adherence to a normal curve, the Levene test to ensure the equality of variances and Mauchly to test for repeated measures data sphericity.To test biceps femoris and vastus lateralis RMS between different loads a one-way repeated measures analysis of variance (ANOVA RM ) was applied.The same analysis was used to test the differences between power output in the three load conditions.A two-way ANOVA RM was conducted in order to test the differences between mother wavelets and load conditions as well as any interaction between them.A significance level of .05 was assumed (α=.05) in all statistical tests. Fig. (2). 
Analysis of variance main effect (Mother wavelet) results for mother wavelets Morlet (morl), 4 th , 8 th and 44 th order Daubechie (db4, db8 and db44), 4 th order Coiflet (coif4) and 5 th order Symlet (sym5).1A: median frequency, expressed in hertz (Hz), 200 milliseconds before the peak power.Values of the mother wavelets with delta (δ) are significantly higher (p<.05) than db8, sym5 and db44; and are not significantly different from each other.Values of the mother wavelets with lambda (λ) are significantly higher than db8 and sym5; and are not significantly different from each other.1B: Maximal correlation, expressed in percentage (%), of the cross correlation analysis.Value of the mother wavelet with theta (θ) is significantly higher (p<.05) than the values of the mother wavelets with omega (ω), with no significant differences between the values with the same symbol.1C: Correlation without lag shift, expressed in percentage (%), of the cross correlation analysis.1D: Lag time (milliseconds) between power output and frequency curves peaks. DISCUSSION The present study's objective was to compare different mother wavelets to analyze EMG data of a power task.The main findings were: the 44 th order Daubechie mother wavelet presented the highest similarity between EMG frequency and power output; there is no difference between all mother wavelets to estimate the electromechanical delay by the cross-correlation lag; and the task load did not affect the mother wavelets' output regarding the similarity with the power output. EMG frequency parameters were obtained by six mother wavelets in order to compare its outputs and try to pinpoint the most feasible.Significant differences were seen in both main effects, but not in the interaction.This suggests that mother wavelet selection should not be related with load condition, even though there are influence of load in EMG signal [27] and differences in mother wavelets outputs [26], as showed in our results. In mother wavelet main effect, median frequency in the interval of 200 millisecond before peak power (MF200) and maximal cross correlation showed significant differences.Correlation at zero lag, as expected, did not showed significant differences between mother wavelets.The electromechanical delay shifts the power output curve from the EMG curve [28], losing their similarity independently of the selected mother wavelet.While the correlation at zero lag suffer a negative influence of the electromechanical delay, the lag at the maximal cross correlation value has been suggested as a good approach to identify the electromechanical delay magnitude [29].There are no differences between mother wavelets for this parameter, meaning that the estimated electromechanical delay is independent of a particular mother wavelet function. 44 th order Daubechie mother wavelet presented the highest maximal cross correlation value which means it shows the highest similarity between EMG frequency and power output curves.An optimal similarity between these two signals allows better understanding of the biological phenomena assessed.Therefore, to assess EMG time-frequency parameters in highly demand tasks in elderly mother wavelet 44 th order Daubechie should be chosen. 
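A minimal numpy sketch of two of the data-reduction steps described earlier: the median-frequency series of a scalogram (the continuous wavelet transform itself is assumed to come from whatever implementation supports the chosen mother wavelets and is not shown) and the cross-correlation between the power and EMG-frequency curves, normalized here by the two zero-lag autocorrelations, which is one reading of the procedure described above. The synthetic curves exist only to exercise the functions.

```python
import numpy as np

def median_frequency(scalogram, freqs_hz):
    """Median frequency at each time sample from a |CWT| scalogram.
    Rows are frequencies (ordered to match freqs_hz, low to high), columns are time."""
    power = np.abs(scalogram) ** 2
    csum = np.cumsum(power, axis=0)
    half = 0.5 * csum[-1]
    idx = np.argmax(csum >= half, axis=0)   # first frequency bin reaching half the total power
    return freqs_hz[idx]

def curve_similarity(power_curve, freq_curve):
    """Cross-correlation of the power and EMG-frequency curves, normalized by the
    zero-lag autocorrelations; returns (max correlation, lag of max, zero-lag correlation)."""
    p = power_curve - power_curve.mean()
    f = freq_curve - freq_curve.mean()
    norm = np.sqrt(np.dot(p, p) * np.dot(f, f))
    xc = np.correlate(p, f, mode="full") / norm
    lags = np.arange(-len(f) + 1, len(p))
    k = np.argmax(xc)
    return xc[k], lags[k], xc[lags == 0][0]

# Example with synthetic curves sampled at 240 Hz (the video frame rate)
fs = 240.0
t = np.arange(0, 1.0, 1.0 / fs)
power = np.exp(-((t - 0.50) ** 2) / 0.01)   # power peak at 0.50 s
mf = np.exp(-((t - 0.44) ** 2) / 0.01)      # frequency peak slightly earlier
r_max, lag, r0 = curve_similarity(power, mf)
print(r_max, lag / fs * 1000.0, r0)         # peak correlation, lag between the peaks in ms, zero-lag correlation
```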
It is interesting to notice that db44 did not show the highest MF200 value, suggesting that the curves' similarity (power output and EMG frequency curves) does not rely on maximal or minimal discrete values.On the contrary, it seems to have a frequency content optimal value that allows obtaining a similarity between the power output and EMG frequency curves.Rafiee et al. [26] tried to find the most similar function for electromyographic, electroencephalographic and vaginal pulse signals among 324 potential mother wavelets.The authors identified db44 as the most similar mother wavelet for these classes of biosignals.The surface EMG data was obtained by 16 electrodes in the forearm of six subjects performing 10 hand movements for five seconds each: forearm pronation, forearm supination, wrist flexion, wrist extension, wrist abduction, wrist adduction, key grip, chuck grip, hand open, and a rest state.It is reasonable to assume these moments are low demanding and therefore activate mainly slow twitch fibers.In this case, their results could not fit high demanding tasks with explosive executions such as those in the present study.Nonetheless, our findings reiterates db44 as the most suitable function for analyzing frequency content in elderly's EMG signal, in this particular task. Regarding the second ANOVA RM 's main effect, higher Maximal Correlation values were achieved with increasing loads.With higher loads the knee extension task was performed slower which allows to increase motor unit recruitment, according to Henneman's size principle [30].The size principle states that smaller motor units are firstly recruited and then bigger ones, which in turn means that slow-twitch muscle fibers are recruited first and then fast-twitch.In the light of such assumption, our results are explained.Maximal Correlation between frequency and power curves was obtained by higher loads because it was the condition that took longer allowing the recruitment of more fast-twitch fibers producing the highest frequency curve peak.On the other hand, taking longer to extend the knees reduces the electromechanical delay, which is consistent with lower lag values.Correlation at zero lag higher values at higher loads seems to be a methodological manifestation related with the diminished lag and augmented maximal correlations in that load condition than a physiological. It was seen that different load conditions did not change MF200.Most likely, this is more related to a methodological limitation than to a physiological phenomenon.One should realize that the median value of the frequency curve was calculated 200 milliseconds before peak power output and the frequency and power curves were shifted (lag) by 144 ms, 127 ms and 108 ms for 30%, 50% and 70% load conditions, respectively.On the other hand, selecting a window wider than 200 ms could cover the total duration of the power tasks, due to the rapid nature of the movement as the ones used in this study. 
This study showed that increasing loads were followed by increasing knee extensors power output in elderly women.However, the agonist EMG amplitude signal did not present a matching trend.EMG magnitude in response to different load conditions is unaltered which in this particular test seems an expected result.Although the neuromuscular demand imposed by different load conditions is undeniably different, the nature of the task (i.e., to extend the knees as quickly as possible) is the same.Thus, it would be expected to see the same muscle activation regardless of load conditions.Pousson et al. [31] even reported lower biceps brachii RMS values in high angular velocities (240°/s) when comparing with lower velocities (60°/s) in older adults, which could be related to the time required for muscle activation.Notwithstanding, the results obtained by Klass et al. [32] are in agreement with those obtained in the present study.They also found no activation differences in ankle dorsiflexors in all tested velocities (5°/s to 100°/s).In the same way, biceps femoris RMS values were found to be alike among all load conditions, which allows applying the same rationale.Even though it is an antagonist muscle, biceps femoris activation on a knee extensor task was expected to be present.In order to increase knee stability both quadriceps and hamstrings muscles are activated [14].This statement is even more accurate when it comes to sedentary people, like the participants in this study.Once again, the same RMS values amongst load conditions can be related to the specific nature of the task.Moreover, the lack of difference allows to assume that the previously cross correlation results were not affected by a co-contraction of the biceps femoris [14]. It seems reasonable to assume that higher load regimes increases activation of fast twitch muscle fibers and should be recognized as an optimal training load.To the best of our knowledge, this is the first study relating knee extensors activation in response to load manipulation in elderly subjects.Klass [33] suggests that the cause for decreased torque development in the elderly muscle is due to slowing of motor units contractile properties.The author refers that elderly adults may achieve tetanic fusion at lower discharge frequencies compared with young adults.Therefore, in order to recruit fast twitch muscle fibers one needs to regulate the training loads in an optimal way, which in this experimental design means with 70% of 1RM.We were unable to compare these results with current literature due to lack information about the topic. CONCLUSION This study constitutes a novel approach on understanding how elderly people develop knee peak power by looking upon the electromyographical signal of the major agonist muscle.Using different functions to obtain EMG timefrequency parameters yields different results and the 44 th order Daubechie mother wavelet was pinpointed as the most suitable.We have also seen that different load conditions do not seems to have an influence on mother wavelet selection.Finally, higher loads yields higher knee extensors power output which does not seems to be followed by an increased vastus lateralis median frequency.Nevertheless, it seems to be an optimal training load to elderly women. ETHICS APPROVAL AND CONSENT TO PARTICIPATE Ethical Approval was given by the University of Sao Paulo Ethics Committee (protocol number -2010/16). 
HUMAN AND ANIMAL RIGHTS No animals were used for this study.All humans research procedures performed in the current study were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. CONSENT FOR PUBLICATION All participants gave their written consent to participate in the study. CONFLICT OF INTEREST The author confirms that this article content has no conflict of interest. Fig. ( 1 ) Fig. (1).Graphical representation of the data analysis.The surface graphic shows the continuous wavelet transform of the vastus lateralis EMG data and the planar solid red line the knee extensors power output.AU = arbitrary units.
v3-fos-license
2023-10-27T20:36:35.389Z
2023-10-27T00:00:00.000
264514902
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1128/spectrum.00541-23", "pdf_hash": "12b8390c977a8645650b23f87774ca61e9289e96", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44041", "s2fieldsofstudy": [ "Medicine", "Chemistry" ], "sha1": "98107f179bf7f67706c65a6ba015a9cb6d4ac6ae", "year": 2023 }
pes2o/s2orc
Screening the NCI diversity set V for anti-MRSA activity: cefoxitin synergy and LC-MS/MS confirmation of folate/thymidine biosynthesis inhibition ABSTRACT New antibacterial agents and agent combinations are urgently needed to combat antimicrobial resistance. A multidimensional chemical library screening strategy was used to identify compounds in the National Cancer Institute (NCI) diversity set V library (1,593 compounds) with anti-methicillin-resistant Staphylococcus aureus (MRSA) activity. In this effort, library compounds were screened for anti-MRSA activity in both their original [un-metabolized (UM)] and human liver microsome-metabolized [post-metabolized (PM)] forms and in the absence and presence of sub-minimum inhibitory concentration (MIC) levels of cefoxitin. This strategy allows for the identification of intrinsically active agents, agents with active metabolites, and agents that can act synergistically with cefoxitin. Sixteen UM compounds with MICs ≤ 12.5 µM were identified. No agents with substantially enhanced activity after microsomal metabolism were found. Several agents showed significant apparent synergy with cefoxitin, and checkerboard assays were used to confirm synergy for four of these (celastrol, porfiromycin, 4-quinazolinediamine, and teniposide). A follow-up comparative screen in the absence and presence of 4-µM thymidine was used to identify three agents as likely folate/thymidine biosynthesis inhibitors. A liquid chromatography–mass spectrometry (LC-MS/MS) assay for deoxythymidine triphosphate (dTTP) was used to confirm these three as suppressing dTTP biosynthesis in MRSA. Bactericidal vs bacteriostatic activity was also evaluated. This study further demonstrates the utility of comparative library screening to identify novel bioactive agents with interesting synergies and biological activities. The identification of several folate/thymidine biosynthesis inhibitors from this small screen indicates that this pathway is a viable target for new drug discovery efforts. IMPORTANCE New antibacterial agents are urgently needed to counter increasingly resistant bacteria. One approach to this problem is library screening for new antibacterial agents. Library screening efforts can be improved by increasing the information content of the screening effort. In this study, we screened the National Cancer Institute diversity set V against methicillin-resistant Staphylococcus aureus (MRSA) with several enhancements. One of these is to screen the library before and after microsomal metabolism as means to identify potential active metabolites. A second enhancement is to screen the library in the absence and presence of sub-minimum inhibitory concentration levels of another antibiotic, such as cefoxitin in this study. This identified four agents with synergistic activity with cefoxitin out of 16 agents with good MRSA activity alone. Finally, active agents from this effort were counter-screened in the presence of thymidine, which quickly identified three folate/thymidine biosynthesis inhibitors, and also screened for bactericidal vs bacteriostatic activity. 
KEYWORDS library screening, Staphylococcus aureus, microsome, P450, metabolism, drug discovery, chemical diversity, synergy, antibiotic drug resistance, MRSA, LC-MS/MS, metabolomics

Antimicrobial resistance (AMR) in pathogenic bacteria is a major public health threat (1)(2)(3)(4). Methicillin-resistant Staphylococcus aureus (MRSA) causes both nosocomial and community-acquired infections (5, 6). It is resistant to most β-lactam antibiotics including methicillin, oxacillin, amoxicillin, and cefoxitin and to many other antibiotic classes and agents (7). Chemical library screening is a popular drug discovery approach where hundreds to many thousands of compounds are screened in a high-throughput fashion to identify novel pharmacological and biological activities (8). Given that the emergence of resistance to single agents has so far proven inevitable, methods to reverse or prevent the emergence of resistance, such as the development of antibacterial agent combinations, seem essential (9)(10)(11).

In a prior study, a dimensionally enhanced library screening approach was demonstrated for screening a Food and Drug Administration (FDA)-approved drug library against MRSA (12). This approach adds dimensions (human liver microsome-metabolized library compounds and ±cefoxitin screening) to a standard library screen to provide valuable additional information while also providing a degree of screening redundancy. In this study, a variation of this approach was applied to MRSA using a non-FDA-approved library to assess its ability to identify interesting lead compounds in a general chemical (non-FDA) library screen. The National Cancer Institute (NCI) diversity set V library was used for this effort, which consists of 1,593 compounds selected to cover a wide range of chemical and pharmacophore space. This effort identified agents with good intrinsic anti-MRSA activity and agents with synergistic activity with cefoxitin. No agents with active metabolites were identified in this screen.

A key bottleneck in whole-cell screening for antibacterial activity is the determination of the mechanisms of action (MOAs) of newly identified agents (13, 14). Two of the compounds identified in this screen had obvious similarity to trimethoprim (a diaminopyrimidine), a folate reductase inhibitor. Added thymidine, a key metabolite dependent on folate biosynthesis, is known to rescue S.
aureus from folate/thymidine biosynthe sis inhibition, including from both sulfamethoxazole and trimethoprim (15).A ±thymi dine follow-up screen was therefore implemented, which identified three prospective folate/thymidine biosynthesis inhibitors-two obvious diaminopyrimidine-containing candidates plus a fluorinated pyrimidine compound similar to 5fluorouracil.To provide further confirmation, their effect on bacterial deoxythymidine triphosphate (dTTP) pool levels was determined by liquid chromatography-mass spectrometry (LC-MS/MS) analysis.Compounds were also evaluated for bactericidal vs bacteriostatic activity, and spectrum of activity data against a panel of MRSA strains was used to identify agents with general activity against MRSA. Library screening and hit MIC determinations Library screening was performed at 200 µM [nominal concentration for the post-metab olized (PM) library screen] as described in detail previously (12,16).Following library screening, a pooled hit list was made [i.e., any compound that gave a hit (was active in suppressing bacterial growth) in any of the four un-metabolized (UM)/PM vs ±Cef screens was added to the list] for follow-up minimum inhibitory concentration (MIC) determinations.MICs for all the compounds in this pooled hit list were then determined by serial dilution in steps of two starting at 100 µM under all four screening conditions (UM − Cef, UM + Cef, PM − Cef, and PM + Cef ) to give a final table of MICs.The results from these MIC determinations for minimum MICs of ≤12.5 µM are summarized in Table 1 and for all screening hits in Table S1.All inactive screened compounds are listed in Table S2.Celastrol is also included in Table 1 even though it had relatively weak activity since it showed significant apparent synergy with cefoxitin as discussed further below. Several of the identified agents (Table 1) are previously known antibacterial agents.Clorobiocin is an aminocoumarin DNA gyrase inhibitor similar to novobiocin (17,18).Ethyl violet is a homolog of crystal (methyl) violet that is a well-known antibacterial agent of unknown mechanism (19).Hitachimycin (stubomycin) is a generally cytotoxic agent with Gram-positive antibacterial activity isolated from streptomyces cultures (20,21).Streptovaricin C is a known antibiotic that inhibits mRNA polymerase (22).Several other agents on this list have been identified as having anti-MRSA activity in publicly available library screening databases [ChEMBL CHEMBL4296184 (23) and PubChem assay identifiers (AIDs) 1259311 and 1409573]. 
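A schematic of the two-fold dilution MIC readout used for the follow-up titrations described above; the starting concentration follows the text, while the growth flags and the boolean readout are simplifications (real wells are scored from turbidity or optical density), so this is a sketch rather than the authors' analysis code.

```python
def dilution_series(start_um=100.0, steps=8):
    """Two-fold serial dilution starting at 100 uM, as used for the follow-up MIC titrations."""
    return [start_um / 2.0 ** i for i in range(steps)]

def mic_from_growth(concentrations_um, growth_flags):
    """MIC = lowest tested concentration at which no growth was observed.
    growth_flags[i] is True if the culture grew at concentrations_um[i]."""
    inhibited = [c for c, grew in zip(concentrations_um, growth_flags) if not grew]
    return min(inhibited) if inhibited else float("nan")  # nan -> no inhibition in the tested range

# Hypothetical plate readout for one compound
conc = dilution_series()                        # 100, 50, 25, 12.5, 6.25, ...
grew = [False, False, False, False, True, True, True, True]
print(mic_from_growth(conc, grew))              # -> 12.5 (uM)
```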
Comparative MIC analyses to identify agents synergistic with cefoxitin Comparisons between MIC values are included in Table 1 to highlight the effect of added cefoxitin on compound MICs and the effect of microsomal metabolism on MICs.The L2 (±Cef ) values represent simple comparisons between UM compound MICs in the absence and presence of cefoxitin: This represents the log 2 -fold change for the UM − Cef/UM + Cef MIC ratio.An L2 for UM − Cef vs PM − Cef can be defined similarly (L2 (UM/PM) ), which reflects the change in between the UM − Cef and PM − Cef MIC ratio.The AL2 values represent the average effect of added cefoxitin on UM and PM MIC values or of compound metabolism on both −Cef and +Cef values, as presented previously (12) and as defined in the footnote of Table 1.Parameter values ≥ 2 (fourfold changes, highlighted in Table 1) indicate significantly increased potency (lower MIC), and values ≤−2 (highlighted in Table 1) Checkerboard analyses Five compounds in Table 1 showed apparent significant synergy (L2 (±Cef ) ≥ 2).Follow-up checkerboard assays were performed for all these except NSC654260, which was not available in sufficient amounts for this analysis.This confirmed the synergy of cefoxitin with all four of the tested L2 (±Cef ) ≥ 2 compounds (Fig. 1), ranging from relatively strong synergy (∑FIC min = 0.19) for celastrol to relatively weak synergy (∑FIC min = 0.5) for teniposide.There does not appear to be a common mechanistic relationship between these four synergistic-with-cefoxitin compounds. Identification and confirmation of folate/thymidine biosynthesis inhibitors Two of the compounds in Table 1 had the diaminopyrimidine pharmacophore associated with folate reductase inhibitors such as trimethoprim [4-quinazolinediamine (4-QDA) and NSC309401, Fig. 2].Folate is required for the synthesis of thymidine, and the addition of thymidine can be used to reverse the action of folate/thymidine biosynthesis inhibi tors (15).It was therefore expected that redetermining the MICs of the compounds in Table 1 in the absence and presence of 4-µM (1-µg mL −1 ) thymidine (±Thy) could be used to identify folate/thymidine biosynthesis inhibitors within this group (Table 2).This identified three compounds with significantly increased MICs in the presence of thymidine (L2 (±Thy) ≥ 2): the two diaminopyrimidine compounds (4-QDA and NSC309401) as well as the fluorine substituted pyrimidine analog NSC367428 (Table 2). To further confirm these as thymidine biosynthesis inhibitors, an ion-pairing LC-MS/MS method was developed for dTTP, with ATP as a control nucleotide (Table 3) similar to the method developed for the UDP-linked intermediates in the bacterial peptidogly can biosynthesis pathway (24).This method was used to determine the level of dTTP after MRSA exposure to the putative folate/thymidine biosynthesis inhibitors, with trimethoprim included as a positive control and gemcitabine (12) included as a negative control.These LC-MS/MS results (Fig. 3) clearly demonstrate substantial dTTP level suppression for NSC309401, NSC367428, and 4-QDA.The two diaminopyrimidinecontaining agents (4-QDA and NSC309401) are likely folate reductase inhibitors.The mechanism of thymidine biosynthesis inhibition by the fluoropyrimidine NSC367428 is unknown, but it is structurally similar to 5fluorouracil (Fig. 
This ±thymidine approach to the quick identification of folate/thymidine biosynthesis inhibitors is a simple extension of the general synergy screening approach used in this and several prior studies. Since folate biosynthesis is an essential bacterial biochemical pathway, this approach could be expanded for the large-scale identification of novel agents targeting this essential and druggable pathway.

Toxicity data

Most of the identified compounds have been previously screened for cytotoxicity, and PubChem NSC identifiers, compound identifiers (CIDs), and cytotoxicity (bioassay) AIDs and screening results are also included in Table 2.

Spectrum of activity

To further assess the potential of this group of NCI compounds as anti-MRSA and antibacterial agents, the spectrum of activity was determined against several MRSA strains, one strain of vancomycin-resistant Enterococcus (VRE) faecium, one strain of VRE faecalis, and one strain of Escherichia coli (Table 4). Only NSC367428, the fluoropyrimidine derivative (Fig. 2), demonstrated appreciable activity against E. coli. This is in contrast to the structurally similar 5-fluorouracil, which did not show activity against E. coli (12). Clorobiocin showed the best MRSA spectrum of activity, followed by 4-QDA, bactobolin, streptovaricin C, ethyl violet, NSC367428, and hitachimycin, based on average MRSA MIC. Naphtanilide LB and, to a lesser degree, clorobiocin were unusual in their selectivity for certain MRSA strains.

Conclusions

A library screening effort was performed with the NCI diversity set V against MRSA to identify both novel antibacterial metabolites and agents synergistic with cefoxitin. In contrast to a prior similar screen of an FDA-approved drug library against MRSA (12), human microsome metabolism of the NCI library did not result in the identification of any new active metabolites. However, similar to this prior FDA screen, screening the NCI library in the absence and presence of cefoxitin allowed for the identification of several synergistic combinations with cefoxitin: celastrol, porfiromycin, 4-QDA, and teniposide. Two of these synergistic agents, celastrol and porfiromycin, are DNA-damaging agents (25, 26), teniposide is a DNA gyrase inhibitor (27), and 4-QDA is a folate/thymidine biosynthesis inhibitor, as demonstrated in this study. There does not seem to be an obvious common mechanistic basis for the synergy of these four agents with cefoxitin.
A ±thymidine counter screen identified three compounds (4-QDA, NSC367428, and NSC309401) as folate/thymidine biosynthesis inhibitors, and these were confirmed as able to suppress dTTP biosynthesis in MRSA by LC-MS/MS. 4-QDA may provide a lead for further folate biosynthesis inhibitors. Of the top-ranked hits, all appear bactericidal except 4-QDA, bactobolin, and naphtanilide LB. Several other agents identified in this screen, including naphtanilide LB, NSC207895, NSC204262, NSC654260, and NSC53275, have unknown but potentially interesting mechanisms. Of these, only NSC207895 has been identified as cytotoxic to human cells in prior screening efforts (Table 2). Further mechanistic studies of these agents may provide new targets for focused drug discovery and refinement efforts.

FIG 1 Checkerboard assays against MRSA (ATCC 43300). The dashed line in the isobolograms is the no-interaction (additive MICs) curve. MICs for other agents alone are given in Table 1.

General

The NCI diversity set V library of 1,593 compounds was from the Division of Cancer Treatment and Diagnosis (DCTD) of the NCI. All other materials were as described previously (12).

Library replication, addition of metabolism, and antibacterial control compounds

The NCI diversity set V was delivered in 96-well plates in Columns 2-11, 20 plates total, with each well containing 20 µL of a 10-mM solution of a compound in DMSO. Antibiotic controls (20 µL of 10 mM stock solutions of vancomycin, fosfomycin, ampicillin, doxycycline, or chloramphenicol) were added to Column 1 of each library plate. Microsomal (CYP) substrate controls (20 µL of 10 mM stock solutions of phenacetin, tolbutamide, dextromethorphan, coumarin, chlorzoxazone, or diclofenac) were added to Column 12 of each library plate. Aliquots (10 µL) of library samples were transferred to 96-well plates using a liquid-handling workstation (Biomek 3000) and diluted with 90 µL DMSO to provide UM working plates at 1 mM.

In vitro microsomal metabolism to provide the PM library

For PM library preparation, the remaining 10 µL of each sample in DMSO was dried by freezing the plates at -80°C and drying under a strong vacuum (<50 µmHg) in a Genevac Quatro centrifugal concentrator (DMSO can interfere with microsomal metabolism reactions). The dried library plates were metabolized with human liver microsomes as described previously (12). To each well was added 10 µL acetonitrile/water (20%/80%, vol/vol) to redissolve the samples. The plates were incubated for 2 h at 35°C, followed by the addition of 490 µL of freshly prepared (on ice) microsomal reaction mixture containing 50 mM potassium phosphate pH 7.4, 3 mM MgCl2, 5 mM glucose-6-phosphate, 1 unit mL−1 glucose-6-phosphate dehydrogenase, 1 mM NADP+, and 0.5 mg mL−1 total microsomal protein. Reaction mixtures were incubated for 24 h at 35°C with gentle rocking. Library plates were then centrifuged at 4,000 g for 30 min at 4°C, and 400 µL of the supernatants was transferred to sterile 96-well plates. To the residues was added 100 µL DMSO, and the samples were mixed thoroughly. Library plates were centrifuged again at 4,000 g for 30 min, and 150 µL of the supernatants was removed and combined with the first extracts. The resulting extracts were frozen at -80°C and dried under strong vacuum (<50 µmHg) in a Genevac Quatro centrifugal concentrator. These PM library samples were then reconstituted in 100 µL DMSO to provide a 1-mM PM NCI working library. Both UM and PM working libraries were stored in U-bottom polypropylene storage plates at -80°C. Samples of wells containing microsomally metabolized drug controls from PM plates were analyzed by LC-MS/MS to provide a relative measure of metabolism. The percent metabolism of these control drugs was 52%, 55%, 60%, 66%, 95%, and 100% for tolbutamide, dextromethorphan, chlorzoxazone, phenacetin, diclofenac, and coumarin, respectively. These controls demonstrate that the metabolism conditions employed in this study were sufficient to achieve a relatively high degree of metabolism.
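The percent-metabolism figures for the CYP substrate controls are consistent with a simple comparison of the remaining parent-drug signal in PM versus UM control wells. The sketch below shows one plausible way such values could be derived from LC-MS/MS peak areas; the peak areas are invented for illustration, and the exact quantification scheme used in this study is not specified here.

```python
# Remaining parent drug in a metabolized (PM) control well is compared with the
# corresponding unmetabolized (UM) well; the peak areas below are hypothetical.
def percent_metabolism(area_pm, area_um):
    return 100.0 * (1.0 - area_pm / area_um)

controls = {"tolbutamide": (4.8e6, 1.0e7), "coumarin": (0.0, 2.0e6)}
for drug, (area_pm, area_um) in controls.items():
    print(f"{drug}: {percent_metabolism(area_pm, area_um):.0f}% metabolized")
# tolbutamide: 52% metabolized; coumarin: 100% metabolized
```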
UM/PM vs ±Cef library screen against MRSA

Four sets of library screens were performed (UM − Cef, UM + Cef, PM − Cef, and PM + Cef), as described previously for an FDA-approved drug library screen (12), with the modification that 2 µL of library samples at 1 mM was used. During the bacterial incubation step, this provided 100 µM compound concentrations, rather than 200 µM as in the previously described study (12). Plates were frozen at -80°C and dried as described above. To each well in each set was added 20 µL cation-adjusted Mueller-Hinton (CAMH) broth containing 4,000 CFU MRSA (ATCC 43300) and containing either no cefoxitin for -Cef screens or 8 µg mL−1 cefoxitin (equal to 1/4× MIC) for +Cef screens. Plates were incubated for 48 h at 35°C. Fresh CAMH broth (10 µL) was then added to the wells of these four sets of plates, followed by incubation for 2 h at 35°C, to restart active cell growth. To the wells of these plates was then added 6 µL of 100 µg mL−1 resazurin (sodium salt) (28-30). The plates were incubated for another 2 h at 35°C, and the 570/600 fluorescence ratio was measured in a Molecular Devices SpectraMax M5 multimode microplate reader. The resulting data were processed and analyzed using MATLAB scripts (The MathWorks, Natick, MA, USA) to identify active wells using a cutoff value between known actives (antibiotic controls) and known inactives (microsomal controls). A merged hit list was generated, in which a compound was included if it demonstrated activity under any of the four test conditions (UM − Cef, UM + Cef, PM − Cef, or PM + Cef).

Hit picking and MIC determination

Follow-up MIC determinations for identified hits were performed as described in detail previously (12) using a resazurin (Alamar Blue)-based colorimetric assay (28-30). This assay gives results comparable to standard clinical MIC methods for S. aureus, with increased sensitivity, in a 384-well plate format. MICs were determined for all actives by hit picking 2 µL samples from both UM and PM working plates (two sets from each) into the first columns of 384-well plates (four sets total, for UM - Cef, UM + Cef, PM - Cef, and PM + Cef MIC determinations). These samples were then serially diluted in steps of two across the plates with DMSO using an Integra Viaflo Assist automated multichannel pipette. The last column was left blank (DMSO only). These plates were frozen at -80°C and dried under a strong vacuum as described above. To each well in each set was added 20 µL CAMH broth containing 4,000 CFU MRSA (ATCC 43300) and containing either no cefoxitin for -Cef MICs or 8 µg mL−1 cefoxitin for +Cef MICs. This provided MIC plates with 100 µM as the highest test agent concentration. Incubation and resazurin treatment were as described above. MICs were determined using a cutoff midway between known active and inactive samples. All MICs were determined at least in triplicate.
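The MIC call from the resazurin readout can be made explicit with a small sketch: a well is scored as growth-suppressed when its 570/600 fluorescence ratio falls on the "active" side of a cutoff placed midway between the known active (antibiotic) and inactive (microsome-only) controls, and the MIC is the lowest concentration in the two-fold series that is suppressed. The fluorescence values below are hypothetical, and the orientation of the ratio (low ratio = suppressed growth) is an assumption for illustration.

```python
def call_mic(concs_uM, ratios, active_ctrl, inactive_ctrl):
    """Return the MIC (µM) from a two-fold dilution series of resazurin readouts."""
    cutoff = (active_ctrl + inactive_ctrl) / 2.0
    suppressed = [c for c, r in zip(concs_uM, ratios) if r <= cutoff]
    return min(suppressed) if suppressed else None   # None -> inactive at <=100 µM

concs  = [100 / 2**i for i in range(10)]              # 100, 50, 25, ... µM
ratios = [0.9, 0.9, 1.0, 1.1, 2.8, 3.0, 3.1, 3.0, 3.2, 3.1]
print(call_mic(concs, ratios, active_ctrl=0.8, inactive_ctrl=3.2))   # -> 12.5
```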
Minimum bactericidal concentrations

Minimum bactericidal concentrations (MBCs) were determined by preparing and drying UM − Cef MIC plates as described above. To each well was added 20 µL CAMH broth containing 8,000 CFU MRSA, and the plates were incubated for 18-24 h. After resazurin addition and development as described above, 20 µL from the 1× MIC, 2× MIC, and 4× MIC wells for each compound was removed and plated on CAMH agar plates. The plates were incubated overnight, and colony counts were assessed. A greater than 1,000-fold decrease from the anticipated colony counts was scored as bactericidal, and a less than 1,000-fold decrease as bacteriostatic. All MBCs were determined at least in triplicate and reported as the ratio of the MBC to the MIC (Table 2).

Checkerboard assays to confirm synergy with cefoxitin

Several agents showed lower MICs in the presence of cefoxitin (Table 1), indicative of potential synergistic activity. Checkerboard assays (31) were used to confirm and assess synergy for 4-QDA, celastrol, teniposide, streptovaricin, porfiromycin, and ethyl violet with cefoxitin, as described previously (12). All checkerboard assays were performed in triplicate. Data were plotted as isobolograms and reported as the minimum sum of fractional inhibitory concentrations (∑FICmin values in Fig. 1, also referred to as FICI values) (32).

±Thymidine counter screen and LC-MS/MS confirmation for folate/thymidine biosynthesis inhibitors

The effects of folate/thymidine biosynthesis inhibitors on MRSA can be reversed by the addition of thymidine to the culture media (15). This effect was therefore used to assess the compounds in Table 1 by redetermining the UM − Cef MICs in the absence and presence of 4 µM (1 µg mL−1) thymidine (Table 2). This identified three agents with significant L2(±Thy) values. To further confirm these three agents as thymidine biosynthesis inhibitors, an ion-pairing LC-MS/MS assay was developed for ATP and dTTP using the same approach as previously described for UDP-linked intermediates in the bacterial cell wall biosynthesis pathway (24) (Table 3). Antibiotic-treated bacterial cultures were prepared as described in detail previously (24). MRSA cultures were grown in CAMH media to the mid-log phase (OD600 = 0.5), and 50 mL of this mid-log-phase culture was transferred to baffled 250 mL culture flasks and treated with the test agent at 4× MIC (Table 2, Thy values) for 15 min. The tested agents were NSC367428, 4-QDA, and NSC309401, with trimethoprim included as a positive control and gemcitabine included as a negative control. A non-antibiotic control flask was also included. The flasks were incubated at 35°C with shaking for 15 min. Flasks were then rapidly chilled in an ice slush bath, and the samples from individual flasks were collected in quadruplicate and stored on ice for up to 15 min prior to centrifugation and processing for metabolite extraction as described previously (24). Samples were analyzed for ATP and dTTP using the LC-MS/MS parameters described in Table 3. The results from this experiment are reported in Fig. 3.

FIG 2 Structures of active or referenced compounds.

FIG 3 Fold changes in the levels of ATP and dTTP upon exposure to 4× MIC of different agents for 15 min relative to an untreated control.
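The ∑FICmin (FICI) values quoted for the checkerboard assays described above follow the standard fractional-inhibitory-concentration bookkeeping. The sketch below shows that calculation for a hypothetical checkerboard; the concentration pairs are invented, the cefoxitin-alone MIC of 32 µg mL−1 is inferred from the statement that 8 µg mL−1 equals 1/4× MIC, and the reading of FICI ≤ 0.5 as synergy is a general convention rather than a statement from this study.

```python
def fic_index(inhibitory_pairs, mic_a_alone, mic_b_alone):
    """Minimum sum of fractional inhibitory concentrations (FICI) over the
    growth-inhibiting boundary wells of a checkerboard."""
    return min(a / mic_a_alone + b / mic_b_alone for a, b in inhibitory_pairs)

# (test compound µM, cefoxitin µg/mL) pairs on the inhibition boundary
boundary = [(12.5, 2.0), (6.25, 8.0), (1.56, 16.0)]
fici = fic_index(boundary, mic_a_alone=50.0, mic_b_alone=32.0)
print(fici, "synergy" if fici <= 0.5 else "no synergy")   # ~0.31 -> synergy
```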
TABLE 1 (table body not reproduced) Column headings: Name, PubChem CID, UM −Cef/+Cef MIC, PM −Cef/+Cef MIC, Min_MIC, L2(±Cef), AL2(±Cef), L2(UM/PM), AL2(UM/PM).

TABLE 2 Additional compound information (±Thy MICs, MBC/MIC and C/S designations, PubChem NSC, PubChem CID, PubChem AID, toxicity). a C = -cidal; S = -static. b PubChem bioassay (assay identifier, AID) record results for cytotoxicity testing; the AID504648 screen used A549 ARE_Flux cells, the AID720589 screen used HepG2 cells, and the AID1409572 screen used HEK293 cells. c L2(±Thy) = log2(MIC+Thy/MIC−Thy). d MICs (µM) for the top NCI diversity set V compounds against MRSA (ATCC 43300) in the absence (−Thy) and presence (+Thy) of 4 µM thymidine; the associated L2 values identify those agents targeting folate/thymidine biosynthesis (bold entries); MBC/MIC ratios and -static vs -cidal designations (C/S column, −Thy) are included in the MBC/MIC column; the toxic column summarizes toxicity results from other screening efforts identified in the indicated PubChem bioassay (AID) records. e MBC/MIC, minimum bactericidal concentration/minimum inhibitory concentration. f CID, compound identifier. g 4-QDA, 4-quinazolinediamine.

TABLE 3 Retention time (tR) and MS/MS parameters for ATP and dTTP quantification. a Global method parameters were TEM (source temperature), 300°C; IS (ion spray voltage), −4,500 V; GS1 and GS2 (gas flows), 50 (arbitrary units); CAD gas, medium.

TABLE 4 Spectrum of activity of NCI compounds (MIC, µM). a Structures are shown in the figure below. b ATCC 43300, the MRSA strain used for library screening; other vendor IDs are given in the text. c Not active (NA) at 50 µM, the highest concentration used in these MIC determinations. d Control antibiotic. e MRSA, methicillin-resistant Staphylococcus aureus. f 4-QDA, 4-quinazolinediamine. g VRE, vancomycin-resistant Enterococcus.
v3-fos-license
2022-01-12T14:13:26.245Z
2022-01-11T00:00:00.000
245856624
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmcemergmed.biomedcentral.com/track/pdf/10.1186/s12873-021-00563-8", "pdf_hash": "cc1d9214b659bdb8cc32843fd30e2ff9c7ab7999", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44042", "s2fieldsofstudy": [ "Medicine" ], "sha1": "8737d55a3f7f26a43915e47a5894c3bb39c3a537", "year": 2022 }
pes2o/s2orc
German emergency department measures in 2018: a status quo based on the Utstein reporting standard

Background Compelling data on clinical emergency medicine is required for healthcare system management. The aim of this survey was to describe the nationwide status quo of emergency care in Germany at the healthcare system level using the Utstein reporting template as the guideline to measure the data collected. Methods This cross-sectional survey collected standardized data from German EDs in 2018. All 759 of the EDs listed in a previously collected ED Directory were contacted in November 2019 using the online-survey tool SoSci Survey. Exclusively descriptive statistical analyses were performed. Absolute as well as relative frequencies, medians, means, ranges, standard deviations (SD) and interquartile ranges (IQR) were reported depending on distribution. Main Results A total of 150 questionnaires of contacted EDs were evaluated (response rate: 19.8%). Hospitals had a median of 403 inpatient beds (n=147). The EDs recorded a median of 30,000 patient contacts (n=136). Eighty-three EDs (55%) had observation units with a median of six beds. The special patient groups were pediatric patients (< 5 years) and older patients (> 75 years) with a median of 1.7% and 25%, respectively. Outpatients accounted for 55%, while 45% were admitted (intensive care unit 5.0%, standard care unit 32.3%, observation unit 6.3%) and 1.2% transferred to another hospital. Conclusions The use of the Utstein reporting template enabled the collection of ED descriptive parameters in Germany. The data can provide a baseline for upcoming reforms of German emergency medicine, and for international comparisons of admission rates, initial triage categories, and patient populations.

Data as a basis for political decision-making are lacking as well. The aim of this survey was to describe a nationwide status quo of care in emergency departments in Germany using the SoSci Survey tool, with a focus on demographic patient data and ED structure and process indicators, by using the Utstein template. This "template for uniform reporting of emergency department measures, consensus according to the Utstein method" was developed by Hruska et al. to enable a comparative description of individual EDs in research publications [11] and was adapted to the German ED and hospital structures.

Study design

A cross-sectional online survey was conducted to collect key data from German EDs for the reference year 2018. After translation into German, the Utstein reporting template was used to develop a questionnaire. This was adapted and agreed by clinical and methodological experts. In November 2019, all 759 ED chairs listed in a previously collected ED directory were invited via email to participate in the survey using the online tool SoSci Survey (SoSci Survey GmbH, Munich, Germany). Participation in the survey was anonymous and voluntary. The survey ended in January 2020 and lasted for two months, with three reminders being sent out periodically. The study was approved by the Ethics Committee of the Otto von Guericke University at the Faculty of Medicine, Magdeburg, Germany (identification number 131/19).

Questionnaire and adaptation of the Utstein template

The questionnaire included 19 questions according to ED workflow, with minor modifications of the Utstein template to accommodate German conditions. Data were collected on ED structures, processes, and patient characteristics.
The measure of acute care beds per 1,000 inhabitants was replaced with an intensive care bed count. The time to first provider was defined as time to first physician because other professions listed in the Utstein template were not common in Germany until 2021. As there was no specialty for emergency medicine established in Germany, the question of coverage by emergency medicine specialists was excluded. Questions about the proportions of dispositions of non-hospitalized patients, the proportion of patients up to the age of 18, and the German federal state of the hospital location were added. Clinical care hours were collected separately and cumulatively in hours per 100 cases by occupational group. To allow for the distinction between a lack of data and questions not answered, participants could respond to each question with "unknown".

Inclusion criteria

Only EDs that visited all pages of the questionnaire were included in the analysis. In addition, information on hospital beds or ED cases had to be entered as a minimum requirement.

ED directory

There are 1,065 EDs in Germany [8]; however, the official hospital directory [9] with 1,864 hospitals contains neither information about ED existence nor ED contact data. To compensate for this, a proprietary ED directory was compiled on behalf of the German Interdisciplinary Association of Critical Care and Emergency Medicine (DIVI) and the German Interdisciplinary Society for Emergency and Acute Medicine (DGINA), containing contacts for 759 EDs at survey time. The representativeness of the hospitals was estimated by comparison with official hospital directory data [9] containing 1,864 hospitals.

Statistics

After data collection was completed, surveys that met the inclusion criteria were analyzed. Descriptive statistical analyses were performed using Excel 2016 (Microsoft Corp., Redmond, USA) and SPSS 26.0 (IBM Corp., New York, USA). Absolute and relative frequencies, medians, means, ranges, standard deviations (SD) and interquartile ranges (IQR) were reported. The analysis of the structural parameters was carried out in subgroups according to official hospital directory bed count categories (≤ 399 beds, 400-799 beds, ≥ 800 beds). In the context of the descriptive characterization of EDs in Germany, no inductive statistical tests were performed. The valid dataset of the various parameters differed due to the omission of missing data (i.e., no response, "unknown", or "no information").

Response rate and representativity

After the survey period, 154 questionnaires were completed. Of these, 150 were evaluated, as four questionnaires did not meet the inclusion criteria. In relation to contacted EDs, the response rate of this study was 19.8%, representing 14.1% of the 1,065 hospitals with EDs [8] in Germany. The median number of hospital beds in this study was higher than the German average [9] of all 1,864 hospitals, including non-ED hospitals. Within the predefined subgroups, the response rate for EDs increased with hospital size (Table 1).

ED structure

ED structure included the following parameters: number of visits, treatment spaces, visits per treatment space, resuscitation beds, visits per resuscitation bed, observation unit, number of beds in the observation unit, visits per observation unit bed, and percentage of cases that arrived by ambulance. Corresponding statistical data are presented in Table 2.

ED population

Participating EDs treated a median of 30,000 [IQR: 20,000-37,008] patients (n = 136) (Table 2).
Of these patients, a median of 9.8% [IQR: 5.0-15.8%] (n = 88) were younger than 19 years, and 25.5% [IQR: 20.0-31.6%] were older than 75 years (n = 77). A more detailed overview of the patient population is provided in Table 3. (Table note: three questionnaires could not be assigned to any subgroup due to implausible bed counts; they were included in the bed-independent analysis under "Total" because plausible data were available for the other parameters.)

The initial triage of emergency patients was performed mainly using the Manchester Triage System (MTS) in 122 EDs (81.3%) and the Emergency Severity Index (ESI) in 18 EDs (12.0%). Furthermore, six EDs used proprietary ED-specific initial triage systems (4.0%), three EDs performed no initial triage (2.0%), and one ED did not report (0.7%). Figure 1 (initial triage category of acuity for the Manchester Triage System (MTS) and the Emergency Severity Index (ESI)) shows the relative frequencies of each initial triage category for MTS and ESI. Categories three and four were represented most frequently in participating EDs. (Table 4 note: length of stay is presented separately for each disposition; discrepancies between the sum of the subgroups and the total resulted from the exclusion of implausible bed counts as described for Table 2.)

Patients who left without being seen accounted for 1.0% [IQR: 0.5-2.0%] (n = 94). After discharge, 1.0% [IQR: 0.9-3.3%] (n = 30) of the patients returned unplanned within 72 hours (Table 2).

Discussion

This study was the first comprehensive description of the status of EDs in Germany. As no national standard for surveys in EDs existed in Germany, the Utstein reporting standard [11] for research publications was used to collect internationally comparable data. Previous German studies, dating back to 2013, were limited to members of ED professional societies and achieved lower respondent numbers [4,12]. Those studies had a lower response rate of small-hospital EDs, covering about 9.4% of the study sample [4]. In this study, small hospitals had a better representation with 47.6% but did not reach their proportion in the official hospital statistics of 78.0%. Furthermore, the assessment of representativity was complicated because the official hospital statistics contain non-ED hospitals. Compared to previous publications, EDs in this study had fewer treatment spaces, which is in line with the higher response rate from smaller hospitals [4,12].

According to the Federal Joint Committee's (G-BA) decision from 2018 [13], EDs providing more than basic emergency care are required to have an observation unit. This affects approximately 41% of German EDs [8]. With 56%, the proportion was higher in this sample. However, since there was no further information on the level of care, conclusions about the fulfillment of the G-BA requirements could not be drawn. Despite the low question-specific response rate, the estimated direct clinical care hours per patient visit were consistent with previously reported data [4]. This study determined direct clinical care hours, which were on average higher than the patient-dependent engagement times measured by Gräff et al. [14]. These results could not be compared, as Gräff et al.
measured engagement times using an observer, while this study calculated care hours based on staff rosters.

A median of 30,000 patients annually corresponded with 34,000 patients from a previous study with a higher proportion of larger hospitals [4]. Internationally, the number of patient contacts in different healthcare systems differed widely (Switzerland, 8,806; United States, 20,000; France, 22,265; and Denmark, 32,000) [15-18]. The proportions of pediatric patients aged 0 to 5 years and 0 to 18 years were higher than those previously reported [4,19] but were still inadequately represented compared to international data [1,3,20]. This may be due to existing specialized pediatric EDs, which were often not organizationally integrated into the EDs and thus not surveyed. Accordingly, patients older than 75 years were overrepresented in this study [1,3,16,20]. In addition to the assumed selection bias from non-participating pediatric EDs, Germany has the fourth oldest population in the world [21], which may have contributed to the increased proportion of older patients with medical conditions. The length of stay (LOS) in this study was shorter, with a median of 154 minutes, compared to 178 minutes in Australia, where a 4-hour rule is in place [2]. Of the 150 responding EDs, only 85 answered the LOS question. Of these, 16 EDs reported an average LOS under 120 minutes and eight EDs reported an average LOS over 240 minutes. Since recording of the discharge or transfer time was not mandatory for billing purposes [10], the reported LOS may not be reliable.

The frequency distributions of the initial triage categories matched a recent analysis conducted by the AKTIN German Emergency Department Data Registry [19]. Small differences may have been due to the larger proportion of patients not assessed in the reference study (approximately 14%) [19]. Compared to Australia and Canada, patients were assessed as being less urgent [1,2]. This may have been due to the different healthcare systems and the increasing use of EDs by patients with acute, but non-emergent, treatment demands [22]. In addition, the MTS, which was predominantly used by EDs in this study, tends to under-assess older patients, who were also highly represented in the study [23,24]. The initial triage of patients as more urgent was more frequent with ESI than with MTS. This could be explained by the fact that MTS and ESI use different algorithms and that ESI allows urgent grading based on condition, symptoms, or a combination of both [24].

The outpatient proportion was congruent with previous German results [4] but lower than internationally reported proportions [1,3,16,20]. This may have been due to differences in the healthcare systems and the higher proportion of elderly patients in these survey data. Furthermore, there is no specialist in emergency medicine in Germany, and EDs are often staffed by younger residents [4], who may make different decisions compared with experienced specialists. Internationally, Germany had one of the highest hospital bed densities [25]. At the same time, EDs do not cover their costs [26]. There is a lack of cost-covering billing codes in Germany for outpatient emergency care. This applies, for example, to the reimbursement of complex diagnostics in order to avoid admissions. There is an urgent requirement for adequate compensation to avoid false incentives to cover costs through patient hospitalization. Overall, the survey revealed further research requirements.
Limitations

Although the overall response rate was good (19.8%), this study covered only a small proportion of German EDs (14.1%). This discrepancy resulted from the fact that no official directory existed and not all German EDs were included in the directory used. The response rate in relation to hospital size was presumably higher for larger hospitals. Furthermore, while the overall item-specific response rates were acceptable, some case-related questions (e.g., process times and disposition) suffered relevant omissions. There was no feedback with respect to the motivation not to respond to the survey. Due to the structure of the Utstein template, the evaluation of time to physician in relation to triage category was not possible. In future surveys, the classification of hospitals into the new G-BA national emergency levels [13] should be recorded for better evaluability.

Conclusion

This study enabled, for the first time, a nationwide survey addressing individual EDs in terms of structure, key figures, and performance indicators. Politicians and healthcare managers may use these data for further planning and development in clinical emergency medicine. To be more representative and to allow regional planning, the collection of these data should become mandatory for all German EDs.
v3-fos-license
2023-09-20T15:12:12.806Z
2023-09-01T00:00:00.000
262060011
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2072-666X/14/9/1777/pdf?version=1694850706", "pdf_hash": "0599944d84067a47656367dc8b72e8015e22d49e", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44043", "s2fieldsofstudy": [ "Materials Science" ], "sha1": "384af19061ff24f42b02fa3e74f5e4dfde9f389d", "year": 2023 }
pes2o/s2orc
Conversion of CH4 and Hydrogen Storage via Reactions with MgH2-12Ni

The main key to the future transition to a hydrogen economy society is the development of hydrogen production and storage methods. Hydrogen energy is the energy produced via the reaction of hydrogen with oxygen, producing only water as a by-product. Hydrogen energy is considered one of the potential substitutes to overcome the growing global energy demand and global warming. A new study on CH4 conversion into hydrogen and hydrogen storage was performed using a magnesium-based alloy. MgH2-12Ni (with the composition of 88 wt% MgH2 + 12 wt% Ni) was prepared in a planetary ball mill by milling in a hydrogen atmosphere (reaction-involved milling). X-ray diffraction (XRD) analysis was performed on samples after reaction-involved milling and after reactions with CH4. The variation of adsorbed or desorbed gas over time was measured using a Sieverts'-type high-pressure apparatus. The microstructure of the powders was observed using a scanning transmission microscope (STEM) with energy-dispersive X-ray spectroscopy (EDS). The synthesized samples were also characterized using Fourier transform infrared (FT-IR) spectroscopy. The XRD pattern of MgH2-12Ni after the reaction with CH4 (12 bar pressure) at 773 K and decomposition under 1.0 bar at 773 K exhibited MgH2 and Mg2NiH4 phases. This shows that CH4 conversion took place, the hydrogen produced after CH4 conversion was then adsorbed onto the particles, and hydrides were formed during cooling to room temperature. Ni and Mg2Ni formed during heating to 773 K are believed to cause catalytic effects in CH4 conversion. The remaining CH4 after conversion is pumped out at room temperature.

Introduction

The global economy is developing gradually, and consequently, the global energy demand is constantly growing. Energy is supplied from fossil fuels such as coal, crude oil and natural gas, which are finite on Earth. The use of fossil fuels as an energy source has led to global warming and climate change. To solve these problems, alternative energy sources should be developed. As alternative energy sources, we can consider solar energy, wind energy, geothermal energy, hydropower, ocean energy and bioenergy. Many researchers are interested in the production and storage of hydrogen based on the use of alternative renewable energy sources. In a renewable energy-based hydrogen economy, the distribution of hydrogen from the producer to the consumer is currently a key missing technology.

Hydrogen energy is the energy produced via the reaction of hydrogen with oxygen. The reaction of hydrogen with oxygen simultaneously produces water. Hydrogen energy is considered one of the potential substitutes to overcome the growing global energy demand [1,2]. Hydrogen energy is believed to lead to a 'hydrogen energy economy' society.

Electrochemical devices, particularly fuel cell systems, have great potential to revolutionize the way power is produced and utilized. Direct electrochemical production promises greater energy efficiency, less dependence on non-renewable resources and less environmental impact. However, fundamental challenges remain in developing the material systems necessary to achieve the required levels of performance and durability and make solid oxide fuel cell technology a reality.
Fuel cells are energy conversion devices that produce electricity by electrochemically combining fuel and oxidizing gases across an electrolyte [3]. The scientist William Grove first demonstrated the fuel cell concept and the associated electrochemical processes in 1839 [4]. He reversed the electrolysis process, whereby hydrogen and oxygen recombine, and showed that a small electric current could be produced [5]. Although the concept was demonstrated more than 180 years ago, fuel cells have only recently attracted serious interest as an economically and technically applicable power source.

As a new generation of power sources compared with conventional energy systems, fuel cells have a number of advantages, thanks to which they have gained widespread recognition. A key feature of a fuel cell system is its high energy conversion efficiency. Since the fuel cell converts the chemical energy of the fuel directly into electrical energy, its conversion efficiency is not subject to the Carnot limitation [5]. Other advantages over conventional power production methods include modular construction, high efficiency at partial load, minimal location constraints, cogeneration potential and much lower production of pollutants [5].

Hydrogen is usually stored in a gaseous state under high pressure or in a cryogenic liquid state [6]. Storing gaseous hydrogen has disadvantages such as safety issues, high cost and hydrogen embrittlement of storage tank materials. Storage of hydrogen in a cryogenic liquid state has drawbacks such as thermal losses in the case of an open system, safety issues and the cost of liquefaction.

Solid-state hydrogen storage using materials such as metal hydrides has advantages such as high gravimetric and volumetric storage capacities and safety, as metal hydrides can absorb and release hydrogen at relatively low pressures. In hydrogen storage based on solid-state materials, hydrogen is bound by chemical or physical forces. The technique of storing hydrogen in a solid state has become very attractive [7] and is the subject of studies by many researchers [8-12].

The hydrogen-storage capacity of magnesium is high, its price is low and its reserves in the Earth's crust are large. However, its reaction rate with hydrogen is low even at a relatively high temperature such as 573 K [13]. A lot of work on improving the hydriding and dehydriding rates of magnesium has been put into alloying magnesium with certain metals [14], such as Cu [8], Ni [9,10], Ti [11], Sn [15], V [16], and Ni and Y [17]. Reilly et al. [9] and Akiba et al. [10] improved the reaction kinetics of Mg with H2 by preparing Mg-Ni alloys. Song et al. [18] increased the hydriding and dehydriding rates of Mg via the mechanical alloying of Mg with Ni under an Ar atmosphere. Bobet et al.
[12] improved the hydrogen-storage properties of both magnesium and Mg + 10 wt% Co, Ni, and Fe mixtures by means of mechanical milling under H2 (reaction-involved milling) for a short time (2 h). In our previous work [19], samples with the compositions of 94 wt% MgH2 + 6 wt% Ni, 88 wt% MgH2 + 12 wt% Ni, 85 wt% MgH2 + 15 wt% Ni and 82 wt% MgH2 + 18 wt% Ni were prepared by means of reactive mechanical grinding. Then, the variations of the hydriding and dehydriding properties in the first hydriding-dehydriding cycle with Ni content were investigated. The sample with the composition of 88 wt% MgH2 + 12 wt% Ni had the highest hydriding rate and the largest quantity of hydrogen absorbed in 60 min. Therefore, we selected this sample (named MgH2-12Ni) as the suitable alloy.

There are three types of methane reforming: steam reforming, autothermal reforming and partial oxidation. These are chemical processes that can produce pure hydrogen gas from methane using a catalyst. Most methods rely on exposing methane to a catalyst (usually nickel) at high temperatures and pressures [20].

Milling particles in a hydrogen atmosphere (reaction-involved milling) generates defects, causes cracks, creates clean surfaces and reduces the particle size. In this way, reaction-involved milling puts the sample in a state that reacts readily with gas: defects can act as active nucleation sites, clean surfaces show high reactivity with gas, and particle size reduction shortens the diffusion distances of atoms.

The main obstacle that needs to be overcome in the future in order to move to a hydrogen economy society is the development of hydrogen generation and storage methods. In this work, a new study on the conversion of CH4 to hydrogen and the storage of hydrogen was performed using a magnesium-based alloy. MgH2-12Ni (with the composition of 88 wt% MgH2 + 12 wt% Ni) was prepared in a planetary ball mill by means of reaction-involved milling. X-ray diffraction (XRD) analysis was performed on samples after reaction-involved milling and after reactions with CH4. The variation of adsorbed or desorbed gas over time was measured using a Sieverts'-type high-pressure apparatus under a methane pressure of 12 bar at 773 K. The microstructure of the powders was observed using a scanning transmission microscope (STEM) with energy-dispersive X-ray spectroscopy (EDS). The reacted samples were also characterized using Fourier transform infrared (FT-IR) spectroscopy.

One of the studies aimed at the practical application of fuel cells is the production and storage of hydrogen. We were able to generate hydrogen from CH4 and at the same time store it as a nano-sized metal hydride. The results of this work can be applied to the production and storage of hydrogen, which can be used for supplying hydrogen to fuel cells. The materials developed in our work are expected to be used for motive power fuel and portable appliances as mobile applications, for transport and distribution as semi-mobile applications, and for industrial off-peak power H2 generation, hydrogen-purifying systems and heat pumps as stationary applications.
A mixture with the composition of 88 wt% MgH2 + 12 wt% Ni (total weight of 8 g) was placed in a hermetically sealed stainless-steel container with 105 hardened steel balls (total weight of 360 g). The sample-to-ball weight ratio was 1/45. The samples were handled in a glove box under Ar to prevent oxidation. MgH2-12Ni with the composition of 88 wt% MgH2 + 12 wt% Ni was prepared in a planetary ball mill (Planetary Mono Mill; Pulverisette 6, Fritsch, Weimar, Germany) by milling at a disc revolution speed of 400 rpm under high-purity hydrogen gas of 12 bar for 6 h. Pure MgH2 was also milled under the same conditions and named milled MgH2.

The variation in the amount of adsorbed or desorbed gas over time was measured by means of the volumetric method in a Sieverts'-type high-pressure apparatus described previously [21]. This apparatus is composed of three parts: a reactor containing the sample, a gas-supplying part, and a standard volume of known size used to measure the amount of adsorbed or released gas. The amount of adsorbed gas was measured based on changes in the pressure of the standard volume over time: the standard volume pressure decreases as some gas is transferred to the reactor to compensate for the gas pressure drop in the reactor due to gas adsorption. The amount of desorbed gas was likewise measured based on changes in the pressure of the standard volume over time: the pressure of the standard volume increases as some gas is transferred from the reactor (whose pressure increases due to gas desorption) to the standard volume. The amount of sample (MgH2-12Ni) used for these measurements was 0.5 g.
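In a Sieverts'-type measurement of this kind, the absorbed or released amount is obtained from the pressure change of the calibrated standard volume via an equation of state. The following minimal sketch uses the ideal gas law and invented numbers for the standard volume, its temperature and the pressure drop (only the 0.5 g sample mass is taken from the text); a real evaluation would also account for dead volumes, temperature gradients and real-gas corrections.

```python
R = 8.314          # J/(mol K)
V_std = 1.0e-4     # m^3, calibrated standard volume (assumed)
T_std = 298.0      # K, temperature of the standard volume (assumed)
M_H2 = 2.016       # g/mol
m_sample = 0.5     # g of MgH2-12Ni, as used in this work

dP = 0.30e5        # Pa, example pressure drop of the standard volume
n_H2 = dP * V_std / (R * T_std)              # mol of gas transferred to the reactor
wt_percent = 100.0 * n_H2 * M_H2 / m_sample  # absorbed amount as wt% of the sample
print(f"absorbed hydrogen ≈ {wt_percent:.2f} wt%")   # ≈ 0.49 wt% for these numbers
```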
For the reaction of methane with milled MgH2 and MgH2-12Ni, we chose a temperature of 773 K, which is not too high compared with the temperature of metal hydride formation. This temperature is lower than the temperatures at which CH4 conversion was performed in the reported works [20]. We chose a gas pressure of 12 bar at 773 K because too high a gas pressure causes leakage in the parts of the Sieverts'-type high-pressure apparatus.

X-ray diffraction (XRD) patterns of samples after reaction-involved milling and after adsorption-desorption were obtained in a powder diffractometer, Rigaku D/MAX 250 (Tokyo, Japan), with Cu Kα radiation. XRD pattern analysis was performed using the MDI JADE 5.0 program. Data from the JCPDS PDF-2 2004 card of the International Centre for Diffraction Data (ICDD) were used to identify the phases. Reacted samples were also characterized using Fourier transform infrared (FT-IR) spectroscopy (Frontier, PerkinElmer, Shelton, CT, USA). Powder microstructures were observed using a high-resolution transmission electron microscope (HR-TEM) with energy-dispersive X-ray spectroscopy (EDS) (Titan G2 Cube 60-300, FEI company (Field Electron and Ion Company, FEI, Hillsboro, OR, USA)) operated at 80 kV.

Results and Discussion

Figure 1 shows the XRD patterns at room temperature of milled MgH2 and MgH2-12Ni after the reaction with CH4 at 12 bar and 773 K for 1 h and desorption under 1.0 bar at 773 K for 1 h. When the milled MgH2 was heated to 773 K under 1.0 bar CH4 and vacuum pumped, the hydrogen in the milled MgH2 is thought to have been removed. When the MgH2-12Ni was heated to 773 K under 1.0 bar CH4 and vacuum pumped, the hydrogen in the MgH2 is thought to have been removed, and it is believed that Mg2Ni was formed during heating to 773 K [22].

The XRD pattern of milled MgH2 after the reaction with CH4 at 12 bar and 773 K and desorption under 1.0 bar at 773 K exhibited the Mg and MgO phases. The MgO is believed to have formed during exposure of the sample to air to obtain the XRD pattern. This shows that the conversion of CH4 did not take place.

The XRD pattern of MgH2-12Ni after the reaction with CH4 at 12 bar and 773 K and desorption under 1.0 bar at 773 K exhibited the MgH2, Mg2NiH4, Mg, Mg2Ni and MgO phases. The formation of MgH2 and Mg2NiH4 indicates that the conversion of CH4 took place, the converted CH4 (hydrogen-containing mixture) was adsorbed on the particles, and the MgH2 and Mg2NiH4 hydrides are thought to have been formed by the reaction of Mg (formed during heating to 773 K under 1.0 bar and vacuum pumping at 773 K) and Mg2Ni (formed during heating to 773 K) with hydrogen (formed via CH4 conversion and adsorbed on the particles) during cooling to room temperature.
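The statement that the hydrides form only during cooling, and not at 773 K under 12 bar, can be rationalized with a van 't Hoff estimate of the Mg-H equilibrium plateau pressure (discussed quantitatively further below). The sketch uses representative literature desorption values (ΔH ≈ 75 kJ/mol H2, ΔS ≈ 135 J/(mol·K)); these are assumptions for illustration and differ somewhat from the values behind the 136 bar figure cited later from [25], but they lead to the same qualitative conclusion.

```python
import math

R = 8.314  # J/(mol K)

def plateau_pressure_bar(T, dH=75_000.0, dS=135.0):
    """Van 't Hoff estimate of the MgH2 <-> Mg + H2 plateau pressure (bar)."""
    return math.exp(dS / R - dH / (R * T))

for T in (298, 473, 557, 773):
    print(f"T = {T} K  ->  P_eq ≈ {plateau_pressure_bar(T):.2e} bar")
# ~1 bar near 557 K and ~1e2 bar at 773 K (well above the applied 12 bar), but
# well below 1 bar at 473 K and lower, so MgH2 can only form from the adsorbed
# hydrogen while the sample cools toward room temperature.
```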
Figure 2 shows the quantity of converted CH4 versus time t under 12 bar CH4 at 773 K and the desorbed quantity of converted CH4 versus t under 1.0 bar at 773 K for MgH2-12Ni. The quantity of converted CH4 under 12 bar CH4 at 773 K was 0.8 wt% after 1 min and 1.17 wt% after 60 min. The desorbed quantity of converted CH4 (hydrogen-containing mixture) under 1.0 bar at 773 K was 0.8 wt% after 1 min and 1.17 wt% after 60 min.

Attenuated total reflectance FT-IR spectroscopy (ATR-FTIR) spectra of MgH2-12Ni reacted with 12 bar CH4 at 723 K and 773 K, respectively, are shown in Figure 3. Peaks for C-H bending, C=C stretching and C=C bending resulting from CH4 conversion were observed [23,24]. Peaks for O-H stretching, C=O stretching and C-O stretching are believed to have formed due to a reaction with oxygen in air.

Figure 4 shows the curve of released hydrogen quantity versus temperature T for as-milled MgH2-12Ni and the curve of released gas quantity versus T for MgH2-12Ni after the reaction with CH4 at 12 bar, both heated at a heating rate of 5-6 K/min. The as-milled MgH2-12Ni released 5.09 wt% of hydrogen relatively rapidly up to about 648 K and then slowly released hydrogen up to 6.74 wt% at about 700 K. MgH2-12Ni after the reaction with CH4 at 12 bar released 0.66 wt% of gas (a hydrogen-containing mixture) rapidly up to about 663 K and 0.94 wt% slowly up to about 702 K.
Figure 5 shows HR-TEM images of MgH2-12Ni (a) as-milled and (b) after the reaction with CH4 at 12 bar and 773 K for 1 h. The as-milled MgH2-12Ni exhibits spherically shaped particles. The MgH2-12Ni after the reaction with CH4 shows carbon on the surface of the particles, which is highlighted.

An HR-TEM image, EDS images and an EDS spectrum of the as-milled MgH2-12Ni are shown in Figure 6. The EDS images show that the distribution of Mg, Ni and C on the particle is quite homogeneous. The oxygen is introduced due to exposure to ethanol and air: the sample particles were sonicated in an ethanol-filled vial, placed on a Lacey carbon-supported copper grid and dried in air for 2 h. The EDS spectrum exhibits the peaks of Mg and Ni together with the peaks of Cu and O. The Cu peak appears due to the copper in the Lacey carbon-supported copper grid.

Figure 7 shows an HR-TEM image, EDS images and an EDS spectrum of the MgH2-12Ni after the 12 bar CH4 reaction at 773 K for 1 h. The EDS images show that the distribution of Mg, Ni and C on the particle is quite homogeneous. The EDS spectrum exhibits the carbon peak together with the peaks of Mg, Ni, Cu and O.

The change in the absorbed hydrogen quantity Ha versus time t curve under 12 bar H2 and the change in the released hydrogen quantity Hd versus t curve under 1.0 bar H2 at 573 K with cycle number n for MgH2-12Ni are shown in Figure 8. At n = 3, the MgH2-12Ni was reacted under 12 bar CH4 at 773 K and desorbed under 1.0 bar CH4 at 773 K. From n = 1 to n = 2, the initial hydriding rate and the quantity of hydrogen absorbed in 60 min increased very slightly; the Ha versus time t curves at n = 1 and n = 2 were very similar. From n = 2 to n = 4, the initial hydriding rate and the quantity of hydrogen absorbed in 60 min decreased a lot. This means that the surfaces of the MgH2-12Ni particles were contaminated with C and CH4; C and CH4 were adsorbed on the surfaces of the MgH2-12Ni particles. From n = 4 to n = 5, the initial hydriding rate increased and the quantity of hydrogen absorbed in 60 min decreased a little, showing that the C and CH4 adsorbed on the surfaces of the MgH2-12Ni particles were removed; the surfaces of the MgH2-12Ni particles were recovered during pumping out after dehydriding. The decrease in the quantity of hydrogen absorbed in 60 min suggests that sintering of the particles took place during hydriding-dehydriding cycling. From n = 1 to n = 2, the initial dehydriding rate and the quantity of hydrogen released in 30 min increased a lot; the incubation period for dehydriding, which appeared at n = 1, disappeared at n = 2. From n = 2 to n = 4, the initial dehydriding rate and the quantity of hydrogen released in 30 min decreased a lot. This means that C and CH4 were adsorbed on the surfaces of the MgH2-12Ni particles. From n = 4 to n = 5, the initial dehydriding rate increased a little (the incubation period for dehydriding decreased from 9 min to 3 min) and the quantity of hydrogen released in 30 min decreased a lot, showing that the C and CH4 adsorbed on the surfaces of the MgH2-12Ni particles were removed. The results in Figure 8 show that the surfaces of the MgH2-12Ni particles were contaminated with C and CH4; C and CH4 were adsorbed on the surfaces of the MgH2-12Ni particles after the reaction with 12 bar CH4 at 773 K.
Figure 2 shows that the methane conversion proceeds quite rapidly at first (0.8 wt% in 1 min) and then proceeds very slowly, reaching 1.17 wt% after 60 min. The average particle sizes of milled MgH2 and MgH2-12Ni, measured via particle size analysis, were 1.39 and 0.65 µm, respectively. From these values, the specific surface areas of milled MgH2 and MgH2-12Ni were calculated to be 2.98 and 5.73 m2/g, respectively, assuming that the particles were spherical. MgH2-12Ni therefore has a fairly large specific surface area (about 1.9 times that of milled MgH2). The distribution of Ni, which was observed by means of EDS (Figures 6 and 7), was quite homogeneous.
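The reported specific surface areas can be reproduced from the mean particle sizes with the spherical-particle relation S = 6/(ρd). In the sketch below, the handbook densities (MgH2 ≈ 1.45 g/cm3, Ni ≈ 8.91 g/cm3) and the rule-of-mixtures density used for the 88/12 wt% mixture are our assumptions; they are not stated in the text, but they give values matching the reported 2.98 and 5.73 m2/g.

```python
def specific_surface_area(d_um, rho_g_cm3):
    """Specific surface area (m^2/g) of dense spheres of diameter d: S = 6/(rho*d)."""
    d_cm = d_um * 1e-4
    return 6.0 / (rho_g_cm3 * d_cm) / 1e4   # 1 m^2 = 1e4 cm^2

rho_MgH2 = 1.45                                   # g/cm^3 (assumed handbook value)
rho_mix = 1.0 / (0.88 / 1.45 + 0.12 / 8.91)       # ~1.61 g/cm^3 for 88/12 wt% MgH2/Ni

print(specific_surface_area(1.39, rho_MgH2))      # ~2.98 m^2/g, milled MgH2
print(specific_surface_area(0.65, rho_mix))       # ~5.73 m^2/g, MgH2-12Ni (about 1.9x larger)
```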
The surface of MgH2-12Ni is very reactive because it was prepared by means of milling in a hydrogen atmosphere and heating in hydrogen to 773 K. Thus, CH4 was converted very rapidly at first. However, the CH4 conversion then became very slow, reaching 1.17 wt% after 60 min, and the conversion rate was quite low. We think that the conversion rate and the converted quantity should be increased. In future research, the composition of the MgH2-12Ni will be varied, the milling conditions will be changed and different CH4 pressures will be applied. In addition, the variability of CH4 conversion depending on the number of cycles will be studied.

The pressure-composition isotherms (P-C-T diagram) of metal-hydrogen systems exhibit equilibrium plateau pressures at various temperatures. The equilibrium plateau pressures are the equilibrium hydrogen pressures at which the metal and the hydride coexist in equilibrium. In order to form a metal hydride at a certain temperature, hydrogen with a pressure higher than the equilibrium plateau pressure must be applied. At a temperature of 773 K, the equilibrium plateau pressures of the Mg-H system and the Mg2Ni-H system are much higher than the 12 bar applied in the present work: the equilibrium plateau pressure at 773 K is 136 bar for the Mg-H system [25] and 98 bar for the Mg2Ni-H system [26]. It is therefore considered that Mg and Mg2Ni hydrides are not formed upon reaction with CH4 at 12 bar and 773 K. Instead, CH4 is converted and the converted gas mixture is adsorbed on the MgH2-12Ni particles, and the Mg and Mg2Ni hydrides are formed during cooling to room temperature as a result of the reactions of Mg and Mg2Ni with the adsorbed hydrogen. The equilibrium plateau pressure is 1 bar at 557 K for the Mg-H system [25] and at 527 K for the Mg2Ni-H system [26]. At temperatures from 473 K down to room temperature (during cooling), the equilibrium plateau pressures of the Mg-H and Mg2Ni-H systems are very low, and the formation of MgH2 and Mg2NiH4 is possible.

The XRD pattern of milled MgH2 after the reaction with CH4 at 12 bar and 773 K and desorption under 1.0 bar at 773 K exhibited no MgH2 and Mg2NiH4 phases. However, the XRD pattern of MgH2-12Ni after the reaction with CH4 at 12 bar and 773 K and desorption under 1.0 bar at 773 K exhibited MgH2 and Mg2NiH4 phases. This shows that CH4 conversion took place, the converted CH4 (a hydrogen-containing mixture) was adsorbed onto the particles, and the MgH2 and Mg2NiH4 hydrides are believed to have been formed by the reaction of Mg (formed during heating up to 773 K under 1.0 bar and vacuum pumping at 773 K) and Mg2Ni (formed during heating up to 773 K) with hydrogen (formed as a result of CH4 conversion and adsorbed on the particles) during cooling to room temperature.

Ni was not observed in the XRD pattern of MgH2-12Ni after the reaction with CH4 at 12 bar and 773 K and desorption under 1.0 bar at 773 K. It is known that a phase present in only a small amount may not be visible in an XRD pattern; we therefore believe that Ni is still present in MgH2-12Ni after heating to 773 K. The addition of Ni during sample preparation is thought to be responsible for the different behavior of the two samples. The surface state of MgH2-12Ni and its greater surface area compared with milled MgH2 might have played a role in converting CH4. However, Ni and Mg2Ni formed during heating to 773 K are believed to have produced catalytic effects in CH4 conversion and to have played the larger role. It has been reported that most methane-reforming methods use nickel as a catalyst [20].
Transition metals such as Ni are reported to have a catalytic effect on gas adsorption [27]. The addition of Ni (and, less probably, Mg2Ni) could help CH4 to adsorb onto the particles.

The process developed in the present work is one in which the conversion of CH4, the storage of hydrogen and the separation of the remaining CH4 (by pumping out at room temperature) are all achieved in a single process.

In our future research, gas chromatography analysis will be performed on the gases obtained after a reaction with CH4 at 12 bar and 773 K. This will help to verify the present work.

Conclusions

The conversion of CH4 to hydrogen and hydrogen storage were studied using a magnesium-based alloy. MgH2-12Ni (with the composition 88 wt% MgH2 + 12 wt% Ni) was prepared in a planetary ball mill under high-purity hydrogen gas. The XRD pattern of MgH2-12Ni after reaction with CH4 at 12 bar and 773 K and desorption under 1.0 bar at 773 K exhibited MgH2 and Mg2NiH4 phases. This shows that conversion of CH4 occurred, the converted CH4 (a hydrogen-containing mixture) was then adsorbed on the particles, and hydrides were formed during cooling to room temperature. The Ni and Mg2Ni formed during heating up to 773 K are believed to have brought about catalytic effects for converting CH4. MgH2-12Ni adsorbed 0.8 wt% of converted CH4 within 1 min in a reaction with CH4 at 12 bar and 773 K and then desorbed 0.8 wt% of converted CH4 within 1 min under 1.0 bar at 773 K. Attenuated total reflectance FT-IR spectroscopy (ATR-FTIR) spectra of MgH2-12Ni after reactions under 12 bar CH4 at 723 K and 773 K showed peaks of C-H bending, C=C stretching, O-H stretching, O-H bending and C-O stretching. In our future research, gas chromatography analysis of the gases obtained after the reaction with CH4 will be used to verify these results.

Figure 1. XRD patterns at room temperature of (a) milled MgH2 and (b) MgH2-12Ni after the reaction with CH4 at 12 bar and 773 K for 1 h and desorption under 1.0 bar at 773 K for 1 h.

When the milled MgH2 was heated to 773 K under 1.0 bar CH4 and vacuum pumped, the hydrogen in the milled MgH2 is thought to have been removed. It is believed that Mg2Ni was formed during heating to 773 K [22]. When the MgH2-12Ni was heated to 773 K under 1.0 bar CH4 and vacuum pumped, the hydrogen in the MgH2 is thought to have been removed. The XRD pattern of milled MgH2 after the reaction with CH4 at 12 bar and 773 K and desorption under 1.0 bar at 773 K exhibited the Mg and MgO phases. The MgO is believed to have been formed during sample exposure to air to obtain the XRD pattern. This shows that the conversion of CH4 did not take place.

Figure 2. Quantity of converted CH4 versus time t under 12 bar CH4 at 773 K and desorbed quantity of converted CH4 versus t under 1.0 bar at 773 K for MgH2-12Ni.

Figure 4. The curve of released hydrogen quantity as a function of temperature T for as-milled MgH2-12Ni and the curve of released gas amount as a function of T for MgH2-12Ni after the reaction with CH4 at 12 bar when heated at a heating rate of 5-6 K/min.

Figure 5. HR-TEM images of MgH2-12Ni (a) as-milled and (b) after reaction with CH4 at 12 bar and 773 K for 1 h.

Figure 7 shows an HR-TEM image, EDS images and an EDS spectrum of the MgH2-12Ni after the 12 bar CH4 reaction at 773 K for 1 h. The EDS images show that the distribution of Mg, Ni and C on the particle is quite homogeneous. The EDS spectrum exhibits the carbon peak together with the peaks of Mg, Ni, Cu and O. The change in absorbed hydrogen quantity Ha versus time t curve under 12 bar H2 and the change in released hydrogen quantity Hd versus t curve under 1.0 bar H2 at 573 K with cycle number n for MgH2-12Ni are shown in Figure 8. At n = 3, the MgH2-12Ni was reacted under 12 bar CH4 at 773 K and desorbed under 1.0 bar CH4 at 773 K. From n = 1 to n = 2, the initial hydriding rate and the quantity of hydrogen absorbed for 60 min increased very slightly; the Ha versus time t curves at n = 1 and n = 2 were very similar. From n = 2 to n = 4, the initial hydriding rate and the quantity of hydrogen absorbed for 60 min decreased a lot. This means that the surfaces of MgH2-12Ni particles were contaminated with C and CH4; C and CH4 were adsorbed on the surfaces of MgH2-12Ni particles. From n = 4 to n = 5, the initial hydriding rate increased and the quantity of hydrogen absorbed for 60 min decreased a little, showing that the C and CH4 adsorbed on the surfaces of MgH2-12Ni particles were removed; the surfaces of MgH2-12Ni particles were recovered during pumping out after dehydriding. The decrease in the quantity of hydrogen absorbed for 60 min suggests that sintering of particles took place during hydriding-dehydriding cycling. From n = 1 to n = 2, the initial dehydriding rate and the quantity of hydrogen released for 30 min increased a lot; the incubation period for dehydriding, which appeared at n = 1, disappeared at n = 2. From n = 2 to n = 4, the initial dehydriding rate and the quantity of hydrogen released for 30 min decreased a lot.

Figure 7. An HR-TEM image, EDS images and an EDS spectrum of the MgH2-12Ni after the 12 bar CH4 reaction at 773 K for 1 h.

Figure 8. (a) Change in absorbed hydrogen quantity Ha versus time t curve under 12 bar H2 and (b) change in released hydrogen quantity Hd versus t curve under 1.0 bar H2 at 573 K with cycle number n for MgH2-12Ni. At n = 3, the MgH2-12Ni was reacted under 12 bar CH4 at 773 K and desorbed under 1.0 bar CH4 at 773 K.
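As a rough cross-check on the uptake figure quoted in the Conclusions above (0.8 wt% within 1 min), the hydrogen content can be restated in molar and volumetric terms. The sketch below assumes that the wt% value is expressed as grams of hydrogen per gram of sample; that reading is an assumption, not something stated explicitly in the text.

```python
# Minimal sketch: restate a gravimetric hydrogen uptake (wt%) per gram of sample.
# Assumption: wt% = grams of H2 taken up per 100 g of sample.
M_H2 = 2.016       # g/mol
V_M_STP = 22414.0  # cm^3/mol, molar volume of an ideal gas at STP

def uptake_per_gram(wt_percent):
    grams_h2 = wt_percent / 100.0      # g H2 per g sample
    moles_h2 = grams_h2 / M_H2         # mol H2 per g sample
    volume_stp = moles_h2 * V_M_STP    # cm^3 H2 (STP) per g sample
    return moles_h2, volume_stp

mol, vol = uptake_per_gram(0.8)
print(f"{mol:.4f} mol H2/g  ~ {vol:.0f} cm^3 H2 (STP) per gram of MgH2-12Ni")
# ~0.0040 mol/g, i.e. roughly 89 cm^3 of H2 at STP per gram of sample.
```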
v3-fos-license
2019-08-10T13:03:57.319Z
2019-08-08T00:00:00.000
199503871
{ "extfieldsofstudy": [ "Chemistry", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.7554/elife.47212", "pdf_hash": "5ac7ca83aae03924ad10239aebd4ffff17caf91e", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44046", "s2fieldsofstudy": [ "Medicine", "Biology" ], "sha1": "b7c0774b216985a1b5c93714c223135f4b119d49", "year": 2019 }
pes2o/s2orc
Pressure-driven release of viral genome into a host nucleus is a mechanism leading to herpes infection Many viruses previously have been shown to have pressurized genomes inside their viral protein shell, termed the capsid. This pressure results from the tight confinement of negatively charged viral nucleic acids inside the capsid. However, the relevance of capsid pressure to viral infection has not been demonstrated. In this work, we show that the internal DNA pressure of tens of atmospheres inside a herpesvirus capsid powers ejection of the viral genome into a host cell nucleus. To our knowledge, this provides the first demonstration of a pressure-dependent mechanism of viral genome penetration into a host nucleus, leading to infection of eukaryotic cells. Introduction Recent studies have found that many families of viruses have highly stressed packaged genomes, exerting tens of atmospheres of pressure, inside their viral protein shell, termed the capsid [e.g. bacteriophages (Evilevitch et al., 2003), archaeoviruses (Hanhijärvi et al., 2013) and eukaryotic viruses (Bauer et al., 2013), infecting all three domains of life]. The pressure results from tight confinement of the negatively charged double-stranded (ds) viral DNA or dsRNA inside the capsid (Tzlil et al., 2003;Kindt et al., 2001;Purohit et al., 2005). Our recent measurement of 20 atmospheres of DNA pressure in a Herpes simplex type 1 (HSV-1) capsid (Bauer et al., 2013) was the first demonstration of a pressurized genome state in a eukaryotic virus. This high internal capsid pressure is generated by an ATP-driven packaging motor located at a unique capsid vertex, shown to be the strongest molecular motor known (McElwee et al., 2018;Smith et al., 2001). Structural features of packaging motor components are shared by bacterial and archaeal dsDNA viruses and eukaryotic herpesviruses (Krupovic and Bamford, 2011). This strongly suggests that once DNA is packaged with high force into a capsid, the reverse process of pressure-driven genome release is one of the central mechanisms of viral replication. A previous attempt to demonstrate this mechanism analyzed the velocity of DNA ejection from phage l into an E. coli cell to determine whether ejection dynamics correlates with a decrease in intracapsid DNA pressure (Van Valen et al., 2012). However, due to large cell-to-cell variability in the ejection rates, the results were difficult to interpret. [We recently found, using isothermal titration calorimetry, similar timescale variability in phage l ejection dynamics, ranging from a few seconds to minutes; this variability is caused by the metastable state of the tightly packaged genome resulting from DNA-DNA electrostatic sliding friction, which can delay or stall the ejection process despite high pressure in the capsid. However, this interstrand friction was significantly reduced by a transition in intracapsid DNA structure induced by optimum environmental conditions favorable for infection, leading to essentially instant DNA release (Evilevitch, 2018).] Thus, the role that high DNA pressure in phage capsids might play in viral genome delivery into a bacterial cell remained unclear. Furthermore, the experimental evidence placing the discovery of intracapsid genome pressure (Bauer et al., 2013) in the context of eukaryotic viral infection was lacking. Here, we conducted a stringent test, showing that DNA pressure in HSV-1 capsids powers ejection of the viral genome into a cell nucleus. 
This provides, to our knowledge, the first demonstration of a pressure-dependent mechanism leading to infection of eukaryotic cells (where the term 'infection' denotes the introduction of viral nucleic acid into a host cell by a virus [Flint, 2004]). Herpesviridae are a leading cause of human viral disease, second only to influenza and cold viruses (Pellett and Roizmann, 2007;Roizmann et al., 2007;Sandri-Goldin, 2006). The herpesviridae family includes a diverse set of viruses, nine of which are human pathogens (Davison et al., 2009). Herpesvirus infections are life-long with latency periods between recurrent reactivations, making treatment difficult (Davison et al., 2009;Pai and Weinberger, 2017). Herpesvirus infections frequently reactivate to result in recurrent acute oral and genital lesions, encephalitis , shingles (Coen, 2006), birth defects and transplant failures, as well as oncogenic transformation (Rickinson and Kieff, 2007;Ganem, 2007). Herpesviruses consist of a double-stranded (ds) DNA genome packaged within an icosahedral capsid that is surrounded by an unstructured protein layer, the tegument, and a lipid envelope. Figure 1 illustrates the HSV-1 infection process as observed by ultrathin-sectioning transmission electron microscopy (TEM). After binding at the outer membrane (Figure 1a), viruses enter the cell cytoplasm and are transported toward the nucleus (Figure 1b). The viral capsid ejects its genome upon docking to a nuclear pore complex (NPC), which forms a passageway for molecular traffic into the nucleus (Figure 1c) (Sodeik et al., 1997). To investigate the specific event of herpes DNA injection into a cell nucleus, we designed an assay built on previous experiments showing that purified HSV-1 capsids bind to NPCs on isolated nuclei and eject their DNA into nuclei in the presence of cytosol supplemented with an ATP-regeneration system (Ojala et al., 2000). This reconstituted nuclei system allows us to determine if viral DNA is ejected into the nucleus when the capsid pressure is 'turned off' by addition of an external osmolyte. We had shown previously that DNA ejection from isolated HSV-1 capsids into solution can be suppressed by creating an osmotic pressure in the host solution, which matches the pressure of the packaged DNA (Bauer et al., 2013). This effectively eliminates genome pressure in the capsid (the mechanism of osmotic suppression is explained below). In this work we show that viral genome ejection through the NPCs into a cell nucleus can be completely suppressed in the presence of the biologically inert osmolyte polyethylene glycol (PEG). The reconstituted nuclei system accurately reproduces capsids-nuclei binding and nuclear transport of the herpes genome into living cells (Ojala et al., 2000;Adam et al., 1990;Au et al., 2016;Cassany and Gerace, 2008). It provides the benefit of isolating the effect of eliminated capsid pressure on the single step of viral DNA ejection, while avoiding interference from hyperosmotic conditions on other processes occurring within the cell during viral replication. To provide evidence that intracapsid DNA pressure is responsible for DNA release from a herpesvirus capsid into a cell nucleus, it is essential to show that when the capsid pressure is 'turned off' with addition of an external osmolyte, herpes capsids bound to NPCs do not eject DNA into a nucleus, while the ejection is completed successfully without osmolyte addition (see illustration in Figure 1d). 
While the term 'infection' usually refers to both viral genome transport into the cell and subsequent replication of the virus, the primary infection by several types of herpesviruses (including HSV-1) is latent (i.e. the herpes genome is translocated into the host nucleus, without subsequent genome replication [Steiner, 1996]). Thus, the osmotic suppression assay, combined with the reconstituted nucleus system in this work, present a platform for analysis of a pressure-dependent mechanism of herpesvirus infection focused on the viral genome translocation step. This paper is divided into three sections. In Section 1, we determine the critical PEG concentration at which the DNA pressure in HSV-1 capsids is 'turned off'. In Section 2, we validate that the presence of PEG does not affect the integrity and functionality of reconstituted nuclei, as well as the binding of HSV-1 capsids to nuclei. In Section 3, we designed a pull-down assay with real-time PCR (qPCR) quantification of the amount of DNA injected into a nucleus when the capsid pressure is 'on' and 'off.' Suppression of DNA injection also is visualized by ultrathin-sectioning electron microscopy (EM). Results and discussion All herpesviruses have strongly confined dsDNA inside the capsid, which is released into a cell nucleus upon the capsid docking to the NPCs at the nuclear envelope (Heming et al., 2017). We used HSV-1, which is a prototypical, experimental model system to study herpesvirus replication due to the ease of growing and purifying large quantities of viral C-capsids (DNA-filled capsids without a lipid envelope and tegument proteins) (Méndez-Á lvarez, 2000). Figure 1. Ultrathin-sectioning EM visualization of the HSV-1 infection process showing viral DNA ejection from HSV-1 capsid into a host nucleus. Ultrathin Epon sections of Vero cells infected with HSV-1 at an MOI of 300 PFU/cell. Artificially colored electron micrographs of HSV-1 at the cell membrane (A), in transport to the nucleus (B), and bound at a nuclear pore complex (NPC) embedded within the nuclear envelope (C). The dsDNA genome appears as an electron-dense region within the capsid, which is visible in (A) and (B), but absent in (C) due to DNA ejection upon NPC binding. Scale bar, 50 nm. Adapted for clarity from our earlier publication (Bauer et al., 2013). (D) Illustration of the osmotic suppression experiment. DNA ejection from a virus capsid into a reconstituted host nucleus is completed successfully without osmolyte addition. However, viral DNA ejection is fully suppressed, when the capsid pressure is 'turned off' with an external osmotic pressure, created by PEG, that matches the pressure of the packaged DNA in the capsid. DOI: https://doi.org/10.7554/eLife.47212.002 'Turning off' DNA pressure inside a viral capsid with external osmotic pressure Below we provide a description of the mechanism through which DNA pressure inside the capsid is 'turned off' with osmolyte addition. Viral capsids are permeable to water and small ions (Trus et al., 1996;Heymann et al., 2003). In an aqueous buffer solution (without osmolyte addition), due to the high DNA concentration inside the capsid, water is drawn into the fixed volume through the capsid pores, as a result of the entropic drive to maximize mixing, and a large osmotic pressure is developed inside the capsid to equalize the chemical potential of water inside and outside of the capsid. 
This pressure, due to the compressed water inside the rigid capsid volume, can also be described as a DNA repulsion and bending pressure withstood by the capsid walls. Addition of an osmolyte to the solution surrounding the capsid, where the osmolyte is larger than the capsid pores, creates an osmotic gradient. Water will be drawn out of the DNA volume to dilute the osmolyte molecules outside the capsid. This reduces the water density and the osmotic pressure inside the capsid. Once the osmolyte concentration reaches c*, the pressure inside the capsid is brought down to one atm (atmospheric pressure). For this special value of c*, the water-exchange equilibrium corresponds to zero osmotic pressure difference between the inside and outside of the capsid and a net force of zero on the interior of the capsid's rigid walls confining the DNA; the ejection force is balanced by the osmotic force, Feject = Fosm. Thus, the osmotic pressure associated with the osmolyte concentration c* is equal to the osmotic pressure exerted by the confined DNA. As a consequence, even if the DNA were allowed the opportunity to 'escape' from its confinement (when the virus capsid is opened), it would not, because there is no driving force for this process. For any value of external osmotic pressure lower than that provided by c*, there is a pressure difference and hence a net outward force on the confining capsid walls, because an insufficient amount of water has been drawn out of the DNA solution to lower its osmotic pressure to one atm. This explains our previously designed experiment of osmotic suppression of viral DNA ejection, in which we measured the pressure in HSV-1 capsids (Bauer et al., 2013).

First, without nuclei present and using our osmotic suppression assay (Evilevitch et al., 2003; Bauer et al., 2013), we used solutions containing PEG to determine the critical concentration, c*PEG, that matches the DNA pressure in an HSV-1 capsid and thus turns it off. To create an osmotic pressure gradient between the interior and exterior of the capsid, we used PEG with molecular weight MW ≈ 8 kDa, which does not permeate the capsid since the HSV-1 capsid pore diameter is ~20 Å, corresponding to a ~4 kDa MW cutoff (Trus et al., 1996; Heymann et al., 2003). DNA ejection from the capsid was triggered by mild trypsin treatment. Trypsin cleaves the portal 'plug' proteins UL6 and UL25 while the rest of the HSV-1 capsid remains intact (Bauer et al., 2013; Newcomb et al., 2007). The length of DNA remaining in the capsid as a function of increasing PEG 8 kDa concentration was determined with pulsed-field gel electrophoresis (PFGE) combined with a DNase protection assay, as described in Bauer et al. (2013). Figure 2 shows that a progressively smaller fraction of DNA was ejected from HSV-1 capsids with increasing external osmotic pressure, and that DNA ejection was completely suppressed at c*PEG ≈ 30% w/w PEG (PFGE data are shown in Figure 2-figure supplement 1). This corresponds to ~18 atm of external osmotic pressure, equal to the DNA pressure in the capsid at the buffer conditions required for capsid binding to isolated nuclei, set by the capsid binding buffer CBB (see Materials and methods Section). It should be noted that ionic conditions in the surrounding buffer also affect the DNA pressure in the capsid, through cations permeating the capsid and screening the repulsive DNA-DNA interactions. We used this critical PEG concentration (c*PEG) to turn off the capsid pressure when HSV-1 C-capsids were incubated with reconstituted nuclei.
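The correspondence between 30% w/w PEG 8 kDa and ~18 atm can be checked against the empirical PEG 8000 osmotic pressure relation quoted in the Materials and methods (from Evilevitch et al., 2003). A minimal sketch assuming that relation is given below; the intermediate concentrations are illustrative.

```python
# Minimal sketch of the empirical PEG 8000 osmotic pressure relation quoted in the
# Materials and methods: P(atm) = -1.29*G^2*T + 140*G^2 + 4*G, with G = w/(100 - w)
# and T in degrees Celsius. The relation is taken as given in the text.

def peg8000_osmotic_pressure_atm(w_percent, temperature_c=37.0):
    g = w_percent / (100.0 - w_percent)
    return -1.29 * g**2 * temperature_c + 140.0 * g**2 + 4.0 * g

for w in (10, 20, 30):
    print(f"{w}% w/w PEG 8 kDa at 37 C -> {peg8000_osmotic_pressure_atm(w):.1f} atm")
# ~1.6, ~6.8 and ~18.7 atm, respectively; 30% w/w reproduces the ~18 atm used to
# match (and thus 'turn off') the intracapsid DNA pressure.
```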
In the next Section, we demonstrate specific capsid binding to NPCs at the nuclear membrane and confirm that the nuclear integrity as well as capsid-nuclei binding are not affected by the addition of 30% w/w PEG 8 kDa.

Figure 2. Percentage of viral genome ejected from HSV-1 capsids as a function of the external osmotic pressure. DNA ejection from the capsids in vitro is triggered by mild trypsin treatment, which cleaves the portal protein (UL6) without degrading the major capsid protein (VP5) or causing morphological damage to capsids (Bauer et al., 2013). The figure shows that DNA ejection is progressively suppressed with increasing PEG concentration at 37˚C with PEG 8 kDa. DNA ejection is completely suppressed at 18 atm external osmotic pressure, which matches and therefore 'turns off' the DNA pressure in the capsid. PEG is added to CBB buffer (capsid binding buffer: 20 mM HEPES-KOH at pH 7.3, 80 mM K-acetate, 2 mM DTT, 1 mM EGTA, 2 mM Mg-acetate, 1 mM PMSF, and 1X CLAP cocktail), required for capsid binding to nuclei. (PEG concentration was converted to osmotic pressure using the relation in Evilevitch et al., 2003.) Vertical error bars represent the standard error of the gel band intensity profile (see Materials and methods Section). Horizontal error bars, representing the standard error in weighted PEG concentration, are negligibly small. The dashed line is drawn to guide the eye. DOI: https://doi.org/10.7554/eLife.47212.003 The following figure supplement is available for figure 2:

Reconstituted capsid-nuclei system

Purified HSV-1 C-capsids were incubated with nuclei isolated from rat liver cells, supplemented with cytosol and an ATP-regeneration system. Cytosol contains importin-β, which is required for efficient HSV-1 capsid binding to NPCs (Ojala et al., 2000; Anderson et al., 2014). The ATP-regeneration system is not required for capsid binding to NPCs, but it is required for opening the capsid's portal leading to DNA ejection (Anderson et al., 2014). The ATP-regeneration system contains ATP and GTP (as well as other components [Ojala et al., 2000]), both of which are required for maintenance of the Ran-GTP/GDP gradient across the nuclear membrane (Cole and Hammell, 1998). While the mechanisms of the interactions that mediate capsid docking to the NPC and its opening are not clear, importin-β binds to cargo proteins (Cautain et al., 2015; Macara, 2001), while the Ran-GTP cycle regulates the importin-cargo association (Cautain et al., 2015; Macara, 2001). A nuclear localization signal (NLS) and other motifs on viral capsid proteins are involved in this importin binding interaction (Cole and Hammell, 1998; Flatt and Greber, 2015). Since herpes DNA does not interact with import protein factors, these protein factors are likely involved in the binding and opening of the capsid portal vertex, which triggers genome ejection, but they do not provide the driving force for the actual translocation of the viral DNA across the NPC channel (which is driven by DNA capsid pressure, as we demonstrate below). Figure 3 shows individual GFP-labeled HSV-1 C-capsids (green, strain K26GFP, an HSV-1 strain expressing GFP-tagged VP26 protein) bound to NPCs on isolated DAPI-stained cell nuclei (blue), imaged with Super-Resolution Structured Illumination Microscopy (SR-SIM) (Sekine et al., 2017). SR-SIM provides resolutions down to 120 nm, allowing visualization of individual capsids attached to the nucleus (HSV-1 C-capsid diameter ≈ 125 nm).
[We found that tegument-free C-capsids were able to bind efficiently to NPCs and eject DNA into a nucleus. This finding is consistent with a recent study demonstrating that untegumented C-capsids and viral capsids exposing inner tegument proteins on their surface had a similar degree of binding to NPCs (Ojala et al., 2000;Anderson et al., 2014).] In parallel, using confocal fluorescent microscopy (FM), we confirmed that DNA-filled C-capsids bind specifically to the NPCs (as opposed to random binding to the nuclear membrane). As a control, we used WGA (wheat germ agglutinin) which blocks the NPC [WGA associates with the glycoproteins within the NPC (Ojala et al., 2000;Finlay et al., 1987) and competes with capsid binding], and prevents capsid binding, demonstrating capsid-NPC binding specificity, see Figure 3B. Figure 3B also shows that capsid-NPC binding is not inhibited by the addition of 30% w/w PEG 8 kDa. Finan et al. (2011) demonstrates that nuclear transport through the NPCs is not negatively affected by hyperosmotic conditions corresponding to those used in our study (~20 atm). Specifically, the authors reported that, under hyperosmotic stress, the nuclear size decreased while nuclear lacunarity increased, indicating expansion in the pores and channels interdigitating the chromatin. As a result, the rate of nucleocytoplasmic transport increased but only due to the change in nucleus geometry, providing a shorter effective diffusion distance. This sensitivity to hyperosmotic conditions concerned both passive and active transport across the NPCs. At the same time, the authors found that diffusivity within the nucleus was insensitive to the osmotic environment. In agreement with these studies (Finan et al., 2011), we observed that, under hyperosmotic conditions (~18 atm at 30% w/w PEG 8 kDa), the nuclei slightly shrunk (Figure 3-figure supplement 1A). However, the sub-nuclear structure of heterochromatin DNA was essentially unchanged upon addition of PEG, as visualized by a DAPI stain of nuclear DNA (Figure 3-figure supplement 1A, second row). We also confirmed that the integrity of the nuclei was not affected by the addition of 30% w/w PEG 8 kDa by showing that fluorescently labeled 70 kDa dextran is excluded from the nuclei interior with nuclei remaining intact and structured, see Figure 3-figure supplement 1B. Finally, we showed that the full transport functionality of NPCs is maintained in the reconstituted nuclei system at an osmotic pressure of~18 atm generated by PEG. This was verified with a fluorescently labeled NLS (data not shown) (Miyamoto et al., 2002). Purified GST-NLS-EGFP recombinant protein, which contains the NLS of the simian virus 40 T antigen fused with glutathione S-transferase (GST) and EGFP fluorescent protein, was used. Purified rat liver nuclei were incubated with cytosolic extracts (as a source of soluble import factors) supplemented with an ATP-regeneration system and a purified GST-NLS-EGFP recombinant protein at~18 atm external osmotic pressure generated by PEG. This protein was used as a positive import substrate. GST-NLS-EGFP was fully transported into the nucleus through the NPC by an active mechanism, which was detected by fluorescence microscopy (Miyamoto et al., 2002;Tsuji et al., 2007;Vázquez-Iglesias et al., 2009) (see details in the Materials and methods Section). Together, these findings show that the reconstituted capsid-nuclei system provides a robust assay for investigation of viral DNA ejection into a cell nucleus under hyperosmotic conditions. 
Figure 3. Imaging of the reconstituted capsid-nuclei system confirms specific capsid binding to the NPCs at the nuclear membrane with and without PEG 8 kDa present. (A) Representative super-resolution SIM image showing GFP-HSV-1 C-capsids (green) bound to isolated reconstituted rat liver nuclei (blue DAPI stain). A histogram of a capsid cross-section profile for a capsid GFP signal along the white line shows that individual C-capsids are resolved (HSV-1 C-capsid diameter ≈ 125 nm). (B) Confocal fluorescence microscopy images show that binding of GFP-HSV-1 C-capsids (green) to DAPI-stained isolated nuclei (blue), in the presence of cytosol (no ATP-regeneration system was added since it is not required for capsid binding [Ojala et al., 2000]), is not inhibited by the addition of 30% w/w PEG 8 kDa. The addition of wheat germ agglutinin (WGA) prevents most of the capsid binding to nuclei, which demonstrates that capsids bind specifically to NPCs as opposed to binding anywhere on the nuclear membrane [WGA associates with the glycoproteins within the NPC (Ojala et al., 2000; Finlay et al., 1987) and competes with capsid binding].

Quantification of intranuclear DNA release from HSV-1 capsids with genome pressure on and off

Once the integrity and functionality of the reconstituted nuclei system at hyperosmotic conditions were verified, we conducted a stringent test demonstrating the role of intracapsid pressure for HSV-1 genome transport into a host nucleus. This was analyzed using a pull-down assay which allows quantification of the amount of DNA injected from HSV-1 capsids into cell nuclei when the capsid pressure is 'on' or 'off', modulated by osmolyte addition. First, the pull-down assay was used to demonstrate DNA ejection from capsids into nuclei without PEG present (when the capsid is pressurized). Purified HSV-1 C-capsids were incubated with reconstituted nuclei in CBB buffer at 37˚C for 40 min, supplemented with cytosol and an ATP-regeneration system. After incubation, capsids bound to nuclei were pelleted and separated from the extranuclear solution by low-speed centrifugation, as illustrated in Figure 4. The supernatant with the extranuclear solution contains unbound capsids and free viral DNA from broken capsids. The pellet of nuclei with bound capsids was then resuspended in a surfactant-containing buffer to break the nuclear membranes and release into solution the bound capsids and the nucleoplasm contents with injected viral DNA. In the pull-down assay (Figure 4), we used an anti-HSV-1/2 ICP5/UL19 antibody attached to Protein A beads to immunoprecipitate capsids present in the resuspended nuclear pellet. Analogously, immunoprecipitation was used to separate the unbound viral capsids from the extranuclear solution (the supernatant in Figure 4). During all separation and purification steps, the samples were kept at 4˚C, which prevents DNA ejection from the capsids after the initial 40 min capsid-nuclei incubation was completed (Newcomb et al., 2007). In order to extract and quantify the amount of DNA retained in the capsids, protease K was added to digest the capsid shell. As illustrated in Figure 4, by combining this pull-down assay with repeated low-speed centrifugation-separation steps, we successfully separated four fractions of HSV-1 DNA originating from: (a) DNA extracted from capsids that failed to bind to NPCs, (b) free DNA in the extranuclear solution from broken capsids, (c) DNA retained inside the capsids that were bound to nuclei but did not eject DNA, and (d) DNA ejected from capsids into the nucleoplasm, see Figures 4 and 5A.
Viral DNA in each fraction was further purified using phenol-chloroform extraction (see details in the Materials and methods Section). Combined, these four DNA fractions constitute the total viral DNA load in the capsid-nuclei sample. This pull-down assay allows accurate quantification of the amount of HSV-1 DNA released from capsids bound to NPCs into the nucleoplasm (fraction d) (excluding viral DNA from unbound capsids), relative to the total DNA amount in capsids bound to nuclei, which have either ejected or retained their genome (fractions c+d). The amounts of DNA extracted from each fraction (a, b, c, and d) were quantified by qPCR using specific HSV-1 primers for the VP16/UL48 and ICP0 genes. Viral gene copies were compared to PCR amplification of HSV-1 DNA with a known copy number (see Materials and methods Section). Histograms in Figure 5B show the DNA copy number for each of the four viral DNA-containing fractions. Figure 5C shows the total DNA copy number from fractions a, b, c, and d. Figure 5D shows the fraction of DNA ejected from nuclei-bound capsids (fraction d/fractions c+d). After nuclei incubation with HSV-1 C-capsids at 37˚C for 40 min without osmolyte addition, Figure 5D shows that ~98% of all nuclei-bound capsids ejected their DNA into nuclei. All qPCR data for DNA copy numbers in each fraction obtained with the pull-down assay and shown in Figure 5 are also summarized in a table in Supplementary file 1. Separately, using the fluorescently labeled 70 kDa dextran exclusion assay described in Section two above, we confirmed that viral DNA injection into nuclei did not affect the integrity of the nuclei (Figure 4-figure supplement 1). The efficiency of the pull-down assay was assessed by the DNase protection method, which showed that 94-99% of the capsids in a given sample fraction are immunoprecipitated (see method description and data in the Materials and methods Section and table in Supplementary file 1). The fact that DNA copy numbers determined with the VP16 primer are generally higher than those determined with the ICP0 primer is related to the difference in qPCR amplification efficiency due to differences in primer-gene interactions. This demonstration, showing that essentially all of the HSV-1 capsids bound to NPCs eject their DNA into reconstituted host nuclei, sets the stage for a definitive test of the hypothesis of a pressure-driven mechanism of intranuclear viral genome release.

Figure 4. Schematic of the pull-down assay for quantification of the amount of DNA injected from HSV-1 capsids into cell nuclei when the capsid pressure is 'on' or 'off', modulated by PEG addition. HSV-1 C-capsids were incubated with reconstituted rat liver cell nuclei in CBB buffer, with and without 30% w/w PEG 8 kDa (osmolyte is not shown in the sketch). This pull-down assay successfully separates four fractions of HSV-1 DNA originating from: (a) DNA extracted from capsids that failed to bind to NPCs, (b) free DNA in the extranuclear solution from broken capsids, (c) DNA retained inside the capsids that were bound to nuclei but did not eject DNA, and (d) DNA ejected from capsids into the nucleoplasm. Viral DNA in each fraction was further purified using phenol-chloroform extraction prior to qPCR quantification. Note that the anti-HSV-1/2 ICP5 antibody attached to Protein A beads for the immunoprecipitation step has multiple binding sites on the capsid, but only one antibody bound to a capsid is shown for clarity of presentation. Nuclear chromosomal DNA present in fraction c is not shown. DOI: https://doi.org/10.7554/eLife.47212.007 The following figure supplement is available for figure 4:

Figure 5. After nuclei incubation with HSV-1 C-capsids at 37˚C for 40 min without osmolyte addition, ~98% of all nuclei-bound capsids ejected their DNA into nuclei. When the capsid pressure is turned off at 18 atm of external osmotic pressure (generated by addition of 30% w/w PEG 8 kDa), the ejection of DNA from capsids bound to nuclei is completely suppressed (fraction d/fractions c+d ~0.2%). qPCR DNA copy number quantification is based on a standard curve generated by serial dilution of a wild-type HSV-1 DNA with known DNA copy number. ICP0 and VP16 HSV-1 genes were quantified using specific primers. Error bars in B are standard deviations in DNA copy numbers from three independent qPCR reactions repeated at the same conditions. Error bars in C and D are propagated standard deviations. DOI: https://doi.org/10.7554/eLife.47212.009

Isolated nuclei in a cytosol solution supplemented with an ATP-regeneration system were incubated with HSV-1 C-capsids with ~18 atm osmotic pressure in the extranuclear solution (generated by 30% w/w PEG 8 kDa). The pull-down assay described above was used to separate viral DNA fractions a, b, c, and d after incubation for 40 min at 37˚C. qPCR was used to quantify the HSV-1 DNA copy number in each fraction using the VP16/UL48 and ICP0 genes. [Note that phenol-chloroform extraction of each viral DNA fraction prior to qPCR analysis removes PEG from the DNA samples to avoid any interference from PEG during PCR amplification.] Figure 5B and D show that when the capsid pressure is turned off at 18 atm of external osmotic pressure, the ejection of DNA from capsids bound to nuclei is completely suppressed (fraction d/fractions c+d ~0.2%). The positions of the DNA primers were selected to cover most of the HSV-1 genome length and included both S and L regions, corresponding to one copy of VP16 (103,163-104,635 bp) and the two copies of ICP0 [copy 1: 2,113-5,388 bp and copy 2: 120,207-123,482 bp]. DNA ejection from the HSV-1 capsid follows directionality starting at the 151 kb S-end (Newcomb et al., 2009). This primer selection ensured that both complete and partial ejection of the HSV-1 genome (151 kb total length) into the nucleus could be detected. As a control, Figure 5C shows that the total DNA copy number summed over all four fractions (a, b, c, d) separated with the pull-down assay remains the same with and without 30% w/w PEG added to the capsid-nuclei sample. This confirms that no DNA is lost during the DNA fractionation steps due to PEG addition, and therefore the observed reduction in the DNA amount in fraction d (ejected intranuclear DNA) is entirely attributed to the suppression of DNA ejection from nuclei-bound capsids. As described above, fluorescence microscopy imaging in Figure 3B showed that the addition of 30% PEG 8 kDa does not interfere with capsid binding to nuclei. By determining the amounts of DNA injected into nuclei (fraction d) and retained in the capsids bound to nuclei (fraction c), Figure 5B shows that the number of capsids bound to nuclei is in fact slightly increased (fractions c and d combined) with 30% PEG addition (the number of unbound capsids in fraction b has correspondingly decreased).
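To make the readout explicit before turning to the EM visualization: the key quantity reported above is the ejected fraction d/(c+d), computed from the qPCR copy numbers of the four fractions. The sketch below illustrates that bookkeeping with hypothetical copy numbers chosen only to mirror the reported ~98% (no PEG) and ~0.2% (30% PEG) outcomes; the measured values are in Figure 5 and Supplementary file 1.

```python
# Minimal bookkeeping sketch for the pull-down assay readout.
# Copy numbers below are hypothetical placeholders, not the measured values.

def summarize(fractions):
    """fractions: dict with qPCR copy numbers for fractions a, b, c, d."""
    total = sum(fractions.values())
    bound = fractions["c"] + fractions["d"]   # DNA in capsids bound to nuclei
    ejected = fractions["d"] / bound          # fraction d / (c + d)
    return total, ejected

no_peg   = {"a": 1e3, "b": 1e3, "c": 1e4, "d": 4.9e5}    # hypothetical, ~98% ejected
with_peg = {"a": 5e2, "b": 5e2, "c": 5.0e5, "d": 1e3}    # hypothetical, ~0.2% ejected

for label, f in (("no PEG", no_peg), ("30% PEG 8 kDa", with_peg)):
    total, ejected = summarize(f)
    print(f"{label}: total copies = {total:.2e}, ejected fraction d/(c+d) = {ejected:.1%}")
```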
Enhanced capsid binding to NPCs at nuclei can be explained by the crowding effect induced by PEG molecules, which has been observed to enhance macromolecular binding (Minton, 2006). Fraction a in Figure 5B, corresponding to free viral DNA in the extranuclear solution from broken capsids, is also reduced by PEG addition. This is primarily attributed to the decreased number of unbound capsids in the extranuclear solution and also to increased capsid stability induced by PEG. However, it should also be noted that even without PEG, fraction a only constitutes <0.2% of the total viral DNA amount and is at the level of qPCR background noise. For a final visual demonstration of the osmotic suppression of DNA ejection from HSV-1 capsids into reconstituted cell nuclei, we used ultrathin-sectioning EM. As a negative control, capsids were first incubated with isolated nuclei with added cytosol at 4˚C for 40 min without ATP regeneration system. As was previously observed (Ojala et al., 2000), these conditions prevented DNA ejection from viral capsids with~95% of HSV-1 capsids retaining their genomes ( Figure 6). Next, reproducing the optimized capsid-nuclei binding conditions from the pull-down assay above, purified DNA-filled C-capsids were incubated with isolated rat liver nuclei in the presence of cell cytosol supplemented with ATP regeneration system for 40 min at 37˚C. After this incubation, Figure 6 shows EM micrographs of capsids attached to nuclei, where~62% of capsids are empty with fully ejected DNA when no PEG is present (at least 100 nuclei bound capsids were counted for each sample analysis). The failure to eject DNA from the remaining~38% of capsids can be attributed to the capsid damage and failure to attach to the NPCs. Indeed, Figure 6 shows capsids that bind to the nuclear membrane in multilayer clusters, where only the first layer, closest to the nuclear membrane, can dock to the NPCs and eject the DNA. Capsids in the outer layers therefore retain their genomes. By contrast, Figure 6 shows that when capsids were incubated with reconstituted nuclei (in cytosol supplemented with ATP-regeneration system) for 40 min at 37˚C with 30% w/w PEG 8 kDa present, the majority of capsids (~81%) did not eject their genome and retained DNA in the capsid. [In the pulldown assay, the estimated fractions of capsids that ejected DNA without PEG or retained DNA with PEG present, were even higher. This can be attributed to the fact that in the pull-down assay, only capsids that were directly bound to nuclear NPCs were accounted for (fractions c and d), separated with several centrifugation steps and multiple washes of nuclei with bound capsids, unlike in the EM analysis where unbound capsids are also present. Qualitatively, however, the EM data supports all of the pull-down assay observations above.] The NLS transport experiment above (Section 2) and the nucleocytoplasmic transport measurements under hyperosmotic stress reported in Finan et al. (2011) showed that NPCs' transport functionality is not disrupted by 18 atm PEG-generated osmotic pressure. Here, we further demonstrate that the observed suppression of DNA ejection from capsids into nuclei is caused by the osmotic pressure gradient across the capsid wall, which turns the capsid pressure off, as opposed to PEG itself and/or its osmotic pressure effect blocking the NPC channel and interfering with the transport functionality. To show this, we repeated the capsid-nuclei binding experiment above but this time PEG 8 kDa was replaced with PEG 400 Da. 
At 17% w/w, PEG 400 generates 18 atm of osmotic pressure (see https://brocku.ca/researchers/peter_rand/). This osmotic pressure was required for complete suppression of DNA ejection with PEG 8 kDa. However, as mentioned above, the HSV-1 capsid pore size has a MW cutoff of ~4000 Da (Trus et al., 1996; Heymann et al., 2003); therefore, PEG 400 Da permeates the capsid (unlike PEG 8 kDa). Accordingly, even when capsids bound to reconstituted nuclei are incubated at 18 atm osmotic pressure with PEG 400 Da, there will be no osmotic pressure gradient (between the inside and outside of the capsid) needed to cancel the DNA pressure in the capsid. Indeed, when isolated nuclei reconstituted with cytosol and an ATP-regeneration system were incubated with C-capsids for 40 min at 37˚C with 17% w/w PEG 400 Da added to the solution, ultrathin-sectioning EM (Figure 6) showed that, despite 18 atm osmotic pressure surrounding the capsids, ~60% of capsids were empty (ejected their DNA). This is equivalent to the fraction of empty capsids that ejected DNA after incubation of capsids with reconstituted nuclei for 40 min at 37˚C without PEG addition (~62% empty capsids). This observation further validates the assumption that it is the osmotic pressure gradient that suppresses DNA ejection through the NPCs by 'turning off' the capsid pressure, and not interference with NPC transport functionality. Combined, the pull-down assay and the EM data clearly demonstrate that viral DNA ejection into host nuclei from HSV-1 capsids bound to NPCs is completely blocked when the intracapsid genome pressure is turned off through osmolyte addition. This proves that viral DNA translocation from herpesvirus capsids across the nuclear membrane is driven by intracapsid pressure.

Figure 6. Ultrathin-sectioning EM visualization of complete osmotic suppression of DNA ejection from HSV-1 capsids into reconstituted cell nuclei when the capsid pressure is 'turned off' by 18 atm osmotic pressure generated by PEG 8 kDa. Negative control at 4˚C, without added PEG and without ATP-regenerating system, shows that no ejection from nuclei-bound C-capsids occurs. Positive control at 37˚C shows complete DNA ejection from C-capsids bound to isolated cell nuclei supplemented with cytosol and ATP-regenerating system. EM images show that capsids can bind to the nuclear membrane as individual capsids or in multilayer clusters. Consequentially, only capsids in the first layer that are bound to the NPCs are able to eject their DNA. EM shows that the addition of 30% PEG 8 kDa to the reconstituted capsid-nuclei system inhibits DNA ejection from HSV-1 C-capsids into host nuclei through the NPC. In all samples, capsids and nuclei were incubated for 40 min. The following figure supplement is available for figure 6:
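The permeation argument above (PEG 400 enters the capsid, PEG 8 kDa does not) can be rationalized by comparing estimated coil sizes with the ~20 Å pore. The sketch below uses a commonly quoted radius-of-gyration scaling for PEG in water, Rg ≈ 0.0215·Mw^0.583 nm, which is an assumed literature relation and not one given in the text.

```python
# Minimal sketch: compare estimated PEG coil sizes with the ~20 A (2 nm) capsid pore.
# The Rg scaling prefactor/exponent are assumed literature values for PEG in water.
PORE_DIAMETER_NM = 2.0   # ~20 Angstrom pore diameter quoted in the text

def peg_rg_nm(molecular_weight):
    """Approximate radius of gyration (nm) of a PEG coil of given Mw (g/mol)."""
    return 0.0215 * molecular_weight ** 0.583

for mw in (400, 8000):
    rg = peg_rg_nm(mw)
    fits = 2 * rg < PORE_DIAMETER_NM   # crude criterion: coil diameter vs pore diameter
    print(f"PEG {mw}: Rg ~ {rg:.2f} nm -> {'permeates' if fits else 'excluded'}")
# PEG 400: Rg ~ 0.7 nm (coil ~1.4 nm) -> permeates; PEG 8000: Rg ~ 4 nm -> excluded,
# consistent with the ~4 kDa MW cutoff cited for the HSV-1 capsid.
```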
Despite previous measurements of intracapsid DNA pressure in several types of viruses (Evilevitch et al., 2003;Bauer et al., 2013), the role that capsid pressure plays for intranuclear viral genome delivery has not been demonstrated until now. The aim of this work was to demonstrate that capsid pressure is critical for initiation of DNA ejection from a herpesvirus capsid into a host nucleus. However, other factors may contribute to complete internalization of the viral genome into the nucleus once the first portion of herpes DNA is released through the NPC channel by the capsid pressure. As shown in Figure 2, capsid pressure is rapidly reduced with the increasing fraction of ejected DNA. At osmotic pressures of 3-4 atm, equivalent to that of the cellular cytoplasm surrounding the capsid (Jeembaeva et al., 2008), only~50% of DNA is ejected. This leaves the question as to how the rest of the genome is released. [However, not all macromolecules generating the osmotic pressure in the cell are large enough not to penetrate the capsid, which, as discussed above, is required for osmotic suppression of DNA ejection. Thus, we anticipate that a DNA fraction larger than 50% is ejected by DNA pressure into the crowded cellular environment]. We had previously found that the crowded cellular environment (which contributes to the osmotic pressure [Parsegian et al., 2000]), combined with the presence of DNA-binding proteins in the cell, lead to instant condensation of the incoming viral DNA. This DNA condensation exerts a significant pulling force on the rest of the DNA, facilitating its complete internalization in the cell (Jeembaeva et al., 2008). Further, a recent theoretical study proposed that DNA ejection could be described by a two-step process, where the first portion of DNA is ejected by DNA-DNA repulsive pressure, followed by a slower process of anomalous diffusion of condensed viral genome in the crowded cytoplasm (Chen et al., 2018). These effects can be investigated in the future using the reconstituted nucleus system. High intracapsid DNA packing density resulting in tens of atmospheres of pressure is a distinctive trait of all nine human herpesviruses. In Figure 6-figure supplement 1, we calculated capsid DNA pressures in several types of herpesviruses using analytical expressions in refs. (Tzlil et al., 2003;Purohit et al., 2003) and EM measured values for inner capsid diameters (Booy et al., 1991;Germi et al., 2012;Yi et al., 2017) to compute DNA-DNA electrostatic repulsive force and bending stress. As a reference pressure, the calculated HSV-1 DNA pressure is in agreement with our measured value of 19 atm (Bauer et al., 2013). The differences in computed pressures (ranging from~14 atm for VZV to~90 atm for EBV) are related to the variation in DNA packing density of these viruses (Booy et al., 1991;Germi et al., 2012;Yi et al., 2017). This strongly suggests that pressure-driven entry of viral DNA into the host nucleus during infection is universal to all herpesviruses. Other types of viruses also involve replication steps dependent on the pressurized state of the intracapsid genome. For instance, during genome packaging, reoviruses replicate ssRNA to dsRNA inside the capsid, which results in genome packaging densities similar to that of herpesviruses (Prasad et al., 1996). 
Such intracapsid replication could be regulated, at least in part, by generation of internal pressure resulting from the increasing genome packaging density as newly synthesized dsRNA continues to fill the internal capsid volume. Another example is HIV, where similar to herpesviruses, HIV capsids dock to NPCs at the nucleus and release transcribed dsDNA through the NPC channel (Rankovic et al., 2017). It was recently shown that the reverse transcription process from ssRNA to dsDNA inside the HIV capsid is associated with increasing internal DNA pressure (Rankovic et al., 2017). By demonstrating the central function of herpes capsid pressure for intranuclear viral DNA entry, combined with assays developed in this work, we provide a platform for analysis of pressure-regulated replication in many viruses that afflict humans and animals. Cells and viruses African green monkey kidney cells (Vero; ATCC CCL-81 from American Type Culture Collection, Rockville, MD) and BHK-21 cells (ATCC CCL-10; from American Type Culture Collection, Rockville, MD) were cultured at 37˚C in 5% CO2 in Dulbecco's modified Eagle's medium (DMEM; Life Technologies) supplemented with 10% fetal bovine serum (FBS; Gibco), 2 mM L-glutamine (Life Technologies), and antibiotics (100 U/ml penicillin and 100 mg/ml streptomycin; Life Technologies). The KOS strain of HSV-1 was used as the wild-type strain. The K26GFP HSV-1 recombinant virus (gift from Dr. Fred Homa, University of Pittsburgh), that carries a GFP tag on the capsid protein VP16 was used in fluorescence studies. All viruses were amplified on Vero cells, and titers were determined on Vero cells by plaque assay. Viral plaque assays were carried out as follows: Viral stocks were serially diluted in DMEM. Aliquots were plated on 6-well trays of Vero cells for 1 hr at 37˚C. The inoculum was then replaced with 40% (v/v) carboxymethylcellulose in DMEM media. HSV-1 plaque assays were incubated for 3-4 days. The monolayers were stained for 1 hr with crystal violet stain (Sigma-Aldrich). After removal of the stain, the trays were rinsed with water and dried, and plaques were counted. HSV-1 C-capsid isolation Purification of HSV-1 capsids was previously described (Bauer et al., 2013). African green monkey kidney cells (Vero) were infected with either HSV-1 KOS strain or a K26GFP HSV-1 recombinant virus at a multiplicity of infection (MOI) of 5 PFU/cell for 20 hr at 37˚C. Cells were scraped into solution and centrifuged at 3500 r.p.m. for 10 min in a JLA-16.250 rotor. The cell pellet was re-suspended in 20 mM Tris buffer (pH 7.5) on ice for 20 min and lysed by addition of 1.25% (v/v) Triton X-100 (Alfa Aesar) for 10 min on ice. Lysed cells were centrifuged at 2000 rpm for 10 min and the nuclei pellets were re-suspended with 1x protease inhibitor cocktail (Complete; Roche) added. Nuclei were disrupted by sonication for 30 s followed by treatment with DNase I (Thermo-Fisher) for 30 min at room temperature. Large debris were cleared by brief centrifugation, and the supernatant was spun in a 20-50% (w/w) sucrose gradient in TNE buffer (500 mM NaCl, 10 mM Tris, 1 mM Na 2 EDTA, pH 8.0) at 24,000 rpm in a SW41 rotor for 1 hr. The C-capsid band was isolated by side puncture, diluted in TNE buffer and centrifuged at 23,000 rpm for an additional 1 hr. Capsids were re-suspended in a preferred capsid binding buffer (CBB: 20 mM HEPES-KOH with pH of 7.3, 80 mM K-acetate, 2 mM DTT, 1 mM EGTA, 2 mM Mg-acetate, 1 mM PMSF, and 1X CLAP cocktail). 
Osmotic suppression of DNA ejection and PFGE analysis HSV-1 C-capsids along with varying concentrations of 8 kDa MW polyethylene glycol (PEG) (Fisher) were incubated at 37˚C for 1.5 hr with trypsin and DNase I as previously described (Bauer et al., 2013). The corresponding osmotic pressure (P) as a function of the PEG w/w percentage (w) was determined by the empirical relation (Evilevitch et al., 2003) P(atm) = −1.29G²T + 140G² + 4G, where G = w/(100 − w) and T is the temperature (˚C). Non-ejected DNA was extracted from capsids by addition of 10 mM ethylenediaminetetraacetic acid (EDTA) (Duchefa), 0.5% (w/v) SDS (Sigma), and 50 mg/mL protease K (Amresco), followed by a 1 hr incubation at 65˚C. The length of osmotically suppressed DNA within capsids was determined by pulsed-field gel electrophoresis using a Bio-Rad CHEF II DR at 6 V/cm with initial and final switch times of 4 and 13 s, respectively. Gels were stained with SybrGold and size estimations were performed with UVP VisionWorksLS software using the Midrange molecular weight standard from New England BioLabs as a reference. Rat liver nuclei isolation and cytosol preparation Nuclei from rat liver were isolated following a previously described protocol, with adaptations (Ojala et al., 2000). The intactness of the nuclei was confirmed by light microscopy, EM (electron microscopy) and FM (fluorescence microscopy) by staining the nuclei with DAPI and by their ability to exclude fluorescently tagged (fluorescein isothiocyanate) 70 kDa dextran. The cytosol was separately prepared using BHK-21 cells. Reconstituted capsid-nuclei system An in-vitro viral HSV-1 DNA translocation system was built in which the HSV-1 genome was released into the nucleoplasm in a homogenate solution mimicking the cytoplasmic environment; see details in the previously described protocol in Ojala et al. (2000). In a typical system, rat liver cell nuclei were incubated with C-capsids (HSV-1 or GFP-labeled HSV-1) in a mixture containing: (i) cytosol, (ii) BSA, and (iii) an ATP-regeneration system; see details in Ojala et al. (2000). The system was incubated at 37˚C for 40 min, sufficient for capsid binding to nuclei. For inhibition studies, wheat germ agglutinin (WGA) was pre-incubated with the nuclei prior to addition of C-capsids. NPC transport functionality We verified that NPC transport functionality was not disrupted by 18 atmospheres of osmotic pressure generated by PEG. We performed an in-vitro import assay to evaluate the nuclear import activity of NPCs using the nuclear localization signal (NLS) (Miyamoto et al., 2002). Purified rat liver nuclei were incubated with cytosolic extracts (as a source of soluble import factors) supplemented with an ATP-regeneration system and a purified GST-NLS-EGFP recombinant protein, which contains the nuclear localization signal (NLS) of the simian virus 40 T antigen fused with glutathione S-transferase (GST) and EGFP. This protein was used as a positive import substrate, since it is transported into the nucleus by an active non-diffusion mechanism and can be detected by fluorescence microscopy (Miyamoto et al., 2002; Tsuji et al., 2007; Vázquez-Iglesias et al., 2009).
Overlay of the confocal 488 nm (GFP signal) and 358 nm (DAPI signal) channels shows the localization of viral capsids on the nucleus. Images were captured with a Nikon A1R laser-scanning confocal microscope. For inhibition studies with wheat germ agglutinin (WGA), the nuclei were pre-incubated with 0.5 mg/ml WGA for 20 min on ice before addition of GFP-labeled HSV-1 C-capsids.

Super-resolution structured illumination microscopy (SR-SIM)
After incubation of nuclei with GFP-labeled C-capsids, the complete binding mixture was loaded onto chamber slides (Mab-Tek) and the samples were immediately imaged for GFP and DAPI using 405 nm and 488 nm excitation wavelengths with a Zeiss Elyra S1 microscope and a 64X oil-immersion lens. The images were captured on an sCMOS PCO Edge camera. The images were processed using the Structured Illumination module of the Zeiss ZEN software (ver. 2011) to obtain super-resolved images of GFP-capsids bound to nuclei. The spatial resolution of the instrument is 120 nm. To generate 3D reconstructions, image stacks (1 µm) were acquired in Frame Fast mode with a z-step of 110 nm and 120 raw images per plane. Raw data were then computationally reconstructed using the ZEN software to obtain a super-resolution 3D image stack. The Fiji-ImageJ software was used to generate the histogram of the cross-section profile for the GFP-labeled C-capsid signal.

Electron microscopy (EM)
After binding of capsids to nuclei, the samples were washed with CBB buffer. The supernatant was then removed and replaced with fixative (2.5% EM-grade glutaraldehyde and 2.0% EM-grade formaldehyde in 0.1 M sodium cacodylate buffer, pH 7.4) for 3 hr at 4˚C. The fixative was then removed and replaced with 1% osmium tetroxide in buffer for 90 min. Each sample was then subjected to a 10 min buffer rinse, after which it was placed in 1% aqueous uranyl acetate and left overnight. The next day, each sample was dehydrated using a graded ethanol series and propylene oxide. The nuclear pellets were embedded in Epon prior to cutting. Ultrathin Epon sections on grids were stained with 1% aqueous uranyl acetate and lead citrate (Reynolds, 1963). After the grids dried, areas of interest were imaged at 120 kV, spot size 3, using a Tietz 2k × 2k camera mounted on a Philips/FEI (now Thermo Fisher) CM200 transmission electron microscope.

Capsid pull-down assay
After capsid-nuclei incubation, as described above, the system was centrifuged at 3,000 rpm to spin down the nuclei with associated capsids, and the supernatant was collected separately as the extranuclear solution. The nuclear pellet was washed extensively in CBB buffer at 4˚C to remove excess osmolytes (all steps were carried out at 4˚C to minimize DNA ejection after the incubation stage). The pellet was then re-suspended and incubated for 20 min in 1x reticulocyte standard buffer (RSB: 10 mM Tris, pH 7.5, 10 mM KCl, 1.5 mM MgCl2, 0.5% NP-40 substitute) to lyse the nuclear membrane. Both the extranuclear supernatant solution and the lysed nuclear pellet were then incubated with 5 µL of an anti-HSV1/2 ICP5/UL19 antibody overnight at 4˚C. The next day, 50 µL of 50% Protein A bead slurry (Sigma-Aldrich) was added to each sample to capture the viral capsid-antibody complexes. Protein A bead complexes were then centrifuged (1500 rpm, 5 min), and the supernatants were collected (fractions b and d in Figure 2).
In parallel, the pelleted beads were re-suspended in proteinase K (Amresco) solution to digest the capsids and let the viral DNA diffuse into the solution (fractions a and c). Then, DNA from each sample was recovered by phenol-chloroform extraction, precipitated with ethanol, and re-suspended in DNase-free ultrapure water. This in vitro assay divides the HSV-1 DNA into four fractions: (a) DNA extracted from capsids that failed to bind to NPCs, (b) free DNA in the extranuclear solution from broken capsids, (c) DNA retained inside capsids that were bound to nuclei but did not eject DNA, and (d) DNA ejected from capsids into the nucleoplasm. Extracted viral DNA from fractions a, b, c, and d was quantified by qPCR using custom TaqMan assays. The viral genes VP16 and ICP0 were quantified with specific primers (a gift from the Bernard Roizman lab). The assays were performed using a StepOnePlus system (Applied Biosystems) and were analyzed with software provided by the supplier. WT HSV-1 DNA with a known viral copy number was used to generate a standard curve and to calculate the viral gene copy number of the unknown samples.
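Two quantitative steps in these methods lend themselves to a short computational sketch: the empirical PEG osmotic-pressure relation used in the osmotic-suppression assay, and copy-number estimation from the qPCR standard curve. The Python sketch below is illustrative only; the standard-curve fit assumes a generic log-linear Ct-versus-log10(copies) relationship (the instrument software routine is not described here), and the example Ct and dilution values are hypothetical.

```python
import numpy as np

def peg_pressure_atm(w_percent, temp_c=37.0):
    """Osmotic pressure (atm) of PEG 8000 at w/w percentage `w_percent`,
    using the empirical relation quoted above (Evilevitch et al., 2003):
    P = -1.29*G^2*T + 140*G^2 + 4*G, with G = w/(100 - w), T in deg C."""
    g = w_percent / (100.0 - w_percent)
    return -1.29 * g**2 * temp_c + 140.0 * g**2 + 4.0 * g

def copies_from_ct(ct_values, std_ct, std_log10_copies):
    """Estimate copy numbers from Ct values via a log-linear standard curve
    fitted to a serially diluted standard of known copy number."""
    slope, intercept = np.polyfit(std_ct, std_log10_copies, 1)
    return 10 ** (slope * np.asarray(ct_values) + intercept)

if __name__ == "__main__":
    # Roughly 30% w/w PEG 8000 at 37 C reproduces the ~18 atm cited above.
    for w in (10, 20, 30):
        print(f"{w}% PEG 8000 -> {peg_pressure_atm(w):.1f} atm")
    # Hypothetical standard curve: 10-fold dilutions from 1e7 down to 1e3 copies.
    std_ct = np.array([14.1, 17.5, 20.9, 24.3, 27.7])
    std_log10 = np.array([7.0, 6.0, 5.0, 4.0, 3.0])
    print(copies_from_ct([19.0, 25.5], std_ct, std_log10))
```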
The Incomplete Right to Freedom of Movement

We live now in the midst of a massive global crisis of mobility. An ever-growing population of refugees finds itself displaced from the legitimate jurisdiction of any territorial state. In the face of this pressing emergency, influential voices argue that international human rights law should be placed "at the center" of international efforts to meet this challenge. But today's calamity is set against the backdrop of a universal human rights regime that is not only thin but, more importantly, incomplete. When it comes to cross-border mobility, human rights law ensures that states allow individuals to leave their state, but alas does not require that any other state let them enter and remain. Such entry and residence rights are required only for a country's own nationals (however nationality is defined). And so, many refugees who have exercised their human right to exit come up against a functional block to mobility: they have no place to stop moving. Some of them may nonetheless find a state willing to take them in. In that case, they may enjoy meaningful protection, but this protection exists only by virtue of a state's domestic policies and has little to do with international human rights.

Refugees in exile are concerned about continuity (the right to remain) after a political change in the state and express a collective sentiment. Refugees in flight are concerned about exiting a state (the right to withdraw) and often, though not always, reflect an individual sentiment. With this distinction between refugees in exile and refugees in flight in mind, let me now turn to the freedom of movement right. The right embodies two functions: exit and entry. For this right to have practical meaning, both functions must be in effect. Entry, moreover, can be thin (a right to cross a border) or robust (a right to both border crossing and status regularization post entry). Under human rights law, the exit function is always in effect. It is universal and unlimited; anyone can leave any country. This is confirmed by both the Universal Declaration of Human Rights (UDHR) and the International Covenant on Civil and Political Rights (ICCPR). 3 As for the entry function, it is limited to three main situations. 4

(i) Entry under the right of return. Human rights law provides any individual with a right of return, or entry, to "his own country." 5 This entry comes from a recognition of "the special relationship of a person to … [her or his] country." 6 The UN Human Rights Committee (UNHRC) defines the scope of "own country" as protecting nationals in a formal sense, i.e., citizens, and also nationals in an informal sense, i.e., individuals "who, because of … [their] special ties to or claims in relation to a given country, cannot be considered to be a mere alien." 7 While both formal nationals and informal nationals qualify for protection, there are differences in the ways in which they acquire the status of "own country" and also in the rights that they accrue from this status. According to the UNHRC, the protection of formal nationals does not depend on physical-territorial presence in their "own country"; they bear the right of return even if they never lived in the state prior to exercising entry. 8 Protection here is robust, regulating the two aspects of return: the actual entry into the state (a mobility right), as well as status in the state after entry (a continuity right). The protection of informal nationals, in contrast, is a function of an ongoing personal-territorial continuity in the country.
According to the UNHRC, the determination of this continuity invites consideration of such matters as "long standing residence, close personal and family ties and intentions to remain, as well as to the absence of such ties elsewhere." 9 This protection is thin and attends only to the right permitting an informal national to remain, or not to be expelled (a continuity right). This protection is the opposite of mobility, leaving the exit and entry functions irrelevant for the operation of the right. So while formal nationals bear a return right (a mobility right), informal nationals do not. Instead they bear what I call the right of domicile: the right to remain in the place where they live. This type of entry is narrowly constrained. The return function is limited only to formal nationals, and the domicile right is constricted by location: an informal national can bring a claim only from within the state to remain in the state. Protection, moreover, is minimal and negative (nonremoval). 10 Additional positive rights are provided at the discretion of the host state.

(ii) Entry under refugee status. Some instruments guarantee vulnerable individuals a right of entry into host states as refugees: the UDHR, for example, pledges to uphold the "right to seek and to enjoy in other countries asylum from persecution," 11 and the Convention on Asylum guarantees, under certain circumstances, entry for political asylum. 12 However, the vast majority of international treaties, including, most importantly, the Refugee Convention and the Convention Against Torture, prohibit a state from returning individuals only when there is a "well-founded fear that they will be persecuted" 13 or they would be "subjected to torture." 14 Alas, while these treaties create an obligation for the state not to send back a refugee ("non-refoulement"), they do not provide an individual with a right to enter the state in order to seek protection in the first place. As a result, the opportunity to enter as a refugee is narrowly restricted. It is constrained territorially: an individual can be considered for entry only after she has established a territorial presence, either inside the state (including, at least under soft law, at the border of the state), or under the effective control of the state or its agents, even if beyond national borders. At the same time, this form of entry is also imprecise: the legal meaning of territorial presence changes over time. 15

(iii) Entry under a deliberate decision of the host country. Public international law grants any sovereign the right to exclude whomever it wishes and to grant nationality on the terms it wishes within and in relation to its own domestic legal system. 16 This state-based definition of nationality is supported by human rights law, 17 and is considered valid so long as it is not challenged by another state. 18 This type of entry, much like the earlier two, is also restricted; it is circumscribed by the will of the state. An individual has no ability to enter and/or remain without the state's consent. The result is that while a right of return obliges states not to expel nationals, either formal or informal, it does not require them to allow the reentry of informal nationals if they were not originally expelled.

Notes: 10 See, e.g., Human Rights Committee, supra note 6, at paras. 19-21. 11 UDHR, supra note 3, art. 14. 12 Convention on Asylum, Feb. 20, 1928, O.A.S.T.S. No. 34. 13 Convention Relating to the Status of Refugees art. 33(1), Apr. 22, 1951, 189 UNTS 137.
14 Convention Against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment art. 3(1), GA Res. 39/46 (Dec. 10, 1984). There has been a great deal of commentary on the gap between a "right not to be returned" and a "right to enter to see if you ought to be returned." But, as already mentioned, I leave this body of law to the side and focus here only on human rights law, which does purport to grant rights to individuals and is interpreted and enforced by a range of international courts and institutions.

This offers meaningful protection for refugees in exile who are formal nationals and who ask to return to their "own country," regardless of the reasons why they are not there. It also covers first-generation informal nationals who were expelled from the state after they had established their physical continuity. But second-generation refugees in exile who seek to return to their "own country," from which their parents were expelled, but of which they are not formal nationals, find themselves without protection. Human rights law guarantees them what I call a domicile right, a right to remain where they are, but makes this right a function of ongoing personal-territorial continuity. Without the ability to show physical continuity, they are unable to return to the place of original dispossession. At the same time, for refugees in flight, the return right can amount to a death sentence. They do not seek to remain (a continuity right), but rather to flee their "own country" (a mobility right). The right to free mobility mandates that states permit them to exit, but it does not require other states to allow them to enter, unless they are nationals. For some, the right to exit, by itself, is indeed meaningful. One example is individuals with extreme vulnerability, such as Syrian refugees, or Edward Snowden. Their primary concern is the ability to leave a particular state. By leaving, they would have the possibility, at the very least, to file an entry claim somewhere, and perhaps some state would make a humanitarian exception and let them in. Another example is individuals with high physical capacity, such as young, strong men. 19 For them, exit alone may be sufficient. By exiting, they could have the chance to approach a potential host state or its agents. And, if they succeed in establishing a de facto entry, they would have, at minimum, a transitory entry pending determination of refugee status. But a large percentage of the refugees in flight and refugees in exile who exercise their freedom of movement right to leave their state for whatever reason, but who are unable to reach a host state or its agents, find themselves in limbo. Human rights law guarantees them universal exit. Without a state that consents to accept them, however, they are on a journey without a destination: permanently stuck in transitional locations such as refugee camps or territorial borderlands. This configuration of the right to mobility operates within an international legal system that places the state at the center of corrective legal processes. It guarantees individuals private rights with respect to a state. And, moreover, it allocates protection to those individuals who are either within state territory after they have established territorial presence in the state (responsibility grows out of territoriality), or who have come under the state's effective control (responsibility grows out of contact). And so, individuals must be inside the state or under its control in order to benefit from rights against a state.
This frame leaves without protection those who are stranded between states: those whose state of nationality either is the source of their harm (positive violation) or is unable to remedy their harm (negative violation). They can exit their state, but no state has a corresponding duty to allow them in. To incorporate those who are left outside the human rights regime, a new field of migration law ought to address the problem of the entry function, including the two aspects of a thick entry: entrance (a mobility right) and status post entry (a continuity right). One way to generate a robust entry is to codify new law compelling unwilling host states to take in refugees, including both the right to enter and the right to remain. Such an entry right would be universal and would not be subject to a state's preference. Unfortunately, however, our time is one of xenophobia. Strong states, rich with resources, may well refuse to sign on to such a law. And leaders who do might be punished by their electorates. Another way to create an entry right is to override state will by drawing on existing law without creating new rights. There are two possibilities: (1) the definition of informal nationality (or what I propose to call a domicile right) could be expanded, and thus also the right of return into one's "own country"; and (2) the meaning of territoriality could be expanded, and thus also the ability to enter under refugee status. Informal nationality (a domicile right in my terminology) and territory, however, are arbitrary legal categories. Informal nationality is arbitrary in a temporal sense; it privileges those who are physically present on the territory at a particular moment, and it punishes those who are not, regardless of circumstances. 20 Territorial presence, in turn, is arbitrary from a policy perspective; it rewards individuals who can physically approach the state or its agents, and punishes states based on geographic factors. Thus, widening the existing legal definition of "own country" and "territory" does not resolve the concern that mobility under human rights law is, in fact, determined by the situation of the individual, and is a function of particular circumstances that fall outside the purview of universal law. So long as we live in a state system, with territory limiting the responsibility of states such that states have no duties to engage in unilateral humanitarian intervention, a better way to effect change is by putting pressure on states to voluntarily permit more entry. Alas, such entry rights would not be universal, dependent as they would be on the political will of each state. An example is the current Syrian refugee situation and the assertion by politicians in the United States of America and Hungary that they will take in only Christian refugees. Here we are. Thick entry (including border crossing and status regularization) across the borders of some, but not all, states becomes possible for certain subcategories of refugees, but not for others. And the remaining refugees who are unable to secure entry? Whether they chose or were forced to leave their state, human rights law guarantees them only a point of departure but no point of arrival. Marooned on land and adrift at sea, they carry suitcases full of meaningless human rights.
A multidimensional learning curve analysis of totally laparoscopic ileostomy reversal using a single surgeon's experience

Purpose: Recently, totally laparoscopic ileostomy reversal (TLAP) has received increasing attention and exhibited promising short-term outcomes. The aim of this study was to detail the learning process of the TLAP technique.
Methods: Based on our initial experience with TLAP from 2018, a total of 65 TLAP cases were enrolled. Demographics and perioperative parameters were assessed using cumulative sum (CUSUM), moving average, and risk-adjusted CUSUM (RA-CUSUM) analyses.
Results: The overall mean operative time (OT) was 94 min, the median postoperative hospitalization period was 4 days, and there was an estimated 10.77% incidence rate of perioperative complications. Three unique phases of the learning curve were derived from CUSUM analysis; the mean OT was 108.5 min in phase I (cases 1-24), 92 min in phase II (cases 25-39), and 80 min in phase III (cases 40-65). There was no significant difference in perioperative complications between these 3 phases. Similarly, moving average analysis indicated that the operation time decreased significantly after the 20th case and reached a steady state after the 36th case. Furthermore, complication-based CUSUM and RA-CUSUM analyses indicated an acceptable range of complication rates during the whole learning period.
Conclusion: Our data demonstrated 3 distinct phases of the learning curve of TLAP. For an experienced surgeon, surgical competence in TLAP can be achieved at around 25 cases with satisfactory short-term outcomes.

Introduction
A temporary loop ileostomy is frequently performed to avoid anastomotic leakage and protect the downstream anastomoses in colorectal cancer surgery (1). Subsequent reversal of the stoma may nevertheless result in complications, even for senior surgeons. According to the literature, reversal of an ileostomy carries an estimated 17.3% morbidity rate and 0.4% mortality rate (2-4). With the evolution of minimally invasive techniques, laparoscopic-assisted reversal has been developed to reduce postoperative complications such as bowel obstruction and incisional hernia (5, 6). In addition, some initial explorations of laparoscopic reversal with intracorporeal anastomosis have been conducted (7-9). However, intracorporeal intestinal reconstruction is relatively difficult and requires a learning process for inexperienced surgeons. The learning curve can provide not only a visual representation of surgeon performance but also a quantitative estimation of surgical competency (10). Previous studies have analyzed the learning curve of intracorporeal anastomosis, suggesting that a plateau is reached after approximately 20-30 procedures (11-13). However, to our knowledge, the learning process of totally laparoscopic ileostomy reversal (TLAP) has not been previously investigated. In addition, most studies used operative time as the sole parameter to determine the learning curve and analyzed data using only one kind of statistical method, thus insufficiently representing the completion of surgical skill acquisition. In light of this, the present study was conducted to analyze the learning curve of TLAP based on operative time and perioperative complications using cumulative sum (CUSUM), moving average, and risk-adjusted CUSUM (RA-CUSUM) analyses, aiming to show the safety and feasibility of this new technique.
Methods

Patients
In the second half of 2018, our group introduced the TLAP technique for ileostomy reversal. Since then, TLAP has been performed in >10 procedures/year by the same surgical team. From October 2018 to October 2021, a total of 65 consecutive patients were retrospectively enrolled. All patients had a history of laparoscopic colorectal cancer surgery and underwent TLAP at the National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College. In this study, any patient suited to undergo classic open reversal was regarded as a potential candidate for TLAP. Eligible patients were those ≥18 years of age who received TLAP ≥3 months after the former colorectal surgery or 8 weeks after postoperative chemotherapy/radiotherapy. Moreover, study participants also underwent both colonoscopy and enhanced computed tomography imaging of the thoracic, abdominal, and pelvic cavities to confirm acceptable anastomotic stoma healing and to exclude tumor recurrence or metastasis. Patients who underwent TLAP combined with additional procedures, such as additional intestinal resection, anastomotic reconstruction, or parastomal hernia/abdominal wall repair, and those with other surgical contraindications to traditional open ileostomy reversal, were excluded from subsequent analysis. This study was conducted in accordance with the Declaration of Helsinki, and written informed consent was obtained before TLAP surgery. This research was also approved by the Ethical Committee of the Cancer Hospital (Institute), Chinese Academy of Medical Sciences, Beijing, People's Republic of China.

Surgical team
The surgical team included a single experienced surgeon and 2 constant assistants throughout the study period. The participating senior surgeon had trained as an oncology surgeon for 15 years, with extensive laparoscopic colorectal surgery experience (>100 procedures/year since 2015). In addition, TLAP required a first assistant surgeon and a laparoscope holder. Both assistants were surgical residents who had completed 3 years of standardized residency training after 2017. The primary duties of the first assistant surgeon included retraction and suction when necessary. All team members understood the details of the TLAP technique, supported one another with effective methods, and carried their experience forward into future performance.

Surgical procedures
After general anesthesia, patients were placed in a supine lithotomy position and the previous stoma was closed in a one-layer continuous Lembert pattern, wherein the needle exited the tissue within 1 mm of the stoma edge and engaged the submucosa with each bite. A 4-port technique was employed for trocar placement (Figure 1). First, a 10-mm trocar was inserted at the umbilicus as an observation port. Then, a 12-mm supraumbilical port was placed at the left anterior axillary line as the principal operating port. Next, a 5-mm operating port located at the left lower-quadrant McBurney's point was used for auxiliary operating. Another 5-mm port for the assistant was located in the right anterior axillary line 10 cm superior to the stoma. After the establishment of pneumoperitoneum, lysis of adhesions around the stoma and dissection of the mesenteries were performed using an ultrasonic scalpel. A 60-mm endoscopic linear stapler (Johnson ECR60B) was subsequently used to transect the proximal and distal ileum for digestive tract reconstruction.
First, a pair of 1-cm incisions was made at the anti-mesenteric side of the proximal and distal intestine, respectively, and a side-to-side anastomosis was created with a 60-mm endoscopic linear cutter stapler (Johnson ECR60B). Then, the common opening of both intestines was closed by another linear cutter stapler, and the mesenteric defect was routinely closed with absorbable sutures. Following examination of the anastomotic blood supply, the stoma remnant was removed and the incision was sutured conventionally.

The learning curve analysis
In this study, operation time (OT) was regarded as a reflection of surgical competency. To explore the association between surgeon experience and OT, CUSUM and moving average analyses were performed. CUSUM analysis is an analytic technique employed in surgical research for the quantitative estimation and visualization of the learning curve (14). Briefly, the CUSUM is the accumulated total of the differences between the OT of each data point and the mean OT of all data points. In the CUSUM analysis, all 65 cases were ordered chronologically from the earliest to the latest date of TLAP. For the first patient, the CUSUM OT was the difference between the OT for the first patient and the mean OT for all cases. Similarly, the CUSUM OT for the second patient was the difference between the second OT and the mean OT of all cases, plus the CUSUM OT for the first patient (15). This recursive process continued until the 65th patient, and the results of the CUSUM OT analysis were then plotted graphically, revealing the trend of deviation from the mean OT. Of note, the inflection points, indicated by each set of ≥3 consecutive negative values, were used to divide patients into separate phases. A linear regression model was then fitted to match the CUSUM curve. In addition, we also used a moving average of 5 cases to eliminate individual variation and highlight the long-term trend of OT (16). Specifically, the moving average at case i was the mean value from case i to case i + 4 (17). To analyze the learning curve from multiple dimensions, we designated each case as a success or a failure. Conventionally, surgical failure is defined as conversion to open surgery. However, since there was no instance of conversion to open surgery, surgical failure was defined as any intraoperative or postoperative complication, according to a previous report (18). Similar to the OT analysis, the CUSUM of complications was displayed graphically and showed the cumulative total of increments with each surgical failure and decrements with each surgical success (19). Univariate and multivariate logistic analyses were then developed based on baseline variables (gender, age, body mass index, etc.) to evaluate potential confounders of surgical failure. Furthermore, RA-CUSUM analysis was applied to depict the success or failure of the TLAP technique. First, baseline variables with P < 0.20 in the univariate association were considered for inclusion, and the predicted probability of failure for each case was calculated according to the regression coefficients of the variables in the final multivariate regression model (20). Then, for each failure case, the RA-CUSUM value was incremented by (1 − predicted probability of failure). In contrast, for each success, the value was decreased by the predicted probability of failure (21). Patients were again grouped into distinct phases according to the inflection points.
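As a concrete illustration of the CUSUM, moving-average, and RA-CUSUM calculations defined above, the following Python sketch applies them to a hypothetical series of operative times and outcomes. This is not the authors' SPSS/Excel workflow, and the simulated values are illustrative only.

```python
import numpy as np

def cusum_ot(ot):
    """CUSUM of operative time: running sum of (OT_i - mean OT)."""
    ot = np.asarray(ot, dtype=float)
    return np.cumsum(ot - ot.mean())

def moving_average(ot, window=5):
    """Moving average of `window` cases: the value at case i is the mean OT of cases i..i+window-1."""
    ot = np.asarray(ot, dtype=float)
    return np.array([ot[i:i + window].mean() for i in range(len(ot) - window + 1)])

def ra_cusum(failure, p_failure):
    """Risk-adjusted CUSUM: +(1 - p) for each failure and -p for each success,
    where p is the model-predicted probability of failure for that case."""
    failure = np.asarray(failure, dtype=float)
    p = np.asarray(p_failure, dtype=float)
    increments = np.where(failure == 1, 1.0 - p, -p)
    return np.cumsum(increments)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical series of 65 operative times that shorten with experience.
    ot = 110 - 20 * np.log1p(np.arange(65)) / np.log(65) + rng.normal(0, 8, 65)
    failures = rng.random(65) < 0.11           # ~11% complication rate
    p_hat = np.full(65, failures.mean())       # stand-in for model-predicted risk
    print(cusum_ot(ot)[:5])
    print(moving_average(ot)[:5])
    print(ra_cusum(failures, p_hat)[:5])
```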
Data collection and outcomes definition
The demographic and baseline variables included gender, age, body mass index (BMI), American Society of Anesthesiologists (ASA) score, duration after the previous laparoscopic colorectal cancer surgery, and comorbidities. Perioperative results included operation time, estimated blood loss, length of incision, time to ground activities and flatus passage, postoperative hospitalization, and perioperative complications. Estimated blood loss was the sum of the blood in the suction canister (the total volume after subtracting the amount of irrigation fluid) and the increase in weight of the swabs during the operation (1 ml of blood weighs approximately 1 g), according to previous randomized controlled trials (22, 23). The times to ground activities and flatus passage were reported by the patient. Postoperative hospitalization was defined as the number of nights from TLAP to discharge. Perioperative complications were recorded within 30 days of surgery.

Statistical analysis
The SPSS version 26.0 software program (SPSS Inc., Chicago, IL, USA) and Microsoft Office Excel were used for statistical analysis and data visualization, respectively. For quantitative variables with a normal distribution, as determined by the Shapiro-Wilk test, data are presented as mean ± standard deviation (SD) values and compared by one-way analysis of variance followed by Bonferroni's test. In contrast, data with a skewed distribution are presented using median and interquartile range (IQR) values and compared by the Kruskal-Wallis test. For categorical variables, data are presented using numbers and percentages, and the chi-squared test or Fisher's exact test was applied to reveal group discrepancies. Polynomial regression models were selected according to the best-fitted model. A P value < 0.05 was considered to indicate a significant difference in all tests.

Patient demographics and clinical profile
From 2018 to 2021, a total of 65 consecutive patients who underwent TLAP were enrolled in this study. The overall perioperative data are presented in Table 1. There were 43 male and 22 female patients treated with this innovative technique, with a median age of 63 years. The mean BMI of the TLAP patients was 23.46 kg/m2. Most patients (87.69%) were classified as ASA class I or II cases, and the previous laparoscopic colorectal cancer surgery had occurred a median of 9 months earlier. Among comorbidities, hypertension was most common (21.54%), followed by diabetes mellitus, affecting 13.85% of enrolled patients. Other comorbidities included hyperthyroidism, coronary disease, and renal insufficiency. Intraoperative and postoperative data are also presented in Table 1. We found that the median operation time was 94 min, which was adopted as a crucial indicator for the subsequent learning curve analyses. The estimated blood loss ranged from 10 to 100 ml, with a median of 30 ml. The median incision length was 6 cm. Of note, in this study, the time to first ground activities (median = 1), the time to first flatus passage (median = 2), and the number of postoperative hospitalization days (median = 4) were used as reflections of postoperative recovery. Any complication during or after surgery was also recorded to assess the safety of the TLAP technique. In our series, a total of 7 patients suffered intraoperative/postoperative complications (trocar site bleeding, n = 1; pyrexia, n = 4; incisional infection, n = 2).
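The estimated-blood-loss definition given in the data-collection subsection above amounts to simple arithmetic: suction canister volume minus irrigation fluid, plus the swab weight gain, taking roughly 1 g of weight gain as 1 ml of blood. A minimal Python sketch, with hypothetical inputs:

```python
def estimated_blood_loss_ml(suction_total_ml, irrigation_ml, swab_weight_gain_g):
    """Estimated blood loss as defined above: (suction volume - irrigation fluid)
    plus swab weight gain in grams, assuming 1 g of gain ~ 1 ml of blood."""
    return (suction_total_ml - irrigation_ml) + swab_weight_gain_g

# Hypothetical case: 180 ml in the canister, 160 ml of irrigation, swabs 10 g heavier.
print(estimated_blood_loss_ml(180, 160, 10))  # -> 30 ml, in line with the median reported above
```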
Learning curve analysis based on operation time
The raw operation time was plotted according to chronological case order and exhibited a tendency of steady reduction, with a best-fitted logarithmic model [y = −21.44 ln(x) + 165.13, R² = 0.7425, P < 0.001], indicating a complex non-linear relationship between the OT and surgeon experience (Figure 2). CUSUM analysis was subsequently applied, and the mean operation time (96 min) was used as a critical reference. As shown in Figure 3A, the CUSUM of OT was best modeled as a third-order polynomial (y = 0.005x³ − 0.907x² + 34.673x + 99.112, R² = 0.9604, P < 0.001), which showed a gradual upward slope until the 24th case, followed by small fluctuations between the 25th and 39th cases and a subsequent steep downward trend after the 39th case. Similarly, after fitting a logarithmic model of y = −18.56 ln(x) + 153.43 (R² = 0.8806, P < 0.001) to the moving average curve, we determined that the OT decreased significantly after the 20th case and reached a steady state after the 36th case (Figure 3B). Based on the learning curve of the CUSUM of OT, we were able to separate the learning curve into the following 3 phases: phase I (an initial phase, including cases 1-24), phase II (a transition phase, including cases 25-39), and phase III (the proficient phase, including cases 40-65). Best-fitted lines for each phase were also obtained (Figure 4). The positive slope in phase I indicated a longer OT during the initial learning phase (R² = 0.8026, P < 0.001). However, a flat slope in phase II (R² = 0.3246, P = 0.027) revealed an increased degree of surgical competency in the transition phase. More importantly, the negative slope seen in phase III (R² = 0.9879, P < 0.001) confirmed proficiency in the TLAP technique.

Interphase comparisons between the learning phases
The interphase comparisons of patient characteristics are presented in Table 2. With regard to demographics, no statistical difference was found in gender (P = 0.742), age (P = 0.863), BMI (P = 0.067), ASA score (P = 0.891), duration of ileostomy (P = 0.239), postoperative adjuvant therapy history (P = 0.535), or comorbidities (P = 0.187) among the initial, transition, and proficiency phases. Most notably, our results revealed that the OT was significantly different between the phases (108.5 min vs. 92 min vs. 80 min, P < 0.001). Phase I had the longest OT; meanwhile, a significant difference in OT was also revealed between phases II and III (P = 0.016). We additionally observed a significant downtrend in the number of postoperative hospitalization days (4 vs. 5 vs. 3 days, P < 0.001). In contrast, there was no significant difference in estimated blood loss (P = 0.988), surgical incision length (P = 0.798), time to first ground activities (P = 0.143), or time to first flatus passage (P = 0.663). Rates of intraoperative and postoperative complications between the 3 phases were not significantly different either (3 vs. 2 vs. 2, P = 0.778).

Learning curve analysis based on complications
To analyze the relationship between surgical experience and surgical success, CUSUM analysis was also performed. The CUSUM result based on intraoperative/postoperative complications showed a small fluctuation around the zero line, without a significant change, until approximately the 29th case, followed by an upward slope until the 32nd case and a subsequent downward slope thereafter (Figure 5A).
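The model fitting reported above (a logarithmic model for raw OT and the moving average, and a third-order polynomial for the CUSUM curve whose slope changes mark phase boundaries) can be reproduced with standard least-squares tools. The Python sketch below illustrates the approach on simulated data; the resulting coefficients are illustrative and will not match the paper's values.

```python
import numpy as np

def fit_log_model(cases, ot):
    """Fit OT = a*ln(case) + b, the form reported above for the raw OT and
    moving-average curves; returns (a, b, R^2)."""
    x = np.log(cases)
    a, b = np.polyfit(x, ot, 1)
    resid = ot - (a * x + b)
    r2 = 1 - np.sum(resid**2) / np.sum((ot - ot.mean())**2)
    return a, b, r2

def fit_cusum_cubic(cases, cusum):
    """Fit a third-order polynomial to the CUSUM curve and return its
    coefficients plus the cases where the fitted slope changes sign,
    which serve as candidate phase boundaries."""
    coeffs = np.polyfit(cases, cusum, 3)
    slope = np.polyder(np.poly1d(coeffs))(cases)
    boundaries = cases[np.where(np.diff(np.sign(slope)) != 0)[0] + 1]
    return coeffs, boundaries

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cases = np.arange(1, 66)
    ot = 110 - 10 * np.log(cases) + rng.normal(0, 6, 65)  # hypothetical operative times
    print(fit_log_model(cases, ot))
    print(fit_cusum_cubic(cases, np.cumsum(ot - ot.mean())))
```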
To adjust for the potential confounding effects of baseline covariables, univariable and multivariable logistic regression analyses were conducted (Table 3). The univariable analyses indicated that gender (OR = 0.038, CI: 0.068-1.667; P = 0.183) and BMI (OR = 1.567, CI: 1.070-2.296; P = 0.021) were associated with a potentially increased risk of surgical failure, whereas age (P = 0.230), ASA score (P = 0.999), duration of ileostomy (P = 0.891), postoperative adjuvant therapy history (P = 0.722), and comorbidities (P = 0.540) were not significantly correlated with it. In the multivariable analysis, BMI was the only factor independently associated with perioperative complications (OR = 1.538, CI: 1.044-2.265; P = 0.029). A further RA-CUSUM analysis was conducted based on the predicted odds ratios (Figure 5B); similar to the results of the CUSUM analysis, a small fluctuation was observed until the 22nd case and a downward tendency occurred after the 32nd case, suggesting an acceptable range with regard to perioperative complications during the learning period.

Discussion
To date, some studies have reported on the initial exploration of totally laparoscopic ileostomy reversal, but no available data have shown the learning process of this technique. To the best of our knowledge, this is the first study to analyze the learning curve of TLAP. Using CUSUM and moving average analyses, we assessed the learning curve based on operation time and divided it into 3 distinct phases. Then, when we compared the perioperative parameters between these phases, we discovered a significant decrease in both the OT and the hospitalization stay length once TLAP performance had become proficient. Furthermore, CUSUM and RA-CUSUM analyses illustrated an acceptable incidence of complications during the learning process. These results not only demonstrated a relatively short learning process for TLAP but also revealed its safety and feasibility, providing support for its future application. Based on OT, we divided the learning process of TLAP into an initial phase, a transition phase, and a proficiency phase. According to the CUSUM analysis, 25 cases were required for the initial exploration of TLAP, and another 14 cases were necessary to acquire proficiency. In contrast, 20 and 36 cases, respectively, were required to complete the learning process based on the moving average method. Despite the limited number of studies of TLAP, some have explored the learning process of intracorporeal intestinal anastomosis. In 2007, Torres et al. found that 21 cases were needed to achieve a satisfactory laparoscopic anastomosis time (12). In a recent study, the learning curve of laparoscopic right hemicolectomy with overlap anastomosis decreased gradually and stabilized after 5 cases for experienced surgeons (24). Similar to our results, 18 cases were needed to gain increased competence based on the learning curve of right colectomy with intracorporeal anastomosis (25). Other studies investigating totally laparoscopic gastrectomy have suggested a required learning period of 27-29 cases (11, 13). Although these studies varied in their surgical approach, the key procedure in each was intracorporeal anastomosis and digestive tract reconstruction. These learning curves, together with the results of the present study, suggest that a relatively short learning period is required for TLAP.
The learning curve for surgical complications showed an early peak, followed by a decreasing trend, according to both the CUSUM and RA-CUSUM analyses. In other words, unlike the learning curve for operation time, the intraoperative/postoperative complications remained within an acceptable range from the early study stage onward (26). Admittedly, perioperative complications cannot be completely eliminated; however, their incidence was low across the initial, transition, and proficiency phases, indicating the safety of the learning process and of the TLAP technique itself. Of note, the curve fluctuated until the 32nd case in both the CUSUM and RA-CUSUM analyses, which was attributed to trocar site bleeding during the operation in the 32nd case. According to the literature, ileostomy reversal carries an estimated 17.3% morbidity rate, which encompasses intestinal injury, small bowel obstruction, wound infection, and incisional hernia (27). In contrast, we observed an overall complication rate of 10.77%, and the majority of complications were transient fever and incisional infection. Similarly, our previous study also reported a 10% incidence of postoperative complications associated with TLAP reversal in obese patients (9), whereas the open technique carried an increased incidence of incisional infection (26.5%). In summary, these results indicate an advantage of TLAP in reducing postoperative complications, which may be confirmed by further prospective, randomized multicenter studies. We also identified a significant decrease in hospitalization stay length after the transition phase, suggesting a relationship between surgical experience and postoperative recovery. In addition, cumulative studies have revealed that the laparoscopic technique itself also contributes to a quick recovery. In a randomized controlled trial, the median length of hospital stay was significantly reduced after ileostomy closure with laparoscopy (5). Intracorporeal anastomosis also supported fast recovery of gastrointestinal function in patients undergoing right hemicolectomy (28). Notably, TLAP inevitably requires a surgical team that includes assistant surgeons. Although the auxiliary operators were inexperienced compared to the expert surgeon in this study, other studies have shown that a less-experienced assistant does not negatively affect perioperative outcomes (29, 30). Moreover, the learning curve also partly reflects the tacit team cooperation in TLAP. Therefore, the learning process of the auxiliary surgeons was not presented independently in this study. Admittedly, there are some limitations of this study that must be mentioned. First, this was a retrospective investigation with a small sample size in which baseline data were not fully balanced or randomized. Fortunately, further univariable and multivariable analyses showed no significant correlation between gender and complications. Second, a cost analysis was not performed. However, we believe TLAP does not significantly increase hospitalization expenses, based on a previous report (7). Third, the key step of TLAP, intracorporeal anastomosis, has been applied to patients undergoing right hemicolectomy in our group since 2016. As a result, enriched experience with laparoscopic colorectal surgery might be necessary to complete a safe and feasible TLAP. Lastly, current studies investigating TLAP are limited to small sample sizes, lacking adequate analysis of the learning process.
At our institution, TLAP is not performed by other surgeons either. As a result, comparison of learning curves between different operators or studies might be difficult. More data from larger studies are needed to thoroughly establish the feasibility of TLAP. In conclusion, this study explored the learning process of TLAP from multidimensional perspectives. We not only differentiated 3 learning phases based on CUSUM, moving average, and RA-CUSUM analyses but also found that reductions in operation time and hospitalization stay length, together with acceptable rates of perioperative complications, were associated with mastery of the TLAP technique, providing reliable evidence of its potential for ileostomy reversal.

Figure 4. Lines of best fit for each phase of the CUSUM OT learning curve. (A) Phase I, the initial training phase (y = 15.564x + 178.78, R² = 0.8026). (B) Phase II, the improvement phase (y = −3.4179x + 454.34, R² = 0.3246). (C) Phase III, the mastery phase (y = −17.304x + 442.06, R² = 0.9879).

Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement
The studies involving human participants were reviewed and approved by the Ethics Committee of the Cancer Hospital, Chinese Academy of Medical Sciences. The patients/participants provided their written informed consent to participate in this study.
Social disparities in exposures to bisphenol A and polyfluoroalkyl chemicals: a cross-sectional study within NHANES 2003-2006

Background: Bisphenol A (BPA) and polyfluoroalkyl chemicals (PFCs) are suspected endocrine disrupting compounds known to be ubiquitous in people's bodies. Population disparities in exposure to these chemicals have not been fully characterized.
Methods: We analyzed data from the 2003-2006 National Health and Nutrition Examination Survey. Using multivariable linear regression we examined the association between urinary concentrations of BPA, serum concentrations of four PFCs, and multiple measures of socioeconomic position (SEP): family income, education, occupation, and food security. We also examined associations with race/ethnicity.
Results: All four PFCs were positively associated with family income, whereas BPA was inversely associated with family income. BPA concentrations were higher in people who reported very low food security and received emergency food assistance than in those who did not. This association was particularly strong in children: 6-11 year-olds whose families received emergency food had BPA levels 54% higher (95% CI, 13 to 112%) than children of families who did not. For BPA and PFCs we saw smaller and less consistent associations with education and occupation. Mexican Americans had the lowest concentrations of both types of chemicals of any racial/ethnic group; for PFCs, Mexican Americans not born in the U.S. had much lower levels than those born in the U.S.
Conclusions: People with lower incomes had higher body burdens of BPA; the reverse was true for PFCs. Family income with adjustment for family size was the strongest predictor of chemical concentrations among the different measures of SEP we studied. Income, education, occupation, and food security appear to capture different aspects of SEP that may be related to exposure to BPA and PFCs and are not necessarily interchangeable as measures of SEP in environmental epidemiology studies. Differences by race/ethnicity were independent of SEP.

Background
Identifying populations that are highly exposed to environmental chemicals is important for protecting public health and preventing health inequalities. Identifying differential patterns of exposure in populations can also provide useful information for hypotheses about possible sources of exposure that, especially for many emerging chemicals of concern, are poorly understood. This study investigates differences by measures of socioeconomic position (SEP) and race/ethnicity in body burdens of two types of chemicals, bisphenol A (BPA) and polyfluoroalkyl chemicals (PFCs). Both are suspected endocrine disrupting chemicals (EDCs) and may alter the normal functioning of hormones and other signaling molecules in the body [1]. BPA is a high-volume chemical used industrially to form polycarbonate plastic (PC), and it is present in epoxy resins, including those used as the lining of canned foods [2]. It is an estrogen-like chemical found in some animal studies to disrupt reproductive development, body weight and metabolic homeostasis, and neurodevelopment, and to cause mammary and prostate cancer. Several comprehensive reviews of health outcomes associated with BPA have been published in the last five years [3-7]. PFCs are a class of chemicals used widely in consumer products to impart stain, oil, and water resistance. In particular they are used in food packaging and in carpeting and textile treatments [8].
Laboratory studies have found tumors in certain organs and developmental delays in animals exposed to PFCs [9,10], and recent preliminary research in humans has reported associations with birth weight, cholesterol levels, and fertility [11-13]. Though BPA and PFCs are ubiquitous in people's urine and blood, with U.S. studies detecting them in greater than 90% of people tested [14,15], the specific pathways of human exposure are not well understood. For both chemicals, diet is thought to account for the majority of exposure for most people. In the case of BPA, estimates for adults put the dietary contribution near 100% of total exposure [16,17]; the migration of the chemical from food cans and PC food containers into food may account for most of this, though less-understood exposure routes may also contribute. For PFCs, studies have estimated the dietary contribution as 61% [18], 72% [19], and 91% [20] of total exposure. However, the studies used to develop these estimates are limited in how fully they are able to assess overall human exposure. Recent studies suggest a contribution of indoor air and/or dust to PFC body burdens [21,22]. BPA and PFCs behave very differently once taken into the human body. BPA is rapidly metabolized via glucuronidation, with an estimated urinary elimination half-life in humans of 5.4 hours [23]. A recent study suggests that more accumulation may be occurring than previously assumed, though the half-life is thought to be on the order of days at the most [24]. PFCs, on the other hand, are poorly metabolized, with half-lives of greater than two years in human serum [25,26]. They are thought to bind to proteins in the blood and tissues rather than to lipids, unlike most other persistent organic chemicals [27]. Previous studies have found socioeconomic and racial/ethnic differences in urine and serum levels of BPA and PFCs in a representative sample of the U.S. population. Using data from the 2003-2004 cycle of the National Health and Nutrition Examination Survey (NHANES), Calafat et al. found that urinary BPA concentrations were highest in the lower income group (household incomes less than $20,000) and that Mexican Americans had lower concentrations than Non-Hispanic Blacks and Non-Hispanic Whites [15]. In contrast, PFC serum concentrations were lower in people with less education (those who did not graduate from high school), while Mexican Americans had lower levels than other racial/ethnic groups [14]. Differences by SEP and race/ethnicity were not the focus of these studies, and neither included detailed consideration of factors that may explain the disparities. SEP and race/ethnicity, in and of themselves, are not likely to explain the differential body burdens of these chemicals; rather, they serve as surrogates for activities, behaviors, or circumstances that may actually contribute to differences. SEP has been defined as "structural locations within society that are powerful determinants of the likelihood of health-damaging exposures and the possession of health-enhancing resources" [28]. Figure 1 presents a framework for conceptualizing these relationships: through several pathways, SEP and race/ethnicity may influence behaviors such as diet and use of consumer products, which are sources of exposure to BPA and PFCs. Race/ethnicity is often associated with SEP, and may also be an independent determinant of dietary and other consumer behaviors. There are numerous ways to characterize SEP; the most commonly used measures are income, education, and occupation.
While correlated with one another, each "emphasizes a particular aspect of social stratification" that, in this case, may be more or less relevant to the pathways by which people are exposed to BPA and PFCs [29]. Our study builds on the previous work by Calafat et al. that found opposite associations between measures of SEP and body burdens of BPA and PFCs; one study reported differences by income and the other by education [14,15]. We further investigate these apparently opposite trends by examining relationships between both chemicals and a common set of SEP measures: family income (categorized in four ways), education, occupation, and food security (measured in two ways). Occupation and food security have not been studied before in the general population in relation to both BPA and PFC levels. We also consider the complex relationship between SEP and race/ethnicity, and expand the investigation to an additional NHANES cycle, 2005-2006. This study provides insights into social disparities in exposure to these two types of chemicals, and sheds light on hypothesized sources of exposure.

Study population
We used data from NHANES, an ongoing survey of the civilian non-institutionalized U.S. population conducted by the U.S. Centers for Disease Control and Prevention (CDC). NHANES uses a complex multistage probability sampling design to select participants, and certain racial/ethnic, income, and age groups are oversampled to ensure representativeness [30]. Approximately 5000 participants per year are enrolled, and data are released in two-year cycles. Our study used data from two cycles, 2003-2004 and 2005-2006. Participants came to a mobile examination center for a physical examination and to provide blood and urine samples, and numerous questionnaires were administered by trained interviewers [30]. Random one-third subsamples of participants had their urine and serum analyzed for environmental chemicals by the National Center for Environmental Health. BPA was measured in urine of participants aged six and older, and PFCs in serum of participants aged 12 and older. The subsamples of participants did not overlap for the chemical analyses. NHANES obtained informed consent from all participants.

Biomonitoring measurements
Total BPA concentration was measured in urine, and includes the BPA parent compound and conjugated metabolites [15]. Measurements were made using solid phase extraction coupled online to high performance liquid chromatography and tandem mass spectrometry [15]. PFCs were measured in serum using solid phase extraction coupled to high performance liquid chromatography-turbo ion spray ionization and tandem mass spectrometry [14]. The same laboratory techniques were used in both cycles, though limits of detection (LODs) for certain chemicals varied slightly between years. Twelve PFCs were measured in serum. We examined the four PFCs detected in greater than 98% of participants: perfluorooctane sulfonic acid (PFOS), perfluorooctanoic acid (PFOA), perfluorononanoic acid (PFNA), and perfluorohexane sulfonic acid (PFHxS). Values below the LOD were reported by NHANES as the LOD divided by the square root of two.

Measures of SEP and race/ethnicity
Numerous measures of self-reported SEP were available for participants, assessed through interviews conducted in person by trained interviewers [31]. We used responses from the following questionnaires: demographics, food security, and occupation (2003-2004 only) [32-36].
Participants reported their annual family income in $5000 increments, with a top category of greater than $75,000. If they refused to answer at this level of detail, they were asked whether their income was less or greater than $20,000. We categorized annual family income in two ways: 1) in four groups, $0-19,999, $20-44,999, $45-74,999, and $75,000 and greater, and 2) in two groups, with a $20,000 cut point, a measure often used in NHANES studies because it maximizes sample size. We also considered the poverty-income ratio (PIR), a ratio of the midpoint of the family income category to the official U.S. poverty threshold as determined by the U.S. Census Bureau, adjusted for family size [35]. A PIR of 1 means that family income is equal to the poverty threshold [37]. We used the following categories: less than 1 (i.e., below the poverty threshold), 1-3, and greater than 3. Finally, we looked at family income adjusted by the square root [38] of family size (available only in 2005-2006 data) or household size (for 2003-2004 data), categorized into quartiles. Educational attainment was assessed for those aged 20 and older. We used the following categories: less than high school, high school graduate, some college/associate's degree, and college graduate or above. Data on occupation were available for 2003-2004 only, and for those over age 16. Participants were asked to choose from a list of 41 possible occupational groups for both their current and longest-held job; examples included "teacher," "waiter and waitress," "executive, administrator, manager," and "construction trades" [33]. To categorize occupation, we used an approach that is a hybrid of the U.S. model, which groups jobs by skill, industry, or type (i.e., white collar, service workers, farm workers, blue collar), and the U.K. "work relations" model, which uses 5 categories based on "aspects of work and market situations and of the labor contract" (ranging from managerial/professional to semi-routine/routine) [29]. This hybrid classification system has been employed in previous studies using NHANES data [39]; detail on the categories is available in Additional file 1. In our analysis we used information on longest-held occupation. Food security was measured by NHANES using the U.S. Food Security Survey Module, which assesses whether participants and others in their family had adequate food over the last 12 months [32]. Questions included, "were you ever hungry but didn't eat because you couldn't afford enough food?" and "did your child ever skip meals because there wasn't enough money for food?" In 2005-2006, all households were asked the food security questions regardless of income; in 2003-2004, households with incomes over 4 times the poverty threshold were screened out [32,34]. Responses to the individual food security questions were summed by NHANES into an overall food security status variable, reported as full, marginal, low, or very low. In addition, we looked at whether the participant or a member of their household received emergency food (from a church, food pantry, food bank, or soup kitchen) in the last 12 months. NHANES assessed race/ethnicity through a series of questions [35]. The participant was first asked whether they consider themselves Hispanic/Latino. They were then asked, "What race do you consider yourself to be?" and could select one or more from a list of fifteen options, including "White," "Black/African American," and "Some other race."
Finally, they were asked to choose the one group that best represents their race, with the possible option, "I cannot choose one race." The variable released by NHANES combines these questions and groups people into one of five categories: Mexican American, Other Hispanic, Non-Hispanic Black, Non-Hispanic White, and Other including Multiracial. We also examined whether there were differences among Mexican Americans according to country of birth, since a previous study of polybrominated diphenyl ethers (PBDEs) found important differences in serum concentration by country of origin [40]. Covariates We included a small group of covariates in our analyses a priori based on known associations with urine/blood concentrations of BPA and PFCs: age (in categories: 6-11, 12-19, 20-59, older than 60), gender, and in the case of BPA, urinary creatinine, a measure of urinary dilution (continuous variable, mg/dL of urine). We included creatinine as a term in the model rather than using creatinine-adjusted BPA concentrations; creatinine is known to vary by age, gender, and race/ethnicity [41]. As previous studies have reported changes in BPA and PFC body burdens over time, we also controlled for NHANES cycle [14]. We tested to see whether additional variables were acting as confounders; these included time of exam session and total cholesterol (TC, in PFC models only). Participants over age 12 were randomly assigned to either the morning or afternoon/evening exam sessions; those attending the morning session were asked to fast for 9.5 hours, and the latter two for 6 hours [31]. An examination of urinary BPA and reported fasting time in 2003-2004 NHANES data found a decline in BPA concentrations with reported fasting time [24]. Although participants were randomly assigned to exam session time, it is possible that there could be differences in attendance or fasting adherence. TC has been shown to be associated with PFCs in this data set and is likely associated with SEP as well [13]. Statistical analysis We compared the different measures of SEP by examining frequency tables of education, occupation, and food security by quartiles of adjusted family income. We analyzed associations between chemical concentrations and SEP and race/ethnicity using multivariable linear regression. Both BPA and PFC concentrations were approximately log-normally distributed; while most individuals had detectable concentrations, the great majority of values were on the low end of the distribution. We thus analyzed both as natural log-transformed continuous variables. We first examined associations with SEP measures separately, controlling for race/ethnicity and the previously-mentioned covariates. Because we wanted to compare different SEP measures, the final study population in the income and food security models consisted of participants who had complete data on all income and food security variables. The sample sizes were smaller for the education and occupation analyses due to the more limited population for which these variables were available (Additional file 2). In this subset of participants we also examined associations with income and food security. To determine if certain SEP variables were more important predictors than others, we next included multiple SEP variables in the same model. We studied the relationship between SEP and race/ethnicity by comparing results of models with race/ethnicity alone to those that included SEP measures to assess whether this changed the race/ethnicity results. 
We also considered, separately, interaction by age and gender by including age- and gender-by-SEP terms in the models, and by using stratification. All regression analyses were performed using the SAS 9.1 Proc SURVEYREG procedure, which takes into account possible correlation between the strata and clusters by which NHANES samples the population. As our intent was to investigate these relationships in a defined population, models were adjusted for relevant covariates instead of using NHANES sampling weights. This adjustment is regarded as a good compromise between efficiency and bias [42]. We present effect estimates for levels of SEP variables and racial/ethnic groups, which represent the percent difference in BPA and PFC concentration for each category compared to the reference group, and their corresponding 95% confidence intervals (CIs). Effect estimates were calculated by exponentiating the natural log-transformed regression coefficients (illustrated in the short sketch below). We assessed statistical significance at the alpha = 0.05 level.

Results

Of the total NHANES 2003-2006 sample, 5062 participants had BPA measurements and 4214 had PFC measurements; the difference in numbers is due to the fact that PFCs were not measured in 6-11 year-olds. For income and food security measures of SEP, which were available for all age groups, our final sample size was 4739 for BPA and 3953 for PFCs, after excluding those with missing data for the variables of interest (see Additional file 2). The different income measures we studied had different numbers of participants with missing data: family income categorized as less or greater than $20,000 had the fewest missing participants (3%), and adjusted family income and PIR had the most (5%). The final sample sizes for the education analyses were restricted to those older than 20, and for occupation to those older than 16 and in the 2003-2004 cycle. Table 1 displays unadjusted median concentrations of BPA (creatinine-corrected) and PFCs by covariates, the SEP measures studied, and racial/ethnic groups. Median urinary BPA was highest in children, women, participants in the earlier NHANES cycle, and those with lower incomes. Of the PFCs, PFOS had the highest serum concentrations; median levels were an order of magnitude greater than PFOA, PFHxS, and PFNA. PFCs overall were higher in men than women, and PFOS was highest in the oldest age group and in the earlier NHANES cycle. Differences by income and race/ethnicity were most apparent for PFOS and PFOA, with the highest levels seen in higher income groups and non-Hispanic Whites. The different SEP measures were related to one another in a predictable fashion: of those who graduated from college, 67% were in the top adjusted family income quartile; 43% of participants who never worked were in the bottom quartile; and 53% and 57% of those with very low food security or who received emergency food, respectively, were in the bottom quartile (Additional file 3). However, there was some discordance across SEP variables. For example, almost 30% of participants with less than a high school education were in the top two income quartiles; the occupational categories, particularly the "blue collar, high skill" group, were distributed fairly evenly across income quartiles; and close to 40% of those reporting full food security were in the bottom two quartiles.

Socioeconomic position

In adjusted regression analyses, urinary concentrations of BPA were inversely related to all four measures of income (Table 2).
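As background on how the percent differences reported in the tables follow from the log-scale models, here is a minimal numerical sketch. The coefficient and standard error are illustrative values only, not actual model output; they are chosen to show the arithmetic, not to reproduce a specific result.

```python
import numpy as np

def percent_difference(beta: float, se: float, z: float = 1.96):
    """Convert a coefficient from a regression of ln(concentration) into the
    percent difference relative to the reference group, with a Wald-type 95% CI."""
    point = (np.exp(beta) - 1.0) * 100.0
    lo = (np.exp(beta - z * se) - 1.0) * 100.0
    hi = (np.exp(beta + z * se) - 1.0) * 100.0
    return point, (lo, hi)

# Illustrative numbers: a coefficient of 0.24 on the natural-log scale
# corresponds to roughly a 27% higher concentration than the reference group.
print(percent_difference(0.24, 0.05))
```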
For example, those in the lowest quartile of adjusted family income had BPA concentrations 27% (95% CI, 15 to 40%) higher than those in the highest income quartile. Though the four family income variables revealed similar patterns, the magnitude of the difference was decreased with the two-category variable. We also saw higher concentrations in those with very low food security, and those who received emergency food. Though we did not see an inverse trend with educational attainment, college graduates had the lowest BPA levels. Results for occupation did not reveal a consistent pattern, though the "blue collar, high skill" group (including vehicle mechanics, construction workers, and members of the armed forces) had higher BPA concentrations. In these and all other models, controlling for exam session did not change the observed associations with SEP or race/ethnicity. Results for PFCs revealed an opposite relationship than that for BPA; all four PFCs had strong positive associations with income ( Table 2). For PFOA, those in the lowest quartile of adjusted family income had PFC serum concentrations 21% (95% CI, -26 to -14%) lower than those in the highest quartile. As in the BPA analysis, using the two-category income variable attenuated the association, and income measures adjusted for household size resulted in stronger associations with PFC levels. Those who received emergency food had lower concentrations of PFOS, PFOA, and PFNA, as did those with low food security (although we did not see the same strong association with very low food security). Associations between education and occupation and PFC level were weaker than for income, though PFNA concentrations were lower in those who had never worked. When restricted to the subset with information on education and occupation, relationships for income, food security, and emergency food assistance were slightly stronger than in the population overall (data not shown). Similar to exam session, controlling for TC in these and all other PFC models did not affect results. When multiple SEP measures were included in the same model, adjusted family income remained the predictor of the greatest magnitude and strength for both BPA and PFCs (Table 3). Effect estimates for food security status and use of emergency food decreased when income was added, though, for BPA, regression coefficients remained elevated in the same pattern (but without statistical significance). Modification by age and gender We found some evidence for different effects by age in the results for adjusted family income and food security. Overall, the effect estimates for family income were most consistent in 20-59 year-olds, with a clear trend for BPA and all four PFCs (Additional file 4). For BPA, income was only associated with urinary levels in the younger three age groups; there was no association in the oldest age group. The strong association between BPA concentrations and food security (both very low food security and use of emergency food) was markedly stronger in 6-11 year-olds. Children who received emergency food had BPA levels 54% higher (95% CI, 13 to 112%) than children who did not. This relationship was much smaller in 12-19 and 20-59 year-olds and not evident at all in those over 60. Results were similar in the very low food security group, except that participants over 60 had increased concentrations similar to 6-11 year-olds. 
For PFCs, the inverse associations by income and food security were most apparent in 20-59 year-olds except for PFHxS, where associations were also strong in those over 60. We observed fewer differences by gender (data not shown). Very low food security and receipt of emergency food were more strongly associated with BPA concentrations in women than in men. For PFCs, emergency food was more strongly associated in men than in women, whereas the magnitude of the association for family income was greater in women than in men.

Race/ethnicity and SEP

Tables 4 and 5 examine the relationship between race/ethnicity and SEP. In models unadjusted for a measure of SEP, BPA concentrations were lower in Mexican Americans compared to Non-Hispanic Whites (Table 4). The relationship was stronger in Mexican Americans born in the U.S. than in those born elsewhere (Table 5). The association became even stronger when controlling for adjusted family income, indicating that this difference was not mediated by income. When stratified by quartile of adjusted family income, the decrease in Mexican Americans relative to other groups was strongest in the lowest two income quartiles and not evident in the top quartile (data not shown). BPA concentrations in Non-Hispanic Blacks and Whites were similar. Mexican Americans also had the lowest concentrations of all four PFCs (Table 4). When controlled for income, these differences decreased slightly. Foreign-born Mexican Americans had lower levels of PFCs than those born in the U.S. (Table 5). With PFOA, for example, foreign-born Mexican Americans had serum concentrations that were 40% lower (95% CI, -45 to -35%) than non-Hispanic Whites, whereas the difference in those born in the U.S. was 17% (95% CI, -25 to -8%). This difference by country of origin was less apparent for PFNA. Stratification by adjusted family income revealed that Mexican Americans for the most part had the lowest levels of PFCs across all income quartiles, compared with other ethnicities, with some evidence for slightly stronger decreases in the lowest income quartiles (data not shown). Non-Hispanic Blacks had lower PFOA concentrations compared to Non-Hispanic Whites, but higher PFNA and, to a lesser extent, PFOS levels. These positive associations increased with control for income.

Discussion

Our findings show that people with lower incomes, who may be more likely to suffer from other disparities in health and exposures, have a greater burden of exposure to BPA. The results for children are especially troubling. Children overall had higher urinary BPA concentrations than teenagers or adults, but children whose food security was very low or who received emergency food assistance - in other words, the most vulnerable children - had the highest levels of any demographic group. Their urinary BPA levels were twice as high as adults who did not receive emergency food assistance. Concerns about health effects from BPA exposure are strongest for young children and neonates because they are still undergoing development [3]. Results for BPA by race/ethnicity, adjusting for income, revealed that Non-Hispanic Whites and Blacks had similar urinary levels, and being Mexican American appeared to be highly protective. Findings for PFCs revealed differences by socioeconomic position in the opposite direction. Participants with the highest incomes had the highest serum concentrations. We did not see the same vulnerability in younger age groups as with BPA; associations with income were strongest in adults.
However, NHANES did not measure PFCs in 6-11 year-olds. While there was some variation by race/ethnicity, Non-Hispanic Whites had the highest levels for two of the four PFCs and being Mexican American again appeared to be protective. The possible pathways by which SEP is associated with differential exposures to BPA and PFCs may be elucidated by comparing results for the individual SEP variables. Family income was by far the most consistent and important predictor of concentrations; it had a clear dose-response pattern for all chemicals, and remained the strongest when included in models with other SEP variables simultaneously. Conceptually, income reflects access to material goods; a family's current household income is the most specific measure of their immediate financial resources [28]. Thus, income may affect exposure to BPA and PFCs via types of foods consumed or via other consumer products used (or not used) in the home. Past studies that have examined the effect of income, education, and occupation on diet quality have consistently found that income is the most important and strongest predictor of diet [43,44]. Given that diet is assumed to be a major pathway of exposure to these chemicals, differences in food purchasing patterns by income seems one likely explanation for the observed differences. The literature on specific differences in diet by measures of SEP is large; a review offers this summary of socioeconomic status (SES) and dietary intake: "available evidence suggests that consumption of whole grains, lean meats, fish, low-fat dairy products, and fresh vegetables and fruit was consistently associated with higher SES groups, whereas the consumption of fatty meats, refined grains, and added fats was associated with lower SES groups" [45]. Cost of food is a compelling hypothesis for why this differential exists, as foods with higher energy density are cheaper per amount of energy, but also tend to be nutrient-poor [45]. This is thought to be an important reason why consumption of fresh fruits and vegetables in particular is lower in people with lower incomes. Income is also a strong determinant of where a person lives, and there is a growing body of literature on the lack of access to large supermarkets and ample fresh fruits and vegetables in lower income neighborhoods [46]. This is the first study to examine the relationship between body burdens of BPA and PFCs and two measures of food security as possible proxies for SEP. We conceptualized these variables as representing the intersection of income and dietary behavior, and assumed that those with very low food security, or those who received emergency food, were an especially vulnerable population in terms of accessible dietary options. Particularly in regards to BPA exposure, we hypothesized that they would be more likely to eat canned foods. Recent research indicates that eating canned and packaged foods can contribute to BPA body burdens [47]. We did see associations in the hypothesized direction between BPA and the food security measures; there was a particularly strong signal with very low food security compared to low and marginal food security, and an association of slightly lower magnitude in those who received emergency food. These associations were attenuated when controlling for income, though coefficients remained positive. More striking were the associations between BPA and food security in 6-11 year-olds, which were of the greatest magnitude of any age group. 
This could be due to greater consumption of foods containing BPA, or the fact that children consume more food per body weight than adults. Though use of emergency food was also associated with PFCs, food security status was not as important a predictor for these compounds. We saw fewer and less consistent associations between education and concentrations of BPA and PFCs, particularly PFCs. Education is a long-term indicator of SEP, and embodies the transition in SEP from childhood to adulthood [28,44]. In terms of the specific ways in which education may impact exposure to chemicals, it is thought to represent the ability to access and interpret health-related information [28]. With exposure to BPA and PFCs, however, this type of knowledge may not be useful in reducing exposures, as consumers most often do not know that they are being exposed to these chemicals, nor how exposure is occurring. The recent flurry of media and political action around BPA in baby bottles and PC water bottles may be changing this dynamic for BPA, and may explain the slightly stronger associations we observed with education, but increased attention began only in the last few years and is not relevant for the bulk of the study period [48]. Similar to previous studies, we found some discordance between education and current income [44,49]. For BPA, which has a very short half-life, we would be more interested in current and not long-term income, since the foods and products a person purchased in the very recent past directly contribute to urinary levels. There appeared to be little association between BPA and PFCs and occupation; however, our ability to draw conclusions about these relationships is limited by the fact that we had smaller numbers as data were only available for adults from 2003-2004. We did not see many associations between the compounds and occupation classified into five skill-and work relations-based categories. It seems unlikely that work-related psychosocial stress would affect exposure, though the physical conditions of a workplace could contribute to exposures. Examples include working in an office with new carpeting or furniture that contains PFC precursors [21] or a job in retail that involves handling credit card receipts that contain BPA [50]. The weaker associations observed for education and occupation may also be partly related to the fact that both are measured on the level of the individual, whereas family income and food security are familylevel measures [44]. The latter two variables may be more accurate measures of SEP insofar as family purchasing patterns are concerned. This distinction could be important for food purchasing behavior, as it is not clear who in the family (i.e. the participant or some other family member) makes the food shopping decisions. Our results clearly show differences in BPA and PFC body burdens by measures of SEP that were not explained by race/ethnicity, and vice versa. It is likely that cultural behaviors and patterns are associated with race/ethnicity independent of SEP. The strikingly lower concentrations of both chemicals in Mexican Americans, even after controlling for income, was the most notable result regarding race/ethnicity. This is particularly unexpected for BPA, where Mexican Americans do not fit with the observed pattern of lower income groups having higher urinary concentrations. 
Mexican Americans and Hispanics have been shown to have higher intake of fruits and vegetables compared to Non-Hispanic Blacks and Whites in different population-based surveys, including NHANES 2003-2004, the 2005 California Health Interview Survey [51], and the 2000 National Health Interview Survey [52]. Eating more fresh fruits and vegetables is likely to be associated with eating fewer canned foods, which may explain the lower urinary BPA levels seen in Mexican Americans compared to other groups. In addition, we observed that foreign-born Mexican Americans had markedly lower serum concentrations of PFCs than U.S.-born Mexican Americans, except for PFNA. This is consistent with the fact that PFCs have long half-lives, and exposure from many years past (i.e. when living in Mexico, where exposures may be lower) could impact current serum levels. Similar patterns have been seen for some other persistent lipophilic chemicals [40,53]. For BPA, foreign- and U.S.-born Mexican Americans had similar levels, which makes sense given that BPA has a short half-life, and lower exposures from years past would not matter. A final aspect of SEP and its relationship with race/ethnicity that must be mentioned is wealth. Wealth can be thought of as the "accumulated assets" of an individual or family, usually in the form of savings, real estate, and inherited items, and represents economic security [54]. While there are no direct measures of wealth in NHANES, differences in wealth by race/ethnicity are reported to be much larger than differences by income; for the same income, the amount of wealth for African Americans and Hispanics has been shown to be much lower than for Whites [28]. Thus, adjusting for income alone may underestimate the real effect of SEP [28], and differences by race/ethnicity may suffer from residual confounding due to inability to adjust for wealth. Our findings have various practical implications for environmental epidemiology. It is standard in environmental epidemiology studies to include some measure of SEP as a covariate in models. But it is rare to see a discussion of the rationale for the choice of SEP variable. In many instances, there seems to be an assumption that different measures, particularly income and education, serve as surrogates for the same underlying phenomenon, and that they can be used interchangeably. Our study finds that, for urine and serum concentrations of BPA and PFCs, this is not the case; the SEP measures we studied do not overlap entirely with one another, and had different estimates of effect in our regression models. We conclude that, in the context of this study, income, education, occupation, and food security do not represent the same socioeconomic constructs, but rather seem to capture different aspects of how SEP may be related to exposure to BPA and PFCs. While constraints regarding data availability and the need to maximize sample size will always be an issue, the question of which SEP measure to use is an important methodologic concern, and merits more consideration by researchers in the field. As discussed, family income was the most important SEP predictor in our investigation. We found that adequate gradations must be used, however, to see the full extent of the effect. When modeled as a dichotomous variable with a cut point of $20,000, which is tempting to do in NHANES as there are fewer missing participants, the full effect of SEP was underestimated.
There was also some indication that income measures that adjusted for household size, such as adjusted family income and PIR, were stronger predictors. Regarding education, we found that using a dichotomous variable with a cut point of high school graduation did not fully capture the SEP difference. Participants with some college or an associate's degree were more similar to high school graduates than to college graduates. A disadvantage of using education as a measure of SEP is that it is not a useful measure for children, a concern that also applies to occupation. In addition, our findings related to occupation are limited due to the smaller sample size, but it may be the case that a different approach to categorizing occupation, such as one based on type of industry, would be more closely related to the outcomes of interest. Food security, though not as commonly used to assess SEP, revealed important information about a vulnerable population - children whose families have very low food security or receive emergency food aid - information that other SEP measures failed to provide. Regarding the measurement of race and ethnicity in studies such as these, our findings show that useful information can be gleaned from considering country of origin, particularly for Mexican Americans. This follows previous examples [40,53]. Inherent in our study are a number of limitations. One concern is possible confounding by geography, which we cannot assess with publicly available NHANES data. Regional and local populations vary in measures of SEP and race/ethnicity; if BPA and PFC exposures also differ with geography, there could be confounding. This geographic variation in chemical exposures could occur through differences in environmental contamination - localized contamination with certain PFCs has been reported in the USA (e.g., Hoffman et al. [55]) - or in consumption patterns of foods and other products that lead to exposure. In particular, the striking findings for Mexican Americans must be taken with caution. Zota et al. [40] received permission to access state-level geographic information for NHANES 2003-2004 PBDE data and showed that, because Californians overall had higher serum concentrations of PBDEs and a large proportion of Mexican Americans sampled by NHANES lived in California, residence in California confounded results for Mexican Americans. Further investigation of geographical differences in body burdens would be greatly aided by the public release of NHANES data indicating region of the USA, something that would appear unlikely to threaten confidentiality. We relied on a single biomonitoring measurement of the chemicals of interest. This is less of an issue for PFCs, which have long half-lives and for which we would not expect concentrations to vary significantly within an individual. For BPA, however, this is more of a concern. Mahalingaiah et al. [56] showed a single spot urine sample to be predictive of exposure over weeks to months, despite within-person variability. However, they assessed a single sample's ability to classify participants into tertiles, which is not how we modeled our data. And they concluded that a second sample offered improvements in classifying individuals. For both compounds, there are complications involved in interpreting results from biomonitoring data.
While biomonitoring measurements provide a useful estimate of internal dose, there is likely inter- and intra-individual variation in measurements as a result of various factors that influence the chemical's pharmacokinetics, i.e. its distribution among compartments of the body, metabolism, and excretion [57]. These factors include genetics, biological characteristics such as gender, body fatness, and liver function, and environmental factors such as diet, all of which may affect a chemical's pharmacokinetics. Though these concerns may be particularly relevant for BPA, which is measured as a urinary metabolite, many questions remain about the pharmacokinetic behavior of both BPA and PFCs in the body. The measures of SEP we studied are all based on self-reported data. Getting participants to report personal income in particular is notoriously difficult [28]. However, the NHANES approach of asking people to report in income categories seemed to work reasonably well, as less than 4% of participants were missing family income data. A limitation in our assessment of SEP was the availability of only two years of data for occupation. Our study has several strengths, including a large sample size, unrivalled in studies of this nature that involve costly biomonitoring measurements. The NHANES sampling methodology of oversampling certain racial/ethnic, income, and age groups was critical in providing an excellent distribution of participants across different categories. Thus, we had ample power to detect associations between different SEP and racial/ethnic groups, and were able to consider modification by age and gender. Another key advantage was the availability of robust data on a variety of SEP measures. This enabled us to compare different SEP-related variables. This paper has primarily explored associations between body burdens and measures of SEP and race/ethnicity. More research is needed on the specific aspects of diet, consumer products, and other activities or circumstances that provide the links between SEP/race/ethnicity and body burdens (Figure 1). This question might be approached using dietary intake data, measures of indoor exposure, and other techniques.

Conclusions

Characterizing social disparities in exposure to potentially harmful chemicals is an important responsibility of environmental health. The juxtaposition of BPA and PFCs together reveals a striking opposite pattern of associations with measures of SEP, particularly income. BPA levels were inversely associated and PFC levels positively associated with family income. Differences by race/ethnicity - most notably, markedly lower concentrations in Mexican Americans for both BPA and PFCs - were independent of SEP. We conclude that income, education, occupation, and food security represent distinct facets of social stratification and are not necessarily interchangeable as measures of SEP in environmental epidemiology studies. In these data, family income with adjustment for family size was the strongest predictor of BPA and PFC levels among the different measures of SEP we examined.
v3-fos-license
2019-03-22T16:09:56.858Z
2007-11-12T00:00:00.000
85299171
{ "extfieldsofstudy": [ "Biology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://downloads.hindawi.com/journals/ijecol/2007/074090.pdf", "pdf_hash": "d5f6fd4009354132a513d9bd20e292a0bef7c54c", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44052", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "6453aaad7c7ba4002b145863b315ebdf9006b7f9", "year": 2007 }
pes2o/s2orc
The Contribution of Endozoochory to the Colonization and Vegetation Composition of Recently Formed Sand Coastal Dunes

The objective of this study was to determine whether endozoochory contributes to the dispersal and colonization of plant species in recently formed coastal dunes. At least 5.7% of species present in the study area are being dispersed by wild rabbits (Oryctolagus cuniculus L.). Most dispersed species are perennial herbs with small seed size. The continuous input of seeds through rabbit feces into newly created areas would ensure the constant arrival of seeds and would facilitate colonization. Therefore, endozoochorous dispersal may play a relevant role for the structure and composition of dune plant communities.

INTRODUCTION

In coastal dune ecosystems, dispersal and colonization take place mainly through wind and water [1], and endozoochory has not been considered an important dispersal mechanism in terms of its contribution to the structure and composition of plant communities [2][3][4][5]. In young dunes, vegetation presence is limited by the availability of propagules, stressful environmental conditions, and sand movement [6][7][8]. In these dunes, the arrival of seeds and species colonization are important to the early stabilization of freshly deposited sediments [9,10]. Endozoochory could significantly contribute to seed rain and play a leading role in primary succession because one of the main constraints on colonization is seed input [11]. The objective of this study was to determine whether endozoochory contributes to the dispersal and colonization of plant species in recently formed coastal dunes.

STUDY AREA AND METHODOLOGY

This study was carried out in the twelve young dunes situated on the distal end of "El Rompido spit". This prograding spit is located in the Rio Piedras estuary, Huelva Province, South-Western Spain. Dunes have formed in the last 47 years, increasing the total spit area by approximately 1.5 ha per year [13]. The climate is Mediterranean, with moist winters and dry summers. Mean annual temperature is 18.2 °C, and mean annual precipitation is 620 mm. From previous studies conducted in the spit, we can assess that rabbits (Oryctolagus cuniculus L.), a medium-sized mammalian herbivore, have a large population and are the main dispersers of dune plant species [14]. The endozoochorous dispersal activity of rabbits was studied through fecal pellet counts. One permanent line transect was laid at each dune (12 in total), being 1 m wide and of variable length depending on dune size. Line transects were cleared of all rabbit feces in December 2003, and pellets were subsequently collected monthly in 1 m² plots over the line transect from January to December 2004. Collected feces were dried in the laboratory and germinated on sterile sand in a greenhouse. They were watered daily and seedling emergence was also monitored daily for eight months. When seedlings could not be identified, they were transplanted into small pots where they were allowed to grow until identification was possible. At each dune ridge, total vegetation cover and individual species cover were estimated by the method of [15] during spring and autumn, aiming to obtain an accurate list of all species present (annuals and perennials).

RESULTS

Fifty-two species were recorded in the vegetation surveys and 10 of them were present in rabbit feces (Table 1).
Most seeds dispersed by rabbits belong to perennial herbaceous species with small seed size (0.2 to 4.4 mm) (Table 1). The largest seeds were those from Solanum alatum and Retama monosperma, the latter being the only shrub species present in rabbit feces (Table 1). The seed rain was observed throughout the year (1.89 ± 0.24, 0.32 ± 0.04, 0.29 ± 0.06, and 0.60 ± 0.13 seeds per square meter in summer, winter, spring, and autumn, respectively) (Table 2), but considering the flowering/fruiting months of the species present in rabbit feces, seeds were consumed and dispersed mainly in spring and summer, that is, when they are available (Table 2). R. monosperma was dispersed throughout the year because its fruits fall and persist on the ground for a long time (Table 2).

DISCUSSION

In the young dunes of the study area, at least 10 plant species were dispersed by rabbits through endozoochory, which represents 5.7% of the total species pool of the El Rompido spit [13] and 19% of the species recorded during vegetation surveys. Most species present in rabbit feces have widespread dispersal mechanisms (wind) and are mainly herbaceous species with small seed size. These results support Janzen's "foliage is the fruit" hypothesis [16], which suggests that endozoochory is an important dispersal mechanism for small-seeded species because they are consumed at the same time as the foliage. The largest seeds found in rabbit feces belong to species such as R. monosperma and S. alatum, which have no apparent means of dispersal, so endozoochory would be an important mechanism for their dispersion and colonization. The seed rain to recently formed dunes was monitored throughout the year, and although seed input was greater during summer, the time of year when most fruits are available, we observed that rabbits operate as a permanent dispersal agent in the study area. In recently formed dunes, which are under continuous erosion and sand accumulation processes, seeds dispersed through feces can germinate, become part of the soil seed bank, or be buried and lost. The continuous input of seeds through endozoochory would ensure that, once the necessary conditions for germination and establishment are reached, dispersed species may prosper and colonize new areas. In young dunes, endozoochory allowed the arrival of species such as M. littorea, S. lividus, S. nicaeensis, S. tenerrimus, S. salina, and R. monosperma to new areas open for colonization. Except for S. salina, which is characteristic of salt marshes, all dispersed species will be able to grow in sand dunes and their establishment will contribute to the stabilization of freshly deposited sand. R. monosperma recruitment will mean an important change in dune dynamics because this species reduces stressful environmental conditions and increases the organic matter of the soil, which will facilitate the entrance of numerous herbaceous plants [17]. Endozoochory implies a permanent seed influx, the dispersion of some species without evident dispersal mechanisms, and the exclusive dispersal of certain species. Furthermore, in some cases, the germination rate is increased, enhanced by the passage through rabbit guts [6]. This suggests that endozoochory is an important mechanism shaping the structure and composition of plant communities in young mobile dunes. Endozoochory contributes not only to species colonization but also to the richness and density of dune plant communities.
Table 2 : Flowering/fruiting months of species dispersed by rabbits and time of dispersal.Shaded cells indicate flowering/fruiting months.Cells with vertical lines indicate the months during which species were dispersed by rabbits.W: winter, S: spring, Sum: summer, A: autumn.
v3-fos-license
2017-11-10T16:12:22.442Z
2016-05-24T00:00:00.000
14901904
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/srep22874.pdf", "pdf_hash": "5c5ff388f1c8f588d1e8150f68e7e1f5672adf70", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44053", "s2fieldsofstudy": [ "Physics" ], "sha1": "fa549e2daeff874b04200e88cc7bcb173771ef13", "year": 2016 }
pes2o/s2orc
Protons at the speed of sound: Predicting specific biological signaling from physics

Local changes in pH are known to significantly alter the state and activity of proteins and enzymes. pH variations induced by pulses propagating along soft interfaces (e.g. membranes) would therefore constitute an important pillar towards a physical mechanism of biological signaling. Here we investigate the pH-induced physical perturbation of a lipid interface and the physicochemical nature of the subsequent acoustic propagation. Pulses are stimulated by local acidification and propagate – in analogy to sound – at velocities controlled by the interface's compressibility. With transient local pH changes of 0.6 directly observed at the interface and velocities up to 1.4 m/s, this represents hitherto the fastest protonic communication observed. Furthermore, simultaneously propagating mechanical and electrical changes in the lipid interface are detected, exposing the thermodynamic nature of these pulses. Finally, these pulses are excitable only beyond a threshold for protonation, determined by the pKa of the lipid head groups. This protonation transition plus the existence of an enzymatic pH optimum offer a physical basis for intra- and intercellular signaling via sound waves at interfaces, where not molecular structure and mechano-enzymatic couplings, but interface thermodynamics and thermodynamic transitions are the origin of the observations. Indeed, here we show that i) acoustic pulses can be excited in lipid monolayers through local acidification of the interface, ii) that the excitation is specific and exhibits a local pH threshold, and iii) that the resulting pulse reversibly changes the local pH of the interface. With propagation velocities of ~1 m/s, these pulses are orders of magnitude faster than the lateral proton translocation at membrane interfaces [18,19] and represent hitherto the fastest "protonic communication" observed. Finally, we discuss the potential of these pulses as a new mechanism for intra- and intercellular biological signaling.

Results & Discussions

The following section is divided into four parts: At first we will demonstrate that local acidification of lipid monolayers leads to acoustically propagating pressure pulses. Secondly, we will show that the excitation involves head group protonation and thus directly relates to the pKa of the lipid head group, which in turn opens the door for specific excitation. The third part will provide evidence for the adiabatic coupling between pressure pulses and pH of the interface, enabling the local control of pH from remote. In the fourth part these findings will be supported by surface potential measurements, additionally revealing the simultaneous propagation of an electrical pulse. In the final, concluding part we will discuss the biological relevance of these results and propose a new model for specific biological signaling.

Acidic excitation of acoustic waves. The addition of hydrochloric acid (gas) onto a DMPS monolayer results in a propagating change of lateral pressure [setup: see Fig. 1]. In Fig. 2(a) a typical time plot of the lateral pressure signal π(t), following an excitation by hydrochloric acid gas, is shown. At first the pulse reaches sensor 1, resulting in a strong lateral pressure decrease of around 2.0 mN/m and a relaxation back to equilibrium. Around 0.25 seconds after the first sensor detects the pulse, it reaches sensor 2 with slightly damped amplitude (~45%), considering the macroscopic distance.
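As a rough plausibility check, the propagation velocity can be estimated directly from the arrival-time difference between the two sensors. The 15 cm sensor separation is taken from the Methods section of this paper and the ~0.25 s delay from the pulse trace just described; this is only a back-of-the-envelope sketch.

```python
# Back-of-the-envelope propagation velocity from the arrival-time difference
# between the two pressure sensors (15 cm apart per the Methods; ~0.25 s delay
# per Fig. 2(a)).
sensor_separation_m = 0.15
arrival_delay_s = 0.25
velocity = sensor_separation_m / arrival_delay_s
print(f"c ≈ {velocity:.1f} m/s")   # ≈ 0.6 m/s
```

This simple estimate (~0.6 m/s) is of the same order as the velocities reported below for the liquid-expanded phase.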
From the time delay of the pulse between sensor 1 and sensor 2 the propagation velocity c can be calculated [Fig. 2(b)]. In the case of a sound pulse, c should depend on the lateral density ρ_0 and the adiabatic compressibility κ_S of the material. In the linear case [25]:

c = 1 / √(κ_S ρ_0)

κ_S is not directly accessible, but it may be approximated by the isothermal compressibility [5,24,26], obtained from the inverse derivative of the DMPS isotherm:

κ_T = −(1/A) (∂A/∂π)_T

In accordance with its mechanical susceptibility, c increases to almost 0.7 m/s in the liquid-expanded phase, followed by a drop to 0.6 m/s in its phase transition regime. At pressures beyond the phase transition region, in the liquid-condensed state, the propagation speed rapidly increases up to 1.4 m/s at 30 mN/m. The correlation between the mechanical properties of the interface and the pulse velocities illustrates the acoustic foundation of these pulses [24,27]. Propagating pulses can also be evoked by other acids, e.g. acetic acid or nitric acid. This indicates the protonic nature of the excitation process, since the only common feature among the acids is their dissociated protons [SI-S2]. Furthermore, the excitation and propagation of pulses is not limited to DMPS but also works with other lipids.

Figure 1. The film balance setup (Langmuir trough) for analyzing propagating monolayer pulses consists of two pressure sensors and a Kelvin probe in order to measure mechanical and electrical changes at the lipid interface. In a typical experiment a fixed amount of nitrogen is blown through a glass bottle partly filled with an acid solution (in this case 32% HCl). The resulting gas mixture is then gently blown onto the lipid monolayer (red spot). Lateral pressure and surface potential are recorded and velocities are calculated. Two moveable Teflon barriers enable us to compress or expand the lipid film and thereby to record lateral pressure and surface potential isotherms. For fluorescent pH measurements the Kelvin probe is exchanged for an optical setup (not shown). The dyes are excited at 465 nm and the emission is measured at 535 nm and 605 nm.

However, not every addition of acid leads to a propagative pulse. There exists a lower and an upper pH threshold for the excitation, as described next.

Subphase pH and specific/threshold excitation. If the subphase of the lipid monolayer is too acidic or too alkaline, no pulses can be excited. In order to explain the bulk-pH dependency of the excitation, the isothermal pH behavior of DMPS is studied [Fig. 3]. The plateau region of the isotherms represents the phase transition of the lipids from the liquid-expanded to the liquid-condensed state. At high pH values (≥ 7) as well as at low pH values (≤ 4) the phase transition pressure π_T changes only slightly with pH. In between these two regions, π_T strongly depends on the pH of the subphase, leading to a sigmoidal π_T–pH profile of the lipid. This behavior is well known for charged lipids [28,29] and is due to the protonation of the lipid head group.

Figure 3. Lateral pressure–area isotherms for varying subphase pH conditions. The plateau regime of the isotherms corresponds to the first-order phase transition of the DMPS monolayer. The phase transition pressure π_T increases monotonically in a sigmoidal shape for increasing bulk pH values (see inset). This behavior is typical for the pKa value of the lipid head group. From the first derivative of the sigmoidal fit, we obtain a pKa of around 5.4, which, in good agreement with literature, corresponds to the pKa of the carboxyl group of the lipid.
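A minimal sketch of how such a sigmoidal π_T(pH) profile can be fitted to read off an apparent pKa is given below. The logistic form and the data points are illustrative assumptions, not the measured isotherm values.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(pH, pi_low, pi_high, pKa, width):
    """Transition pressure modelled as a logistic function of bulk pH."""
    return pi_low + (pi_high - pi_low) / (1.0 + np.exp(-(pH - pKa) / width))

# Illustrative placeholder data (bulk pH vs. transition pressure in mN/m)
pH   = np.array([3.0, 4.0, 5.0, 5.5, 6.0, 7.0, 8.0, 9.0])
pi_T = np.array([9.0, 9.5, 12.0, 15.0, 18.5, 20.5, 21.0, 21.0])

popt, _ = curve_fit(sigmoid, pH, pi_T, p0=[9.0, 21.0, 5.5, 0.5])
print(f"apparent pKa ≈ {popt[2]:.1f}")   # inflection point of the fit
```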
From the first derivative of the sigmoidal fit, we obtain a pKa value of 5.4 for the carboxyl group of DMPS, in good agreement with literature [28,30]. The dependency of the excitation on the pH of the subphase can now be easily explained by the sigmoidal pKa profile. At high pH values the change in surface pH has to be large enough in order to facilitate the protonation of the lipids and thereby a detectable propagative change in lateral pressure. If the monolayer is already fully protonated, as is the case for low pH values, the addition of acid does not lead to propagating pulses anymore. Thus the dynamic response of the interface to a certain excitation, (∂π/∂pH)_S, depends to a great degree on its chemical properties and environment, exhibiting a maximum near the pKa of the lipid monolayer. This threshold behavior of the interface introduces "specificity" into the excitation process and allows one to control signal strength, which, as described below, opens up new possibilities for "specific communication". So far we observed that close to the pKa of the monolayer, local pH changes inevitably lead to lateral pressure changes. Therefore the question arises whether the inverse relationship holds, too: do propagating pressure pulses evoke pH changes (∂pH/∂π)_S at the interface?

Propagating pH-pulses. Lipid-conjugated fluorescence probes provide a fast, noninvasive and effective method for measuring the local pH at a lipid interface [31]. The emission characteristics of these probes are sensitive to pH changes in their environment, especially near their pKa. Importantly, in a lipid monolayer the emission intensity at a certain wavelength is also a function of surface pressure and thus cannot be interpreted in terms of pH changes alone [6]. To quantify changes in the optical signal, one is better off measuring the ratio of intensities at two different wavelengths, eliminating the trouble of having to deal with absolute intensities. Fig. 4(a) shows the intensity ratio I_R = I_535nm/I_605nm as a function of lateral pressure between 5 and 8 mN/m during isothermal expansion at different buffer pH values of 6.5, 7 and 7.5, respectively. Clearly, the ratio reacts sensitively to pH changes of the subphase but not to lateral pressure changes at the lipid interface. For a pH increase of one unit from pH 6.5 to pH 7.5, I_R increases linearly from 2.0 ± 0.1 to 2.6 ± 0.1. Hence I_R can be used as a measure for the local pH and allows for studying possible pH changes at the interface during a propagating pressure pulse. It is important to note that this behavior is of course a fingerprint of the inherent phenomenology of the specific dye used and cannot be generalized. Indeed, without careful calibration in the proper environment a change in intensity cannot be converted into a change in pH. The transfer from bulk to interface, for instance, can change the dye's characteristics entirely. Fig. 4(b) depicts the time course of the pH at the dye during a propagating lateral pressure pulse, within the respective "calibration range" of Fig. 4(a). Obviously, the two signals correlate (inversely) and, based on the quasi-static coupling [Fig. 4(a)], a pH increase of approximately 0.6 units at the interface takes place. Subsequently the monolayer relaxes back to equilibrium, where the pressure as well as the interfacial pH reacquire their former values.
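A minimal sketch of the two-point calibration implied above (I_R ≈ 2.0 at pH 6.5 and ≈ 2.6 at pH 7.5, linear in between), used to convert a transient change in the intensity ratio into a local pH change. A real calibration would of course fit the full data set for the specific dye in its actual environment.

```python
# Two-point linear calibration of the emission ratio I_R = I_535 / I_605
# against bulk pH, using the anchor values quoted in the text.
pH_ref = (6.5, 7.5)
IR_ref = (2.0, 2.6)
slope = (IR_ref[1] - IR_ref[0]) / (pH_ref[1] - pH_ref[0])   # ≈ 0.6 per pH unit

def pH_change_from_ratio(delta_IR: float) -> float:
    """Convert a transient change in the intensity ratio into a local pH change."""
    return delta_IR / slope

# Example: a ratio excursion of about +0.36 during a pulse would correspond
# to roughly 0.6 pH units under this calibration.
print(pH_change_from_ratio(0.36))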
In the same way as proton addition leads to condensation, an expansion leads to the liberation of protons from the interface and hence an increase in local pH [29,32]. Thus, the negative correlation between pressure and pH originates from the fact that the propagating front is actually an expansion caused by the local acidification at the excitation site.

Electrostatic contributions. It is important to note that the propagating pulses are not solely mechanical, but also chemical and electrical in nature. This is not only obvious from the observed pH-pulse [Fig. 4(b)], but also from measuring lateral pressure π(t) and total surface potential V_total(t) of the interface, revealing a simultaneously propagating voltage pulse [see Fig. 5(a)]. Although the propagating pulse is adiabatic in its nature, a quasi-static approximation from isotherms [Fig. 5(b)] reproduces the pulse in shape and magnitude very well [Fig. 5(a)]. This observation was consistently found to be true for all excitations in the liquid-expanded phase of the monolayer (see [SI-S6]). It seems intuitive that the electrical change at the interface should be closely linked to the measured pH-pulse. In order to calculate this change we follow the work of Möbius and subdivide the total surface potential V_total of the lipid monolayer into two contributions [33]: the hydrophilic interface (head group potential ψ) and the hydrophobic interface (tail group potential V_tail). Importantly, the pKa value of the head group does not vary for different tail group potentials [33]. This shows that the head group protonation is determined by the head group potential. In order to estimate ψ, we need to extract the area change from the measured pressure amplitude of the pulse [Fig. 5(a)] and the isotherm [Fig. 5(b)], using a quasi-static approximation. With these numbers and the known dissociation degree of 88% of the carboxyl groups at pH 7 (α = 0.88) [Fig. 3], the head group potential ψ follows from Gouy-Chapman theory (note: the negatively charged phosphate group and the positively charged amino group compensate each other) [34,35]:

ψ = (2 k_B T / e) · arcsinh( σ / √(8 ε ε_0 k_B T n_0) ),  with σ = −α e / A

where n_0 is the number density of monovalent ions in the subphase. For an area change from A_1 = 75 Å² to A_2 = 82 Å², we obtain Δψ = ψ_2 − ψ_1 = −113 mV − (−117 mV) = +4 mV. That is, the propagating expansion front locally increases the head group potential, which will lead to a proton release from the surface [32]. The reliability of this calculation can be tested experimentally from surface potential measurements at different subphase pH values [see Fig. 6]. At pH 1.5 the carboxyl groups are completely protonated, while at pH 9 they are entirely ionized. At a given molecular area the contribution of the tail groups to the total surface potential should be constant. Hence, when taking the difference between two states of protonation at a given area, the tail group potential drops out:

ΔV_total = V_total(pH 9.0) − V_total(pH 1.5) = [ψ(pH 9.0) + V_tail] − [ψ(pH 1.5) + V_tail] = ψ(pH 9.0) − ψ(pH 1.5) = Δψ

The observed deprotonation leads to a change of Δψ = −180 mV at an area of 75 Å² [see Fig. 6]. Taking into account the dissociation degree of 0.88 at pH 7 [Fig. 3], this corresponds to Δψ = −158 mV, which is in good agreement with the surface potential from Gouy-Chapman theory. In order to get a quantitative estimate of the interfacial pH change, the Boltzmann distribution can be used (I: interface, B: bulk) [29]:

pH_I = pH_B + e ψ / (2.303 k_B T)
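A minimal numerical sketch of the two estimates just described: the head group potential from Gouy-Chapman theory and the resulting interfacial pH shift from the Boltzmann relation. The ionic strength of the subphase (taken here as ~100 mM 1:1 electrolyte) is an assumption chosen so that the output lands close to the potentials quoted in the text; it is not stated in this passage.

```python
import numpy as np

e   = 1.602e-19           # elementary charge [C]
kT  = 1.381e-23 * 298     # thermal energy at 298 K [J]
eps = 80 * 8.854e-12      # permittivity of water [F/m]
n0  = 0.1 * 1e3 * 6.022e23   # assumed ion number density for ~100 mM [1/m^3]

def head_group_potential(area_A2: float, alpha: float = 0.88) -> float:
    """Gouy-Chapman potential [V] for surface charge sigma = -alpha*e/area."""
    sigma = -alpha * e / (area_A2 * 1e-20)
    return (2 * kT / e) * np.arcsinh(sigma / np.sqrt(8 * eps * kT * n0))

psi_1 = head_group_potential(75.0)   # roughly -0.12 V
psi_2 = head_group_potential(82.0)   # roughly -0.11 V

# Boltzmann relation: shift of the interfacial pH caused by the change in psi
delta_pH_interface = e * (psi_2 - psi_1) / (2.303 * kT)
print(round(psi_1 * 1e3), round(psi_2 * 1e3), round(delta_pH_interface, 2))
```

With these assumed inputs the sketch returns roughly −119 mV and −114 mV for the two areas and a pH shift of about +0.08, i.e. of the same order as the quasi-static estimate quoted above.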
Figure 5. (a) Time course of lateral pressure and surface potential during a propagating pulse: using the quasi-static approximation (from Fig. 5(b)), a surface potential change of ~14 mV can be calculated (surface potential calc), which is ca. 30% less than the measured value (surface potential exp). (b) Isothermal measurement of lateral pressure (black) and surface potential (blue) as a function of molecular area. The phase transition of the lipids from the liquid-expanded to the liquid-condensed phase is clearly visible in both signals (horizontal regimes), conclusively demonstrating thermomechanic-electrical coupling. The concatenation of the two curves leads to a relationship V_total(π) (see [SI-S5]), from which the change in surface potential due to a given pressure variation can be calculated.

For a surface potential change from ψ_1(A_1) to ψ_2(A_2), the Boltzmann relation above gives ΔpH_I = +0.1. This qualitatively yields the right trend (the surface pH increases), although it somewhat underestimates the measured change of 0.6. The discrepancy between the two values is not totally clear; however, differences ought to be expected, for instance, from the differences between the adiabatic and isothermal behavior of the system. What is more, our approximation neglects the decrease of the pKa value of the carboxyl group due to the expansion of the interface [29,32]. The shift in pKa will further decrease the proton affinity of the lipid molecules and thus enhance the observed effect [36]. Finally, although we carefully calibrated the fluorescent probe, the propagating electric field may also interfere with the emission. We would like to add that for stronger excitations we could record pulses with amplitudes > 100 mV. This is on the order of action potentials and should be considered in the current controversial discussion on the underlying mechanism of action potentials [26,37,38]. In summary, we imagine the electrical pulse to consist of two contributions: one from the air/tail interface and one from the head group/water interface, of which only the latter affects the local pH. The change in the hydrophobic part is therefore V_tail = V_total − ψ. The electrostatic measurements as well as the surface pH measurements lead us to a consistent interpretation: a pH increase at the interface during the adiabatic pulse. We do not attempt to provide an explanation for the observed discrepancy between isothermal and adiabatic response, as this is rather the rule than the exception and is already known from the ideal gas. Since mechanical, electrical and chemical properties within this system are coupled, it comes as no surprise that the electrical and chemical responses vary as well from isothermal to adiabatic expansion. Nevertheless, we imagine that important insights into the internal timescales of the system may arise from measurements where local processes can be monitored with high time resolution. FCS seems to be the proper tool to open this door.

Conclusion

We have shown that lipid monolayers enable propagating mechano-chemical-electrical pulses with velocities controlled by the compressibility of the monolayer. The acoustic waves can be evoked only above a certain threshold. This threshold originates from a transition, namely the head group protonation, and is hence determined by the pKa of the lipid interface. It is important to point out that the propagation of local pH perturbations as described follows from fundamental physical principles applied to the phenomenology of interfaces. Providing significant localized proton release, it has therefore to be expected to exist in biology as well, even if velocities and/or amplitudes may vary significantly. Previously we have therefore proposed acoustic pulses as a new physical foundation for biological communication [24,26].
The results presented here constitute a crucial step in confirming this speculation as they i) begin to bridge the gap between physics (adiabatic pulses due to protonic transitions) and biochemistry (regulation of enzymes) and ii) introduce a thermodynamic concept of specificity [Fig. 7]. In the following we are taking the liberty to briefly outline our ideas. We imagine a (membrane-bound) enzyme, which - as we will explain - will first serve as stimulus and in the next step as receptor for specific pulses:

Specific excitation. In its catalytically active state, many enzymes (e.g. esterases, lipases) will locally liberate protons [39,40]. If the proton concentration and hence the pH reaches a certain threshold and if the protonation of the lipids proceeds fast enough, a propagating sound pulse will be triggered [Fig. 7(a)]. Its amplitude depends on the pKa of the interface and on the strength of the excitation. Specificity comes in through transitions: the protonation transition (pKa) and potentially the order-disorder transition of the lipid tails.

Specific interaction. Due to their mechanical, electrical and in particular chemical properties, the propagating pulses will affect proteins at the interface (e.g. enzymatic activity) in the same way they change the emission properties of the dye here. In Fig. 7(b) two possible interactions of a pH-pulse with an enzyme are shown: if the local pH (pH_loc) is far from the enzyme's pH optimum (pH_opt), the pH-pulse will have only minor impact on the enzyme's activity. If, however, the surrounding pH is close to the pH_opt of the enzyme, the enzyme activity could change enormously: increasing if the local pH is shifted towards pH_opt, or decreasing when the local pH is shifted away, i.e. (pH_loc − pH_opt) decreasing or increasing, respectively. Taken together, only if i) the stimulus of enzyme A leads to a propagating pulse across the interface and ii) the pulse shifts the local pH at enzyme B towards or away from its pH_opt will effective and specific communication between enzymes A and B take place. It has to be stressed that, in contrast to earlier models, here specificity arises from two (nonlinear) transitions and thus from physical principles rather than structural considerations. This also implies that the mechanism of enzymatic regulation is not simply mechano-enzymatic coupling, but is thermodynamic in nature and "exploits" the existence of transitions (order-disorder as well as protonation transitions). Clearly, specificity can be further enhanced if nonlinear relations between activity and other physical parameters, e.g. compressibility, heat capacity, electrical capacity etc., exist.

Figure 6. Influence of subphase pH on the surface potential of a DMPS monolayer: a pH change from 1.5 to 9 in the subphase of a DMPS monolayer not only shifts the phase transition pressure to a much higher value [cf. Fig. 3], but at the same time significantly decreases the total surface potential. At a given molecular area the potential of the lipid tails is approximately constant. Therefore the change in the total surface potential from pH 1.5 to pH 9 can be related to the change in head group potential Δψ during the protonation process of the carboxyl groups. For an area of 75 Å²: Δψ = −180 mV. The start and the end of the liquid-expanded phase are marked by the dashed lines, respectively.
Such nonlinear relations between enzyme activity and other physical parameters have indeed been observed extensively; the maximal activity of phospholipase A2 and phospholipase C at the lipid phase transition are excellent examples [40][41][42][43]. Along the same lines it may turn out that only nonlinear (e.g. solitary) waves are sufficiently strong in amplitude to induce changes at a distance remote from the excitation. We have shown that interfacial solitary waves create 10-100 fold stronger local changes in pressure and voltage when compared to linear waves 26. The existence of such solitary waves, however, requires a range of specific conditions, for instance the vicinity of a phase transition as well as a threshold strength of excitation. It seems obvious that the simultaneous appearance of solitary waves and local protonation transitions represents a very unique and rare combination of conditions, which would make the proposed enzyme-enzyme communication a very specific process. Importantly, this follows from thermodynamics and does not require structural information. It will be thrilling to see whether the type of communication suggested here can be verified as a fundamental principle to orchestrate the individual elements of a cell, cell clusters or even entire organs. Experiments along the lines of those in [43][44][45][46][47] will have to show this.

Figure 7. (a) Specific excitation: only if the resulting local pH is close to the pKa of the lipids is a pulse excited. This scenario is illustrated in the lower part of the figure, where the lipids (red) are susceptible to the enzyme-induced pH change. As a result, pulses propagate along the interfaces (indicated by blue arrows). In the upper part, the pKa of the surrounding lipids is too acidic and the enzyme-induced change in local pH is insufficient to evoke a protonation transition (i.e., crossing the local pK) and hence a propagating perturbation. (b) Specific protein interaction: enzymes exhibit a maximum activity at a certain pH (pHopt). Only if the propagating pH-pulse carries the enzyme environment into or out of the pHopt regime is significant pulse-enzyme interaction observed, and the enzyme can be either "switched on" or "off". In this cartoon the activity of the yellow enzyme is switched on, while the activity of the blue enzyme is hardly affected by the pH-pulse (the blue and yellow enzymes have different pHopt). The interplay between specific excitation, depending on the pKa of the interface, and specific interaction, depending on the pHopt of the enzyme, results in specific signaling between two enzymes. Of course, coupling of pulses to proteins can also take place electrically via charged groups or mechanically, and is expected to be particularly increased near the lipid phase transition (idea from MFS).

The Langmuir trough is equipped with two Wilhelmy plate pressure sensors, situated 15 cm apart from each other, and a Kelvin probe sensor facing pressure sensor 1 [Fig. 1]. The rapid readout of the sensors (10,000 samples/s, 0.01 mN/m and 5 mV resolution) ensures accurate velocity and surface potential measurements. For the detection of fluorescent signals the Kelvin probe is substituted by an optical setup (not shown). Pulses are excited by blowing a fixed amount of pure nitrogen gas (5 ml for sole lateral pressure measurements and 25 ml for surface potential and pH measurements) through the gas phase of a glass bottle filled with 32% hydrochloric acid solution (for reference measurements: 100% acetic acid). pH measurements show that the excitation by 25 ml of nitrogen gas drags along (2.0 ± 0.2) × 10⁻⁶ mol of hydrochloric acid.
The acid/nitrogen gas mixture is then gently blown onto the lipid monolayer in order to prevent capillary waves. The excitation takes place 10 cm away from pressure sensor 1. The gaseous excitation allows larger areas of the monolayer to be protonated while using smaller amounts of acid than would be possible with pipetting. To exclude artifacts, we performed reference measurements on pure water surfaces. Neither nitrogen nor hydrochloric or acetic acid induced any detectable pressure change at the surface. Furthermore, pure nitrogen gas was blown onto a DMPS monolayer to exclude any excitatory effect of N2 [SI-S1]. pH changes at the interface are detected using the lipid-conjugated pH-sensitive dye Oregon Green 488 1,2-Dihexadecanoyl-sn-Glycero-3-Phosphoethanolamine, spread along with DMPS (1 mol % dye). The emission of the dyes embedded in the monolayer was measured at 535 nm and 605 nm simultaneously with the lateral pressure. Propagating changes were measured at a distance of 10 cm from the excitation spot. In order to rule out diffusion effects, a Teflon ring with a small opening facing away from the excitation spot was used to encircle the lipid monolayer around the spot for optical measurements.
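As a point of reference for the interfacial pH shift estimated in the Results (ΔpH_I = +0.1 for the change in head-group potential), the standard Boltzmann (diffuse double-layer) relation between surface potential and interfacial proton concentration can be used; whether this coincides exactly with equation 5 of the text is an assumption:

$$ \mathrm{pH}_I = \mathrm{pH}_{bulk} + \frac{F\,\psi}{2.303\,RT}, \qquad \Delta \mathrm{pH}_I \approx \frac{F\,\Delta\psi}{2.303\,RT} \approx \frac{\Delta\psi}{59\ \mathrm{mV}} \quad (\text{at } 25\ ^{\circ}\mathrm{C}). $$

Under this relation, the quoted ΔpH_I = +0.1 would correspond to a head-group potential change of roughly +6 mV, small compared with the −180 mV difference between the fully protonated and fully deprotonated monolayer shown in Fig. 6.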
Temporal and Spatial Analysis of Water Resources under the Influence of Coal Mining: A Case Study of Yangquan Basin, China

The activities of coal mining often lead to the formation of underlying runoff areas and have great impacts on both the basin hydrological cycle and water resource management. In order to investigate the changes in the hydrological processes of a coal-goaf catchment, this paper analyzes the influence of coal mining on the hydrological processes in a small watershed in the Yangquan Basin of China. To disentangle the mining process, a distributed hydrological model, which highlights the integration of sub-hydrological processes, was developed and implemented in the study area. The calibration and validation results indicated that the developed model simulated streamflow well, as indicated by the Nash-Sutcliffe model efficiency (NS) and the Coefficient of Correlation (r²) for monthly runoff. The model was first calibrated in the period 1990-2004 and then validated in the period 2005-2018. Different scenarios were simulated and cross-compared in order to study the mining effects; the rainfall and runoff of each hydrological station are positively correlated in 2009-2018, and the scenario of change in mining area is negatively correlated with runoff in 2009-2018. The contribution of the changing input variables (rainfall and coal mining area) to the runoff of the Yangquan Basin was analyzed qualitatively and quantitatively; the impact contribution rates of mining activities are 85.96% and 39.34% during the mining and recovery periods at the Yangquan station, respectively. The hydrological simulations provided a better understanding of runoff changes in the Yangquan Basin. The analysis results indicate that the hydrologic response to the mining process in the Yangquan Basin is changing, and the basin thus merits attention from other mining regions around the world. The methods used in this study can be applied in other regions to orientate the policy-making process.

Introduction

Long-term unsustainable and unscientific coal mining is one of the main driving factors leading to the loss of water resources in a catchment. By changing the land use distribution and the geological structures in the catchment, the coal mining goaf creates an extra water passage that connects the land surface and the subsoil layer [1,2]. As a result, more surface water will infiltrate into soils, and the total runoff at the catchment outlet will be strongly decreased. Therefore, in a water-poor area such as Shanxi province in China, local stakeholders are eager to have an integrated method for analyzing coal mining impacts on hydrological processes [3,4]. To investigate coal mining impacts on hydrological processes, Knighton et al. mainly focused on streamflow simulation under variable geological conditions and coal mining policy [5][6][7][8]. Wei et al. [4] used a combined analysis of an empirical formula and a fitting formula to find the influences of coal mining on shallow water resources and produced a water resource leakage differentiation graph for the Shennan mining area based on a geological map and a soil layer group thickness chart. Similarly, Jiang et al. [9] established the Yellow River Water Balance model (YRWBM) and proved that coal mining is an important reason for the runoff change in the study river. Wu et al. [10] applied the SWAT model to quantitatively identify the changes in river runoff caused by coal mining impacts in the catchment. Tang et al.
[11] summarized the research on the influence of coal mining on small-scale floods in recent years and found that the flood simulation accuracy of a study area covered by goaf can be improved by tuning the model parameters. The existing research has proven that coal mining activities have important impacts on catchment water resources. However, a long-term hydrological modeling analysis of a catchment under coal mining impacts is still lacking [12,13]. Among different modeling approaches, the distributed hydrological model has been commonly considered an effective tool for investigating and understanding the hydrological processes in a coal mining catchment. Compared with the traditional modeling method, the distributed modeling method in this study can improve the efficiency and accuracy of modeling; therefore, it is important to transform from traditional modeling to modular modeling [14]. Previous modeling applications have laid the foundation for this study. For instance, (i) the Xin'anjiang model [15,16] can realize the continuous simulation of rainfall and runoff and is mainly used in semi-humid and humid regions; (ii) the distributed Geomorphology-Based Hydrological Model (GBHM) includes two main simulated hydrological processes: hydrological simulation on each hillslope and river routing in the river network [17,18]; (iii) the precipitation-runoff modeling system (PRMS), developed by the United States Geological Survey, can evaluate the impacts of various hydrological units [12]; and (iv) the spatiotemporal variable source mixed runoff (SVSMR) model proposed by Liu et al. [19] considers the unsaturated soil infiltration process on different geomorphological hydrological response units, analyzes the main parameter characteristics of different soil types, and also revealed the characteristics of the underlying surface and rainfall-runoff generation in a small watershed. Shanxi province is located in the middle part of China, and around 60% of the province area has been characterized as a water shortage area that lacks irrigation and drinking water resources [20]. It is a typical coal mining province: among its 118 county and city administrative units, 94 counties and cities are identified as rich coal resource areas. Under current conditions, the loss and pollution of available water resources in Shanxi province are becoming more serious [21][22][23]. This paper aims to apply an improved modeling approach to simulate the variation of surface runoff in relation to changes in meteorological and geological conditions. Choosing the Yangquan catchment (485 km²) located in the eastern part of Shanxi as the study area [24], this paper presents (1) a modification of the existing SVSMR model, (2) test scenario results to show how the new model performs with variation in rainfall and mining in the affected area, and (3) a discussion of rainfall/runoff results from the new model for the Yangquan catchment.

Study Area

The Yangquan Basin (485 km²) is located in the eastern part of Shanxi province. There are two runoff gage stations in this basin: Yangquan and Jiujie. As a result of the coal mining activities in Shanxi province, the coal goaf-impacted area in this basin is around 108.2 km², which accounts for 21.5% of the total area [17]. Using ENVI (Environment for Visualizing Images), land use patterns were interpreted from 2.5 m resolution image classification [25].
The main type of land use is pasture land, which accounts for 68.6%, and the smallest is water area, which accounts for 1.2% of the total study area (Figure 1). The catchment climate follows the inland plateau and warm temperate monsoon climate. Air temperature decreases in the fall, and so does rainfall. Cold air activity in winter is frequent, with a cold and dry climate; the weather is sunny and precipitation is scarce. The average annual temperature from 1958 to 2012 in Shanxi Province was 8.6 °C, the highest value was 10.2 °C, and the lowest value was 7.4 °C; the overall trend shows a significant increase with fluctuation [26]. The catchment topography was derived from the Geospatial Data Cloud website at a 25 m spatial resolution [27]; the river network extends over approximately 208.85 km. The soil distribution was obtained through field soil exploration and a survey combined with the second national soil census in China [28]: loam accounts for 66.7%, clay accounts for 32.5%, and the remaining 0.8% is sandy soil. Experiments on soil and water characteristics in the Yangquan Basin were also performed, and we obtained the soil water characteristic curve shown in Figure 2. Supported by the Shanxi Hydrological Bureau, we collected the two hydrological stations' daily rainfall and runoff for the period 1977-2018. A simple trend analysis was conducted for both variables, as shown in Figure 3.
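The paper does not specify which trend method was used; as one plausible minimal sketch, a least-squares linear trend on the annual series could be computed as follows (the runoff values below are hypothetical).

```python
import numpy as np

def linear_trend(years, values):
    """Least-squares linear trend: returns slope (units per year) and intercept."""
    slope, intercept = np.polyfit(years, values, deg=1)
    return slope, intercept

# Hypothetical annual runoff depths (mm) for a short illustrative window
years = np.arange(1977, 1987)
runoff = np.array([62.0, 58.0, 65.0, 55.0, 60.0, 52.0, 57.0, 50.0, 48.0, 51.0])
slope, _ = linear_trend(years, runoff)
print(f"trend: {slope:.2f} mm per year")   # a negative slope indicates declining runoff
```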
The changing trends of daily rainfall and runoff exhibited some similarities during the periods 1977-1990 and 2005-2018. However, the reasons for the abrupt change observed between 1991 and 2005 remain unclear. Apart from the influence of rainfall, surface runoff may also be affected by other factors, such as coal mining [29,30]. To investigate whether coal mining has an impact on surface runoff, additional data were sought in this study. The available data indicate that coal mining activity in the Yangquan Basin has been booming since 1990, resulting in a series of changes in the underlying surface. After 2005, due to state control of coal mining activities, the development of the goaf became stable [31]. Combining the daily rainfall and runoff trends with the existing data, the inflection point of the hydrological series, the mining time, and the gradual change in the underlying surface, the total research period was divided into three sequential subperiods. The period 1977-1990 is regarded as a natural state, called the natural period; 1991-2018 is called the period of change, in which 1991-2004 is the first stage of the change period and 2005-2018 is the second stage. The characteristics of the different periods in the Yangquan Basin are shown in Table 1.

Modeling Approach

Coal mining activities often generate coal goaf in the catchment, which may increase the surface and subsurface water exchange (Figure 4). When the rainfall-runoff process occurs in the catchment, more rainfall infiltrates into the soil; thus, less surface runoff flows to the catchment outlet. Moreover, during a flood event, as more stream water may flow into the coal goaf through cracks below the riverbed, the flood peak can be markedly reduced. In order to represent the impacts of the coal goaf on the rainfall-runoff process of the catchment, an improved modeling approach based on the SVSMR model proposed by Liu et al. has been designed [19].
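Before detailing the model, the qualitative effect just described can be illustrated with a toy water balance for a single event. This is purely schematic and is not part of the SVSMR formulation; all numbers are hypothetical.

```python
def outlet_runoff(rainfall, base_infiltration, goaf_extra_infiltration, riverbed_leak_frac):
    """Toy event water balance: the goaf adds extra infiltration on the hillslopes
    and leaks a fraction of the remaining streamflow through cracks below the riverbed."""
    effective_rain = max(0.0, rainfall - base_infiltration - goaf_extra_infiltration)
    return effective_rain * (1.0 - riverbed_leak_frac)

# Hypothetical 40 mm rainfall event (all values in mm)
print(outlet_runoff(40.0, base_infiltration=20.0,
                    goaf_extra_infiltration=0.0, riverbed_leak_frac=0.0))   # no goaf: 20.0
print(outlet_runoff(40.0, base_infiltration=20.0,
                    goaf_extra_infiltration=8.0, riverbed_leak_frac=0.3))   # with goaf: 8.4
```

The toy example reproduces the qualitative behavior described above: with a goaf present, both the runoff volume and the flood peak at the outlet are reduced.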
SVSMR Model

The SVSMR model is built on a modular modeling structure and uses different runoff-generating mechanisms for different hydro-geomorphological response units, depending on their geological/hydrological characteristics. By using Geographic Information System (GIS) and Remote Sensing (RS) technologies, the model first identifies landform features (e.g., terrain, land use, vegetation cover, and soil type) at the hillslope scale and then defines the runoff generation mechanism according to the characteristics of each hydro-geomorphological response unit [21,32]. The SVSMR model first uses the GARTO model to calculate the infiltration process when the rainfall intensity exceeds the soil infiltration capacity. Using the one-dimensional numerical method proposed by Lai and Talbot [33], the infiltration and redistribution processes in the vadose zone are simulated. The GARTO model consists of two main parts: the Green-Ampt infiltration with Redistribution (GAR) model and the Talbot-Ogden (T-O) infiltration and redistribution method in a discretized moisture content domain [34]. It represents the exchange flow between surface water and groundwater better than other methods such as the classical Richards equation and the Green-Ampt (GA) method [35]. The flow in the underground zone is then theoretically divided into three parts: topsoil flow, subsoil flow, and aquifer flow (Figure 5). Integrated with the surface flow directly generated by the rainfall-runoff process, the river convergence process is calculated with the kinematic wave method. The main parameters of the SVSMR model are classified into four categories: evaporation parameters, interception parameters, infiltration parameters, and soil water dynamic balance parameters [36,37].

Model Improvement

The existing SVSMR model has been operationally used for forecasting flash flooding in 20 provinces in China (such as Henan and Hainan provinces) [38]. However, when this model is applied to a watershed with a special underlying surface such as a goaf, the model results are clearly overestimated, and relevant improvements are needed in the runoff generation and concentration mechanism of the goaf area. The improved parts of the SVSMR model are highlighted in Figure 6. The improved model is based on the assumption that the goaf area within the simulated catchment remains unchanged during the whole modeling period. Then, depending on the location of the goaf area, the flow contributions to the river runoff are calculated separately:

1. When the goaf area is located under the land surface, Equations (1) and (2) apply, where Km is the equivalent permeability coefficient (mm/s) of the coal-goaf-impacted area, β is an empirical coefficient (-) that needs to be calibrated carefully, Ks is the saturated hydraulic conductivity of the soil (mm/s), Qm is the seepage discharge of the goaf (mm), and Wm is the water content of the hollow fissure reservoir (mm). The total surface runoff can then be calculated as

$$ Q_t = Q_h + Q_d + Q_p + Q_l + Q_g, $$

where Qt is the total surface runoff; Qh and Qd are the surface runoff under unsaturated and saturated soil conditions, respectively; Qp is the preferential flow; Ql is the lateral flow; and Qg is the groundwater flow.

2. When the goaf area is located under the riverbed, the loss of river flow caused by leakage to the coal goaf below the riverbed is assumed to be related to the river discharge:

$$ S_i = k \, Q_i \, t, $$

where Si is the amount of seepage (m³) in the river during time step i, k is the equivalent leakage coefficient (-), Qi is the river discharge (m³/s) above the coal goaf area at time step i, and t is the calculation time interval (s).
Modeling Setup

With heterogeneous surface topography, land use, and soil distribution, the Yangquan Basin was divided into 53 hydrological response units. The model is built for each unit according to its runoff generation and confluence process, and a confluence algorithm then links the sub-basin models to form an integrated hydrological model at the basin scale. In this way, a spatiotemporal variable source hydrological model suited to the preferential flow mechanism of the coal-goaf underlying surface in the Yangquan Basin was constructed; it allows analysis of the runoff generation and confluence process of the coal goaf. The Nash-Sutcliffe Efficiency (NS) was used to compare and assess the observed and simulated datasets [42]:

$$ NS = 1 - \frac{\sum_{i=1}^{m} (Q_{obs,i} - Q_{sim,i})^2}{\sum_{i=1}^{m} (Q_{obs,i} - \overline{Q}_{o})^2}, $$

where Qobs, Qsim, and Q̄o are the observed data, simulated data, and average observed data, respectively, and m is the total number of data records. NS indicates a more accurate simulation as it approaches 1; when NS is negative, the model is a worse predictor than the measured mean [43]. The Coefficient of Correlation was computed as

$$ r^2 = \frac{\left[\sum_{i=1}^{m} (Q_{obs,i} - \overline{Q}_{obs})(Q_{sim,i} - \overline{Q}_{sim})\right]^2}{\sum_{i=1}^{m} (Q_{obs,i} - \overline{Q}_{obs})^2 \, \sum_{i=1}^{m} (Q_{sim,i} - \overline{Q}_{sim})^2}, $$

where Qobs and Qsim are the measured and simulated data; similar to NS, as r² approaches 1, the model more accurately simulates the measured data [43].

Scenario Analysis

The study of the runoff response to changes in meteorological conditions and coal mining activities mainly models the variation of the catchment total runoff. The scenarios are set up according to the fluctuation in rainfall in the area, which is about 5-10%. This paper uses these scenario analyses to quantitatively analyze the response of the runoff to the mining area and rainfall in the Yangquan Basin. The change rate of runoff (annual or monthly) is

$$ \eta = \frac{y_i - y}{y} \times 100\%, $$

where η represents the average change rate of runoff (annual or monthly), y_i represents the average runoff under scenario i, and y represents the average observed runoff during the current year.

Model Calibration and Validation

The NS and r² values are 0.78 and 0.81 in Yangquan and 0.76 and 0.78 in Jiujie, respectively, demonstrating good agreement between monthly simulated and observed values in the calibration period. For the validation dataset, the NS and r² values are 0.80 and 0.81 in Yangquan and 0.80 and 0.83 in Jiujie, respectively, showing that the validation results were comparable to the calibration results. Thus, the improved model can be used to simulate runoff in the basin, and the best-fit parameters are shown in Table 2. The simulation results were extracted at the monthly time step and are shown in Figure 7.

Annual Runoff Response Results

The runoff variation under the meteorological and geological impacts is shown in Figure 8. The rainfall and runoff of each hydrological station are basically positively correlated in 2009-2018.
The runoff of different hydrological stations has different levels of response to the increase or decrease in rainfall; in the case of reduced rainfall, the runoff response was relatively large at the Yangquan station in 2017, while the runoff response of the Jiujie station was obvious in 2013. Under the scenario of increased rainfall, the streamflow response of the Jiujie station was higher than that of the Yangquan station, and both stations showed greater runoff in 2014. 2014 was a year with a low level of runoff, and the runoff of the Jiujie station was smaller than that of the Yangquan station. This shows that the effect of rainfall is more obvious when the runoff is small. Rainfall was increased and decreased by the same proportion, but the respective changes in the runoff response were disproportionate, indicating that, in addition to rainfall, there are other factors that affect the runoff. The results in Figure 9 show that the scenarios for change in mining area are negatively correlated with runoff; in particular, when the mining area is reduced by 10%, the runoff shows an increasing trend. The degrees of response of runoff to the change in mining area differ between the Yangquan and Jiujie stations. The response of runoff to a reduction in the goaf area is greater than the response to an increase in the goaf area. In the case of a 5% and 10% reduction in the area of the goaf, the increase in the runoff of the Yangquan and Jiujie stations was most obvious in 2014. In the case of a 5% and 10% increase in the area of the goaf, the runoff reduction of the Yangquan station was more pronounced in 2017, while the Jiujie station had a significant reduction in runoff in 2013. In most of the annual runoff responses under the mining area change scenarios, the Jiujie station showed more runoff than the Yangquan station. The area of the coal goaf at the Jiujie station is not extensive, implying that the Jiujie station is sensitive to changes in the goaf, and the impact of mining area change on runoff is more obvious there.
Seasonal Runoff Response Results

In order to better analyze the runoff response to rainfall and the mining area of the Yangquan Basin, the seasonal average runoff change rate of the Yangquan and Jiujie stations was calculated based on the monthly average runoff under the different scenarios. Spring is from March to May, summer from June to August, autumn from September to November, and winter from December to February. The effects of rainfall and mining area changes on seasonal runoff at the Yangquan and Jiujie stations are analyzed based on the scenario simulation results (shown in Figure 10 and Table 2). The results show that, for the impact of rainfall and mining area changes on runoff, the response trends at the Yangquan and Jiujie stations are consistent. In most years, the runoff increases with increased rainfall and a reduced mining area. The effects of rainfall and mining area changes on runoff are mainly concentrated in summer and autumn, and the runoff response in summer and autumn is greater than that in spring and winter. Between 2009 and 2018, with a 10% reduction in rainfall and a 10% increase in mining area, the degree of runoff reduction in spring and winter increased year by year. In spring and winter, when rainfall and mining area changed by the same percentage, the impact of the mining area changes on runoff was greater than the impact of the rainfall changes. In order to better study the influence of rainfall and mining area on runoff, this paper uses the Yangquan hydrological station as an example to calculate the runoff variation rate for each season under the different scenarios (shown in Table 3). Taking the Yangquan station as an example for the rainfall change scenarios: in spring, when the rainfall increases or decreases by the same proportion, the runoff change responds accordingly, for example in 2009, 2010, 2014, 2015, and 2017; when the rainfall increases by 5% and decreases by 5%, the ranges of the runoff changes are consistent. In summer, in most years, a reduction in rainfall has a greater impact on runoff than an increase in rainfall. In autumn, as the proportion of rainfall increases, the runoff response also shows significant changes. In winter, with changes in the proportion of rainfall increase and decrease, the trend of the runoff response is not obvious, which shows that the impact of rainfall changes on runoff is reduced. For the mining area change scenarios, in spring and winter, the impact of the change in the mining area on the runoff gradually increases, and the impact of reducing the mining area is greater than that of increasing it.
For example, in spring 2018, when the mining area was reduced by 5%, the change in runoff was about three times that for a 5% increase in the mining area, and the change in runoff after the mining area was reduced by 10% was about four times that after the mining area was increased by 10%. In summer and autumn, the change in runoff fluctuates. The influence of a reduction in the mining area on runoff is much greater than that of an increase. When the mining area is reduced by 5% and 10%, the change rates of the runoff differ considerably; when the mining area increases, the difference in the change rate of the runoff is small. This shows that, under certain conditions, once the impact of the mining area on runoff has been maximized, the runoff will not change appreciably as the mining area increases further.

Discussion

Through the above analysis, this paper finds that the change in runoff in the Yangquan Basin is most affected by human activities, mainly coal mining. In order to quantitatively analyze the impact of rainfall and the mining area on runoff in the Yangquan Basin, this paper divides the study time into a "natural period", an "exploitation period", and a "recovery period" [44]. The average runoff in the natural period can be obtained from the observation data and serves as the benchmark runoff. Second, based on the original hydrological model before the improvement, the runoff during the mining and recovery periods can be calculated without considering the impact of the mining area; the difference from the benchmark runoff is the impact of the rainfall change on runoff. Finally, the difference between the observed runoff and the calculated runoff that accounts for the mining area is the impact of human activities on runoff [45,46]. The results of the calculation of the runoff changes due to rainfall and human activities in the Yangquan Basin are shown in Table 4. It can be concluded that mining activity is the main driving factor for the significant decline of runoff in the Yangquan Basin. The impact of coal mining on runoff in the Yangquan Basin is greater than the impact of rainfall during the mining period (1991-2004) [47,48]. The impacts on the runoff during the mining and recovery periods are both negative, and their contribution rates are 85.96% and 39.34%, respectively. The impact of rainfall on runoff during the recovery period is greater than that in the mining period, with contribution rates of 60.66% and 14.04%, respectively. The contribution rate of rainfall to the runoff reduction has been increasing over the years, while the contribution rate of mining activities has decreased. This indicates that the impact of human activities is gradually decreasing during the recovery period; human activities had the most serious impact on the underlying surface during the mining period, because the large-scale mining of coal caused serious changes in the underlying surface. Since 2005, the introduction of coal mining remediation measures by the Chinese government has played a significant role in reducing coal mining activities, which protects water resources in the Yangquan Basin.

Conclusions

This study illustrates a systematic procedure for the calibration and validation of a hydrological model for the Yangquan Basin.
Based on the formation mechanism of the goaf, a hydrological model considering the influence of the special underlying surface of the goaf was established.

(1) The annual runoff of the two hydrological stations in the Yangquan Basin was verified by the model, and the evaluation indicators meet the requirements, indicating that the improved model has good applicability in the Yangquan Basin and can be used to study the hydrological response under the influence of the mining area.

(2) Taking 2008-2018 as the base period, the effects of rainfall and mining area change on runoff in the Yangquan Basin were studied under different scenarios. The rainfall and runoff of each hydrological station were basically positively correlated, while the change in mining area was negatively correlated with runoff. The effects of rainfall and mining area changes on runoff are mainly concentrated in summer and autumn, and the runoff response in summer and autumn is greater than that in spring and winter. In spring and winter, when rainfall and mining area change by the same percentage, the impact of changes in the mining area on runoff is greater than the effect of rainfall changes.

(3) During the coal mining period, the impact of human activities on runoff was much greater than the impact of rainfall, and this is the main reason for the sudden change in the rainfall-runoff relationship in the Yangquan Basin in 1990. The pre-exploitation rainfall has the same trend as the runoff. The impact of rainfall on runoff rises before and after the mining period and decreases during it. The contribution rates of rainfall to runoff are 14.04% and 60.66% during the mining and recovery periods, respectively, and the contribution rate of mining activities to runoff is gradually decreasing. During the mining and recovery periods, the impact contribution rates of mining activities are 85.96% and 39.34%, respectively.

Through hydrological simulation, this study provides a better understanding of runoff changes in the Yangquan Basin. The results indicate that the hydrological response to mining activities in the Yangquan Basin is undergoing changes, which can draw attention from other mining regions worldwide. To mitigate water-related hazards and resource depletion caused by mining activities, the methods used in this study can be applied in the policy-making processes of other regions to achieve sustainable development.

Data Availability Statement: All authors made sure that all data and materials support published claims and comply with field standards.
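As a schematic recap of the attribution procedure described in the Discussion, the following sketch separates rainfall- and mining-driven contributions to the runoff decline using the benchmark-difference logic outlined there. It is a hedged reconstruction with hypothetical numbers, not the authors' code.

```python
def attribute_runoff_change(Q_natural, Q_sim_no_mining, Q_observed):
    """Split the runoff decline relative to the natural-period benchmark into a
    rainfall-driven part (model run without mining vs. benchmark) and a
    human/mining-driven part (observed runoff vs. model run without mining)."""
    total_change = Q_observed - Q_natural
    rainfall_part = Q_sim_no_mining - Q_natural
    mining_part = Q_observed - Q_sim_no_mining
    share = lambda part: 100.0 * part / total_change if total_change != 0 else 0.0
    return share(rainfall_part), share(mining_part)

# Hypothetical mean annual runoff values (mm) for a mining-period example
rain_pct, mine_pct = attribute_runoff_change(Q_natural=60.0,
                                              Q_sim_no_mining=55.0,
                                              Q_observed=30.0)
print(f"rainfall contribution: {rain_pct:.1f}%, mining contribution: {mine_pct:.1f}%")
```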
Analysis of Nurses' and Physicians' Attitudes, Knowledge, and Perceptions toward Fever in Children: A Systematic Review with Meta-Analysis

Context: Fever is a common symptom in children that nurses and pediatricians treat. Although it is a common sign in clinical practice, fever instills irrational fears in parents that health professionals share. Objective: To investigate whether doctors' and nurses' knowledge, perceptions, and attitudes toward fever influence how this sign is managed. Furthermore, it intends to evaluate whether educational programs increase knowledge and change attitudes and/or perceptions of nurses about children's fever. Data Sources: A systematic review with meta-analysis was conducted following PRISMA international standards and the Cochrane recommendations. Study selection: Articles examining health professionals' (doctors and/or nurses) knowledge, perceptions, and/or attitudes toward fever in children and the use of antipyretics were selected for the study. Data extraction: The qualitative analysis was carried out by classifying the articles according to the applied educational programs for nurses related to fever care for children that evaluated different outcomes to determine their efficacy. Results: For the qualitative synthesis, 41 articles were included, and 5 of these were taken into the meta-analysis, which measured the effectiveness of educational programs for fever management in nurses. Limitations: All of the included studies generally had a high risk of bias. Conclusion: According to the evidence reviewed, nurses' and physicians' perceptions and attitudes regarding fever management in children indicate an overtreatment of this sign. We can give a recommendation grade of D on the use of educational programs to modify attitudes, perceptions, and knowledge about fever in children and improve clinical practice in nurses.

Navarro and de Carlos demonstrated that professionals' incorrect attitudes toward the febrile child are cultural errors passed down from generation to generation, stemming mainly from fear of febrile seizures and neurological sequelae [17]. The irrationality of these fears is evident given that the evidence shows that febrile convulsions do not cause neurological damage and that antipyretics do not prevent them, although they are sometimes used for that purpose [3,9,10,13,15,18]. Separating the sign from the underlying condition and understanding the febrile process, according to the authors, allows professionals to provide the essential care to the child: watching for signs of serious illness, avoiding dehydration, and ensuring nutrient intake [6,7,11]. In this regard, Razón stated that doctors and nurses receive little training about fever and that this causes anxiety in managing the febrile child [1]. In another study, Demir and Sekreter found that 65% of physicians consider this sign harmful, and 85% of pediatricians believe that fever can cause brain damage [19]. A temperature limit for antipyretic administration is a fundamental aspect in reaching a consensus on their usage. According to a study conducted with Australian nurses, basing the use of drugs on other aspects, such as the child's discomfort, can lead to conflicts with parents and/or peers [20]. Australian nurses also reported that parental influence on antipyretic measures, nursing colleagues, medical professionals, and workload, among other factors, affected their practice [20].
Educational programs have been considered a resource for changing the activities that professionals perform daily in clinical practice. The studies assessing educational programs that were included in this review aimed to modify ingrained knowledge, attitudes, and perceptions in nurses and evaluated their efficacy in increasing knowledge [21,22], changing attitudes [23], knowledge and attitudes [24], perceptions and attitudes [25], or knowledge, attitudes, and perceptions [26]. The purpose of this systematic review and meta-analysis is to determine how doctors' and nurses' knowledge, perceptions, and attitudes toward fever management in children influence their practice. Furthermore, it intends to investigate whether educational programs increase knowledge and change nurses' attitudes and/or perceptions about children's fever.

Design

A systematic review with meta-analysis was carried out on doctors' and nurses' knowledge, attitudes, and perceptions about fever in children under the age of 14. PRISMA international standards and Cochrane recommendations were followed, and the review was registered in PROSPERO on 31 August 2020 (No: CRD42020201362).

Search Strategy

From 15 November 2020 to 15 January 2021, the following databases were used for the bibliographic search: Virtual Health Library, Pubmed, Web of Science, and Cochrane. In addition, an EBSCOhost meta-search was conducted with the following selected databases: Psychology and Behavioral Sciences Collection, APA PsycInfo, CINAHL with Full Text, Educational Administration Abstracts, MLA Directory of Periodicals, MLA International Bibliography, APA PsycArticles, E-Journals, eBook Collection (EBSCOhost), Social Work Abstracts, and SocINDEX with Full Text. The search strategy was developed by truncating the DeCS/MeSH descriptors and a free term with Boolean operators. To avoid losing results, the search formula was: (pediatricians OR nurses, pediatric) AND (fever OR Fever Phobia), in addition to a subsequent search with the single free term "fever phobia". Subsequently, a directed or snowball search was conducted, which included reviewing the references in the articles as well as those relevant to the study phenomenon that had not appeared due to the limits applied. The article selection process was performed in two phases: the titles and abstracts were reviewed first, followed by a full-text reading to determine whether they met the inclusion criteria and were of sufficient quality.

Inclusion Criteria

(a) Articles examining health professionals' (doctors and/or nurses) knowledge, perceptions, and/or attitudes toward fever in children under the age of 14 in hospital and community settings, as well as the use of antipyretics. (b) Written in English or Spanish.

Exclusion Criteria

(e) Articles focusing on parents' knowledge, perceptions, and/or attitudes toward fever in children. (f) Articles on the assessment of discomfort in children. (g) Letters to the editor, comments from experts, and translations of original articles.
Data Collection

The articles were selected in pairs, and any disagreements were resolved by consulting a third researcher. Identification, screening, selection, and inclusion were the four stages of the selection procedure. All of the articles' titles were scrutinized using the inclusion criteria to eliminate those that were irrelevant. Papers with dubious titles were included in the following phase for in-depth analysis. A summary of each chosen study was reviewed in the third phase of selection to determine doctors' and nurses' knowledge, perceptions, and attitudes about fever in children. An Excel coding sheet was then created for each article type: literature reviews, surveys, descriptions of common practice, and educational programs. Finally, the meta-analysis included the selected articles containing the evaluation of an educational program for nurses. The quality of the studies was assessed using the Critical Appraisal Skills Programme (CASPe) via the online critical reading tool "FLC 3.0". According to the criteria applied by this tool, the studies were classified as low, medium, or high quality.

Assessment of the Bias Risk

The risk of bias of the articles included in the meta-analysis was evaluated, and the authors agreed on the biases of the included research. The risk of bias was assessed using seven domains of the Cochrane Collaboration Tool version 5.1.0 [27], including appropriate sequence generation, allocation concealment, blinding, incomplete outcome measures, selective reporting, and other biases. First, the aspects of the studies related to the aforementioned domains were examined, and then the risk of bias was determined. The risk levels were labeled as "low risk", "high risk", or "moderate/uncertain risk". The overall risk of bias for each study was calculated based on the analysis of each domain separately, and the rating was assigned according to the most prevalent risk-of-bias value across the items of each study.

Qualitative Synthesis

The qualitative analysis was carried out to gain a better understanding of the phenomenon under investigation. The variables that influence the care and/or treatment of febrile children by health professionals were investigated. Non-analytical articles, or those that did not establish relationships between variables, were listed in this category of analysis. Not all of the articles could be included in the meta-analysis, due to their heterogeneity and non-analytical nature.

Quantitative Synthesis

Studies of an analytical nature were then considered for the meta-analysis. The articles evaluating educational programs for nurses that had at least two statistical measures of the variables were extracted. The Meta-Essentials Excel tool was used to carry out the meta-analysis. The articles' analyses were divided into three groups based on the variables evaluated: knowledge, attitudes, and perceptions. If an article examined more than one variable, it could be classified as belonging to more than one category. All the articles included in the meta-analysis reported the relationship of the variables through the mean and standard deviation (SD); therefore, these two measures were chosen to compare the results of the various articles and draw conclusions.
The mean and SD of each variable were extracted from the pre-test and post-test results of the selected studies, depending on whether their intervention included an experimental and a control group or only an experimental group. The standardized mean difference (SMD) with 95% confidence intervals (CI) was calculated by dividing the mean difference between the experimental and control groups by the SD of both groups. For each study, the SMDs (Cohen's d) were weighted by the inverse of their variance to obtain the pooled index of the magnitude of the effect. A random-effects model was selected due to the high heterogeneity of the studies. The differences between the averages of the pre-test and post-test of the experimental and control groups were calculated to determine the effect size of the variables. Subsequently, the difference between the means of the experimental and control groups' pre-tests was assessed; the effect size could then be obtained by adding this difference to the experimental group's mean difference, following the relation proposed by Cohen. For one study that did not have a control group but had a pre-test and a post-test of a single group, the effect size of the involved variables was obtained by dividing the difference between the pre-test and post-test means by the post-test SD, as proposed by Cohen. For one study lacking mean and SD data (Considine & Brennan, 2007), the effect size was calculated using Rosenthal's r: the Z value derived from the Mann-Whitney U test was related to the sample size N (r = Z/√N), which yields an index with properties comparable to Cohen's d. Heterogeneity was assessed using the inferential Q test (Cochran's Q) and the I² heterogeneity index with its 95% CI. When I² was more than 50%, heterogeneity was considered high. The effect size was interpreted using the following thresholds: 0.2 small, 0.5 medium, and 0.8 large. A p-value of 0.05 was used to determine statistical significance.

Search Results

The search was completed in January 2021, with 1298 articles found in databases and 42 articles found using the "snowball" technique. After removing duplicates, 1046 articles remained. Eighty-eight of these were evaluated in full text for inclusion in the study. Of these, 47 papers were excluded for the following reasons: concentration on parental knowledge, perceptions, and attitudes toward fever in children; focus on fever after vaccination; studying fever solely from a biological or pharmacological standpoint; defining malaise; and stating expert opinions, as well as translations of original articles. Finally, 46 articles were derived: 41 were included in the qualitative synthesis, while the quantitative synthesis comprised 5. This information is represented in Figure 1 (PRISMA flowchart).
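As a concrete illustration of the pooling procedure described in the Quantitative Synthesis subsection, the following is a minimal sketch of inverse-variance weighting of standardized mean differences under a DerSimonian-Laird random-effects model, together with Cochran's Q and I². It is illustrative only (the review itself used the Meta-Essentials Excel tool), and the input values are hypothetical.

```python
import math

def pool_random_effects(d, var):
    """Pool standardized mean differences d with within-study variances var using
    the DerSimonian-Laird random-effects estimator; also return Q and I2 (%)."""
    w = [1.0 / v for v in var]                        # fixed-effect (inverse-variance) weights
    d_fixed = sum(wi * di for wi, di in zip(w, d)) / sum(w)
    Q = sum(wi * (di - d_fixed) ** 2 for wi, di in zip(w, d))
    df = len(d) - 1
    C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (Q - df) / C)                     # between-study variance
    w_star = [1.0 / (v + tau2) for v in var]          # random-effects weights
    d_pooled = sum(wi * di for wi, di in zip(w_star, d)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    I2 = max(0.0, (Q - df) / Q) * 100.0 if Q > 0 else 0.0
    return d_pooled, (d_pooled - 1.96 * se, d_pooled + 1.96 * se), Q, I2

# Hypothetical SMDs and variances for five educational-program studies
d_pooled, ci, Q, I2 = pool_random_effects(d=[0.9, 1.4, 0.3, 2.1, 0.7],
                                           var=[0.05, 0.08, 0.04, 0.12, 0.06])
print(round(d_pooled, 2), [round(x, 2) for x in ci], round(Q, 1), round(I2, 1))
```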
The included articles were then used to create two tables. Table 1 lists the articles included in the qualitative synthesis. This table provides the following information for each paper: design, data collection method, objectives, location and date, population and sample, results, conclusions, quality, and level of evidence. The second table (Table 2) summarizes the articles that evaluated educational programs and could be used in the quantitative analysis. Therefore, the following sections were added to Table 2: intervention and comparison, the number of participants in the intervention and control groups, the measurement instrument applied, and the variables analyzed.

As an example of the findings collected in Table 1, the temperature that pediatricians considered to be fever was higher than 37.0 °C for 14.3% of respondents. General beliefs about fever and fever control were positive; for example, 60% believed that fever was not necessarily related to the severity of the illness, and 75% reported that children with cardiac and/or respiratory disorders were "at risk" for fever. However, several negative beliefs were identified that would significantly affect practice. More than half of the nurses (57%) believed that their peers were phobic of fever, and 61% believed that fevers below 41 °C could be harmful to children. Some determined the need for antipyretic administration by temperature measurement alone (39%) and reduced all temperatures of 38.3 °C and above (39%), even when the child was asleep (37%). Most believed that fever needed to be treated aggressively in children with a history of febrile seizures (85%). One study identified that level 2 nurses and nurses with one to four years of paediatric experience knew the most about fever and its control. However, this knowledge did not positively influence their beliefs, which were like those of novice paediatric nurses. It is essential that learning "on the job" is evidence-based, and programmes should focus on beliefs as well as knowledge, as higher levels of knowledge in fever management do not by themselves positively influence nurses' beliefs.

The descriptive studies included 472 nurses; 83 stated that they had training in pediatrics, but the majority referred to themselves as pediatric nurses without specifying whether the training was a regulated or official postgraduate course. The total number of doctors considered was 4651, with 4343 being pediatricians and 20 being resident doctors. Within the quantitative studies, quasi-experimental designs evaluating educational program effectiveness stood out (n = 6) [21][22][23][24][25][26]. There were 293 pediatric hospital nurses in total. Five studies were included in the meta-analysis; one was excluded due to a lack of statistical data. The meta-analysis was conducted with five studies that included samples of 59 Korean nurses [26], 126 Turkish nurses [21], 31 Australian nurses [22], and two other studies that did not specify the sample of Australian nurses used [20,24,25]. According to the bias analysis, none of the studies used adequate randomization for sample selection. Gender was only specified in two of the quantitative studies included in the investigation.
The average age of the participants was 31.51 years, with a standard deviation of 7.5 years [21,22,24-26]. In terms of participant distribution and design, three studies presented an experimental and a control group, administering pre-tests and post-tests in both groups [26] or including a "latency test" carried out 4 months after the educational intervention [24,25]. No procedures for group randomization were used. The other studies only presented an intervention group, with the evaluation of a pre-test and a post-test in the same group [21,22]. Finally, a qualitative methodology based on focus groups in an Australian hospital with 15 nurses was included [20].

Qualitative Analysis

The qualitative analysis was carried out by classifying the articles on educational programs for nurses according to the outcomes they evaluated to measure the efficacy of the studied program related to fever care for children, which would later be included in the quantitative meta-analysis. These classes were attitudes, knowledge, and perceptions.

As a result, the research was classified according to the variable it measured. When the evaluation of the education programs probed nurses' understanding of fever physiology, fever management, and antipyretic drugs or measures, the results were categorized as knowledge. The attitude class contained tests that assessed changes in the professionals' clinical practice regarding the management of a febrile child, performance against febrile convulsions, and the health education they would provide to parents following the training. The perceptions category included results relating to how parents, other nursing colleagues, and physicians influence the care of a febrile child, and how much control professionals have over the management of the febrile child.

The analyzed studies employed various educational methods, and attempted to adjust these variables as well as different evaluation methods, to determine whether these programs are effective. A Korean study compared a "blended learning program" (which combines traditional classes with online learning) to face-to-face lessons (traditional classes). Based on the findings, there was no difference in effectiveness between the two methods, but the intervention group reported higher satisfaction; both methodologies were evaluated through a pre-test and a post-test [26]. Another study measured the increase in knowledge between the pre-test and the post-test for those who were given a "training booklet". The authors specified that those who were provided the training booklet had a slight increase in knowledge, but since each professional read this information on their own, it was impossible to control whether this reading was done correctly [21]. Two Australian studies compared the prior and subsequent knowledge of a group of nurses after receiving two tutorials; only 45.2% of the participants completed the tutorials, and the remaining professionals were given the information in writing prior to the evaluation [22,23]. Two other studies with the same sample compared the pre-test, post-test, and latency test of nurses who had participated in a peer education program with those who continued their usual practice, but did not specify which sample was the control group [24,25].
The studies were all performed in hospitals. Because of staff mobility, which resulted in the loss of professionals who changed units or terminated their contracts [24,25], the sampling method used was convenience sampling [21-23], selecting nurses from two children's hospitals [26] or choosing entire units. Although there were differences in nurse specialization, work experience, and unit category, no prior selection criteria were established. It should be noted that the majority of the sample losses occurred during the follow-up and evaluation of the educational program. Therefore, the included studies had a high risk of bias in participant selection.

In terms of questionnaires, all educational programs used a structured self-administered questionnaire in their tests. Four of them applied or adapted the "fever management survey" developed by Walsh et al. in 2005, which is divided into three questionnaires: fever management knowledge (FMK), fever management attitudes (FMA), and fever management practices (FMP) [23-26]. Of the remaining studies, one used a questionnaire created and validated by the study authors to assess knowledge [22], and another used an unvalidated questionnaire to evaluate knowledge [21].

Quantitative Analysis and Meta-Analysis

The meta-analysis was carried out on five studies that evaluated educational programs [21,22,24-26]. One study was excluded because its results were not properly analyzed using statistical data [23]. All three variables show a high degree of heterogeneity. Knowledge is the variable with the highest heterogeneity (I2: 97.23%), followed by attitudes (I2: 88.25%), while perceptions show the least heterogeneity (I2: 60.75%). Regarding statistical significance, the knowledge and attitude variables show statistically significant results (p < 0.001), but the perceptions variable does not (p > 0.05). These analyses are depicted graphically in Figures 2-4.

Risk of Bias

In general, all of the included studies had a high risk of bias. In addition, all were at high risk of insufficient sequence generation, allocation concealment, and blinding [21-26]. With the exception of one [26], all had a high risk of bias due to incomplete outcome measures. Two of the reports chosen had a low risk of bias [21,26], while the others had a moderate risk level [22-25]. The risk of bias in the included articles is graphically depicted in Figures 5 and 6.

Discussion

This systematic review included 41 studies to determine whether the knowledge, perceptions, and attitudes toward fever of doctors and nurses who work with children affect antipyretic measures. In addition, we aimed to assess whether the educational programs included in the meta-analysis could lead to changes in the usual clinical practice of nursing care of the febrile child.

As Razón points out, a lack of knowledge and understanding of this process leads to the use of aggressive treatments to achieve normothermia, such as combination antipyretic therapy [1]. Nurses and doctors agreed that fever might be beneficial, but they were concerned about its long-term consequences. For example, 50% of nurses in Ireland supposed that fever has beneficial effects on the immune system, and 84.9% reflected that regular paracetamol usage could disguise symptoms, but also that fever should be treated rapidly to avoid febrile seizures [29].
The research demonstrated that a temperature limit, rather than discomfort, was the most important criterion for providing antipyretics. According to Radhi's study, most physicians believe that antipyretic medication is intended to reduce fever symptoms; therefore, doctors tend to prescribe antipyretics for every child with this sign. As a result, it could be given to a child who is depressed as well as to a playing child [5,9].

In one study, parents were advised to use antipyretics whenever the temperature rose above 38.3 °C [28]. Another study reported the temperature limits used by Argentine doctors to administer antipyretics, with 49% doing so at 38 °C or lower [37]. In a Spanish study, only three doctors advocated the general condition as a criterion for delivering antipyretics, while 67.8% of primary care pediatricians and 66.7% of hospital pediatricians recommended antipyretics when the temperature reached 38 °C [38].

According to an audit conducted in a hospital of the same nationality, 45% of antipyretics were given at temperatures below 38.3 °C. Although the authors of a recent study clarified that these medications could also be used for purposes other than temperature control, such as pain relief or discomfort, the reasons for their use were not explicitly stated in the reviewed article [39].

According to the pediatricians who participated in the Martins & Abecasis study, fever is a healthy physiological process for the immune system, and the child's health should be considered during treatment due to the discomfort fever may cause. However, antipyretics are still recommended by 78.1% of family doctors and 81.4% of pediatricians [35].

A mixture of antipyretics was shown to be effective in lowering body temperature. However, its safety, its efficacy in improving the child's comfort, and other clinical outcomes are still questioned [2-4,8,11,15]. Even so, only 15% considered the child's discomfort as the first symptom [41]. Similarly, 76.1% of a Spanish pediatrician sample maintained this practice, with the caveat that it should only be used in exceptional cases [38].

Physical treatments such as applying cold compresses or removing the child's clothing would counter fever treatment [2,4,5,8,11,16]. Examples of these strategies among pediatricians can be found in the Lava et al. research, where only 7% of pediatricians prescribed antipyretic therapy as an alternative, whereas 65% recommended physical temperature-lowering strategies [31].

The usage or recommendation of various temperature reduction methods by doctors and nurses has also been examined through descriptive research, and practice may have changed over time. For example, the surveys conducted by Chiappini et al. in 2009, 2012, and 2015 showed that the percentage of pediatricians using alternative antipyretic therapy, or suggesting it to reduce the incidence of febrile seizures, decreased from 27% to 12.2%. There has also been a reduction in the recommendation of physical measures, from 65% to 52%, although the recommendation to consider the child's discomfort rather than a temperature cut-off declined from 45.3% to 38.2% [33,34,40].

Notably, the present review has focused on the knowledge, attitudes, and perceptions of professionals about fever in children; however, several studies have shown that how professionals treat fever influences parents.
In this regard, it is worth noting that most authors seem to agree that the aggressive fever management by professionals promotes parental fear of this sign and their desire to achieve normothermia in their children [2,3,10,14,17].This results in a rebound effect in which the parents' anxiety influences the professionals, who seek to quickly resolve fever to satisfy them and reduce their anxiety [1,5,7].When asked about this, pediatricians denied reducing fever to calm parents (81% and 63%) [31,32].Moreover, the nurses did mention that parents were pressuring them to provide their children antipyretics [20]. Consequently, researchers have developed actions to change nurses' attitudes, perceptions, and/or knowledge in pediatric practice by measuring the efficacy of various educational methods. Edwards's examination demonstrated that peer education could increase general knowledge about fever but did not significantly improve knowledge about antipyretics.In terms of attitudes, they reported a significant improvement [24].There were no differences in attitudes toward the efficacy of antipyretics between groups.In contrast, in the experimental group, the perception of control increased, and the intention to use antipyretics decreased [25]. Jeong and Kim compared a hybrid online and face-to-face approach versus a traditional method.Based on the findings, both the control and intervention groups significantly improved their knowledge of fever, attitudes, and intentions to use antipyretics.Nevertheless, regulatory influences and the perception of control did not change significantly.As a result, while the type of education did not improve the traditional method, it resulted in a higher level of satisfaction [26].In other cases, using a training book slightly increased their knowledge [21].Another study measured face-to-face tutorials and concluded that they improve knowledge and clinical practice [22,23].In general, meta-analysis revealed that educational methods cause a statistically significant change in knowledge and attitudes, as opposed to perceptions which did not show a statistically significant change. There are some limitations to this review.Due to language limitations, relevant studies might have been left out.In addition, the use of unvalidated questionnaires in articles may restrict the validity of the results.Furthermore, the study phenomenon, i.e., attitudes, knowledge, and perceptions, are variables that are difficult to quantify. 
Conclusions

The attitudes, knowledge, and perceptions of health professionals that lead to overtreatment and overestimation of fever in children have received little attention. According to the reviewed literature, the way professionals understand fever and how they respond to it may result in fever management in children based on overtreatment and overestimation of fever and its complications, reflecting a possible irrational fear of this sign. On the one hand, most studies are descriptive and do not investigate these issues analytically, making it difficult to draw conclusions with a high level of evidence. On the other hand, existing studies that evaluate educational programs are an intriguing approach to this phenomenon, as they attempt to change knowledge, perceptions, and attitudes in order to modify daily clinical practice. Meanwhile, they still present a high risk of bias, and their efficacy cannot be affirmed. A qualitative study could delve deeper into the phenomenon, determining the underlying reasons and acting on them. The majority of the included studies, according to the SIGN scale ("Scottish Intercollegiate Guidelines Network"), have a descriptive evidence level of 3 or a quasi-experimental level of 2. As a result, we can give a grade of recommendation D on the use of educational programs for the modification of attitudes, perceptions, and knowledge about fever in children and the improvement of clinical practice in nurses. Hence, the interventions evaluated cannot be recommended or discouraged.

(a) Articles that only evaluate fever from a biological perspective.
(b) Research articles on the pharmacological properties of antipyretics.
(c) Articles about fever after vaccination.
(d) Articles assessing the effectiveness of temperature measurement methods.

Figure 2. Data analyzed for the variable knowledge.
Figure 3. Data analyzed for the variable attitudes.
Figure 4. Data analyzed for the variable perceptions.

Table 1 (excerpt of results, conclusions, and quality ratings reported for included survey studies): 69% of pediatricians stated that they would give antipyretics for temperatures >38.5 °C, 17.7% above 38.0 °C, and 11.6% above 39.0 °C. In both surveys, most pediatricians recommended the use of physical methods if the fever persisted over time. In 2009 only 11% of the pediatricians correctly clarified that there is no temperature cut-off for initiating the use of antipyretics, but that it depends on the patient's discomfort, while in 2012 a higher percentage of pediatricians, 45.3%, declared this. Contrary to the GIF recommendations, in 2009 27.0% of the participants declared that they recommend the alternate use of ibuprofen and paracetamol; this proportion decreased to 11.3% in 2012. The findings underline the importance of disseminating the IFG to improve pediatricians' knowledge of fever. Some misbehaviours, such as the alternate use of antipyretics and their rectal administration in the absence of vomiting, need to be further discouraged. An additional strategy may be needed to disseminate the IFG through other channels and to remove possible barriers to adherence to the IFG. Quality: MEDIUM. ... (not recommended by the IFG) in 2015, a significant increase from the results reported in the 2012 survey (36.4%). The use of antipyretics based on the presence of discomfort, and not on a specific cut-off in body temperature, was recommended by only 38.2% of pediatricians. Quality: MEDIUM.
v3-fos-license
2023-12-04T17:39:10.113Z
2023-11-28T00:00:00.000
265595743
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1911-8074/16/12/497/pdf?version=1701165228", "pdf_hash": "68aad51ad5dc5523a866377852dcef477e315820", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44060", "s2fieldsofstudy": [ "Business", "Economics" ], "sha1": "cd548f82fc76f34311409bd83065cd13a71f0548", "year": 2023 }
pes2o/s2orc
Examining the Impact of Agency Issues on Corporate Performance: A Bibliometric Analysis : An agency problem is defined as a conflict of interest arising due to a misalignment of interests among the managers and other stakeholders of the company. This article aims to review the articles addressing the agency problem and their impact on business performance. This article reviews the contributions of prominent theorists on agency problems and agency costs. Using bibliometric attributes of 740 articles from the Scopus database, this study highlights the publishing trend and outlets, along with leading contributors and collaborators in terms of authors, institutions, and countries. This study identifies the clusters through the bibliographic coupling technique and a trend topics analysis. Most researchers have focused on corporate governance and expressed the agency problem as one of the impact areas. This study is unique as no study to date specifically focuses solely on agency theory or the agency problem through the lens of bibliometric analysis. Future research directions on agency problems and their solutions conclude this study. Introduction The principal-agent problem (or the agency dilemma) occurs when one entity (the "agent") is employed to make decisions and/or take actions on behalf of, or impacts, another entity (the "principal").The dilemma happens when agents act in their best interests, contrary to principals' interests.This problem usually arises when both entities maximize their interests.When agents focus on their own gains before the principal's gains, it is called an agency problem (AP).The emergence of agency theory and the associated problems is rooted in the complexities arising from the separation of ownership and control within organizations.Originating in the 1970s, agency theory became a pivotal framework employed across diverse disciplines such as economics, law, finance, accounting, and political science.Initially introduced by Jensen and Meckling (1976), the theory gained traction due to its applicability in analyzing the challenges arising when one entity, the agent, acts on behalf of another entity, the principal, often leading to misalignments of interests.Ownership separation from control in big companies leads to a conflict of interests among shareholders and management.The firm's managers often focus on personal goals that conflict with the shareholders' wealth maximization objective (Shaifali 2019).The issues that arise among principals and their agents are often due to a lack of congruence in their approach because of information asymmetry (Jiang 2023).Information asymmetry happens when either party has more information than the other.Thus, the main focus of both principals and agents should be on resolving APs and saving on agency costs.Panda and Leepsa (2017) define agency costs as the internal costs arising from the misalignment of interests of the agent and the principal.It constitutes the cost of selecting and recruiting a suitable agent, costs incurred in setting benchmarks, overlooking the agent's actions, the bonding costs, and the residual loss arising from conflicts between the management and shareholders.Scholars researching Agency Theory (AT) study the relationship between principals and agents, and suggest ways to minimize the occurrences of agency issues and, ultimately, agency costs. 
The principal and agent theory emerged in the 1970s from the combined economics and institutional theory disciplines.The theory was taken up by researchers in several disciplines, like strategy (Barnard 1938), law (Banfield 1985), economics and finance (Jensen and Meckling 1976), accounting (Baiman 1990), and political science (Mitnick 1982), among many.Researchers use agency theory to analyze the top leaders in big private and public enterprises.Given its roots in economics, agency theory suggests that the agents who work in an organization have a utility maximization logic and seek to get what is in their best interest, even when it is not in the best interest of the organization (Eisenhardt 1989).Based on the essential contributions of the work of Barnard (1938) on cooperation in organizations, agency theory focuses on the conflict between objectives, created by various individuals who, while engaged in these organizations, seek what is in their best interest. The number of bibliometric studies on AP is limited (Bendickson et al. 2016).Past studies have focused a lot on corporate governance (Jahja et al. 2020;Naciti et al. 2022), boards of directors (Pascual-Fuster and Crespí-Cladera 2018), or more specific topics, such as board diversity and its impact on CSR (Baker et al. 2020a;Do 2023;Eliwa et al. 2023).This study differs from other published reviews because, to the best of the authors' knowledge, this research is the first bibliometric study that focuses primarily on the agency problem (AP) and its impact on financial performance across business fields with language, scholarly, and subject filtration in the Scopus database.This review focuses on mapping the domain of AP research through a bibliometric analysis.The insights on the current scenario and future research directions are shared after different analyses on AP.Thus, this study has the below-mentioned Research Questions (RQs): • What is the trend in publications on AP? • Which are the most influential publishing outlets for research on AP? • Who are the prolific contributors to the field of AP? • What are the themes and clusters for research on AP? • What are the future research areas in the field of AP? The remaining sections of this document are arranged as follows.The second section discusses the background of AP.The third section describes the methodology applied for this study.Results and discussions for all analyses are summarized in the fourth section.Further sections contain the research themes and future research directions to strengthen the field of AP. 
Theoretical Background Though the problem of the agency has existed for a very long time, Smith was the first author to ever write about it (Seth 2018).He forecasted that if the management of an organization is handed over to a person or a group of persons other than the owners, then it is likely that they may not work for the benefit of the owners.Bhabra and Wood (2014) discussed the ownership structure of large firms operating in the USA and argued that agents may use the assets of the organization to maximize their interests.The roots of agency theory trace back to seminal works that have shaped its conceptual foundation.Berle and Means' groundbreaking work in 1932, particularly in "The Modern Corporation and Private Property", laid the groundwork for understanding the challenges arising from the separation of ownership and control in large corporations.Moving forward, Eisenhardt's influential theories significantly advanced the discourse by addressing the intricacies of control mechanisms within organizations (Eisenhardt 1985(Eisenhardt , 1989)).These milestones underscore the theoretical evolution of agency theory, emphasizing shifts in focus from corporate governance dynamics to nuanced examinations of principal-agent relationships.Furthermore, pivotal contributions by scholars such as Jensen and Meckling in their 1976 paper, "Theory of the Firm: Managerial Behavior, Agency Costs, and Ownership Structure", have been instrumental in defining the theoretical landscape of agency issues.Jensen and Meckling (1976) discussed three types of agency costs-monitoring costs, bonding costs, and residual losses.Monitoring costs are incurred by the principal to oversee the conduct and limit the aberrant activities of its agent.Bonding expenses are incurred to ensure that agents do not make certain decisions that may impact the principal's interests.The residual losses arise due to the misalignment of interests of the principal and the agent and are measured in terms of the dollar equivalent of the losses to the principal.Often the agents tend to underdeliver on their promises to the principal to maximize their gains.This is referred to as a 'moral hazard'.Also, the more autonomy an agent gets to conduct complex work, the more significant the moral hazard becomes (Cowden et al. 2020).As per theorists, there are two main reasons behind principal-agent problems-one arising out of different risk preferences of the principals and the agents, and another arising since both the principals and agents are rational human beings and work towards maximization of their self-interests.Managers may misbehave if their interests differ from those of the company (Dalton et al. 2007). 
Panda and Leepsa (2017) segregated the AP into three types.The first type occurs amongst the principal and agents, due to the different levels of risk appetite, information asymmetry, and self-satisfying behavior based on the rational behavior of human beings (Elfenbein and Knott 2015), which states that rational individuals maximize their interests.This misalignment in interests of agents and the principals gives rise to the principal-agent problem.The second type of AP happens between the major and minor shareholders in a company.Shareholders with major holdings have a higher weight in voting and are likely to make decisions for their benefit which may obstruct the interests of shareholders with a lesser stake in the company.This problem is usually found in companies with higher ownership proportion gaps (Fama and Jensen 1983).The third type of problem arises because of risk preferences between the principals and creditors of the company.Quite often, projects are funded with more debt and less equity, as financing completely through equity is expensive (Jiraporn et al. 2012;Khandelwal et al. 2023;Narayan et al. 2021).Some projects are subject to a high risk of default.If such a project is successful, good premiums are enjoyed by the shareholders, and creditors are paid at a pre-decided rate of interest; however, if the project is unsuccessful, the creditors are asked to accept partial settlements due to loss in projects.This problem is seen in companies engaging in project financing.This leads to creditors being stuck with lesser returns for high risks. Search Strategy The search strategy for this review meticulously employed a three-step process to ensure the inclusion of relevant articles while adhering to specific criteria.The first step involved a database search, primarily focusing on the Scopus database due to its comprehensive coverage and reliable bibliometric parameters (Archambault et al. 2009;Kumar et al. 2021;Mongeon and Paul-Hus 2016).The search targeted articles related to agency theory using the keywords "agency cost" and "agency problem" with the 'OR' operator, forming the foundational elements of agency theory.To narrow down the focus, additional keywords like "performance*" and "profit*" were included with the Boolean operator '*' to capture all keywords starting with "profit" (Tripathi et al. 2023).The study specifically concentrated on articles related to 'business', 'organization', or 'firm'.Exclusion criteria were then applied, excluding articles from 2022 and limiting the search horizon to 2021.The second step involved subject filtration, considering only articles within the "Business, Management, and Accounting" category in the Scopus database, aligning with the overarching discipline where agency theory resides.The third and final step incorporated scholarly filtration, restricting the review to research articles published in English, thereby excluding other languages and publication types such as conference proceedings, reviews, books, and book chapters (Mukherjee et al. 2022).Through this comprehensive inclusion and exclusion criteria framework, the study ultimately reviewed a total of 740 documents, ensuring a focused and relevant dataset for analysis. Bibliometric Analysis Bibliometric analysis serves as an invaluable methodological tool in scrutinizing the state of research within complex domains such as AP (Naciti et al. 
2022). Its utility lies in its ability to systematically evaluate and quantify the existing body of literature, offering insights into the trends, contributors, and thematic clusters shaping the field (Mukherjee et al. 2022). By employing bibliometric analysis, this study navigates the expansive landscape of AP research, unraveling patterns that might be challenging to discern through traditional literature reviews. This study applied a comprehensive bibliometric analysis to examine 740 selected publications on AP. Extracting bibliometric data from the Scopus database, an array of analyses explored the landscape of AP research. The investigation covered publication years, authors, journal titles, citations, institutes, and countries, addressing specific research questions. The study aimed to discern evolving publishing trends (RQ1), identify prominent outlets in the field (RQ2), and highlight top-performing authors, institutions, and countries (RQ3). The bibliometric analysis also delved into keyword exploration through co-occurrence analysis, forming knowledge clusters that delineate sub-themes within the AP domain (RQ4). Inspired by Donthu et al. (2021), this approach assessed the impact and centrality of each knowledge cluster. Within these clusters, articles were scrutinized to ascertain current research topics (RQ5) and identify gaps in the existing literature, shaping the future research agenda.

To implement the bibliometric analysis, the study utilized the bibliometrix package in the R software (version 4.3.1) environment, facilitated by the RStudio platform. Specifically, the 'Biblioshiny' command harnessed bibliometric techniques, including the identification of top authors, sources, and articles, as well as the analysis of countries, institutions, and trending keywords (a brief illustrative sketch of such a workflow is provided after the publishing trend results below). Additionally, science mapping was employed to visually represent knowledge clusters, providing a comprehensive overview of interconnections and focal points within the AP research landscape. Through this multifaceted approach, the study aims to contribute a nuanced understanding of the current state and future directions of research on AP and its implications for corporate performance.

Results and Discussion

The study highlights that the earliest articles on AP were published in 1985, and the total number of research articles indexed in Scopus up to 2021 stands at 740 after the language, scholarly, and subject filtration. This section further contains detailed findings on the bibliometric attributes of the articles under study. Firstly, the line chart represents the year-wise publications corresponding to the year of publication (Figure 1). Secondly, the top publishing outlets are listed in order of decreasing total citations (Table 1, Figure 2).

Publishing Trend

The line plot depicts the articles published in each year following the search strategy. As Figure 3 shows, AP has seen increased scholarly participation over the previous 36 years. The highest number of articles was published in 2021 (n = 60), the most recent year of the study. A sharp growth is observed from 2002, with increased outputs each year since.
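As a point of reference for the methodology described above, the snippet below is a minimal sketch of a typical bibliometrix/Biblioshiny workflow in R. It is illustrative only: the export file name is a hypothetical placeholder, and the exact steps (and the interactive Biblioshiny interface) used in the study may differ.

```r
# A minimal, illustrative bibliometrix workflow (not the study's actual script)
library(bibliometrix)

# Import a Scopus export (the file name "scopus_export.bib" is a placeholder)
M <- convert2df(file = "scopus_export.bib", dbsource = "scopus", format = "bibtex")

# Descriptive bibliometrics: annual production, top authors, sources, countries
results <- biblioAnalysis(M, sep = ";")
summary(results, k = 20, pause = FALSE)   # prints top-20 rankings
plot(results, k = 20, pause = FALSE)      # basic plots, e.g., production over time

# Science mapping: bibliographic coupling of documents (basis for knowledge clusters)
coupling <- biblioNetwork(M, analysis = "coupling", network = "references", sep = ";")

# The same analyses can also be run interactively in the Shiny app
# biblioshiny()
```

Keyword co-occurrence networks and trend-topic plots of the kind reported later in this article are available through the same package, for example via Biblioshiny's interactive menus.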
Publishing Outlets

The analysis of documents by publishing outlets reveals the top journals publishing articles on AP. This study lists the top 20 journals, sorted in order of total citations on articles, in Table 1. The journals are listed with their corresponding h-index, total citations on articles on AP, the number of publications, and the publication year start. The study puts the 'Strategic Management Journal' by Wiley at the first rank based on total citations. Interestingly, the 'Strategic Management Journal' is also the best journal based on its h-index for the study. This is followed by the 'Academy of Management Journal' by the Academy of Management, and 'The Journal of Financial Economics' by Elsevier. Figure 4 depicts the top 20 publishing outlets based on the number of papers contributed to the existing literature on AP. 'Corporate Governance: An International Review' is the highest contributor in this field, followed by the 'Journal of Corporate Finance' and the 'Strategic Management Journal', respectively.

Global Citations

Global citations refer to the count of all articles globally that have cited the study, without any filtration (e.g., language, scholarly, subject, etc.) (Baker et al. 2020b). Table 2 summarizes the articles on AP in decreasing order of their total global citations. In this study, we find that the article with the most global citations is "Control: Organizational and Economic Approaches", published in 1985 in the journal Management Science. It is cited a total of 1227 times globally and is followed by the article titled "Internationalization and firm governance: The roles of CEO compensation, top team composition, and board structure", published in 1998 in the Academy of Management Journal, with a citation count of 613.

Local Citations

Local citations refer to the count of all articles in the review corpus that have cited the study (Mukherjee et al.
2022). Alternatively, local citations are the citations received on an article from the current study sample of 767 articles after the language, scholarly, and subject filtration of the Scopus database. From this study, we find that the article titled "Board control and CEO compensation", published in the Strategic Management Journal in 1994, has been cited by 49 articles (6.4%). This is followed by the article titled "Do corporations award CEO stock options effectively?", published in the Journal of Financial Economics in 1995, with a local citation count of 45 articles (5.9%). The article ranking based on local citations is summarized in Table 3.

Prolific Authors

The analysis of the literature on AP reveals that Kathleen M. Eisenhardt, a professor in the school of engineering at Stanford University, has the highest number of total citations, with a count of 1227 citations of articles on agency problems. Her first publication was in the year 1985, entitled "Control: Organizational and economic approaches", wherein she discussed agency theory and control (Eisenhardt 1985). She is followed by the late Mason A. Carpenter of the University of Wisconsin-Madison, with 851 total citations in the field. The ranking of authors based on the number of papers published is shown in Table 4. As per the number of documents, Luis Gomez-Mejia of Arizona State University has published the most on AT (np = 11). He is followed by Robert M Wiseman of Michigan State University, with eight published articles on AP (see Figure 5).

Author Collaborations

The co-authorship analysis reveals the nature and groups of authors, which is similar to the social network of researchers working on a common project (Donthu et al. 2021). van Eck and Waltman (2010) stated that co-authorship networks have been studied extensively; however, the visualization of such networks has been given little attention. The analysis reveals the prominent collaborative groups (Figure 6) on the research topic of AP. The largest collaborative author group is Luis Gomez-Mejia of Arizona State University, Robert M Wiseman of Michigan State University, Geoffrey Martin of Melbourne Business School, and Herman Aguinis of George Washington University. The size of the circles in the figure reflects the influence of the author. Other collaborative groups include Michael Wolff and Jana Oehmichen of the University of Groningen, Lerong He of the State University of New York at Brockport and Martin Conyon of Bentley University, and, lastly in the figure, David F. Larcker of Stanford University and Richard A Lambert of Northwestern University.

Figure 7 summarizes the country-wise academic contributions in the field of AP. Notably, Alaska and the United States dominate the field with more than 800 citations. They are followed by China, with cited research in the bracket of 200-400 studies. Other countries highlighted in red represent lower-impact output within the range of 0-200 citations. The nations shown in white denote no or very little involvement in academic research in this area.

Country Collaborations

This study visualizes the country-wise collaborations in the form of a network diagram (Figure 8). Cardoso et al. (2020) presented a country research performance model to evaluate a country's research dominance. They considered the countries' overall performance, the countries' journals' performance, and the countries' institutions' performance to ascertain research dominance. However, those researchers did not study cross-country collaborations on the topic. In Figure 8, the country-wise collaboration network is depicted. The countries marked with identical colors are part of the same cluster, and countries in the same cluster are shown to collaborate more with one another than with countries marked with different colors. Five country-wise clusters can be observed from the figure, with the biggest cluster dominated by the USA, China, Canada, and Australia. The second cluster consists of European countries such as the United Kingdom, Italy, Poland, and Cyprus, along with Pakistan. The third cluster in terms of size comprises France, Finland, Norway, and Tunisia. This is followed by a fourth cluster of Belgium, Germany, and the Netherlands, and, lastly, a separate and unrelated cluster of Indonesia and Malaysia is shown to have researched together on AP.

Leading Institutions

The analysis indicates that Arizona State University made the highest contribution to the field of AP in the past, with 19 articles published (see Figure 9). This is followed by the University of Pennsylvania with 15 articles, and then by the University of Melbourne, Michigan State University, and Northwestern University with eleven, ten, and nine articles, respectively. The high contribution of Arizona State University can be credited to Prof. Luis Gomez-Mejia, who is also the leading author in the field. Similarly, the contributions of Robert M Wiseman and Richard A Lambert are significant in boosting the impact of Michigan State University and Northwestern University.

Institutional Collaborations

The network visualization diagram of institutional co-authorship reveals five major collaboration groups (see Figure 10). The biggest group consists of Arizona State University, the University of Melbourne, Michigan State University, and the University of Wisconsin-Madison. The next group consists of Texas A&M University, Indiana University, and Texas Christian University. These two groups are followed by collaborative duos of Stanford University-University of Pennsylvania, Iowa State University-San Diego State University, and the University of Texas-University of Minnesota.

Themes

The bibliographic coupling of documents is used to form knowledge clusters. The knowledge clusters contain documents of similar themes, underlining the thematic structure of AP. Following the methodology shared by Kumar et al.
(2021), we form knowledge clusters through the bibliographic coupling of documents based on authors' keywords. Four knowledge clusters are formed from the analysis; Table 5 summarizes the keywords and their occurrence for each cluster. The knowledge clusters are also plotted in Figure 9 based on their centrality and impact. Centrality in bibliometric research refers to the prominence of a publication or author within a scholarly network, often measured by the number and strength of connections. The impact of a cluster assesses the influence and significance of a research output, typically measured by citations and other indicators of scholarly impact (Sahoo et al. 2023). Impact measures the extent to which a research output is cited and acknowledged within the context of the discussed theme, highlighting its influence and relevance in the scholarly discourse (Sahoo et al. 2022). Table 6 lists the top ten most relevant documents for each cluster.

The first cluster comprises research articles on Corporate Governance. Westphal and Zajac (1995) suggested that high incentives and monitoring costs are not optimal; a firm's strategy should focus on corporate governance implications as much as on product and market implications. Devers et al. (2007, 2008), in their studies on executive compensation factors and their robustness, revealed that no theoretical model is strong enough to determine optimal executive compensation. van Essen et al.
(2012) find that strong directors can establish tighter links between executive pay and firm performance.Scholarship has further highlighted that CEOs that have higher board influence enjoy higher compensation packages and only shareholders and their agents can control them.The articles in this cluster are highest in terms of centrality to the theme of AP and have a high impact on the literature.The second cluster is the smallest cluster in size, focusing on Agency Costs and Governance.Researchers in this cluster have studied: (i) Agency costs arising due to governance hazards (Lambert 2001), (ii) Tax evasion and its impact on agents and principals (Crocker and Slemrod 2005), (iii) Relationship of free cash flows and governance (Jabbouri and Almustafa 2021), (iv) Ownership concentration and agency costs (Pandey and Sahu 2019).Scholars find that accounting disclosures authorized by agents can be misleading and can manipulate stock prices (Lambert 2001).Scholars also highlight earnings management as a reason for agency costs.Agents tend to follow reporting standards that benefit their pocket at the expense of principals (Michiels et al. 2013).Research in this cluster has a high impact on AP literature but lacks centrality. The third cluster comprises articles around APs focusing on Agency Theory and Compensation.Scholars listed out determinants of a suitable pay structure for executives and have tested them empirically.Ittner et al. (1997) list out performance measures for determining bonus structures of executives.O'Donnell (2000) criticized AT for its prediction ability for the management of international subsidiaries.She stated that the model based on intra-firm interdependence has higher predictive power in comparison to AT. Björkman et al. (2004) linked the managerial compensation structure of MNCs with knowledge transfer mechanisms; however, they could not find support for their proposal.Another branch in this cluster is observed with the use of CSR as an employee governance tool.Flammer and Luo (2017) suggest the integration of CSR-based governance in strategic planning.Employee governance on CSR practices is proven to mitigate employee absenteeism, shirking, and employee theft and fraud.This cluster is also the highest in terms of research impact and identifies compensation and social governance as the road to minimizing agency costs. The fourth cluster consolidates studies on Executive Compensation and Agency Costs.Panda and Leepsa (2017) suggested the use of variable compensation on profits as motivators for executives.If the principals and agents will benefit from a common thing, occurrences of AP can be minimized.Yermack (1995) states that performance incentives in form of cash rewards and stock options relate to agency cost reduction.Efendi et al. 
(2007) stated that performance-based benefits often lure managers to misstate accounting facts.Authors state that in the post-1990s market bubble world, the likelihood of cooked financial statements increased as CEOs have sizable holdings in the form of stock options.They also argue that agency costs also arise due to overvalued equities as managers try to maximize the value of their stock options in shorter runs.Chou and Buchdadi (2018) find that dynamic compensation structures have increased executive attrition and led to an increase in residual losses.Consistent with Conyon and He (2011), they found that performance-linked incentives are lower in state-owned firms and organizations with concentrated ownership.Their study also highlights the country-based differences with the example that executive pay for US managers is seventeen times higher than Chinese managers, proving that the agency costs differ on a geographic basis. Topics The keywords are analyzed by the bibliographic coupling technique to assess the use of keywords over the years (Agbo et al. 2021).The trend topics package in biblioshiny plotted the keywords by use frequency and years of most use (Figure 10).The article count (left axis) and year of publication (right side) are plotted on the Y-axis, whereas the prominent keywords over the past seven years are plotted on the X-axis.The analysis reveals that the researchers have studied executive compensation the most in the last seven years (n = 139).The majority of studies in the field began during 2010 and have a median year of study of 2015, considering research articles up to 2021.The upcoming research topics are identified as managerial ownership, family firms, and stewardship theory, respectively, as they have the most recent median years of study. Discussion The performance analysis of this bibliometric study addresses the research questions posed at the outset, shedding light on various facets of the AP field.Firstly, the study captures the evolving trend in AP publications, revealing a notable increase in scholarly engagement over the past 36 years, with a peak in 2021.Secondly, the identification of influential publishing outlets, with a focus on journals such as 'Strategic Management Journal', 'Academy of Management Journal', and 'The Journal of Financial Economics', provides valuable insights for researchers seeking impactful platforms for AP research dissemination.Thirdly, the analysis of prolific contributors highlights key individuals shaping the field, with scholars like Kathleen M. Eisenhardt and Mason A. Carpenter emerging as influential figures.Fourthly, the thematic clusters uncovered in the analysis, including Corporate Governance, Agency Costs and Governance, Agency Theory and Compensation, and Executive Compensation and Agency Costs, provide a comprehensive overview of the diverse research themes and clusters within AP.Fifthly, the study identifies emerging research areas, with a focus on managerial ownership, family firms, and stewardship theory, offering valuable guidance for future investigations in the AP domain.The next sections provide the research areas that should be explored by academic scholarship. Further Research Agenda The latest trends and topics for study are presented in this section to provide insights on the recent research.With the reading of the top ten research papers from each knowledge cluster and scrutiny on trend topic analysis, we draw attention to the following listed gaps and ongoing research streams (Figure 10). 
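Before turning to the individual streams, it may help to illustrate the mechanics behind the knowledge clusters and trend topics referred to above. The toy example below shows, in base R, how bibliographic coupling strength between documents can be computed from a document-by-reference incidence matrix. The documents and references are hypothetical, and this is a schematic sketch rather than the exact computation performed by the Biblioshiny interface.

```r
# Toy illustration of bibliographic coupling (hypothetical documents and references)
docs <- c("Doc1", "Doc2", "Doc3")
refs <- c("RefA", "RefB", "RefC", "RefD")

# Incidence matrix A: A[i, j] = 1 if document i cites reference j
A <- matrix(c(1, 1, 0, 0,
              1, 1, 1, 0,
              0, 0, 1, 1),
            nrow = 3, byrow = TRUE, dimnames = list(docs, refs))

# Coupling matrix: entry (i, k) = number of references shared by documents i and k
coupling <- A %*% t(A)
diag(coupling) <- 0   # ignore self-coupling
coupling
#      Doc1 Doc2 Doc3
# Doc1    0    2    0
# Doc2    2    0    1
# Doc3    0    1    0

# Documents with larger off-diagonal counts are thematically closer and tend to be
# grouped into the same knowledge cluster by a community-detection algorithm.
```

On a real corpus, the same quantity can be obtained with bibliometrix via biblioNetwork(M, analysis = "coupling", network = "references"), after which clustering and the centrality/impact mapping shown in Figure 9 can be applied.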
Managerial Debt and Firm Performance Research highlights that the use of short-term debt mitigates the agency costs of the firm by constraining CEOs' risk-taking preferences.Brockman et al. (2010) studied the impact of duration of debt on managerial risk-taking, thus minimizing the agency costs to the firm.Dhole et al. (2016) highlight that inside debt counteracts the CEOs' motivation to smooth earnings through earnings management; thus, CEOs are proven to be effective when they hold higher stakes of inside debt.Managerial debt is compared with multiple proxies of firm performance, and much research is going on in this area.Scholars (Harris and Raviv 1991;Naveed Kashan and Siddiqui 2021) have pointed out that debt commits the firm to pay out money in the form of interest payments, thereby leaving less 'free cash flow' for the managers to engage in selfish pursuits. CEO Pay of Family-Owned Companies Numerous studies have been conducted to see the impact of a CEO's origin on the performance of the business.While this issue may be subjective, some studies have found a difference in the leadership of a professional CEO hired from outside with one hired from the controlling family.Denis and Osobov (2008) highlighted the importance of studies on corporate governance before the millennium.Michiels et al. (2013) discuss the CEO pay structure of the private family-owned firms against the non-family-owned firms and find that the pay-for-performance relation is lower in family-owned firms.Kyung et al. (2021) stated that CEO compensation varies with type of investors and their stakes.On the contrary, Delgado-García et al. (2023) found that family firm CEOs have higher compensation in comparison with non-family CEOs.The contrasting findings of studies coupled with the trendiness of the topic indicate the need for further research. CEO Compensation and Sustainability Masulis and Reza (2015) found that CSR expenditures are linked with CEOs' image.A hike in societal expenditure is likely to benefit the management's public image, but at the same time will reduce net profits and, thus, shareholder's earnings.Francoeur et al. (2021) show that environment-compliant firms offer their CEOs less total compensation and are less dependent on incentive-based compensation than environmentally carefree firms.Karim (2020) finds that the remuneration patterns of CEOs and executive directors linked with socially responsible activities tend to a reduction in agency costs.Additionally, they find that having independent and executive female directors are linked with lower compensation for executives. CEO Compensation and Corporate Governance Westphal and Zajac (1995) suggested that a firm's strategy should focus on corporate governance implications equally as product and market implications.Devers et al. (2007) shared the theoretical framework for compensation models of top executives.He addressed the ongoing debate on determinants and consequences of executive compensation while asking scholars to take forward their work.Luo et al. (2023) evaluated the components of executive compensation and found a positive relationship with the firm performance of Chinese public firms.The researcher finds that incentives to top executives result in better firm performance as compared to non-incentivized executives. 
Economic Value Added and Employee Compensation Studies reveal that there is a positive relationship between the Economic Value Added (EVA) and executive compensation.A few studies also claim that high-paid managers are more arrogant and are more prone to agency issues (Brahmana et al. 2020).Chen et al. (2015) suggest using governance measures to bring down agency costs.Tripathi et al. (2023) suggest the methodology to calculate EVA and regress it with executive compensation.Eliwa et al. (2023) study the impact of governance indicators (board size, minority representation, appointment of family directors) on the EVA of listed companies, thereby suggesting an impact on the firm value. Stakeholder Theory As outlined by Kahler (2011), the stakeholder theory suggests that instead of amassing shareholders' wealth, the management should work towards the fulfillment of a variety of goals.The theory shifts the perspective from an organization's shareholders to its stakeholders.According to Freeman et al. (2018), stakeholders are individuals or a group of individuals who can affect or get affected by organizations' decisions.Freeman et al. (2018) carefully noted that any theory that redistributes decision-making ability was open to exploitation by non-shareholders.The reallocation of power from wealthy shareholders to the comparatively less wealthy stakeholders could potentially maltreat the existing shareholders who have put in funds as capital. Stewardship Theory and Agency Theory The works of both stewardship and agency theories can be used to work out principalagent relationships for non-profit firms (Chrisman 2019).The stewardship-based approach presumes that non-profit firms are motivated to act for benefit of their donors (principals).Peck et al. (2021) suggest that a manager (steward), if independent and given a choice in self-sustaining behavior or cooperation with the company (lord), will favor cooperation with the owners.Chrisman (2019) recommends the use of stewardship theory over AT for family firms.He states that the lack of assumptions in stewardship theory makes it more realistic for firms to implement.He provided observations on how to bolster stewardship theory for the study of family firms by rectifying its assumptions on models of man, goals, and control, and asked scholarship to empirically verify more domains of stewardship theory. Conclusions Entrepreneurship is critical to economic development, and constant research is needed to figure out problems relating to agency issues and their solutions for both the principals and the agents.In conclusion, the extensive literature review conducted offers valuable insights into the intricate dynamics of agency problems and their profound impact on firm performance.While the exploration covered various facets such as managerial debt, CEO compensation, stakeholder theory, and stewardship theory, the need for a more focused examination of the relationship between agency theory and firm performance is acknowledged.Despite the breadth of topics discussed, the concern raised about the clarity of future research gaps is valid. 
To address this, emphasis is placed on the pivotal intersection of agency problems and firm performance as a central theme for future investigation. Specifically, a more nuanced exploration of the interplay between agency mechanisms and their direct implications for business outcomes is warranted. By honing in on specific dimensions within the agency theory framework, such as the effectiveness of mitigating agency costs or the optimization of governance structures, researchers can contribute more directly to the ongoing discourse. Furthermore, scholars are encouraged to delve deeper into the determinants of agency costs and devise innovative strategies to minimize them, providing actionable insights for both academics and practitioners. By narrowing the focus and delineating clear avenues for future research within the broader context of agency problems and firm performance, aspirations are set to enhance the scholarly contributions in this critical field of study.

Theoretical implications of this study extend to refining our understanding of agency issues and their intricate connections with corporate performance, contributing to the ongoing theoretical discourse in the field. Managerially, the findings underscore the significance of informed decision-making in mitigating agency problems for improved corporate performance. As practitioners navigate the complexities of agency relationships, the insights derived from this study can serve as a strategic guide, fostering more effective governance structures and practices within organizations.

Figure 3. Leading Authors Contributing to Research on Agency Theory (Minimum of Three Research Articles).

to the existing literature on AP. The 'Corporate Governance: An International Review' is the highest contributor in this field, followed by the 'Journal of Corporate Finance' and 'Strategic Management Journal', respectively.

Figure 4. Prominent Collaborators (Authors) on the Research Topic of Agency Theory.

4.3. Publication Performance

4.3.1. Global Citations

on the research topic of AP. The collaborative author group is Luis Gomez-Mejia of Arizona State University, Robert M. Wiseman of Michigan State University, Geoffrey Martin of Melbourne Business School, and Herman Aguinis of George Washington University. The size of the circles in Figure 4 resembles the influence of the author. Other collaborative groups include Michael Wolff and Jana Oehmichen of the University of Groningen, Lerong He of State University of New York at Brockport and Martin Conyon of Bentley University, and, lastly in the figure, David F. Larcker of Stanford University and Richard A. Lambert of Northwestern University.
Figure 6. Prominent Collaborators (Institutions) on the Research Topic of Agency Theory.

Figure 7 summarizes the country-wise academic contributions in the field of AP. Notably, Alaska and the United States dominate the field with more than 800 citations in the countries. They are followed by China with cited research in the bracket of 200-400 studies. Other countries that are highlighted in red represent lower impact output within the range of 0-200 citations. The nations that are shown in white denote no or very little involvement in the academic research in this area.

Figure 7. Geographic Heat Map of Countries Contributing to Research on Agency Theory.

Figure 8. Prominent Collaborators (Countries) on the Research Topic of Agency Theory.

Figure 9. Knowledge Clusters Identified as a Result of "Bibliographic by Coupling" Analysis.

4.6.2. Institutional Collaborations

The network visualization diagram of institutional co-authorship reveals five major collaboration groups (see Figure 10). The biggest group consists of Arizona State University, the University of Melbourne, Michigan State University, and the University of Wisconsin-Madison. The next group consists of Texas A&M University, Indiana University, and Texas Christian University. The two groups are followed by collaborative duos of Stanford University-University of Pennsylvania, Iowa State University-San Diego State University, and University of Texas-University of Minnesota.

Figure 10. In-trend keywords on the research in the field of AT over the past seven years.

Table 1. Top Performing Publishing Outlets in the Research Domain of Agency Theory. Note: Articles are ranked based on total citations received; TC-Total Citations, NP-Number of Publications, PY Start-Publication Year Start.

Figure 1. Publishing Trend of Research on Agency Theory.

Table 2. Leading 10 Articles in Research Domain of Agency Theory, based on Total Global Citations. (Efendi et al. 2007) Journal of Financial Economics 425; Managing knowledge transfer in MNCs: The impact of headquarters control mechanisms (Björkman et al. 2004) Journal of International Business Studies 415.
Table 3. Leading 10 Articles in the Research Domain of Agency Theory, based on Total Local Citations. Note: TLC-Total Local Citations.

Table 4. Top Performing Authors in the Research Domain of Agency Theory.

Figure 5. Leading Institutions Contributing to Research on AP (Minimum of Six Research Articles).

Table 5. Descriptive Summary of Formed Knowledge Clusters.

Table 6. Most relevant documents cluster-wise, sorted on normalized local citation score.
v3-fos-license
2017-05-05T05:01:07.750Z
2016-07-07T00:00:00.000
5830297
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2016.00987/pdf", "pdf_hash": "f5f01d65ebe779b49078979291c46ee72019bdb7", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44061", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Environmental Science" ], "sha1": "f5f01d65ebe779b49078979291c46ee72019bdb7", "year": 2016 }
pes2o/s2orc
Physiological Traits Associated with Wheat Yield Potential and Performance under Water-Stress in a Mediterranean Environment Different physiological traits have been proposed as key traits associated with yield potential as well as performance under water stress. The aim of this paper is to examine the genotypic variability of leaf chlorophyll, stem water-soluble carbohydrate content and carbon isotope discrimination (Δ13C), and their relationship with grain yield (GY) and other agronomical traits, under contrasting water conditions in a Mediterranean environment. The study was performed on a large collection of 384 wheat genotypes grown under water stress (WS, rainfed), mild water stress (MWS, deficit irrigation), and full irrigation (FI). The average GY of two growing seasons was 2.4, 4.8, and 8.9 Mg ha−1 under WS, MWS, and FI, respectively. Chlorophyll content at anthesis was positively correlated with GY (except under FI in 2011) and the agronomical components kernels per spike (KS) and thousand kernel weight (TKW). The WSC content at anthesis (WSCCa) was negatively correlated with spikes per square meter (SM2), but positively correlated with KS and TKW under WS and FI conditions. As a consequence, the relationships between WSCCa with GY were low or not significant. Therefore, selecting for high stem WSC would not necessary lead to genotypes of GY potential. The relationship between Δ13C and GY was positive under FI and MWS but negative under severe WS (in 2011), indicating higher water use under yield potential and MWS conditions. INTRODUCTION Since the Green Revolution the yields of wheat and other cereals have increased considerably in many regions of the world, including Chile (Calderini and Slafer, 1998;Engler and del Pozo, 2013;del Pozo et al., 2014), as a result of genetic improvement and better agronomic practices. The yield potential, i.e., the yield achieved when the best available technology is used, has also increased almost linearly since the sixties, particularly in more favorable environments where soil water availability is not limited (Zhou et al., 2007;Fischer and Edmeades, 2010;Matus et al., 2012;del Pozo et al., 2014). Yield under water-limiting conditions, such those of the rainfed Mediterranean environments, has also increased during the past decades (Sánchez- García et al., 2013). Notwithstanding the possible need for phenological adjustment (earliness) a higher yield potential may also translate into a higher performance under water stress (Nouri et al., 2011;Hawkesford et al., 2013). However, the potential yield and water-limited yield of wheat needs to continue increasing in order to cope with future demand for food, which is a consequence of the growing population and changes in social habits (Fischer, 2007;Hawkesford et al., 2013), and also to reduce the negative impacts on crop productivity of global climate change (Lobell et al., 2008;Lobell and Gourdji, 2012). The increase, in the yield potential and stress adaptation of wheat has been attained mainly through empirical selection for grain yield (GY). However, there is evidence that phenotyping using physiological traits, as a complement to agronomic traits, may help in identifying selectable features that accelerate breeding for yield potential and performance under drought (Araus et al., 2002(Araus et al., , 2008Fischer, 2007;Foulkes et al., 2007;Cattivelli et al., 2008;Fleury et al., 2010). 
The increases in yield potential of wheat since the sixties have been both positively correlated with shoot dry matter and harvest index (HI); the latter also being positively associated with water-soluble carbohydrate (WSC) content of stems at anthesis . Under water limiting conditions, various physiological process and traits have been associated with GY (e.g., Araus et al., 2002Araus et al., , 2008Condon et al., 2004;Reynolds et al., 2006;Tambussi et al., 2007). Among them are traits related to pre-anthesis accumulation of WSC in stems and its further use during grain filling (Ehdaie et al., 2006a,b;Reynolds et al., 2006), delays in senescence during grain filling assessed via changes in leaf color (Lopes and Reynolds, 2012), and those related to water use efficiency, in particular carbon isotope discrimination ( 13 C) in kernels (Richards et al., 2002;Araus et al., 2003Araus et al., , 2008. WSCs are accumulated in stems prior to anthesis and then are remobilized to the grain during the grain-filling period (Blum, 1998;Bingham et al., 2007). Under water limiting conditions, where canopy photosynthesis is inhibited, the contribution of stem carbohydrate to grain growth could be very significant (Ehdaie et al., 2006a,b;Reynolds et al., 2006). Both spring and winter wheat lines have been shown to vary significantly for WSC concentration and WSC content in stems around anthesis (Ruuska et al., 2006;Foulkes et al., 2007;Yang et al., 2007), whereas positive correlations have been observed between accumulated WSC at anthesis and GY in winter wheat genotypes , as well as with kernel weight in recombinant inbred lines (RILs) from the Seri/Babax population (Dreccer et al., 2009). However, stem WSC concentrations can be negatively correlated with stem number m −2 (Dreccer et al., 2013). Drought increases senescence, by accelerating chlorophyll degradation, leading to a decrease in leaf area and canopy photosynthesis. There is evidence that stay-green phenotypes with delayed leaf senescence can improve their performance under drought conditions (Rivero et al., 2007;Lopes and Reynolds, 2012). 13 C can be used as a selection criterion for high water use efficiency (Condon et al., 2004;Richards, 2006), but also can provide an indirect determination of the effective water used by the crop (Araus et al., 2002(Araus et al., , 2008Blum, 2009). In fact, kernel 13 C can be positively or negatively correlated with GY depending on soil water availability. Indeed, under moderate stress to well-watered Mediterranean conditions 13 C has been reported to be positively correlated with GY in wheat (Araus et al., 2003(Araus et al., , 2008 for wheat) and barley (Acevedo et al., 1997;Voltas et al., 1999;, whereas the opposite trend has been reported under severe drought conditions (but see Araus et al., 1998). In this study we investigated the genotypic variability of flag leaf chlorophyll content (measured with a portable leaf meter), stem WSC accumulation at anthesis and the 13 C of mature kernels, as well as the relationship of these traits with GY and its agronomical components, in spring bread wheat under contrasting water conditions in a Mediterranean environment. It is hypothesized that within a large set (384 genotypes) cultivars and advanced lines of spring bread wheat there is high genotypic variability for agronomic and physiological traits. 
In addition, the yield performance of genotypes under drought conditions is associated with stem WSC accumulation, delayed leaf senescence, and carbon discrimination in grains. Plant Material and Growing Conditions A collection of 384 cultivars and advanced semidwarf lines of spring bread wheat (Triticum aestivum L.), including 153 lines from the wheat breeding program of the Instituto de Investigaciones Agropecuarias (INIA) in Chile, 53 from the International Wheat and Maize Improvement Centre (CIMMYT) that were previously selected for adaptiveness to Chilean environments (these lines share common ancestors with the INIA-Chile breeding program), and 178 lines from INIA in Uruguay (Table S1). The objective with this set of lines was to create a germplasm base to breed for drier areas in Chile and subsequently other countries within the projects involved. This large set of genotypes was evaluated in two Mediterranean sites of Chile: Cauquenes (35 • 58 ′ S, 72 • 17 ′ W; 177 m.a.s.l.) under the water stress (WS) typical of the rainfed at this site, and Santa Rosa (36 • 32 ′ S, 71 • 55 ′ W; 220 m.a.s.l.) under full irrigation (FI) and moderate water stress (MWS) conditions achieved through support irrigation. Trials were assayed during two consecutive (2011 and 2012) crop seasons, except for the MWS trial, which was only set up during 2011. Cauquenes corresponds to the Mediterranean drought-prone area of Chile; the average annual temperature is 14.7 • C, the minimum average is 4.7 • C (July) and the maximum is 27 • C (January). The evapotranspiration is 1200 mm (del Pozo and del Canto, 1999) and the annual precipitation was 410 and 600 mm in 2011 and 2012, respectively. Santa Rosa corresponds to a high yielding area; the average annual temperature in this region is 13.0 • C, the minimum average is 3.0 • C (July) and the maximum is 28.6 • C (January; del Pozo and del Canto, 1999). The annual precipitation was 736 and 806 mm, in 2011 and 2012, respectively. The experimental design was an α-lattice with 20 incomplete blocks per replicate, each block containing 20 genotypes. In each replicate two cultivars (Don Alberto and Carpintero) were included eight times. Two replicates per genotypes were used, except at Cauquenes and Santa Rosa SI in 2011 where a single replicate was established. Plots consisted of five rows of 2 m in length and 0.2 m distance between rows. The sowing rate was 20 g m 2 and sowing dates were: 07 September and 23 May, in 2011 and 2012, respectively at Cauquenes; 31 and 7 August, in 2011 and 2012, respectively at Santa Rosa. Because the sowing date in 2011 at Cauquenes was much later than in 2012, the water stress was more severe in the first year. Plots were fertilized with 260 kg ha 1 of ammonium phosphate (46% P 2 O 5 and 18% N), 90 kg ha −1 of potassium chloride (60% K 2 O), 200 kg ha −1 of sul-po-mag (22% K 2 O, 18% MgO, and 22% S), 10 kg ha −1 of boronatrocalcite (11% B), and 3 kg ha −1 of zinc sulfate (35% Zn). Fertilizers were incorporated with a cultivator before sowing. During tillering an extra 153 kg ha −1 of N was applied. Weeds were controlled with the application of Flufenacet + Flurtamone + Diflufenican (96 g a.i.) as pre-emergence controls and a further application of MCPA (525 g a.i.) + Metsulfuron-metil (5 g a.i.) as postemergents. Cultivars were disease resistance and no fungicide was used. 
Furrow irrigation was used in Santa Rosa: one irrigation at the end of tillering (Zadoks Stage 21; Zadoks et al., 1974) in the MWS trial, and four irrigations at the end of tillering, the flag leaf stage (Z37), heading (Z50), and middle grain filling (Z70) in the FI trial. Soil moisture at 10-20, 20-30, 30-40, and 40-50 cm depth was determined by using 10HS sensors (Decagon Devices, USA) connected to an EM-50 data logger (Decagon Devices, USA). The 10HS sensor determines volumetric water content by measuring the dielectric constant of the soil using capacitance/frequency domain technology. Two sets of sensors were set up in each environment and mean values of two sensors per depth are presented in Figure 1.

Agronomical Traits
Days from emergence to heading (DH) were determined in Santa Rosa, through periodic (twice a week) observations, when approximately half of the spikes in the plot had already extruded. At maturity and for each plot of the different trials, the plant height (PH), up to the tip of the spike (excluding awns), was measured, the number of spikes per m2 (SM2) was determined for a 1 m length of an inside row, and the number of kernels per spike (KS) and 1000-kernel weight (TKW) were determined in 25 spikes taken at random. Grain yield was assessed by harvesting the whole plot.

Leaf Chlorophyll Content and Water-Soluble Carbohydrates
Chlorophyll content (SPAD index) was determined at anthesis and then during grain filling, about 2 weeks after anthesis (both measured on given calendar dates), in five flag leaves per plot using a SPAD 502 (Minolta Spectrum Technologies Inc., Plainfield, IL, USA) portable leaf chlorophyll meter. WSC concentration in stems (harvested at ground level and excluding leaf laminas and sheaths) was determined at anthesis and maturity, on five main stems per plot, using the anthrone method (Yemm and Willis, 1954). The stem length was measured; stems were then dried for 48 h at 60 °C, weighed and ground. Next, a 100 mg subsample was used for WSC extraction, with 3 mL of extraction buffer containing 80% ethanol and 10 mM Hepes-KOH (pH = 7.5), and incubated at 60 °C overnight. Then, to separate the debris, the samples were centrifuged at 60 rpm for 30 min. The anthrone reagent was added to each supernatant and placed over a hotplate at 80 °C for 20 min. Finally, the absorbance of the sample was measured at 620 nm in an EPOCH microplate UV-Vis spectrophotometer (Biotek) using COSTAR 3636 96-well plates (Corning) for the UV range. WSC content per whole stem (mg CHO stem−1) and per unit land area (g CHO m−2) were calculated from the WSC concentration per unit stem weight (mg CHO g stem−1). In addition, the apparent WSC remobilization was calculated as the difference from anthesis to maturity in WSC content, on a stem and land area basis.

Stable Carbon Isotope Analysis
The stable carbon (13C/12C) isotope ratio was measured in mature kernels using an elemental analyser (ANCA-SL, PDZ Europa, UK) coupled with an isotope ratio mass spectrometer, at the Laboratory of Applied Physical Chemistry.

TABLE 1 | F-values of ANOVA for agronomic and physiological traits, for 378 genotypes of wheat grown under severe water stress (Cauquenes WS) and full irrigation (Santa Rosa FI) in two growing seasons.
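For readers who want to reproduce the stem WSC bookkeeping described above, the short sketch below illustrates the conversions from concentration to per-stem and per-area content, and the apparent remobilization as the anthesis-to-maturity difference. The function names and all numerical values are illustrative assumptions, not data from this study.

```python
# Minimal sketch of the stem WSC bookkeeping described above.
# All numerical values are illustrative placeholders, not data from the study.

def wsc_per_stem(wsc_conc_mg_per_g: float, stem_dry_weight_g: float) -> float:
    """WSC content per stem (mg CHO stem^-1) from concentration (mg CHO g^-1) and stem dry weight (g)."""
    return wsc_conc_mg_per_g * stem_dry_weight_g

def wsc_per_area(wsc_stem_mg: float, stems_per_m2: float) -> float:
    """WSC content per unit ground area (g CHO m^-2)."""
    return wsc_stem_mg * stems_per_m2 / 1000.0  # mg -> g

def apparent_remobilization(wsc_anthesis: float, wsc_maturity: float) -> float:
    """Apparent WSC remobilization as the anthesis-to-maturity difference (same units as inputs)."""
    return wsc_anthesis - wsc_maturity

# Example with hypothetical values for one plot
conc_anthesis, stem_wt_anthesis = 180.0, 2.0   # mg CHO g^-1, g per stem
conc_maturity, stem_wt_maturity = 45.0, 1.2
stems_m2 = 350.0

wsc_a = wsc_per_stem(conc_anthesis, stem_wt_anthesis)   # mg CHO per stem at anthesis
wsc_m = wsc_per_stem(conc_maturity, stem_wt_maturity)   # mg CHO per stem at maturity
print(apparent_remobilization(wsc_a, wsc_m))             # mg CHO per stem
print(apparent_remobilization(wsc_per_area(wsc_a, stems_m2),
                              wsc_per_area(wsc_m, stems_m2)))  # g CHO m^-2
```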
Yield Tolerance Index The yield tolerance index (YTI), which combines the relative performance of a genotype under drought with its potential yield under irrigated conditions (Ober et al., 2004), was calculated as: where Y D and Y I are the genotype mean yield under drought (Cauquenes) and irrigation conditions (Santa Rosa, fully irrigation), respectively, and Y D and Y I are the mean yield of all genotypes growing under drought and irrigated conditions, respectively. Statistical Analysis In 2011, 10 genotypes were discarded from analysis due to low emergence. In addition six genotypes from Uruguay were discarded from the analysis for having late heading time (more than 100 days) an plant height >120 cm. ANOVAs for physiological and yield-related traits were performed for the whole set of genotypes using PROC MIXED of the SAS Institute Inc. Genotypes and environment (Cauquenes WS and Santa Rosa FI) were considered fixed effects, whereas blocks and incomplete blocks within each replication (in an α-lattice design) were considered random effects. Data from Santa Rosa MWS where not considered in the ANOVAs because there was no replication and only one year (2011) of observations. Correlation analysis was performed between agronomic and physiological traits, and also stepwise regressions between grain yield and related agronomical and physiological traits. Principal component analysis (PCA) was carried out for the 378 genotypes using the mean values for physiological and agronomical traits evaluated under severe water stress in Cauquenes and full irrigation in Santa Rosa, in two growing seasons, using IBM SPSS Statistics 19. Agronomical and Physiological Traits For SM2, KS and TKW the genotype x environment (GxE) interaction was highly significant (P < 0.001) in both growing seasons, whereas for GY, PH, and KM2 was only in one growing season ( Table 1). Among the physiological traits, the SPAD index exhibited a significant (P < 0.001) GxE interaction in both growing seasons, but stem weight and WSC concentration and content, and 13 C of kernels was only in 2012 (Table 1). Under FI in Santa Rosa, the average GY of the three sets of wheat genotypes (378 in total) was 8-10 Mg ha −1 but some genotypes produced up to 12 Mg ha −1 (Figure 2A). Under MWS in Santa Rosa the average GY was 4.8 Mg ha −1 . Under WS GY was significantly (P < 0.0001) reduced in Cauquenes, by 79 and 68% in 2011 and 2012, respectively, compared to Santa Rosa under FI (Figure 2A). Also, plant height was reduced under WS by 40 and 9% in 2011 and 2012, respectively ( Figure 2B). The reduction in SM2, KS and TKW under WS compared with FI was in general more pronounced in the first growing season; on average (of the two growing seasons) these traits were reduced by 25, 41, 21, and 18%, respectively, whereas KM2 was reduced by 53% (Figure 3). The relationships for GY under FI and WS showed no significant correlation in both years (P > 0.05). The yield tolerance index (YTI) of the 378 genotypes based on GY under WS and FI presented a wide range of values in both years, from 0.05 (very susceptible) to 0.65 (very tolerant genotypes). The frequency distribution of YTI had a left-skewed deviation in 2011 (mean YTI = 0.21) compared to 2012 (mean YTI = 0.32). Days to heading, determined under FI, differed by about 20 days between the earliest and latest genotypes ( Table 2). A wide range of SPAD index values among genotypes was observed in environments (WS, MWS, and FI) and growing seasons ( Table 2). 
A significant reduction (P < 0.001) in the SPAD index at anthesis and during grain filling was observed under WS in 2012. Stem weight and stem WSC concentration and content were much higher at anthesis compared to maturity. Their average reductions over two growing seasons were about 43, 77, and 87%, respectively, under WS at Cauquenes, and 23, 79, and 84%, respectively, in Santa Rosa under FI ( Table 2). The apparent WSC remobilization was on average 279, 220, and 170 mg per stem under WS, MWS, and FI, respectively (data not shown). The WSC concentration and content per stem at anthesis and maturity presented large genotypic variabilities in all the environments ( Table 2). The stem WSC per unit area (g m −2 ) at anthesis was highly correlated to the WSC concentration (r = 0.66 and 0.84, P < 0.001, for WS for FI, respectively, in 2012) and the stem biomass (g m −2 ; r = 0.81 and 0.66, P < 0.001, for a WS for FI, respectively, in 2012). Relationships between Yield, Agronomical, and Physiological Traits GY was positively correlated with SM2 and KM2, but negatively correlated with TKW, in both water regimes and growing seasons (Figure 4). GY was also positively correlated (r = 0.3-0.52, P < 0.001) with plant height in all the environments. Days to heading (determined at FI) was not correlated with GY, but it was positively correlated with SM2 and negatively correlated with TKW, except under FI in 2012 ( Table 3). The SPAD index was positive and significantly correlated with GY (except under FI in 2011) and the agronomical components KS and TKW ( Table 3). The WSC content at anthesis (WSCCa) was negatively correlated with SM2, but positively correlated with KS and TKW under WS and FI conditions (Figure 5). As a consequence, GY exhibited a low positive correlation with WSCCa under WS in 2012, and non or negative correlation under FI ( Table 3). The relationship between 13 C and GY was slightly negative under WS in 2011, but positive and highly significant in 2012, and also positive under MWS and FI in 2011 and 2012 (Table 3; Figure 6A). Indeed, Pearson correlation values of the relationship between 13 C vs. GY depended on the environment, increasing from low to medium yields and further declining at higher GY ( Figure 6B). The correlation between 13 C and STI under SWS was not significant in 2011 but was positive and significant in 2012 (r = 0.51; P < 0.01). PCA analysis indicated that the two first principal components (PC) explained >50% of the observed variability, under WS and FI conditions (Figure 7). KS was the agronomical component more close related with GY under WS and FI (except in 2011). Among the physiological traits, 13 C presented the strongest association with GY, except under the severe WS in 2011 (Figure 7). The SPAD index at anthesis was close associated with GY under WS in 2011, but with TKW under WS in 2012 and FI. WSCCa was also close related to TKW in all the environments, and days to heading was associated SM2. The stepwise regression analysis between GY and related agronomical (SM2, TKW, and KS) and physiological (SPADa, WSCCa, and 13 C) traits indicated that under water stress conditions, the contribution of the agronomical trails was greater than the physiological ones, but under full irrigation conditions WSCCa and 13 C contributed similarly to the agronomical traits to GY (Table 4). DISCUSSION The set of 378 wheat genotypes tested in this work exhibited a high phenotypic variability for physiological and agronomic traits. 
The water stress in Cauquenes was very severe as reflected in the low average GY (1.7 Mg ha −1 in 2011). However, some genotypes were able to produce more than 4 Mg ha −1 under such WS conditions and showed high values of YTI (>0.50). Actually, YTI was highly correlated (r > 0.92; P < 0.0001 in both years) with GY under WS in Cauquenes. Under the full irrigation conditions of Santa Rosa some genotypes achieved extremely high yields (12 Mg ha −1 ), for a Mediterranean environment. Large genotypic variability in GY and its agronomical components has also been found in 127 recombinant inbred lines (Dharwar Dry × Sitta) of wheat growing under severe water stress in Obregon, Mexico (Kirigwi et al., 2007), and in 105 lines of the double-haploid population (Weebil × Bacanora) in four contrasting highyielding environments . The strong reduction in GY under WS was mainly a consequence of the decline in SM2 (41%), followed by KS (21%), and as a consequence the number of kernels per m 2 was reduced (53% ; Table 2). Thus, kernels per m 2 is the agronomical component most affected by drought, as previously reported by other authors (Estrada-Campuzano et al., 2012). In addition the TKW also decreased, but to a lesser extent (18%). As a consequence GY was positively correlated with the number of kernels m −2 (Figure 4; r = 0.81, P < 0.0001 for all the environments), but the correlation coefficients for each environment were not as high as has been reported by several authors (see Sinclair and Jamieson, 2006). In fact, a trade-off among the agronomical components was observed where SM2 was negatively correlated with KS under FI (r = −0.50 and −0.58 in 2011 and 2012, respectively) and TKW in WS (r = −0.36 and −0.49 in 2011 and 2012, respectively) and FI (r = −0.60 and −0.58 in 2011 and 2012, respectively) conditions. The PCA indicated that KS was better associated with GY in both WS and FI conditions (Figure 7). Other studies have also shown that KS but not TKW was associated with GY under water stress conditions (Denčić et al., 2000) and also a high-yielding environment . Chlorophyll Content Chlorophyll content at anthesis was positively correlated with GY and the agronomical components KS and TKW, particularly under WS (Table 3). Drought increases senescence by accelerating chlorophyll degradation leading to a decrease in leaf area and photosynthesis. There is evidence that staygreen phenotypes with delayed leaf senescence can improve their performance under drought conditions (Rivero et al., 2007;Lopes and Reynolds, 2012). In wheat and sorghum, genotypic variability has been detected in chlorophyll content as well as in the rate of Table 3. (Harris et al., 2007;Lopes and Reynolds, 2012). In durum wheat (Triticum turgidum ssp. durum) staygreen mutants growing under glasshouse conditions remained green for longer and had higher rates of leaf photosynthesis and seed weight (Spano et al., 2003). These mutants with the stay-green characteristic also had higher levels of expression of the Rubisco small subunit of (RBCS) and chlorophyll a/b binding protein (Rampino et al., 2006). Bread wheat genotypes with functional stay-green characteristics have also shown higher GY and total biomass in field conditions (Chen et al., 2010). Another study on Canadian spring wheat revealed that GY was positively correlated with green flag leaf duration and total flag leaf photosynthesis (Wang et al., 2008). 
Studies on spring wheat in the USA found a positive correlation between the staygreen trait and GY and grain weight in both water-limited and The relationship between GY and correlation coefficients between GY and 13 C for each replicate (block) and environment (WS, MWS andFI in 2011, andWS andFI in 2012). well-watered conditions . Therefore, a delay in leaf senescence would increase the amount of fixed carbon available for grain filling. Stem Water-Soluble Carbohydrate Large genotypic variability in stem WSC concentration and content was found at anthesis and maturity, in both environments (Table 2; Figure 5). Other studies conducted in spring and winter wheat lines have also found large variability in WSC concentration and WSC content on an area basis in stems around the time of anthesis (Ruuska et al., 2006;Foulkes et al., 2007;Yang et al., 2007). WSCs are accumulated in stems prior to anthesis and are then remobilized to the grain during the grain-filling period (Blum, 1998;Bingham et al., 2007). Indeed under water limiting conditions, where canopy photosynthesis is inhibited, the contribution of stem carbohydrate to grain growth could be very significant (Ehdaie et al., 2006a,b;Reynolds et al., 2006). In our study, more carbohydrate was accumulated at anthesis under WS than under FI, and the decline in stem WSC from anthesis to maturity was greater under WS, particularly in 2012 (360 vs. 130 mg per stem under WS and FI, respectively). This suggests that there was a larger remobilization of reserves during grain filling under WS. However, there were no clear relationships between the stem WSCCa, or the apparent WSC remobilization and GY, varying the correlation values from not significant to negative on the different environments (Table 3; Figure 5). Zhang et al. (2015) found also no significant correlation between stem WSC and GY in 20 genetically diverse double haploids derived from the cross of cvs. Westonia × Kauz, growing under drought, and irrigated conditions in Western Australia. These results differ from those found by Foulkes et al. (2007) in winter wheat under non water-stressed conditions in England. It seems that there is a trade-off between the stem WSCCa and some of the agronomical yield components. In fact, negative correlations exist with SM2 in all the environments, but the correlations were positive with KS and TKW (Table 3; Figure 5). The PCA analyses also showed a high association between WSCCa and TKW (Figure 7). This negative relationship between WSC and either number of stems or number spikes per m 2 at maturity has also been reported for other wheat genotypes (Rebetzke et al., 2008a;Dreccer et al., 2009Dreccer et al., , 2013. Why genotypes with lower number of stems present higher stem WSC concentration and content? A possible explanation is that genotypes with lower number of stems per unit area have bigger stems; if fact, our results indicated a significant (p < 0.001) negative correlation (r = −0.29 and −0.36 under WS, and −0.58 and −0.56 under FI, in 2011 and 2012, respectively) between SM2 and stem weight at anthesis. Thus, genotypes with lower number of stems have probably more light transmission through the canopy and therefore higher rates of photosynthesis per stem, leading to higher stem weight and WSC content (more reserves), and greater numbers of grains per spike and kernel size. 
A significant and positive correlation between accumulated WSC at anthesis and kernel weight has been also observed in recombinant inbred lines (RILs) from the Seri/Babax population (Dreccer et al., 2009). Another hypothesis (complementary of the previous one) may be that those genotypes able to produce less tillers (because poorer adaptation to growing conditions-such as water stress-) are those which accumulate more carbohydrate since these photoassimilates are not used for growth. Therefore, selecting for high stem WSC, either under near optimal agronomical conditions or under water stress, would probably lead to genotypes with lower tillering capacity and GY potential. The study conducted by Dreccer et al. (2013) in RILs of contrasting tillering and WSC concentration in the stem, and grown at different plant densities or on different sowing dates, indicates that genotypic rankings for stem WSC persisted when RILs were compared at similar stem density. Carbon Isotope Discrimination The genotypic differences in carbon isotope discrimination found among the 384 genotypes (Table 3) agree with other studies conducted in Mediterranean conditions. For example, higher 13 C (or lower carbon isotope composition, δ 13 C) in modern cultivars compared with old varieties has been found in bread (del Pozo et al., 2014) and durum wheats . The relationship between 13C and GY was positive under MWS or FI but was negative under WS ( Figure 5). Other studies in wheat (Araus et al., 2003(Araus et al., , 2008 and barley have also shown that 13 C in kernels can be positively or negatively correlated with GY depending on soil water availability. Positive relationships between 13 C (or negative with δ 13 C) and GY have been frequently reported for cereals under Mediterranean conditions (see Rebetzke et al., 2008b for bread wheat and Araus et al., 2003Araus et al., , 2013 for durum wheat), and this can be explained by the fact that genotypes maintaining a larger transpiration and thus water use during the crop cycle will be the most productive (Araus et al., 2003(Araus et al., , 2008Blum, 2005Blum, , 2009. In fact, negative relationships between kernel oxygen isotope composition (δ 18 O) or enrichment ( 18 O) and grain yield have been reported in bread wheat under fully irrigated conditions (Cabrera-Bosquet et al., 2011;del Pozo et al., 2014) as well as for durum wheat under Mediterranean conditions and subtropical maize under well irrigated and moderate stress (Cabrera-Bosquet et al., 2009). Indeed, carbon isotope composition can be used as a selection criterion for high water use efficiency (Condon et al., 2004;Richards, 2006), but also can provide an indirect determination of the effective water used by the crop (Araus et al., 2002(Araus et al., , 2008Blum, 2009). The effect of phenology on 13 C (earlier genotypes exhibiting higher 13 C) may be discarded, since heading date was not correlated with 13 C (P > 0.05) in none of the environments. Actually, the positive correlations between 13 C and GY was also found when the relationship were studied within subset of 212 genotypes with similar heading duration (80-85 days); r = 0.50 for WS and 0.42 for WI in 2012. CONCLUSIONS The identification of genotypic variability for agronomical and physiological traits under water stress conditions and full irrigation is of great interest for breeders because selected genotypes with favorable traits can be used as parents in future crosses. 
Among these, genotypes with higher numbers of fertile tillers would lead to higher numbers of kernels per m 2 and GY under terminal water stress and non-stress conditions. Additionally, genotypes with delay in leaf senescence (a higher SPAD index) would lead to higher KS and TKW, particularly under water stress, and to a lesser extent at full irrigation. In the case of yield potential conditions, this is probably the consequence of greater amounts of fixed carbon available for grain filling, whereas under water stress stay-green it is an indicator of resilience to stress conditions. In addition, genotypes with higher carbon discrimination values are associated with higher GY under MWS and full irrigation, indicating that more water is used by the crop. In addition, selection for a higher WSC at anthesis may bring negative consequences in terms of yield potential and adaptation to MWS conditions. This study clearly illustrates the importance of defining the target environment for wheat breeding before determining the set of phenotyping traits for selection. AUTHOR CONTRIBUTIONS AD and IM designed the experiments, selected the germplasm and participated on field evaluations. AY and GT were in charge of carbohydrate determinations. DC was in charge of the management of the experiments and evaluation of agronomic traits. LS and JA contributed to analysis of the data. AD was in charge of the writing up but all the authors contributed to the manuscript. ACKNOWLEDGMENTS This work was supported by the research CONICYT grants FONDECYT N • 1150353 and program "Atracción de Capital Humano Avanzado del Extranjero" N • 80110025. Participation of JA was supported through the Spanish project AGL2013-44147-R. We thank to CIMMYT and the National Research Program of Rainfed Crops of INIA-Uruguay for providing wheat germplasm, Alejandra Rodriguez and Alejandro Castro for technical assistance in field experiments, and Boris Muñoz for the analysis of soluble carbohydrates.
v3-fos-license
2018-11-30T12:47:57.404Z
2018-06-05T00:00:00.000
54170218
{ "extfieldsofstudy": [ "Materials Science" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "http://ocs.editorial.upv.es/index.php/ASCCS/ASCCS2018/paper/download/7165/4104", "pdf_hash": "1f9836ba7a18094efab2f9bddb6bcf0471781df9", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44064", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "sha1": "1f9836ba7a18094efab2f9bddb6bcf0471781df9", "year": 2018 }
pes2o/s2orc
Seismic performance assessment of conventional steel and steel-concrete composite moment frames using CFST columns The research reported in this paper focuses on the assessment of the seismic performance of conventional steel moment-resisting frames (MRFs) and steel-concrete composite moment-resisting frames employing circular Concrete-Filled Steel Tube (CFST) columns. Two comparable archetypes (i.e. one steel MRF, with steel columns and steel beams; and one composite MRF, with circular CFST columns and steel beams) are designed, and used as the basis for comparison between the seismic performance associated with each typology. Both structures are designed against earthquake loads following the recommendations of Eurocode 8. The comparison of the obtained design solutions allows concluding that the amount of steel associated with the main structural members is higher for the steel-only archetype, even though the composite MRF has the higher level of lateral stiffness. This aspect is particularly relevant when one considers that a minimum level of lateral stiffness (associated with the P-Δ inter-storey drift sensitivity coefficient, θ), is imposed by the European code, which may ultimately govern the design process. The two case-studies are then numerically modelled in OpenSees, and their seismic performance is assessed through fragility assessment for a number of relevant limit states, and, finally, earthquake-induced loss estimation. In general, the results obtained clearly indicate that the composite MRF with circular CFST columns exhibits better seismic performance than the equivalent steel-only archetype. This is noticeably shown in the comparison of the fragility curves associated with the collapse limit state, which tend to show substantially higher probabilities of exceedance, at similar levels of 1st-mode spectral acceleration, for the steelonly case. Furthermore, seismic losses at several seismic intensity levels of interest tend to be higher for the steel MRF. Introduction Concrete-filled steel tubular (CFST) members have gained relevance in recent decades as an alternative solution for seismicresistant applications, in light of several advantages over conventional technologies (e.g.reinforced concrete, steel).Due to the synergy that stems from the efficient combination of concrete at the core of the member, and steel tubular sections as the encasing part, both the members' strength and ductility are improved significantly over the isolated behaviours of the parts, and energy dissipation characteristics of these composite members also tend to be attractive.In particular, the interaction between the core and the encasing tube may entail the development of multi-axial stress effects (e.g.concrete confinement), whilst hindering the development of local buckling phenomena of the steel part (i.e.inwards local buckling is prevented, outwards local buckling is delayed to larger levels of deformation).In line with these characteristics, the experimental study of the flexural behaviour of beam-column CFSTs has also gained some visibility in the last few decades (e.g.Elchalakani et al. [1], Varma et al. [2][3], Han et al. [4], Silva et al. [5] [6]), with good ductility and overall behaviour being exhibited by the composite members. 
Notwithstanding, the effect of employing CFST members on the seismic performance of moment-framed systems remains an open topic in the literature. Hence, this study aims to provide a meaningful contribution by gauging, through fragility assessment and earthquake-induced loss estimation, the effect of employing circular CFST columns in detriment of steel open-profile sections in the context of moment-resisting framed buildings.

General description
For this study, a 5-storey MRF building structure was considered, with the layout in plan and elevation shown in Fig. 1. In the longitudinal (X) direction the seismic resistance is provided by moment-resisting frames spaced at 6 meters. In the transverse (Y) direction the seismic resistance is assured by a bracing system. The investigation detailed in this paper focuses on the internal MRF. Steel open sections with I shape (IPE) and H shape (HEB) were used for the steel beams and columns, respectively, and commercial steel tubular sections were adopted for the CFST members. A summary of the gravity loads considered is shown in Table 1, where gk and qk are the permanent and imposed loads, respectively. The transmission of the vertical loads to the central frame was considered through point loads applied at each storey level, in accordance with the layout of the secondary beams. The slabs were considered to act as rigid diaphragms; thus, each storey mass can be equally distributed among the three longitudinal frames, as shown in Table 1. The parameters required for the definition of the elastic response spectra for soil type B that are specified in the Portuguese National Annex of Eurocode 8 are shown in Table 2. Seismic design was conducted taking into account second-order effects, by limiting the maximum value of the inter-storey drift sensitivity coefficient, θ, to 0.2. The EC8 capacity design weak beam-strong column requirement was also considered in the design of the frames. The damage limitation performance requirement was considered by limiting the inter-storey drift to 0.75% of the storey height. All archetypes were designed based on the modal response spectrum analysis method. Two different alternatives were used for the design of the MRF, namely a steel-only solution (steel beams and columns) and a composite solution (steel beams and CFST columns). Both cases were considered equivalent, in the sense that the building and frame layout, gravity loads, seismic location, ductility class, design criteria (e.g. P-Δ effects, capacity design, and damage limitation) and design method are shared.
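Since compliance with the θ ≤ 0.2 limit ends up governing the design (see the next section), it is worth recalling how this check is evaluated. The sketch below follows the EC8 definition θ = Ptot·dr/(Vtot·h), applied storey by storey; the storey gravity loads, design drifts, shears and heights used here are hypothetical placeholders, not the values of the archetypes.

```python
# Minimal sketch of the EC8 inter-storey drift sensitivity check (theta <= 0.2 governs here).
# theta = P_tot * d_r / (V_tot * h) per storey, with d_r the design inter-storey drift.
# The storey values below are hypothetical placeholders.

def theta(p_tot_kN: float, d_r_m: float, v_tot_kN: float, h_m: float) -> float:
    """Inter-storey drift sensitivity coefficient for one storey."""
    return p_tot_kN * d_r_m / (v_tot_kN * h_m)

# Hypothetical 5-storey example: (gravity load at and above the storey, design drift,
# seismic storey shear, storey height)
storeys = [
    (4500.0, 0.030, 900.0, 3.5),
    (3600.0, 0.035, 820.0, 3.5),
    (2700.0, 0.032, 700.0, 3.5),
    (1800.0, 0.025, 520.0, 3.5),
    (900.0,  0.015, 300.0, 3.5),
]

for i, (p, dr, v, h) in enumerate(storeys, start=1):
    t = theta(p, dr, v, h)
    status = "OK" if t <= 0.2 else "increase lateral stiffness"
    print(f"storey {i}: theta = {t:.3f} -> {status}")
```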
Comparison of design solutions
The design solutions are provided in Table 3 and Table 4, and a summary is shown in Table 5. In Table 5, the designation of the member section is specified in terms of the external diameter, d, and thickness, t, of the steel tube, as d x t. As denoted by the results shown in Table 5, the use of CFST columns allows, in detriment of conventional steel sections and for the same design conditions, for a reduction in the steel quantity of the main structural members in the order of 30%. This is mainly due to the fact that the governing design criterion was compliance with the limitation of θ to 0.2. Since this parameter effectively imposes a minimum level of lateral stiffness on the structure, one can straight away see that using a composite member should be much more efficient than a conventional steel section: for the same quantity of steel, the CFST member can provide significantly higher levels of lateral stiffness. Hence, one ends up with a lighter (purely in terms of steel quantity) solution with the use of CFST columns, even though the maximum value of θ in both cases is fairly similar. It is also important to note that the system overstrength levels of the composite scenario are around 20% lower than those of the steel-only solution. One should recall that this parameter provides a rough notion of the amount of strength reserve the structure possesses against the design-level earthquake. In reality, the ratio Ω/q (under an idealized elastic perfectly-plastic response) provides an idea of the level of nonlinear response expected in the structure when subjected to the design earthquake: Ω/q < 1.0 entails that the structure is likely to enter the nonlinear range, whilst for Ω/q > 1.0 the structure should behave elastically when subjected to the design earthquake. Thus, one can easily conclude that the composite system should allow for a seismic response that explores more of the nonlinear behaviour of the structure. To conclude, one should also note that although the steel quantity was reduced in the composite case, this was attained by the introduction of some concrete in the solution. Notwithstanding, the considerable difference in material cost between concrete and steel results in an almost insignificant contribution of the infill of the CFST members to the overall structural cost. However, one should also note that the overall cost of the structure may actually increase with the use of CFST columns, given that member joints, foundations and construction time are aspects that could become more complex and costly. Nonetheless, even if the overall cost of the composite frame is equivalent to or higher than that of a steel frame, this may be justifiable if benefits are achieved from a seismic performance perspective.
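The claim that a CFST column provides more flexural stiffness per unit of steel than an open section can be illustrated with a back-of-the-envelope comparison. The sketch below uses an EC4-style effective stiffness for the composite member, EIeff = Es·Is + Ke·Ecm·Ic with Ke assumed as 0.6; this formulation, the material moduli, the tube dimensions and the placeholder open-section inertia are assumptions for illustration only, not the design values of the archetypes.

```python
# Illustrative comparison of flexural stiffness per unit of steel for a circular CFST
# versus an open steel section of similar steel area. All inputs are assumed values.
import math

ES = 210e3    # MPa, structural steel modulus (assumed)
ECM = 33e3    # MPa, assumed secant modulus for a C30/37-type concrete core

def cfst_properties(d_mm: float, t_mm: float):
    """Steel area and inertia of the tube, and inertia of the concrete core (mm^2, mm^4)."""
    r_o, r_i = d_mm / 2.0, d_mm / 2.0 - t_mm
    a_s = math.pi * (r_o**2 - r_i**2)
    i_s = math.pi / 4.0 * (r_o**4 - r_i**4)
    i_c = math.pi / 4.0 * r_i**4
    return a_s, i_s, i_c

def cfst_ei_eff(d_mm: float, t_mm: float, ke: float = 0.6) -> float:
    """EC4-style effective flexural stiffness of the composite section (N*mm^2)."""
    _, i_s, i_c = cfst_properties(d_mm, t_mm)
    return ES * i_s + ke * ECM * i_c

# Hypothetical 355 x 10 mm tube vs. a generic open section of similar steel area;
# the open-section inertia is a placeholder to be taken from section tables.
d, t = 355.0, 10.0
a_steel, _, _ = cfst_properties(d, t)
i_open = 1.1e8  # mm^4, placeholder for an open section of comparable steel area

print(f"steel area       : {a_steel / 100:.1f} cm^2 (both members)")
print(f"open section EI  : {ES * i_open:.3e} N*mm^2")
print(f"CFST EI_eff      : {cfst_ei_eff(d, t):.3e} N*mm^2")
```

With these assumed numbers the composite member is roughly twice as stiff in bending for the same steel quantity, which is consistent with the trend observed in the design comparison above.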
Simplified numerical modelling The seismic performance of the steel-only and steel-concrete frames described before was performed in OpenSees [8], by adopting a simplified numerical modelling approach.Both beam and column members were simulated with nonlinear behaviour allowed to take place at the members' ends, as per a concentrated plasticity (CP) approach.The CP model consists of one elastic beam-column element and two nonlinear rotational springs, which are lumped at the member ends.The cyclic response of the CP model is mainly governed by the hysteretic rule of the nonlinear spring.Thus, to make the CP model simulate the flexural behaviour of CFST members in an accurate manner, a suitable model for the nonlinear spring should be selected.Using CalTool [10], the numerical parameters of the rotational hinge model in OpenSees underwent an optimized calibration procedure.This process makes use of advanced full 3D numerical models of cantilever elements subjected to both monotonic and cyclic bending, from which the deterioration model parameters of the CP elements in OpenSees are calibrated.Whilst a bilinear hysteretic response was adopted for steel beams and columns, a peakoriented hysteretic response was considered for the simulation of the behaviour of CFST columns.The modified Ibarra-Krawinkler-Medina deterioration model with peak-oriented hysteretic response [9] was adopted as the nonlinear spring model for all members.Whilst bilinear hysteretic response was adopted for steel beams and columns, peak-oriented hysteretic response was utilized to simulate the behaviour of CFST columns.The advanced numerical modelling of the steel beams and columns was performed in ANSYS [11] and of the CFST elements in ABAQUS [12].Fig. 7 and Fig. 8 show two examples of the aforementioned calibration procedure, namely in terms of a comparison of the behaviour of both the detailed 3D model (ANSYS and ABAQUS, respectively) and the concentrated plasticity simplified model in OpenSees.Overall, a good correlation between both models was achieved with the use of a calibration procedure to determine the deterioration model parameters, allowing for a realistic simulation of the response of the moment-resisting frames in OpenSees. Site hazard and ground motion record selection A single location in Portugal (i.e.Lagos), was considered in this study, both for design and seismic performance assessment purposes.Probabilistic Seismic Hazard Analysis (PSHA) was performed for the site in question, using the open source software OpenQuake (Pagani et al. [13] and the seismic hazard model developed in SHARE (Woessner et al. [14]), whilst also including additional hazard sources (Vilanova and Fonseca [15]) and employing the ground motion prediction equations from Atkinson and Boore [16] and Akkar and Bommer [17], with a weight of 70% and 30%, respectively (Silva et al. [18]).Disaggregation of the seismic hazard (Bazurro and Cornell [19]) on magnitude, distance and  was performed.Record selection was conducted based on the disaggregation results and an average shear wave velocity for the first 30 meters of soil, V s30 , was considered.For this location, a suite of 40 ground motion records was selected and scaled to match the median spectrum of the suite to the codes' spectrum within a range of periods of interest.A similar technique was applied in the FEMA P695 project (FEMA [20]).As proposed by Haselton et al. 
[21], a general ground motion record suite was selected without taking into account the  values, with the results being post-processed to account for the expected  at a specific site and hazard level.Records were selected using SelEQ (Macedo and Castro [22]), which allowed for a very good correlation between the mean/median spectrum of the selected ground motions and the code spectrum.Fig. 4 shows the mean and median response spectra of the ground motion Silva, A., Jiang, Y., Macedo, L., Castro, J.M, Monteiro, R. 2018, Universitat Politècnica de València suite Lagos, together with the corresponding EC8 response spectrum for a hazard level of 10% in 50 years.Fig. 4. Mean and median response spectra of ground motion record set and EC8 elastic spectrum for Lagos. Simplified loss estimation approach Among the possible methodologies for loss estimation, the PEER-PBEE approach (Porter [23]) has become the reference procedure to estimate damage and economic losses resulting from an earthquake.In this research study, the 1 st mode spectral acceleration, S a (T 1 ), was used as the relevant intensity measure, IM, whilst the engineering demand parameters, EDPs, considered were the maximum and residual inter-storey drifts (RISDR), as well as the peak floor accelerations.The damage functions, DM, were derived from the HAZUS (Kircher et al. [24]) consequence and fragility models.Collapse probability was determined based on IDA (Vamvatsikos and Cornell [25]), assumed to occur if the slope of the IDA curve reduces to 10% of the initial value, or if the inter-storey drift ratio of any storey exceeds 20%.A simplified storey-based building-specific loss estimation method (Ramirez and Miranda [26]) was adopted to estimate the total losses based on the sum of the repair costs at each storey of the building.Moreover, at each storey the components were grouped into drift-sensitive structural and non-structural components, as well as acceleration-sensitive non-structural components.At each storey, these categories were weighted at 25%, 55% and 20%, respectively, a proportion that is in line with the construction practice in Portugal.By adopting the procedure proposed by Ramirez and Miranda [27], the storey fragility and consequence models have been derived from HAZUS generic data which, considering residential multi-family dwellings, designed for a "highcode" level.Combining the consequence models with the corresponding fragility functions, the storey damage functions could be obtained, and storey damage functions re-scaled with the component category weights assumed.In this research study, a single loss metric was considered, namely the expected losses conditioned on seismic intensity levels of interest, namely: SLS-1 (EC8-1 [7] Serviceability Limit State, Return period, RP, of 95 years), SLS-3 (EC8-3 [28] Damage Limitation limit state, RP of 225 years), SD (EC8-3 Significant Damage limit state, RP of 475 years) and NC (EC8-3 Near Collapse limit state, RP of 2475 years). Collapse fragility The first criterion that was used to assess the performance of the archetypes under seismic loads consists of the computation of fragility curves for the collapse limit state.As mentioned before, this limit state was defined via the flattening of the IDA curves.From this analysis, the collapse fragility curves, expressed as a function of S a (T 1 ), are shown in Fig. 5. 
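A fragility curve of this kind is commonly obtained by fitting a lognormal distribution to the set of Sa(T1) values at which each ground motion record triggered collapse in the IDAs. The sketch below shows one such fit (method of moments on the log intensities); the listed collapse intensities are synthetic placeholders, and the fitting choice is an assumption rather than necessarily the procedure adopted in this study.

```python
# Sketch of one common way to turn IDA collapse intensities into a collapse fragility:
# fit a lognormal distribution to the Sa(T1) values at which each record caused collapse.
# The collapse intensities below are synthetic placeholders, not results of this study.
import math
from statistics import mean, stdev

def fit_lognormal(collapse_sa):
    """Median and log-standard-deviation (dispersion) from the collapse intensities."""
    logs = [math.log(x) for x in collapse_sa]
    return math.exp(mean(logs)), stdev(logs)

def p_collapse(sa, median, beta):
    """Probability of collapse at intensity sa (lognormal CDF via the error function)."""
    return 0.5 * (1.0 + math.erf(math.log(sa / median) / (beta * math.sqrt(2.0))))

collapse_sa = [1.9, 2.3, 1.6, 2.8, 2.1, 2.5, 1.8, 3.0, 2.2, 2.6]  # g, one value per record
median, beta = fit_lognormal(collapse_sa)
for sa in (1.0, 2.0, 3.0):
    print(f"Sa(T1) = {sa:.1f} g -> P(collapse) = {p_collapse(sa, median, beta):.2f}")
```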
Analysis of the results shown in both figures clearly shows a tendency for substantially higher probabilities of exceedance of this limit state, at similar levels of Sa(T1), for the steel-only case. One particular point that is important to underline is that these frames were designed with capacity design principles in mind, as per EC8, with the dissipative regions of the system being assigned to the beam ends and to the base of the 1st storey columns. Given this fact, one could expect that using different column types (i.e., CFST or steel), with the same beam type (steel beam), would not affect the dissipative behaviour of the structure. Although this is true, capacity design was applied for the Ultimate Limit State intensity level (Se(T1) ≈ 0.4g). However, for the structure to reach collapse, the intensity levels required are significantly higher than this, in which case plasticity is likely to spread to other columns of the frame. Hence, if sections with a more stable nonlinear response are assigned to the columns, this should also entail a more stable response of the frame itself under extreme scenarios (e.g., collapse).
Earthquake-induced losses
The second criterion that was used to gauge the effect of using CFST columns against steel open sections relates to the expected seismic losses, which, in this paper, were computed for a wide range of intensity levels and are herein summarized in terms of the intensity levels considered in the framework of Eurocode 8. As mentioned before, the application of the adopted loss estimation framework allows for the disaggregation of the losses between the key contributors: losses due to structural and non-structural damage, losses due to demolition caused by excessive residual drift, and losses due to collapse of the building, as shown in Fig. 6. As shown for both cases, total losses range from 20% to 50% of the buildings' replacement cost for the steel case, and are generally 5% lower than that for the composite case (with the exception of the CLS intensity level, at which the total losses are identical). Also, in both cases, the losses due to collapse are null, indicating that the design against collapse seems to be successful, even at a CLS-compatible intensity level, which is roughly 80% higher than the intensity level at ULS, to which the structures were designed. Demolition losses in the composite case at CLS were higher, indicating that the residual deformations experienced by the structure are higher and/or more concentrated than for the steel archetype. This can be confirmed in Fig.
7, in which the distributions of several EDPs are shown for the intensities of interest used for the loss computation. In the plots, the 2nd, 3rd and 4th curves in each subplot correspond to the SLS-1, SD and CLS intensity levels, respectively (the remaining curves correspond to an elastic response - 1st - and to the maximum intensity level run - 5th). It is important to highlight that even though the levels of losses were generally lower for the composite case, the values of Sa(T1) at the different intensities of interest were actually 10% higher than for the steel case, which is, again, in line with the general message presented herein: CFST columns are a good alternative to steel-only open-section columns for seismic performance. Before concluding, the results shown before also merit another observation: for both cases, seismic losses are largely dominated by damage to non-structural components (both drift- and acceleration-sensitive), ranging between 20% and 30% of the building's replacement cost for the steel case and 15% and 25% for the composite case, across the intensity levels considered. This is a crucial aspect to underline: current performance-based seismic design guidelines should undergo a shift towards stronger earthquake-induced loss control approaches, particularly regarding damage to non-structural components. The main objective of the past decades of seismic design methodologies (i.e., collapse prevention) is, nowadays, generally achieved. However, significant levels of damage to non-structural components may actually compromise this success: the building does not collapse, but the damage to its contents is somewhat uncontrolled at the design stage. In Eurocode 8, for example, some limits on lateral deformations at the SLS are imposed, but any control of floor accelerations is completely overlooked.
Conclusions
In this paper, the effects of using CFST columns in moment frames were assessed, through a comparison of the seismic performance in relation to a steel-only MRF. Two 5-storey equivalent archetypes were designed to EC8, in which some benefits of the composite approach were already visible: 30% less steel quantity overall and reduced overstrength (Ω) levels. By investigating the performance of both cases through collapse fragility assessment, the results indicated higher probabilities of exceedance of this limit state, at similar levels of Sa(T1), for the steel-only case. Earthquake-induced loss levels were also estimated, from which it was concluded that generally lower levels of losses are expected to occur for the composite case. The underlying notion that the use of CFST columns, instead of steel open-section profiles, is beneficial for moment frames was thus supported: savings in material quantity may be relevant (even if partly offset by more complex member connections, foundations and construction processes), as may be the improvements in expected seismic performance levels.
Fig. 1. Building layout. All frames were designed in accordance with Eurocode 8 [7], with the added recommendations set in the Portuguese National Annex. The frames were designed under the DCM (medium ductility) class of the code, with a behaviour factor of 4. The steel grade considered for all steel elements was S275, and a concrete class C30/37 was assumed for the concrete core of the CFST columns. European
Fig. 2. Calibration of the concentrated plasticity model for a steel HEB340 member.
Table 1. Gravity loads and frame storey seismic masses.
Table 3. Design solution of the steel archetype.
Table 4. Design solution of the composite archetype.
Table 5. Design summary of the steel and composite archetypes.
v3-fos-license
2016-01-11T18:29:14.669Z
2013-05-17T00:00:00.000
15008430
{ "extfieldsofstudy": [ "Mathematics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://link.springer.com/content/pdf/10.1186/2251-712X-9-12.pdf", "pdf_hash": "6d4bc8364d3e60995117f81ed5ecabe833b9cae0", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44065", "s2fieldsofstudy": [ "Business" ], "sha1": "dc8594bd2d03d7262b3443242fae136a86f7b385", "year": 2013 }
pes2o/s2orc
Phase II monitoring of auto-correlated linear profiles using linear mixed model
In many circumstances, the quality of a process or product is best characterized by a given mathematical function between a response variable and one or more explanatory variables, which is typically referred to as a profile. In recent years, there have been some investigations into monitoring auto-correlated linear and nonlinear profiles. In the present paper, we use linear mixed models to account for the autocorrelation within observations gathered in phase II of the monitoring process. We assume that the structure of the correlated linear profiles simultaneously has both random and fixed effects. The work employs a Hotelling's T 2 statistic, a multivariate exponentially weighted moving average (MEWMA), and a multivariate cumulative sum (MCUSUM) control chart to monitor the process. We also compared their performances, in terms of the average run length criterion, and showed that the proposed control chart schemes can effectively detect shifts in the process parameters. Finally, the results are applied to a real case study from an agricultural field.
Introduction
Control charts are used to detect anomalies in processes. They are most often used to monitor production-related processes. In many business-related processes, the quality of a process or product can be characterized by a relationship between a response variable and one or more explanatory variables, which is referred to as a profile. The purpose of analyzing profiles in phase I is to determine the stability of the process and estimate parameters; in phase II, however, analyzers are interested in rapidly detecting significant shifts in the process parameters. Phase I analysis of simple linear profiles has been investigated by a number of authors such as Stover and Brill (1998), Kang and Albin (2000), Kim et al. (2003) and Woodall (2004, 2007). Many authors including Kang and Albin (2000), Kim et al. (2003), Gupta et al. (2006), Zou et al. (2006), Saghaei et al. (2009), and Mahmoud et al. (2009) have investigated phase II monitoring of simple linear profiles. Noorossana et al. (2010a, b) investigated monitoring of multivariate simple linear profiles in phase II. Zou et al. (2007) and Kazemzadeh et al. (2009a, b) considered cases when the profiles can be characterized by multiple and polynomial regression models, respectively. Mahmoud (2008) considered phase I monitoring of multiple linear profiles, and Kazemzadeh et al. (2008) proposed three methods for monitoring the kth-order polynomial profile in phase I. Ding et al. (2006), Moguerza et al. (2007), Williams et al. (2007), and Vaghefi et al. (2009) investigated nonlinear profiles. In these studies, it is implicitly assumed that the error terms within or between profiles are independently and identically normally distributed; however, in some cases, these assumptions can be violated. Noorossana et al. (2010a, b) analyzed the effects of non-normality on the monitoring of simple linear profiles. Noorossana et al. (2008) and Kazemzadeh et al. (2009a, b) investigated autocorrelation between successive simple linear and polynomial profiles, respectively. Soleimani et al. (2009) proposed a transformation to eliminate the autocorrelation between observations within a simple linear profile in phase II. Jensen et al. (2008) proposed two T 2 control charts based on the linear mixed model (LMM) to account for the autocorrelation within linear profiles in phase I.
They concluded that the linear mixed model is superior to the least squares approach for unbalanced or missing data, especially when the number of observations within a profile is small and the correlation is weak. Jensen and Birch (2009) used a nonlinear mixed model to account for correlation within nonlinear profiles. Qie et al. (2010) investigated nonparametric profile monitoring with arbitrary design using mixed models. They proposed a control chart that combines the exponentially weighted moving average control chart based on local linear kernel smoothing and a nonparametric regression test, under the assumption that observations within and between individual profiles are independent of each other. The present study acts as an extension of the work of Jensen et al. (2008), who applied a linear mixed model in the presence of autocorrelation within linear profiles to phase I control chart applications; in contrast, our focus is on phase II of profile monitoring, in which the proposed control charts can be used to detect departures from the given profile parameters. The remainder of the paper is organized as follows. In the 'Linear mixed model' section, the LMM is mathematically presented. In the 'Proposed methods' section, our methods, including three modified multivariate control charts, namely the Hotelling T 2 (Hotelling 1947), the multivariate exponentially weighted moving average (MEWMA) and the multivariate cumulative sum (MCUSUM) control charts, are illustrated. In the 'Simulation studies' section, the results of a simulation study to evaluate the performance of the methods are presented. In addition, a case study from an agricultural field is investigated in the section 'Case study.' The final section closes with concluding remarks.
Linear mixed model
Linear mixed models (Laird and Ware 1982) are popular for the analysis of longitudinal data. A linear mixed model contains fixed and random effects and is linear in these effects. This model allows us to account for autocorrelation within profiles. In matrix notation, a mixed model can be represented as y = Xβ + Zb + ε, where β denotes the fixed effects, b the random effects, ε the error terms, and X and Z are matrices of regressors relating the observations y to β and b. In the 1950s, Charles Roy Henderson provided the best linear unbiased estimate (BLUE) of fixed effects and best linear unbiased predictions (BLUP) of random effects. Subsequently, mixed modeling has become a major area of statistical research, including work on the computation of maximum likelihood estimates, nonlinear mixed effect models, missing data in mixed effects models, and Bayesian estimation of mixed effects models (West et al. 2007). Henderson's 'mixed model equations' (MME) are (Robinson 1991) as follows:
X'R⁻¹X β̂ + X'R⁻¹Z b̂ = X'R⁻¹y
Z'R⁻¹X β̂ + (Z'R⁻¹Z + D⁻¹) b̂ = Z'R⁻¹y
where D and R denote the covariance matrices of the random effects and of the errors, respectively. The solutions to the MME, β̂ and b̂, are the BLUE and BLUP for β and b, respectively. Mixed models require somewhat sophisticated computing algorithms to fit. Solutions to the MME are obtained by methods similar to those used for linear least squares. For complicated models and large datasets, iterative methods may be needed. In profile monitoring, one could suppose that the jth response follows an LMM; therefore,
y j = X j β + Z j b j + ε j ,
where X j is an (n j × p) matrix of regressors, Z j is an (n j × q) matrix associated with the random effects, β is a (p × 1) vector of fixed effects, and y j is the (n j × 1) response vector for the jth profile. The coefficient vector of the random effect terms is b j ~ MN(0, D), and D is assumed to be a diagonal matrix; thus, the random effects are assumed not to be correlated with each other. In addition, ε j is an (n j × 1) vector of errors with ε j ~ MN(0, R j ). If the errors are assumed to be independent, R j = σ 2 I; if they are correlated, a functional structure for the error terms may be used. As noted before, β̂ is an estimator of β and b̂ j is a predictor of b j ; then ŷ j = X j β̂ is the population-average prediction, and ŷ j = X j β̂ + Z j b̂ j is the profile-specific prediction. If D and R j are known, it can be shown that (Schabenberger and Pierce 2002)
β̂ = (Σ j X j ' V j ⁻¹ X j )⁻¹ Σ j X j ' V j ⁻¹ y j and b̂ j = D Z j ' V j ⁻¹ (y j − X j β̂), with V j = Z j D Z j ' + R j .
Proposed methods
In this paper, we propose a linear mixed model approach to account for the correlation within linear profiles in phase II. It is assumed that the error terms within a profile follow a first-order autoregressive (AR(1)) structure. If the errors follow an autocorrelated structure such as an AR(1) process, the entries of R j decay geometrically with the lag between observations, i.e., they are proportional to ρ^|i−k| for observations i and k.
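As a minimal illustration of how the fixed and random effects can be obtained from Henderson's mixed model equations, the following Python sketch assembles and solves the MME for a single simple linear profile with a random intercept and slope. All numerical values (design, variances, data) are synthetic and chosen purely for illustration; they are not taken from this paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic simple linear profile: y = (beta0 + b0j) + (beta1 + b1j) * x + error
n = 10
x = np.linspace(0, 1, n)
X = np.column_stack([np.ones(n), x])          # fixed-effects design matrix
Z = X.copy()                                  # each fixed effect has a random counterpart
beta_true = np.array([3.0, 2.0])
D = np.diag([0.1, 0.1])                       # random-effects covariance (diagonal)
R = 1.0 * np.eye(n)                           # independent errors, for brevity

b_true = rng.multivariate_normal(np.zeros(2), D)
y = X @ (beta_true + b_true) + rng.multivariate_normal(np.zeros(n), R)

# Henderson's mixed model equations (D and R assumed known)
Rinv = np.linalg.inv(R)
Dinv = np.linalg.inv(D)
lhs = np.block([[X.T @ Rinv @ X, X.T @ Rinv @ Z],
                [Z.T @ Rinv @ X, Z.T @ Rinv @ Z + Dinv]])
rhs = np.concatenate([X.T @ Rinv @ y, Z.T @ Rinv @ y])
sol = np.linalg.solve(lhs, rhs)
beta_hat, b_hat = sol[:2], sol[2:]
print("BLUE of beta:", beta_hat, "BLUP of b_j:", b_hat)
```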
In addition, it is assumed that and ε j is (n j × 1) vector of errors where If the errors are assumed to be independent, 2 j σ = R I , but correlated, the functional structure for the error terms may be used. As noted before, it is considered that β is an estimator of β, and j b is a predictor of b j , then j j = y X β is the population average, and j j j j + = y X β Z b is the profile specific prediction; so if D and R j are known, then it can be shown as follows: (Schabenberger and Pierce 2002). Proposed methods In this paper, we propose a linear mixed model approach for accounting the correlation within linear profiles in phase II. It is assumed that profiles are correlated based on first-order autoregressive (AR(1)) structure. If the errors follow an autocorrelated structure such as an AR (1) It is assumed that for the jth sample collected over time, our observations are (X i ,y ij ), i = 1,2,…,n and j = 1,2,…,m. We considered the case that all the fixed effects have a corresponding random effect, ( ) j j = X Z . If the process is in control, the problem can be formulated as follows: where ε ij are the correlated error terms and a ij are white noises as a ij ~ N(0,σ 2 ). The β 0 , β 1 ,…,β p − 1 are fixed effects that are the same for all profiles. The b 0j ,b 1j ,…,b p − 1j are random effects for the jth profile and they are normal random variables with zero mean and variance of 2 2 , respectively, which are not to be correlated with each other and also not to be correlated with the errors. The x values are fixed and constant from profile to profile. In this article we especially focused on phase II of the monitoring process, so all profile's parameters, process variance, and correlation coefficient are known in phase I. Accordingly, we utilized the modified Jensen et al. (2008) approach to monitor autocorrelation on phase II. The Hotelling's T 2 statistic control chart As a first proposed control chart, we use T 2 statistic to monitor the fixed effects for each sample. This statistic is given by where ( ) ( ) X V X X V y and β 0 denote the in-control value of β. In Equation 9 the variance covariance matrix of fixed effects is ( ) The upper control limit, UCL, is chosen to achieve a specified in control average run length (ARL). The MEWMA control chart Our second proposed control chart is based on MEWMA for monitoring the vector of j β . Here the MEWMA statistics is as follows: where z 0 = 0 and θ(0 < θ < 1) is the smoothing parameter. Therefore, the chart statistic denotes by MEWMA j is given by This control chart gives a signal when EWMA j > UCL, where (UCL > 0) is chosen to achieve a specified in control ARL. The MCUSUM control chart The third suggested method is based on the MCUSUM control chart. In this method, the statistic is given by A r c h i v e o f S I D www.SID.ir ∑ c s β β s β β and s 0 = 0 and k is a selected constant. The estimator of variance covariance matrix is The chart gives a signal if ( ) Simulation studies To show the performance of the proposed methods, we considered the underlying linear profile as Equation 14: and a ij ~ N(0,1),b 0j ~ N(0,.1),b 1j ~ N(0,.1). In our simulation investigation, we considered three significant different autocorrelation coefficients: a ρ = 0.1 to designate a weak type autocorrelated process, intermediate autocorrelation by ρ = 0.5, and strong autocorrelation by ρ = 0.9. 
The in-control ARL is roughly set equal to 200, and the ARL values were evaluated through 10,000 simulation replications under different shifts in the intercept, the slope, and the error standard deviation. For the MEWMA control chart, the smoothing parameter θ is chosen to be 0.2. As a general rule, to design a MCUSUM control chart with the k approach, one chooses k to be half of the delta shift, which is the amount of shift in the process that we wish to detect, expressed as a multiple of the standard deviation of the data points. Accordingly, we set k equal to 0.5. The UCLs of the control charts are designed to achieve a specified in-control ARL of 200. The simulated UCLs for each proposed control chart are shown in Table 1. The three proposed control charts are compared on different scenarios of the example in terms of ARL, and the calculated values for the different changes in the intercept are shown in Table 2. According to Table 2, under shifts in the intercept (expressed in units of σ), when the autocorrelation is weak (ρ = 0.1), the MEWMA method performs relatively similarly to the MCUSUM control chart, and both have better performance than the T 2 control chart in detecting small, moderate, and large shifts. In the intermediate (ρ = 0.5) and strong (ρ = 0.9) autocorrelation circumstances, MCUSUM performs uniformly better than the other two methods. Moreover, MEWMA uniformly performs better than the T 2 control chart. Figure 1 presents the resulting ARL values under different shifts in the intercept for the three autocorrelation levels. Table 3 shows the simulation results under different shifts in the slope. From Table 3, under shifts in the slope (expressed in units of σ), when the autocorrelation is weak (ρ = 0.1), the proposed MCUSUM method uniformly performs better than the MEWMA method. Also, MEWMA performs consistently better than the T 2 method. In addition, similar results are obtained when the autocorrelation is intermediate (ρ = 0.5). Once the autocorrelation coefficient is high, the MCUSUM and MEWMA methods perform uniformly better than the T 2 method, and the MCUSUM method performs relatively similarly to the MEWMA method. Figure 2 illustrates the ARL under different shifts in the slope for the aforementioned autocorrelation levels. Next, comparisons of the three proposed control charts in terms of ARL under a δσ shift in the standard deviation follow. Table 4 shows that the proposed T 2 chart performs significantly better than the MEWMA and MCUSUM charts for the different correlation coefficients. In addition, for the strong and intermediate autocorrelation conditions, MEWMA and MCUSUM behave similarly, and when the autocorrelation is weak, MEWMA achieves relatively better performance. The resulting ARL values under different shifts of the standard deviation are presented in Figure 3 for the different autocorrelation levels. Based on the simulation results, it is evident that the proposed MEWMA and MCUSUM methods perform relatively better than the T 2 chart in detecting shifts in the parameters of the profile; conversely, the proposed T 2 chart performs better than the MEWMA and MCUSUM in detecting shifts in the variation.
Case study
Consider the case study carried out by Schabenberger and Pierce (2002). It is a real data set from ten apple trees, on each of which 25 apples were randomly chosen. The focus was on the analysis of the apples of the largest size, with initial diameters exceeding 2.75 in; in total, there were 80 apples of the desired size. The diameters of the apples were recorded every 2 weeks over a period of 12 weeks.
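The ARL figures reported above are obtained by Monte Carlo simulation. A minimal sketch of such an ARL estimation is given below; the profile model, shift size, control limit and the use of ordinary least squares (instead of the GLS/LMM fit) are simplifying, illustrative assumptions and do not reproduce the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(42)
n, rho, sigma = 10, 0.5, 1.0
x = np.linspace(0, 1, n)
X = np.column_stack([np.ones(n), x])

def simulate_profile(beta, rho, sigma):
    """One profile with random intercept/slope and AR(1) within-profile errors."""
    b = rng.normal(0.0, np.sqrt(0.1), size=2)
    eps = np.zeros(n)
    eps[0] = rng.normal(0.0, sigma / np.sqrt(1 - rho**2))  # stationary start
    for i in range(1, n):
        eps[i] = rho * eps[i - 1] + rng.normal(0.0, sigma)
    return X @ (beta + b) + eps

def run_length(beta_ic, beta_shifted, sigma_beta_inv, ucl, max_profiles=10_000):
    """Number of profiles until the T^2 chart signals."""
    for j in range(1, max_profiles + 1):
        y = simulate_profile(beta_shifted, rho, sigma)
        beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS used only for brevity
        diff = beta_hat - beta_ic
        if diff @ sigma_beta_inv @ diff > ucl:
            return j
    return max_profiles

beta_ic = np.array([3.0, 2.0])            # assumed in-control profile
shift = np.array([1.0, 0.0])              # illustrative 1-sigma shift in the intercept
# Covariance of beta_hat estimated empirically from in-control simulations
hats = np.array([np.linalg.lstsq(X, simulate_profile(beta_ic, rho, sigma), rcond=None)[0]
                 for _ in range(2000)])
sigma_beta_inv = np.linalg.inv(np.cov(hats.T))
ucl = 10.6                                 # illustrative limit; would be tuned to ARL0 = 200
arl = np.mean([run_length(beta_ic, beta_ic + shift, sigma_beta_inv, ucl) for _ in range(200)])
print("estimated out-of-control ARL:", arl)
```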
Figure 4 shows the diameters of 16 of the 80 apples in the time domain. In this investigation, the functional profile between time and diameter is considered as the quality characteristic that needs to be monitored over time. Schabenberger and Pierce (2002), and later Soleimani et al. (2009), modeled such correlation between observations by a first-order autoregressive AR(1) model. Based on the preceding analysis, a linear mixed model of the form introduced above, with time as the explanatory variable and AR(1) within-profile errors, holds for the declared case study. The results of the simulation runs in the previous section lead us to use MEWMA or MCUSUM, which have relatively similar performance in detecting shifts in the profile parameters, rather than the T 2 method. Hence, the proposed MEWMA control chart was applied in monitoring the linear profile. The smoothing constant (θ) is set equal to 0.2. In order to achieve an in-control ARL of 200, the upper control limit is set equal to 7 based on 10,000 simulation runs. In order to examine the performance of the control chart, six random samples from the in-control simple linear profile are initially generated. Subsequently, three random samples are generated to show an out-of-control condition under an intercept shift coefficient of 0.6. Figure 5 illustrates the sensitivity of the MEWMA control chart based on our proposed method, which delivers a quick out-of-control signal.
Concluding remarks
We have studied the sensitivity of three multivariate control charts in detecting a one-step permanent shift in any parameter of a mixed-model linear profile. Our specially designed MCUSUM, MEWMA, and T 2 control charts were also studied as competitors of each other in detecting shifts in the intercept and slope parameters as well as in the process variation, while a first-order autoregressive model describes the correlations within observations. The performances of the methods were compared in terms of the average run length criterion. Table 5 shows the summarized results. The following summary recommendations are made:
1. The proposed approach has good performance across the range of possible shifts, and it can be used in phase II of linear profile monitoring in the presence of autocorrelation within observations.
2. The proposed MEWMA and MCUSUM methods almost uniformly perform more efficiently than the T 2 Hotelling control chart under different step shifts in the intercept and slope parameters of the linear profile.
v3-fos-license
2021-11-25T16:18:08.107Z
2021-11-01T00:00:00.000
244569937
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2304-8158/10/11/2889/pdf", "pdf_hash": "fce387cefcda31545459d863b3a8e7e0d1902134", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44066", "s2fieldsofstudy": [ "Computer Science", "Agricultural and Food Sciences", "Environmental Science" ], "sha1": "f1adb923bf4ba00f6e8f46477ca0af6e7830bc03", "year": 2021 }
pes2o/s2orc
Food Informatics—Review of the Current State-of-the-Art, Revised Definition, and Classification into the Research Landscape Background: The increasing population of humans, changing food consumption behavior, as well as the recent developments in the awareness for food sustainability, lead to new challenges for the production of food. Advances in the Internet of Things (IoT) and Artificial Intelligence (AI) technology, including Machine Learning and data analytics, might help to account for these challenges. Scope and Approach: Several research perspectives, among them Precision Agriculture, Industrial IoT, Internet of Food, or Smart Health, already provide new opportunities through digitalization. In this paper, we review the current state-of-the-art of the mentioned concepts. An additional concept is Food Informatics, which so far is mostly recognized as a mainly data-driven approach to support the production of food. In this review paper, we propose and discuss a new perspective for the concept of Food Informatics as a supportive discipline that subsumes the incorporation of information technology, mainly IoT and AI, in order to support the variety of aspects tangent to the food production process and delineate it from other, existing research streams in the domain. Key Findings and Conclusions: Many different concepts related to the digitalization in food science overlap. Further, Food Informatics is vaguely defined. In this paper, we provide a clear definition of Food Informatics and delineate it from related concepts. We corroborate our new perspective on Food Informatics by presenting several case studies about how it can support the food production as well as the intermediate steps until its consumption, and further describe its integration with related concepts. Introduction Scientists have been alerting the world about climate change for a very long time, such as the World Scientists' Warning to Humanity from 1992 and the more recent World Scientists' Warning to Humanity: A Second Notice in 2017. However, it required Greta Thunberg and Fridays for Future to raise the awareness about climate change and the necessity to protect the environment and society. One aspect that, on the one hand, impacts climate change but on the other hand is also highly influenced by it, is the production of food. Roughly 11% of the Earth's population was unable to meet their dietary energy requirements between 2014 and 2016, representing approximately 795 million people [1]. On the contrary, the food production for the population of industrial nations, especially, highly contributes to climate change due to a meat-focused diet, with the expectation of seasonal fruits throughout the entire year as well as a high waste of food [2]. Both situations will become more complex in the next decades as the global population is predicted to grow to 10 billion by 2050 according to the United Nations [1]. This might increase the number of people with insufficiently satisfied dietary energy requirements. The increasing welfare in emerging countries will lead to more people adopting the resource-demanding nutrition of the industry nations. Traditional food production approaches will not be able to deal with those issues sufficiently; hence, novel approaches are required. Especially the integration of current research advances in the Internet of Things (IoT) seems to be promising in supporting various aspects of food production including farming, supply chain management, processing, or demand estimation. 
Whereas a commonly accepted definition of IoT is not present in the literature, it is agreed that IoT refers to connected computational resources and sensors which often supplement everyday objects. The sensors support the collection of data which can be analyzed for identifying changes in the environment, and the IoT system can react to accommodate those changes. Procedures from Artificial Intelligence (AI)-the idea that machines should be able to carry out tasks in a smart way-and Machine Learning (ML)-techniques for machines to learn from data-can complement the analysis and system controlling process in IoT systems. The actions of analyzing and controlling the IoT systems are also named as a reason for adaptation [3]. The purposeful application of those methods can complement and optimize the existing processes. The research in this field is distributed across several domains, such as Precision Agriculture, Smart Farming, Internet of Food, Food Supply Chain Management, Food Authentication, Industrial IoT (IIoT)/Industry 4.0 for food production, Food Safety, Food Computing, or Smart/Pervasive Health. Often, those concepts overlap and are not completely distinguished. Another research stream can be recognized under the notion of Food Informatics, which is often understood as data-centric research for supporting food production and consumption (e.g., [4][5][6][7]). However, research alone does not provide a clear concept of Food Informatics. In this review paper, we want to distinguish the various research streams related to the topics of food production and consumption. Further, we motivate our perspective on Food Informatics as a supportive research stream that can contribute to the wide field of applying IoT and AI/ML to optimize food production and, hence, can be seen as an underlying technological basement for the other ICT-related research streams that target aspects of the food supply chain. Additionally, we present several case studies related to the production of food, discuss how Food Informatics contributes to those applications, and highlight the relation to the other presented research streams. In summary, our contributions are threefold:
• Delineation of concepts: We provide a delineation of various concepts related to the digitalization in food science and production;
• Definition of Food Informatics: We review the state-of-the-art in Food Informatics and motivate a new understanding of Food Informatics as a supportive discipline for food production and an underlying technical basement for digitalization;
• Application: We discuss the potential of IoT and AI/ML to support the process of food production and supply - in our understanding, the central role of Food Informatics - with regard to the socio-technical perspective of the various stakeholders.
However, we do not aim at providing a fully-fledged survey, as this would not be possible for such a broad coverage of topics. Accordingly, we target providing a systematic mapping [8] approach to offer a cross section of the research landscape. The remainder of this paper is structured as follows: Section 2 compares research streams related to the production and consumption of food. Subsequently, Section 3 presents a new definition of Food Informatics. Then, Section 4 presents several research perspectives as well as research challenges when applying information and communication technology (ICT) in the food production domain. Section 5 discusses possible threats to the validity of our claims.
Finally, Section 6 discusses related surveys before Section 7 closes this paper.
Delineation of Concepts
The production of food is a highly complex process. On the one hand, there is a high diversity in the combination of ingredients and intermediaries with many dependencies, for example, in the order of processing. Further, by-products, side-products, or co-products might arise, such as buttermilk when producing butter, to mention just one example. On the other hand, food has hygienic, olfactory, sensory, or preserving requirements. In general, the food production process can be divided into several phases: agriculture (crop cultivation and livestock production), logistics, production, retail, and consumption. In this paper, we see this process as a sequential process. However, in practice, a circular economy might be favorable from a sustainability viewpoint. Further, the mentioned by-products, side-products, or co-products create a value-added network rather than a traditional value chain. However, in this paper we focus on how to support the different steps by ICT. Consequently, a sequential view on the food production will not limit the validity of our arguments. As a tangible example, we show the different phases of the process for the production of Spätzle, a German pasta (see Figure 1). The production starts with the planting and harvesting of wheat (crop cultivation) as well as the production of eggs (livestock production). Both ingredients are transported to the production facility, where the Spätzle are produced by adding water and salt. Subsequently, the product is delivered to wholesale trades, food retail markets, or directly to the consumer/restaurants, where the product is eventually consumed. In all phases, IoT devices can be integrated to either support data collection or actively control the processes through adaptation, that is, adjust the production process to handle machine faults or use traffic forecasts to re-calculate routes as well as react by adjusting production plans to the delay. Additionally, technology known from Smart Health research, such as wearables, can help to observe the consumption behaviour of consumers. The data collection and analysis is supported by Edge and Cloud technology. With Cloud resources, we refer to flexible server resources that can be used to complement the often limited computational resources of production machines. Those can be company-internal resources, shared by multiple factories, or external resources offered by Cloud providers such as the Google Cloud Platform, Amazon EC2, or Microsoft Azure. Edge devices are additional computational resources within a factory that extend the computational resources of production machines. Several concepts apply methods and technology from computer science, mainly from IoT and AI/ML, in order to support the food production process. Those concepts often address only one phase of the production process. In the following, we discuss and compare the different concepts. The purpose of this section is a delineation of the different research streams rather than a detailed review of each of them.
Precision Agriculture
Clearly, the first step in the food supply chain is comprised by the cultivation of crops, husbandry of livestock, and the overall management of farmland. Besides the actual operations and business aspects, which are usually summarized by the term farming, the-from our point of view-more general notion of agriculture refers to all the tangent scientific and technological aspirations around it. We therefore use the notion of agriculture as an umbrella term in this article.
The presence of variability and uncertainty inherent in many facets of agriculture has been recognized quite a number of decades ago [9]. With this increasing awareness and a focus on the "field" (in the sense of farmland)-that is, recognizing that, for instance, soil and crop might exhibit varying conditions-combined with technological innovations such as global positioning systems (GPS), microcomputers with increasing computational capacity as well as the advent of autonomous systems/robotics into agricultural machinery, a subarea of agricultural sciences-namely Precision Agriculture-can be defined. With the focus on the cultivation land in mind, Gebbers and Adamchuk [10] provide a concise definition of the term Precision Agriculture as "[...] a way to apply the right treatment in the right place at the right time." They further specify and summarize the goals of Precision Agriculture as threefold: (1) The optimization of required resources, for example, the utilized amount of seeds and fertilizers, for obtaining at least the same amount and quality of crops in a more sustainable manner; (2) The alleviation of negative environmental impacts; and (3) improvements regarding the work environments and social aspects of farming in general. An alternative, from the authors' point of view, very intuitive definition is provided by Sundmaeker et al. [11]. They describe the field of Precision Agriculture as "[...] the very precise monitoring, control and treatment of animals, crops or m 2 of land in order to manage spatial and temporal variability of soil, crop and animal factors."
Smart Agriculture
The advances in ICT-such as smart devices, Cloud and Edge Computing, near field communication (NFC)-observable over the last decades, as well as the resulting technological possibilities in nearly any branch of society and industry-summarized by the term IoT as will be introduced below-naturally also open a wide variety of adoption scenarios for agriculture. Smart Agriculture appears as the most common notion in that respect. Wolfert et al. [12] review the application of big data in the context of Smart Farming. The survey further provides another concise definition of the term: "Smart Farming is a development that emphasizes the use of information and communication technology in the cyber-physical farm management cycle." As can be recognized, a new term has been introduced in the above definition: cyber-physical farm. As is often the case when new technologies are emerging, a variety of terms referring to essentially the same thing appear in the literature. Terms that also show up sometimes include: "Digital Farming" (for the sake of completeness, we want to highlight that the notion Digital Farming/Agriculture is sometimes also conveyed to mean the integrated and combined utilization of both precision and smart agriculture concepts; the interested reader is referred to a recent position paper of the Deutsche Landwirtschafts Gesellschaft (DLG; engl. German Agricultural Society) [13]; since in this article the spotlight is set on the notion of Food Informatics and not on smart agriculture alone, we proceed without a further differentiation), "e-Farming", or the German term "Landwirtschaft (engl. Farming) 4.0" (the latter intended to relate to the German-coined notion of Industry 4.0). Throughout this work, we only carry the differentiation between Precision Agriculture and smart agriculture for the sake of simplicity.
Industry 4.0/Industrial IoT
The vision of Industry 4.0 is to integrate the cyber space and the physical world through the digitization of production facilities and industrial products [14]. This synchronizes the physical world and a digital model of it, the so-called digital twin. The Industrial Internet, also known as Industrial Internet of Things (IIoT), enables a flexible process control of an entire plant [15]. The current interpretation of the term appeared with the rise of Cloud technologies. The central elements of both concepts-besides the digital twin-are the smart factory, cyber-physical production systems, as well as an intelligent and fast communication infrastructure. The food production may benefit from Industry 4.0 approaches. Predictive maintenance can lead to production increases, especially as machine defects in the context of food production have a more serious impact due to the perishability of ingredients in contrast to tangible product elements in the production area. Further, the flexibility of Industry 4.0 approaches can help to facilitate the production of individual, customized food articles. Luque et al. review the state-of-the-art of applying Industry 4.0 technology for the food sector and propose a framework for implementing Industry 4.0 for food production centered around the activities of the supply chain [16].
Internet of Food
The term Internet of Food was first used by Kouma and Liu [17]. They proposed to equip food items with IP-like identifiers for continuously monitoring them using technology known from the IoT. Hence, it is a combination of identifiers, hardware, and software to monitor food and allow an observation of the consumers for optimizing nutrition. Somewhat contrarily, other authors describe the use of IoT for food-related purposes rather than the identification aspect as the Internet of Food; an example being smart refrigerators [18]. Holden et al. [19] review current developments in the area of the Internet of Food with a focus on the support of sustainability.
Food Computing
Min et al. [20] present a definition of the term Food Computing in combination with a review of the current state-of-the-art. According to them, Food Computing is concerned with the acquisition and analysis of food-related data from various sources focusing on the perception, recognition, retrieval, recommendation and monitoring of food. Hence, Food Computing is a consumer-focused research stream with the objective to support the consumer with respect to optimal nutrition. Data sources can include pictures taken with smartphones as well as data from websites or social media. Accordingly, the research integrates approaches from information retrieval, picture recognition and recommendation systems as well as prediction. For further information on the relevant approaches, the interested reader is referred to overviews on the current state-of-the-art (e.g., [20][21][22][23]).
Smart Health/Pervasive Health
According to Varshney [24], Pervasive Healthcare can be defined as "[...] healthcare to anyone, anytime, and anywhere by removing locational, time and other restraints while increasing both the coverage and the quality of healthcare". In a similar fashion, authors define the research for Smart Health or Mobile Health [25]. Applications in those areas include health monitoring, intelligent emergency management systems, smart data access and analysis, and ubiquitous mobile telemedicine.
Often, those applications rely on wearables-that is, small devices with sensors attached to the body of users-for data collection and signaling of critical health conditions. This requires efficient communication technology, smart IoT devices, and intelligent data analytics. Nutrition monitoring might be a relevant aspect in health monitoring as well as telemedicine. Vice versa, Smart Health apps might influence the consumption of food [26]. Additionally, somewhat related to this area are newer works that target the field of (personalized) nutrition, for example, smart food choices that support a consumer's choice of food [27], as well as nutrition informatics, which "describes approaches to understand the interactions between an organism and its nutritional environment via bioinformatics-based integration of nutrition study data sets" [28].
Food Supply/Logistics
Supply chain management describes the optimization of the intra- and extra-logistics. In the case of food production, this includes the transportation of ingredients to the production facility, the moving of ingredients and products in the facility as well as the transportation to retailers or customers. In contrast to other tangible goods, food has specific requirements concerning the temperature, hygienic aspects, and its storage, for example, avoiding pressure on the products. In the following, we focus on the extra-logistics of food, that is, its transportation outside of a production facility. Current approaches try to integrate IoT technology for monitoring of the logistics, especially the monitoring of the temperature and air quality. The application of RFID improves the tracking of food and furthers the information handling [29]. Currently, approaches propose to integrate Blockchain technology into the food supply chain to guarantee traceability [30,31], that is, food provenance. Introini et al. [32] provide an overview on the traceability in the food supply chain.
Food Safety/Food Authentication
According to a recent overview by Danezis et al. [33], "[...] food authentication is the process that verifies that a food is in compliance with its label description". Food Authentication is one part of the Food Safety area, which comprises the monitoring and control of food to guarantee its quality throughout the value chain. Some authors present works that integrate IoT technology, mainly based on sensors for monitoring (e.g., [34,35]), to achieve food safety. Recent approaches propose integrating Blockchain technologies to achieve a high reliability and availability of information [30,31]. This might help to increase the security of the stored information; however, one common issue for data-related analysis, the "Garbage In, Garbage Out" principle-which says that the quality of the output of an analysis is determined by the quality of the input-is not solved by the Blockchain technology as it just acts as secured data storage.
Summary
The presented concepts share some similarities. First, the presented approaches can be grouped along the mentioned phases of the food production process: agriculture, logistics, production, and consumption. For retailing, we focus on the logistics part. Hence, we did not explicitly discuss retailing specifics. Precision and smart agriculture are mainly concerned with the operational (and scientific) aspects of crop and livestock production as well as farmland husbandry and management. IIoT and Internet of Food approaches concentrate on supporting the production of food.
The consumer-centering research domains, Smart Health and Food Computing, target the optimization of the food consumption behavior. The logistics aspects of food supply links the different phases of the process. Food Authentication spans the whole process chain as it provides a continual monitoring of food; however, it is limited to the activity of monitoring the process to guarantee the authenticity of the ingredients and products. Accordingly, those concepts provide customized mechanisms for specific tasks; however, they are not generically applicable or reusable in several phases of the food production process. Second, the presented research streams rely on advances in IoT (mainly on sensors for data collection) and AI (mostly autonomous robotics and ML). However, researchers mostly try to integrate or customize existing technology instead of developing new methodologies optimized for the requirements specific to food production. Furthermore, often the suggested technology is customized to very specific purposes instead of providing more generic and flexible frameworks that can be used in several phases of the entire food production process with only minor adjustments. Third, some research streams are related. Smart agriculture and Precision Agriculture both address the agricultural process part and can be integrated to maximize their benefits. The Internet of Food research stream overlaps with food supply as it addresses the monitoring of food. Further, as monitoring of food is an inevitable element for the Food Authentication, Internet of Food is also related to Food Authentication and food safety. Lastly, Food Computing and Smart Health overlap in their purpose as well as some methods, for example, data extraction from pictures captured with smartphones. Consequently, we propose the development of generic approaches relying on IoT and AI that can support various process steps. This seems especially beneficial for data analytics procedures to analyze sensor data or forecast future system states, as those implement generic ML mechanisms. In the next section, we present how Food Informatics could step into the breach by means of proposing a new definition, which comprises our notion of the term. A Revised Definition of Food Informatics A particular research direction from the food-related literature that sets the incorporation of concepts from computer science as an enabling technology in the spotlight is summarized under the notion of Food Informatics. As shown in Figure 2, Food Informatics can be vaguely defined by integrating the different perspectives and research streams as delineated above. The authors of [4] understand and motivate Food Informatics as a mainly data-driven perspective. This includes the development of tools and technologies to enable the application of ontologies for sharing knowledge specific to the food production process [5][6][7]. Similarly, according to some authors [36,37], Food Informatics deals with collecting information and documenting health and medicine related information. On the contrary, the following definition [38] also includes the reaction on the analysis of the collected information while limiting the application to the end users: "Food informatics is a specific eHealth area for the prevention and management of overweight and obesity." Lastly, Martinez-Mayorga and Medina-Franco [39] relate chemoinformatics-the use of computers to collect and manipulate chemical information-to Food Informatics. 
They define Food Informatics as the application of chemical information to food chemistry. Martinez-Mayorga et al. [40] present an overview of databases and software for chemoinformatics. The large diversity of definitions demonstrates that the meaning of the term "Food Informatics" has not yet converged to a consensus. Still, all definitions at least focus on the data collection and use of the data related to food. However, while some works focus on the food production [4,5,39], others highlight the importance of integrating consumers [36,38]. This shows a large diversification and spans almost the whole process of food production. Additionally, the application of the collected information differs from providing ontologies [4,5], integrating technology for data collection [5], the use of informatics to analyze the collected data and reacting accordingly [36,38], or even integrating other nature science disciplines for information retrieval [39]. Summarizing, no currently available definition for Food Informatics covers all relevant aspects. The existing definitions target the phases of food production and data management as well as Smart Health. As the production of food is an interplay of many different processes in agriculture, production systems, supply chain management, and Smart Health with obvious interdependencies, we propose to also include the data/information acquisition from the very beginning; hence, during crop and livestock production (smart agriculture), and to also take information collection for logistics and transportation into consideration. We deem a span over the entire process important, as issues in one process step might impact other process steps. For instance, insufficient handling of food during the transportation can negatively impact the food quality for the customers. Accordingly, a holistic information perspective is important. Various technologies can support the collection of such information, especially IoT technology. Furthermore, the analysis of the collected data can highly benefit from (Deep) ML and data analytics techniques. Approaches from the research domains concerned with adaptive systems, for example, self-adaptive systems [3], self-aware computing systems [41], or Organic Computing [42], can support the implementation of mechanisms that allow for adequate reactions according to the analyzed information. A robust self-reconfiguration to react to unexpected events, such as machine defects in the food production facilities, constitutes an example for that. However, due to the hygienic, taste-related, or legal constraints, the area of food production has many domain-specific requirements that must be satisfied. Hence, we propose the customization of computational approaches optimized for the specifics of the food domain. This is exactly what, from our point of view, should be the central task of Food Informatics. To reflect all considerations from above, we therefore suggest a new definition: Food Informatics is the collection, preparation, analysis and smart use of data from agriculture, the food supply chain, food processing, retail, and smart (consumer) health for knowledge extraction to conduct an intelligent analysis and reveal optimizations to be applied to food production, food consumption, for food security, and the end of life of food products. This new definition stresses the relevance for integrating computer systems and ICT into the food production process. 
It is related to the other concepts presented in Section 2, as those concepts can be seen as specialized subfields of Food Informatics. The definition covers all aspects of the food production process and can also include relevant aspects from a circular economy perspective. It very much benefits from recent advances in the field of artificial intelligence, as those contributions support the intelligent reasoning, that is, the analysis of current and forecasted system states and situations to optimize the food production processes through adaptations and adjustments. The intelligent and purposeful application of informatics opens a variety of use cases concerning food production and consumption. This can also support the transformation from linear supply chains to a circular economy, as the digitization of information supports the analysis of data and the optimization of side streams and the end of life of products, and hence supports the creation of a feedback loop, that is, a circular loop. The next section presents such use cases.
Food Informatics in Practice: Today and Tomorrow
As discussed in Section 3, we define Food Informatics as the purposeful application of methods from different areas of computer science to the food production process. This is a rather technology-oriented and also holistic view. However, this is what was intended by us: we claim that Food Informatics provides the underlying technological basement, that is, representing the digitalization of the food industry, and its specific facets can be seen in many different manifestations of scientific concepts (see Section 2) that address specific concerns in the food supply chain. As ICT always includes a socio-technological perspective, this section presents several case studies that show how Food Informatics can support all the consecutive phases of the food supply and how stakeholders interact, as well as how Food Informatics is delineated from but also complements the other research streams presented in Section 2. The case studies are ordered "from the field to the customer", that is, in the chronological order of the production steps. Figure 3 provides an overview of these use cases and integrates them along the food production chain. In the following, we explain each case study in detail, describe how Food Informatics can contribute to the use cases, and discuss how it is related to the research streams presented in Section 2.
Autonomous Robotics in Precision Agriculture
As we already defined in Section 2, Precision Agriculture is concerned with handling the spatial and temporal variability inherent in many facets of agricultural processes. For instance, autonomous land machines or robots are utilized to monitor soil quality via the attached soil sampling equipment (sensors) and precisely apply a site-specific amount of fertilizers to compensate for nutrient deficiency. This methodology is called Variable Rate Nutrient Application (VRNA). Here, AI methodology can be applied to infer so-called prescription maps with the most effective and cost-efficient soil-sampling schemes, as presented by Israeli et al. [43]. Needless to say, cost-efficiency plays a central role when creating such field mappings to predict crop yield or make use of VRNA. According to Boubin et al. [44], computation costs for frequent yield mappings might consume a large fraction of the profits obtained by the farmers for crop cultivation.
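As a simple illustration of the data processing behind such prescription maps, the following Python sketch derives a site-specific fertilizer rate from a gridded soil-nutrient measurement. The grid values, target level and efficiency factor are invented for illustration and do not stem from the cited works.

```python
import numpy as np

# Hypothetical soil nitrogen measurements [kg N/ha] on a coarse 4 x 5 field grid
soil_n = np.array([
    [38, 42, 55, 61, 47],
    [35, 40, 52, 58, 44],
    [30, 36, 49, 54, 41],
    [28, 33, 45, 50, 39],
])

target_n = 60.0        # assumed agronomic target level [kg N/ha]
use_efficiency = 0.7   # assumed fraction of applied fertilizer that becomes plant-available

# Site-specific application rate: only the deficit is compensated, never a negative amount
deficit = np.clip(target_n - soil_n, 0.0, None)
application_rate = deficit / use_efficiency   # [kg N/ha] per grid cell

print(np.round(application_rate, 1))
```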
Given these computation costs, fully autonomous aerial systems (FAAS), that is, drones not operated by human pilots, are therefore deemed more cost-efficient. FAAS, however, demand a non-negligible amount of computing resources in order to leverage powerful vision capabilities and AI technology. This is where swarms of drones enter the field, together with Edge to Cloud-based Computing infrastructures [44]. As a collective of FAAS, tasks such as achieving a complete field coverage can be distributed among the swarm. For instance, within the current research project called SAGA, fully autonomous drones operate on different levels of altitude to partition the monitored field into sectors and instruct lower flying drones to inspect the crop sectors for weed or plant diseases [45,46]. The utilization of ensembles of self-integrating heterogeneous autonomous/robotic systems, where FAAS collaborate with mobile ground robots equipped with sensors and actuators, for example, for precise weed treatment or fertilizer application, bears great potential for modern Precision Agriculture, but also presents technological challenges that need to be overcome [47]. In the context of Food Informatics, as depicted in Section 3, it becomes apparent that access to Food IoT services hosted in the Cloud constitutes a key aspect. As a result, Business Intelligence or other data analytics applications can be leveraged. This leads to potential Food Informatics use cases such as:
1. Demand-based supply from the input industry (fertilizers, herbicides, pesticides) in line with current field conditions (soil nutrients, plant health) and environmental factors (droughts, long winters);
2. Crop condition-aware and treatment-specific adaptive pricing models for wholesale and, in turn, final retail;
3. Exact site-specific crop/livestock treatment information (using GPS or NFC technology) to allow for food traceability "from field to fork".
Furthermore, the deployed swarm robots or autonomous land machines can be equipped/retrofitted with special-purpose sensors to continually monitor their system-health status. Using the acquired data, predictive services can adequately plan maintenance works and consequently reduce downtimes and, therefore, possible yield losses or food waste.
AI/ML-Supported Smart Agriculture
The rise of AI technology and especially deep learning solutions-mainly enabled by the increasing amount of available big data and continually progressing advances in high-performance computation capabilities for their processing [11]-offers various potentials for the application of ML to agriculture. Recent surveys on the use of (Deep) ML applications for smart agriculture can be found in the literature (e.g., [48,49]). Wahby et al. [50] present an intriguing example of ML applied in a smart gardening scenario, which appears seamlessly adoptable to crop plant growth in the agricultural context. They train an ML model based on recurrent LSTM networks which predicts the underlying plant growth dynamics, that is, the stiffening and motion behavior of a bean plant as a response to controllable light stimuli. This model is subsequently used to evolve a controller for an entire bio-hybrid setup, which allows the modification of the plant's growing behavior by exploiting the phototropism property. Such sensor-actor (robotic) systems will attract more attention in the future and will prove crucial for robust indoor cultivation of crops in urban areas (urban/indoor farming).
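To give a flavour of what such an LSTM-based growth model could look like, the following minimal Keras sketch maps a short history of light-stimulus readings to a predicted growth response. The data are synthetic and the architecture is a deliberately small toy; it is not the model of Wahby et al. [50].

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)

# Synthetic training data: 24 hourly light-intensity readings -> growth/bending response
X = rng.random((200, 24, 1)).astype("float32")
y = (0.8 * X.mean(axis=1) + 0.1).astype("float32")   # invented relationship for the toy example

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(16, input_shape=(24, 1)),   # recurrent layer summarising the stimulus history
    tf.keras.layers.Dense(1),                        # predicted growth response
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

print(model.predict(X[:3], verbose=0))
```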
Further, applications of Organic Computing [42] target livestock management [51] and autonomous off-highway machines [52]. Since AI and ML are two of the most investigated subfields of computer science these days, they clearly also play a central role in smart agriculture and, thus, in Food Informatics. Scenarios are imaginable in which urban greenhouses, equipped with self-adaptive bio-hybrid systems (as delineated above), support sustainable and robust crop cultivation regardless of the season and the current weather conditions in order to ensure food security. Connected to Cloud and IoT services, demand and weather forecasts can be incorporated to move towards intelligent food production systems that are more cost-effective and at the same time minimize food waste while still satisfying current needs. This would allow, for example, for site-specific, on-demand production of crops, which bears the potential of reducing logistics costs and pollution.

Internet of Things and Blockchain-Supported Food Supply

The food supply chain integrates all process steps and supports a continuous tracking of the food throughout the production process. Hence, many parties work together, and such cooperation requires reliable data exchange. However, a central shared data repository constitutes a single point of failure as well as a potential performance bottleneck. Further, the diversity of actors raises the question of where such a central data repository should be established. Accordingly, distributed data management solutions might be beneficial, as they reduce data duplication and increase the robustness of data access. Carrefour is among the first companies in the industry to rely on blockchain technology for food supply chain data management. However, so far the roll-out of this technology is limited and mainly serves as an experimental marketing use case for a specific product. Several authors (e.g., [30,31]) propose to integrate the blockchain for traceability purposes, as the complete documentation of the origin of ingredients and food is highly important and often a legal obligation. Kamilaris et al. [53] provide an overview of the use of blockchains in the agri-food supply chain. A key task in the food supply chain is logistics. In contrast to the logistics of ordinary products, food entails several requirements due to its perishability. These include cooling, hygienic constraints, and avoiding pressure on the surface of the food. RFID and NFC technology can support the traceability of the items [35]. IoT technology, mainly intelligent sensors, can improve the monitoring of conditions during the transportation of goods [29]. Further, ML-supported analysis of the data can help to optimize the process, for example, by forecasting the arrival of items at the production facility and, thus, reducing delays in subsequent processing steps. Food Informatics can contribute in several ways. One is the definition of common data description and knowledge representation formats, for example, in the form of ontologies [5][6][7]. Further, it can support data exchange with generic services to store and access data in the Cloud or on the blockchain. Additional services can offer generic interfaces to store data sensed by IoT devices in the shared data storage, or generic tools for ML-supported data analytics. Such services will further contribute to various activities in the food supply chain.
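To illustrate why append-only, hash-linked records are attractive for traceability "from field to fork", the toy sketch below chains a few supply-chain events and verifies their integrity. It is a didactic simplification, not the design used by Carrefour or by the systems surveyed in [30,31,53]; all event fields are invented for the example.

```python
import hashlib
import json
import time

def add_block(chain, payload):
    """Append a traceability event whose hash covers the previous block."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"payload": payload, "prev_hash": prev_hash, "ts": time.time()}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    chain.append(block)
    return block

def verify(chain):
    """Recompute hashes and links; any tampered record breaks the chain."""
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {k: v for k, v in block.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if block["prev_hash"] != expected_prev or block["hash"] != recomputed:
            return False
    return True

chain = []
add_block(chain, {"event": "harvest", "lot": "L-0421", "farm": "A"})
add_block(chain, {"event": "cold_transport", "lot": "L-0421", "temp_c": 3.8})
add_block(chain, {"event": "processing", "lot": "L-0421", "plant": "P7"})
print("chain valid:", verify(chain))
```

Because each block's hash covers its predecessor, altering any stored event invalidates all subsequent records, which is the property that makes such structures useful for documenting the origin of ingredients and food.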
Item-Focused Data Collection in Food Production

Industry 4.0 and IIoT approaches promise flexible production by means of collecting and analyzing data. One key aspect, for instance, is the shift in thinking that the product itself, rather than the processing machines, should determine its production steps. Therefore, Industry 4.0 and IIoT approaches integrate intelligent data analytics. So far, the collection of the required data mainly focuses on the state of the machines or on the quality of the intermediate or final products with respect to pre-defined quality ranges. However, for a detailed analysis of product quality issues, machine data alone might not be sufficient to identify production issues; this also requires the collection of product-related data. Maaß, Pier and Moser [54] describe the design of a smart potato. Using IoT technology and sensors, a dummy potato can deliver information from the harvesting process, for example, the pressure of the harvesting machine on the potatoes. In several studies, the authors captured the effects of different acceleration patterns on the skin of a potato. Using these data, they trained deep learning algorithms to automatically analyze whether the pressure of a harvesting machine can damage a potato. Such an approach might plausibly be transferred to food production: using IoT dummy food items throughout production in order to collect data from the product's viewpoint can complement the purely machine-centered data. With this food-item-related data perspective, quality issues such as excessive pressure exerted on the ingredients can be straightforwardly identified. Again, Food Informatics can contribute with generic IoT-based data collection and ML-driven data analytics services.

An Adaptive, Flexible Food Production

One of the main objectives of Industry 4.0 and the IIoT is to provide a flexible production that supports the individualization of products [15,55,56]. Examples are cars, furniture (such as tables or cabinets), or personalized books. Consequently, a targeted lot size of 1 requires a flexible product design as well as an adaptive production process. A recent study in the German food industry [57] identified that two thirds of the companies pursue a lot size of 1 by 2030. Hence, it seems beneficial to integrate mechanisms known from the areas of self-adaptive systems [3], self-aware computing systems [41], or Organic Computing [42] to support a flexible, robust and adaptive food production. Further, such a robust, adaptive production process is able to tolerate fluctuations in the quality and size of the ingredients. Food Informatics can provide a powerful framework for supporting the adaptivity of intelligent production systems customized to the specifics of the food industry. Furthermore, it can support the integration of emerging technologies that can foster the individualization of food items, such as additive manufacturing via 3D printers [58].

Predictive Maintenance in Food Production

Predictive maintenance is based on the idea that certain characteristics of machinery can be monitored and the gathered data can be used to derive an estimate of the remaining useful lifetime of the equipment [59]. This can help to predict potential machine defects in advance and to reduce or even eliminate delays in the production process resulting from machine defects and downtimes. The underlying problem here is the detection of anomalies in the machine data [60].
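A minimal sketch of this anomaly-detection step is shown below using scikit-learn's IsolationForest on synthetic machine readings; the sensor channels, value ranges, and contamination rate are illustrative assumptions rather than a reference to any of the cited predictive-maintenance systems.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic healthy operation: vibration (mm/s) and bearing temperature (deg C).
normal = np.column_stack([
    rng.normal(2.0, 0.3, 500),
    rng.normal(55.0, 2.0, 500),
])
# A few drifting readings of the kind that could precede a defect.
suspect = np.column_stack([
    rng.normal(4.5, 0.5, 10),
    rng.normal(70.0, 3.0, 10),
])

# Fit on (assumed) healthy history; new readings are then scored against it.
model = IsolationForest(contamination=0.02, random_state=0).fit(normal)

# -1 marks an anomaly, 1 marks normal operation.
labels = model.predict(np.vstack([normal[:5], suspect[:5]]))
print(labels)
```

Flagged readings would then feed the forecasting of remaining useful lifetime and the scheduling of maintenance discussed in the following paragraph.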
Although it is clearly understood that such production delays imply monetary losses in the production of ordinary goods, the consequences of unexpected production downtimes are even worse for the production of food due to its perishability. Accordingly, the prediction and forecasting methodologies used demand customized algorithms and, thus, advanced development effort and domain knowledge. Recommendation systems (such as [61]) can aid the automatic identification of the forecasting algorithm that best fits the underlying data patterns. The selection of the most appropriate algorithm might then be combined with automatic algorithm configuration or hyperparameter tuning [62] to optimize the parameter setting of the algorithm to be utilized. Food Informatics should contribute here by conducting research in both areas, that is, by providing predictive maintenance automatically optimized to the specific requirements of food production, for example, by focusing on forecasts of machine defects with time horizons that account for the food's perishability and cooling requirements. Further, those recommendation systems can be re-used for other forecasts, for example, forecasting transportation times or the demand for specific foods.

Demand-Driven Food Production

For particular industries, it is common to start production only after an incoming order, for example, for cars. This reduces the likelihood of overproduction but, on the other hand, results in waiting times for customers. For food, such a policy bears additional benefits due to the perishability of the produced items. A sensible trade-off between production to stock and purely demand-driven production could be the integration of demand forecasting that identifies food consumption trends. Such forecasts additionally help to identify trends early: given the time required from planting the ingredients to delivering the final products, they make it possible to adjust the supply chain well in advance to accommodate these trends. Research streams such as Food Computing [20] and Smart Health [26] can contribute to the analysis of consumption behaviors and the forecasting of food demand thanks to their methods for information extraction. Embedding such demand forecasts into a feedback loop can optimize the various aspects from food production to consumption behavior and eventually reduce food waste. Coupled with adaptive food production systems as outlined above, this constitutes a promising way to achieve sustainable food chains. Food Informatics can contribute by offering knowledge-extraction services for food trends, for example, from social media and Smart Health technology. This can be combined with powerful data analytics and forecasting techniques, such as the forecasting recommendation systems proposed above for choosing the prediction algorithms.

Threats to Validity

In this paper, we aim at a systematic mapping [8] approach in order to offer a cross-section of the research landscape. Consequently, we do not follow a systematic approach to identify all relevant works for each area. On the one hand, this is hardly feasible: our aim is to provide an overview paper on the application of ICT in the agri-food industry, and this field is so broad that it is simply impossible to cover each facet in detail. On the other hand, this is not our intention; we want to focus on the application of the term "food informatics" and to position this concept in the research landscape.
We omit in this paper a detailed analysis of the identified approaches. Again, this is not our purpose; we rather want to span the scope of the research landscape. Accordingly, we do not analyze the approaches in detail; several other surveys with a narrower scope provide this information (see Section 6). Instead of providing a fully-fledged survey, we aim to present an overview with a broad coverage of topics. Still, it is possible that we have missed topics. Further, at some point we had to limit the granularity of topics: for example, a discussion of food safety could also cover related topics such as shelf-life prediction or HACCP, and food logistics might include topics such as cold chains and live animal transportation. Again, as we do not want to go into detail, we had to cut at some point and narrow our analysis to the covered topics.

Related Work

Several surveys and overview articles focus on one of the presented research areas. Min et al. [20] review approaches from information retrieval, image recognition, recommendation systems, and prediction for their applicability in Food Computing. Zhong et al. [63] discuss and compare systems and implementations for managing the food supply chain. Verdouw et al. [64] and Tzounis et al. [65] review systems and challenges for supporting agriculture with IoT. The authors of [12] emphasize the opportunities of integrating Big Data concepts for analyzing agricultural processes. Holden et al. [19] review approaches for the Internet of Food and discuss how those contribute to sustainability. However, none of the aforementioned reviews targets several aspects of the food production-to-consumption chain, which we deem essential in our perspective on Food Informatics. Other review articles focusing on IoT/IIoT present the application of those topics in the food industry. Al-Fuqaha et al. [66] present an overview of technologies and protocols for the IoT and discuss their applicability in an eating order use case. Similarly, Javed et al. [67] and Triantafyllou et al. [68] review recent IoT technology and describe its application in the context of smart agriculture. Xu et al. [15], Sisinni et al. [55] and Liao et al. [56] review approaches for the IIoT and explicitly describe how to adopt them for food production. Ben-Daya et al. [69] review supply chain management approaches and identify that many of them target the delivery process of the supply chain and food supply chains. Food production constitutes only one among several aspects in all of those overviews, but it is not treated as the central issue there. Further, those works focus on only one aspect of the food production process.

Conclusions

The production and consumption of food benefit greatly from the application of IoT and AI technology. In particular, this can reduce food waste by optimizing production according to customer demand. So far, various research streams focus on different aspects of the production process. However, they lack methods and approaches that can be applied across several steps along the food production process. Further, they often use generic IoT technology and data analytics methods rather than devising methods that are customized to the food production sector. Consequently, we propose to extend the often data-driven perspective on Food Informatics to a generic ICT-fueled perspective, which comprises the application of ICT, mainly IoT and AI/ML, in order to optimize the various aspects and processes concerning food production, consumption and security.
This paper provides a motivation and a revised definition for Food Informatics and corroborates our perspective with potential use cases. As next steps, we will provide a comprehensive framework based on the revised definition and the envisaged applications. Furthermore, we will present how to adopt existing IoT- and AI-based procedures and tools, and subsequently demonstrate their applicability in use cases of digital farming (i.e., precision and smart agriculture) and the processing of food in the context of Industry 4.0. Additionally, in this paper we focus on the traditional economy model. For future work, we plan to further elaborate the application of Food Informatics to support the transition towards a circular economy and also to extend the perspective towards the bio-based industry beyond food products. Author Contributions: Conceptualization, C.K. and A.S.; methodology, C.K.; validation, C.K. and A.S.; investigation, C.K. and A.S.; data curation, C.K. and A.S.; writing-original draft preparation, C.K. and A.S.; writing-review and editing, C.K. and A.S. All authors have read and agreed to the published version of the manuscript.
v3-fos-license
2024-01-19T05:06:00.660Z
2024-01-16T00:00:00.000
267029455
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41598-023-50770-5.pdf", "pdf_hash": "940d187c5637fb59871d6c4827d97057ee2e1bd4", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44067", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "sha1": "940d187c5637fb59871d6c4827d97057ee2e1bd4", "year": 2024 }
pes2o/s2orc
Peripheral oxytocin levels are linked to hypothalamic gray matter volume in autistic adults: a cross-sectional secondary data analysis Oxytocin (OXT) is known to modulate social behavior and cognition and has been discussed as pathophysiological and therapeutic factor for autism spectrum disorder (ASD). An accumulating body of evidence indicates the hypothalamus to be of particular importance with regard to the underlying neurobiology. Here we used a region of interest voxel-based morphometry (VBM) approach to investigate hypothalamic gray matter volume (GMV) in autistic (n = 29, age 36.03 ± 11.0) and non-autistic adults (n = 27, age 30.96 ± 11.2). Peripheral plasma OXT levels and the autism spectrum quotient (AQ) were used for correlation analyses. Results showed no differences in hypothalamic GMV in autistic compared to non-autistic adults but suggested a differential association between hypothalamic GMV and OXT levels, such that a positive association was found for the ASD group. In addition, hypothalamic GMV showed a positive association with autistic traits in the ASD group. Bearing in mind the limitations such as a relatively small sample size, a wide age range and a high rate of psychopharmacological treatment in the ASD sample, these results provide new preliminary evidence for a potentially important role of the HTH in ASD and its relationship to the OXT system, but also point towards the importance of interindividual differences. a key factor leading to unsuccessful outcomes of clinical trials for ASD pharmacotherapy 16 .Recent approaches to comprehend the neurobiology of ASD and characterize individuals with ASD more effectively have employed structural neuroimaging studies, revealing diverse structural brain differences among autistic individuals compared to controls.While it has become increasingly evident that there is no single defining neuroanatomical feature of ASD, meta-analyses and reviews have suggested that there may be neuroanatomical alterations that are, at least in parts, characteristic for ASD 5,17,18 .Consequently, there is hope that regional patterns of neuroanatomical differences could serve as diagnostic, prognostic, and treatment-determining markers 18 .With regard to a potential brain structural marker of the OXT system, the hypothalamus (HTH) is of particular interest.In the central nervous system, OXT and the related hormone vasopressin (VP) are synthesized in the HTH in the magnocellular and parvocellular neurons of the supraoptic nucleus (SON) and paraventricular nucleus (PVN).Apart from axonal transport via the hypothalamo-hypophysial tract to the posterior pituitary lobe, the nuclei also project to a variety of brain regions such as the hippocampus, the amygdala and the nucleus accumbens 19 .In addition, distribution of OXT also occurs in the form of neurosecretion directly from dendrites and somata and likely also by secretion into the cerebrospinal fluid of the adjacent 3rd ventricle [20][21][22] .The OXT receptor has been reported to be expressed throughout the brain, particularly in the HTH and in structures of the limbic system, as well as in various cortical areas associated with social and emotional processes [23][24][25][26] .Thus, the anatomy of the OXT system is well compatible with its ascribed function to orchestrate socioemotional processes.Lesions of the HTH have been associated with a range of behavioral and emotional symptoms such as aggressiveness, depression, and social withdrawal 27 .For example, craniopharyngeoma patients, who 
frequently suffer from a lesion of the HTH caused either directly by the tumor or indirectly by therapeutic resection of the tumor, were found to have a high prevalence of socio-behavioral impairments 28,29 .In accordance with the hypothesis that disruptions in OXT regulation may contribute to these symptoms in craniopharyngeoma patients, there have been reports of reduced OXT levels correlating with the extent of hypothalamic damage 30 along with significantly heightened levels of autistic traits and increased difficulties in rapid emotion recognition compared to controls 31 .This raises the question whether the socioemotional characteristics found in ASD could be similarly related to OXT and structural alterations in the HTH.Indeed, three studies which have reported structural findings of the HTH in ASD, have concordantly reported a reduced gray matter volume (GMV) or concentration in autistic children, adolescents and young adults compared to neurotypical control subjects in the corresponding age range [32][33][34] .In line with structural HTH alterations in autistic individuals, studies in healthy carriers of OXTR variants associated with an increased likelihood of ASD have found a significant decrease in GMV in the HTH in healthy carriers of the rs53576 and rs2254298A alleles 35,36 , for review see 37 . Based on these previous findings, the current study sought to investigate the morphological characteristics of the HTH and its relationship to OXT, as well as its relationship to autistic traits in a sample of autistic adults without intellectual impairment and matched neurotypical controls.As a secondary analysis of cross-sectional data obtained from a previous study that investigated OXT and cortisol levels in both autistic and non-autistic adults 38 , our objectives in this study were threefold: Firstly, we aimed to assess whether there were differences in hypothalamic volume between autistic and neurotypical adult individuals.Secondly, we sought to determine whether potential distinctions in the OXT system in autism were mirrored in the structure of the HTH and whether any variations in hypothalamic volume could be attributed to OXT.Thirdly, our goal was to explore potential correlations between hypothalamic volume and autistic traits.To accomplish these objectives, we conducted a region-of-interest (ROI) analysis using voxel-based morphometry (VBM) within the hypothalamic region.We employed three distinct models: the first to compare GMV between autistic and non-autistic adults, the second to investigate potential differences in the relationships between hypothalamic GMV and peripheral OXT levels, and the third to examine the associations between hypothalamic GMV and Autism Spectrum Quotient (AQ) scores as a measure of autistic traits. 
Participants Data were assessed as part of a previously published study examining OXT and cortisol levels in autistic and nonautistic adults 38 .Recruitment sources included the "Outpatient and Day Clinic for Disorders of Social Interaction" at the Max Planck Institute of Psychiatry for autistic individuals and an online study application system on the Institute's website, as well as public advertisements for participants in the CG (Comparison Group).The study protocol followed the guidelines of the Declaration of Helsinki and was approved by the ethics committee of the Ludwig-Maximilians-University of Munich.All participants gave written informed consent before participating in the study and received fixed monetary compensation at the end of the experiment.General exclusion criteria were severe somatic illness, a current or previous schizophrenia diagnosis, breastfeeding, pregnancy, hormonal contraception, and a contraindication to MRI.Of the sixty-four participants included in the primary study 38 , fifty-nine underwent structural MRI.After exclusion of three scans following a quality check protocol (see Supplementary information), the final dataset included brain scans from fifty-six adults aged 18-60 years: twentynine autistic adults in the ASD group (17 men; mean age = 36.03± 11.0 years) and twenty-seven non-autistic adults in the CG (9 men; mean age = 30.96± 11.2 years).Demographic data are reported in the results section. Diagnostic procedure for ASD Autistic individuals met DSM-5 criteria for ASD and were diagnosed in accordance with current guidelines 39 .This included a diagnostic interview by a psychologist or psychiatrist with experience in diagnosing ASD that focused on DSM-5 criteria for ASD across the lifespan and, when possible and with the patient's consent, included anamnestic information from third parties (e.g., parents, siblings).In addition, autistic participants Questionnaires For the quantification of autistic traits, the Autism Spectrum Quotient AQ 41 was used.The AQ is a well-established, self-report questionnaire that provides a scaled measure of the characteristics associated with ASD on a scale of 0 to 50.The autistic traits themselves are regarded as a dimensional construct, which reflects both the autistic and the non-autistic population 42,43 .AQ scores were available for n = 55 participants (ASD: n = 28; CG: n = 27).Assessment further included completion of the Edinburgh handedness inventory 44 , a test of verbal IQ (the Wortschatztest, WST 45 ) as well as a basic questionnaire to assess medication, body-mass-index (BMI) and a dichotomous assessment of lifestyle factors such as regular exercise, regular alcohol or nicotine consumption (each defined as 'yes' for a reported frequency of > 1/week). 
Oxytocin quantification While the primary study 38 assessed plasma OXT levels before and after a physical exercise, we here focused exclusively on OXT levels in plasma at rest (i.e., baseline).For a more detailed description of sample acquisition and OXT extraction please refer to the relevant publication 38 and in the supplementary information.In brief, the participants were asked to abstain from food (> 12h), water (> 1h) and sports the day before the study.If autistic participants took any regular psychiatric medication, they were asked not to take the medication in the morning before the OXT measurements, but after the experiments.After arriving at the outpatient unit of the MPIP at 8:30 am, blood samples were obtained at rest.OXT concentrations were quantified in an external laboratory (RIAgnosis, Sinzing, Germany) using radioimmunoassay (RIA) as previously described 46 .Data on OXT levels were available for n = 53 participants (ASD: n = 26; CG: n = 27). Image pre-processing and voxel-based morphometry (VBM) All images were processed and analysed using the CAT12 toolbox (C.Gaser, Structural Brain Mapping Group, Jena University Hospital, Jena, Germany; http:// dbm.neuro.uni-jena.de/ cat/) implemented in SPM12 (Wellcome Trust Centre for Neuroimaging).Pre-processing was carried out using the standard pipeline and pre-set parameters as suggested in the CAT12 manual (http:// www.neuro.uni-jena.de/ cat12/ CAT12-Manual.pdf) and involved bias field inhomogeneity correction and denoising, using the Spatially Adaptive Non-Local Means (SANLM) Filter 47 , segmentation into gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) in accordance with the unified registration approach 48 and spatial normalization and affine registration to MNI space using a template for high-dimensional DARTEL registration derived from 555 healthy subjects of the IXIdatabase (http:// brain-devel opment.org/) with a final voxel size of 1.5 × 1.5 × 1.5 mm.In version 12.7 of CAT12 used here, this process is extended by refined voxel-based processing using adaptive maximum a posteriori (AMAP) estimation, a Markov Random Field approach (MRF) 49 and accounting for partial volume effects 50 .Finally, segmentations were modulated by multiplication with the Jacobian determinant derived from spatial registrations.This step preserves the original volumes within a voxel, which are altered during registration and is recommended by default 51 .For a more detailed description of the individual steps in Cat12 we refer to the publishers website.Prior to smoothing, images were checked for correct pre-processing in accordance with the quality check protocol suggested in the Cat12 manual (see supplementary information).Following suggestions of applying comparably small kernels for analyses in the HTH due to its small size and size of expected effects 33,52 images were smoothed with a Gaussian kernel of 4 mm (FWHM).An absolute gray matter threshold masking of 0.1 was applied to account for a possible misclassification of tissues.For statistical analyses the smoothed and modulated GM images were used.The HTH mask for the region of interest (ROI)-based VBM analyses was derived from the subcortical brain nuclei atlas (https:// neuro vault.org/ colle ctions/ 3145/) 53 .The mask was resliced to fit the template space in SPM12 and encompassed 1085 voxels. 
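The study's pipeline relies on CAT12/SPM12. Purely as an illustration of the comparable steps of smoothing modulated gray-matter maps and averaging them within a hypothalamus mask, the sketch below uses recent nilearn on synthetic images; the data, image dimensions, mask shape, and the choice of nilearn are assumptions for demonstration and do not reproduce the actual analysis.

```python
import numpy as np
import nibabel as nib
from nilearn import image
from nilearn.maskers import NiftiMasker

rng = np.random.default_rng(0)
affine = np.eye(4) * 1.5   # 1.5 mm isotropic voxels, as after spatial normalization
affine[3, 3] = 1.0

# Synthetic stand-ins for modulated, normalized GM maps of two subjects.
gm_imgs = [nib.Nifti1Image(rng.random((40, 48, 40)).astype("float32"), affine)
           for _ in range(2)]

# Synthetic binary box standing in for the hypothalamus ROI.
mask_data = np.zeros((40, 48, 40), dtype=np.int8)
mask_data[18:23, 20:28, 15:22] = 1
hth_mask = nib.Nifti1Image(mask_data, affine)

# 4 mm FWHM smoothing, matching the comparatively small kernel chosen for the HTH.
smoothed = [image.smooth_img(img, fwhm=4) for img in gm_imgs]

# Mean GM value inside the ROI per subject (rough analogue of mean hypothalamic GMV).
masker = NiftiMasker(mask_img=hth_mask)
voxel_values = masker.fit_transform(smoothed)   # shape: (n_subjects, n_voxels)
print(np.round(voxel_values.mean(axis=1), 4))
```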
Statistics

Data processing and statistical analyses of demographic data, questionnaire data, OXT levels and extracted estimates from MRI images (described in the section below) were performed in SPSS (IBM Corp. Released 2020. IBM SPSS Statistics for Windows, Version 27.0. Armonk, NY: IBM Corp). Testing the demographic data for normal distribution using a Shapiro-Wilk test revealed a non-normal distribution for age and the Edinburgh handedness score. Group comparison for these variables was, therefore, performed with a Mann-Whitney U test, whereas Chi² tests were used for categorical variables and t-tests for continuous variables. To confirm that the present subsample of participants showed OXT characteristics similar to the sample included in the primary study 38 , univariate analyses were performed to test for effects of diagnostic group on OXT levels, including age and sex as covariates. To explore possible correlations of OXT with ASD symptomatology, a Pearson correlation between plasma OXT levels and AQ scores was computed.

Statistical image analysis

VBM analyses were performed using the general linear model (GLM) implemented in the CAT12/SPM12 statistical module. Potential variance due to age, sex and total intracranial volume (TIV) was corrected for in all analyses. Clusters were regarded as significant when falling below an initial uncorrected voxel threshold of 0.001 and an FWE-corrected cluster threshold of 0.05. Since our goal was to focus on the HTH, we computed three VBM analyses inside the HTH-ROI: We first tested for group differences, i.e., increases and decreases of hypothalamic GMV, using a t-statistic. Second, to test for group differences in associations of hypothalamic GMV and OXT, i.e., interaction effects, OXT concentrations were included in a full-factorial model and tested for significant positive and negative contrasts, i.e.
(GMV[ASD] × OXT > GMV[CG] × OXT) and vice versa. Third, to test whether hypothalamic GMV was associated with autistic traits in autistic and non-autistic individuals, a regression analysis on all subjects was performed with AQ scores as covariate of interest and tested for significant positive and negative associations. Since we did not find significant results in this analysis, we subsequently tested associations of GMV and AQ scores in both groups separately. To get a better impression of the nature of associations in significant clusters, we used Marsbar 54 to extract the mean contrast estimate in significant clusters (FWE p < 0.05 at cluster level) and to plot the distribution of contrast estimates across participants. Finally, to examine the reliability of significant results, we performed two additional analyses: First, to check whether significant results observed at the voxel level were consistent at the level of overall mean hypothalamic GMV, we extracted the unadjusted eigenvariate inside the HTH-ROI for each participant to re-run significant statistical models in SPSS. Second, to assess the regional specificity of VBM findings inside the ROI, we re-ran significant models in a whole-brain analysis. Following recommendations in the CAT12 manual, correction for age, sex and TIV was achieved by including these variables as nuisance parameters in the respective model designs and by subsequently checking for design orthogonality. Since in the second model (including OXT levels) the check for design orthogonality pointed towards a collinearity between TIV and OXT (cos(θ) = r = −0.35), we again adhered to the CAT12 manual's guidance and implemented TIV correction in this model using global scaling with TIV.

Ethics approval and consent to participate

All study participants provided written informed consent. Ethical approval was granted by the Ethics Committee of the Ludwig-Maximilians-University (LMU) Munich (Project number: 712-15). All procedures were performed in accordance with the Declaration of Helsinki. Participants could withdraw from the study at any time and were financially compensated for their time.

Demographic data

Demographic and clinical data are summarized in Table 1. There were no significant differences (all p > 0.05) between groups with regard to handedness, verbal IQ, BMI, or lifestyle factors. The ASD group had more males and slightly older participants compared to the comparison group; however, these differences did not reach statistical significance (p = 0.06 for both). Thirteen subjects in the ASD group took psychiatric medication regularly.

Autistic traits

Comparison of AQ scores between the ASD group (M = 35.07 ± 10.27) and the CG (M = 13.89 ± 5.47) showed significantly higher scores in the ASD group, t(53) = 9.5, p < 0.001 (Table 1). Scores were in line with the corresponding reference norms for autistic and non-autistic individuals 43 .

Correlation of peripheral OXT concentrations and autistic symptomatology

There was no significant correlation between OXT levels and AQ scores in the overall sample (r = −0.05, p = 0.70).

Group comparison of hypothalamic GMV

Voxel-wise group comparison of hypothalamic GMV showed no significant differences.
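Before turning to the association results, the following minimal sketch illustrates the form of the confirmatory group × OXT interaction test on extracted mean hypothalamic GMV described in the statistical analysis section above. The study itself used SPSS and CAT12/SPM12; Python/statsmodels and the simulated numbers below are placeholders for illustration only, not the study data or software.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 53  # participants with both MRI and OXT data

df = pd.DataFrame({
    "group": np.repeat(["ASD", "CG"], [26, 27]),
    "oxt":   rng.normal(2.0, 0.5, n),      # placeholder peripheral OXT levels
    "age":   rng.normal(33, 11, n),
    "sex":   rng.integers(0, 2, n),
    "tiv":   rng.normal(1500, 120, n),      # placeholder total intracranial volume
})
# Placeholder mean hypothalamic GMV with an opposite GMV-OXT slope per group.
slope = np.where(df["group"] == "ASD", 0.02, -0.02)
df["hth_gmv"] = 0.45 + slope * df["oxt"] + rng.normal(0, 0.02, n)

# Group x OXT interaction, adjusted for age, sex and TIV.
fit = smf.ols("hth_gmv ~ group * oxt + age + sex + tiv", data=df).fit()
print(fit.summary().tables[1])
```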
Correlation of hypothalamic GMV and peripheral OXT levels

Voxel-wise group comparison of associations between hypothalamic GMV and OXT revealed significant differences in associations for the contrast (GMV[ASD] × OXT > GMV[CG] × OXT) in a cluster inside the HTH-ROI at peak MNI coordinates [5, −9, −2] (FWE corr. p = 0.017, T = 3.85, Z = 3.57, k = 46). Plotting the extracted GMV estimates indicated that significance was due to a negative association of GMV and OXT in the CG as opposed to a positive association in the ASD group (Fig. 1). Similarly, repetition of the statistical model using the mean hypothalamic GMV was significant for the interaction term of group and OXT (F(1, 46) = 5.675, p = 0.021, η² = 0.110). An exploratory whole-brain analysis to assess the regional specificity of this finding using the same model revealed a larger cluster including the HTH and extending to the thalamus, with peak MNI coordinates at [5, −21, 9] (FWE corr. p = 0.005, T = 4.69, Z = 4.23, k = 1373) (Supplementary Fig. 1). No other region reached significance here.

Correlation of hypothalamic GMV and autistic traits

There was no significant correlation between GMV and AQ scores within the HTH across the groups. Group-specific correlation analysis revealed a significantly positively correlated cluster in the ASD group at peak MNI coordinates [−2, 2, 9] (FWE corr. p = 0.014, T = 4.74, Z = 3.92, k = 46) (Fig. 2), while there was no significant correlation in the CG. Likewise, repetition of the model using mean hypothalamic GMV showed a significant association in the ASD group (F(1, 23) = 12.78, p = 0.002, η² = 0.357), while there was no significant correlation in the CG group (F(1, 22) = 0.215, p = 0.648). To explore whether the cluster in the ASD group matched the cluster from the previous analysis including the OXT levels, the two clusters were plotted together on the structural mean image (Fig. 2, left). Visual comparison showed only a marginal overlap between the two clusters. An exploratory correlation of the GMV cluster estimates with plasma OXT levels was not significant. An exploratory whole-brain analysis revealed no significant findings in the hypothalamic region, but symmetrical clusters in both cerebellar hemispheres (Supplementary Fig. 2). No other region reached significance here.

Discussion

Guided by previous research that has indicated potential links between OXT and ASD as well as between the HTH and ASD, the current study employed a hypothesis-driven approach to examine structural characteristics of the HTH and to explore its possible link to peripheral OXT levels and autistic traits in autistic and non-autistic adults. Three main study aims were hereby addressed: First, we compared hypothalamic GMV between groups, but found no group-related differences. Second, we examined possible group differences in associations of hypothalamic GMV and peripheral OXT levels. Here, we found a positive association in the ASD group as opposed to a negative association in the comparison group. Third, we examined a possible association of hypothalamic GMV with autistic traits. When we examined autistic and non-autistic subjects together, we did not observe any association, but upon conducting a separate analysis of the two groups, we found a positive association in the ASD group.
Differences in hypothalamic GMV Beyond the well-established central role of the HTH in regulating socio-emotional processes through OXT synthesis and release, evidence from both animal and human studies suggests a link between structural alterations in the HTH and 'autistic' behaviour.For instance, a study utilizing MRI-based neuroanatomical phenotyping to investigate twenty-six distinct mouse models for ASD consistently identified the HTH as one of the brain regions displaying abnormalities 55 .In line with this, Cntnap2 mutant mice, another mouse model for ASD, have been reported to display a reduction in the quantity of OXT immunoreactive neurons within the PVN of the HTH.Interestingly, the acute administration of OXT was found to ameliorate the social deficits observed in this mouse model 56 .Animal lesion studies in rats, cats and marmosets targeting the HTH have reported symptoms including antisocial and aggressive behaviors, as well as the suppression of sexual behavior [57][58][59] .Similarly, lesions of the HTH in humans, as seen in craniopharyngioma patients, have been linked to socio-behavioral and emotional impairments 27,28 .Despite this evidence indicating a close link between structural abnormalities in the HTH and some of the core symptoms of ASD, only a limited number of studies have explored its structure in autistic individuals.However, three studies have consistently reported decreased hypothalamic GMV observed in autistic children 34 , adolescents 32 and young male adults 33 .Unlike these earlier studies we did not detect any group differences in our sample of adults.While our study is the first, to the best of our knowledge, to explicitly report no volumetric differences in the HTH between individuals with and without autism, it is important to note that whole-brain VBM is a method that does not rely on a priori spatial hypotheses.As such, VBM studies in participants with ASD that have found no abnormalities in the HTH could be interpreted as non-significant results with regard to this structure.However, as previously pointed out by Schindler et al. 
(2012), the sensitivity of VBM to changes in structures as small as the HTH depends largely on the hypothesis-dependent parameter settings 60 . Hence, a lack of discussion of the HTH in previous studies does not automatically imply a lack of effect in this region. While the large age range (18-60 years) of adults tested in our study may provide a good overview of robust structural effects, it risks overlooking age-specific effects. Taking into account the earlier research mentioned, which indicates that younger individuals with ASD may exhibit smaller hypothalamic volumes compared to control subjects, the lack of differences observed in our study involving adults may indicate a potential age-related normalization in hypothalamic GMV. Conducting an age-related subgroup analysis in our study was unfeasible due to the resulting reduction in sample size. Further, preferably longitudinal, studies are needed to test the hypothesis of age-related differences in hypothalamic GMV. However, it is worth noting that the notion of intricate growth patterns in regional brain volumes in ASD, which may be particularly prominent during specific phases of neurodevelopment, is not a novel concept [61][62][63] . For instance, prior research has indicated a growth pattern characterized by volume reduction followed by a convergence toward normal volume in the striatum 64 . Conversely, a growth pattern marked by age-related volume decrease has been proposed for structures such as the amygdala [65][66][67] . The mechanisms underlying these structural changes, and whether they reflect a primary inherent growth pattern in ASD or possibly represent secondary compensatory adaptations, remain unclear 68 . Based on the findings presented here, we will discuss OXT as a potential factor underlying the apparent volumetric normalization of the HTH in autistic adults in the following section.

A link between the HTH and OXT in autism

The involvement of OXT in regulating socio-emotional behavior has been extensively demonstrated across a wide range of studies in humans and animals 69 . The notion that differences in the OXT system might contribute to the core symptoms of ASD was initially introduced over two decades ago 70 . Human research has since brought to light various potential variations within the OXT system associated with ASD, encompassing areas such as the processing of OXT peptides 71 , genetic variations within the oxytocin receptor gene (OXTR) 72 and the structural gene responsible for OXT (neurophysin-I) 73 , and epigenetic modifications 74,75 . Another frequently examined indicator of an altered OXT system in ASD is the concentration of peripheral OXT. Following the initial observation of reduced basal OXT levels in children with ASD 76 , numerous studies mainly in pediatric populations have yielded diverse and partially incongruent findings 77 . The data are particularly inconclusive for adults. While there are reports of lower 78 and higher levels 79 in autistic adults, our group recently added to the existing literature by reporting no differences in basal OXT levels 38 , which was also true for the subsample included here. In accordance with this finding and highlighting the importance of age in this context, two recent meta-analyses have concordantly reported basal concentrations of OXT to be lower in autistic children, while showing no significant differences in autistic adults when compared to controls 77,80 . This is suggestive for relevant developmental changes in the OXT system in ASD
and possibly for a normalization of OXT levels in adulthood. The importance of developmental effects in this regard has also recently been shown with regard to OXTR expression patterns 81 .In the present study, we have found indications that raise the possibility of age-related normalization in OXT levels potentially translating into normalization in hypothalamic GMV.This hypothesis finds support in the observation that, despite the absence of volumetric differences in the HTH between the groups, we found a positive association between peripheral OXT levels and hypothalamic GMV among autistic adults when compared to non-autistic adults.Additionally, the exploratory whole-brain analysis indicated that the cluster was statistically significant not only within the HTH but also extended to the thalamus, with no other regions showing statistical significance in this context.Although the latter observation warrants further validation in larger samples, it aligns with reports suggesting a potential involvement of the thalamus in OXT release in the HTH 82 .Partially aligning with our findings, prior research has indicated a positive relationship between VP and hypothalamic GMV.However, it's worth noting that this study did not include a comparison of this association with a control group 34 .The regulatory influence of OXT on neuronal plasticity and its involvement in both inhibitory and proliferative cellular processes in specific regions, including the HTH, has been firmly demonstrated in both animal and human studies [83][84][85][86][87][88] .Hence, it seems plausible that the structural properties of HTH are related to OXT levels.Taking into account earlier research on OXT levels and structural analyses of the HTH in ASD, our results suggest the possibility of a (compensatory) increase in OXT production after childhood, potentially aligning with an increase in GMV within the HTH.As a result, this mechanism could potentially contribute to the normalization of OXT levels and hypothalamic volume in adulthood.While a straightforward explanation might involve an increase in the volume or number of OXT-producing cells, the precise tissue characteristics responsible for the observed GM signal within the HTH in VBM studies, including our own, remain uncertain.Beyond cell volume, factors such as nuclear volume, local cell count, and the spatial arrangement of neurons, glia, blood vessels, and neuropil could potentially contribute to these alterations 89 .Remarkably, variations in all these morphological and cytoarchitectonic features have been documented in various brain regions in autistic individuals 90 .To this point, it remains unclear whether the here observed associations in the HTH are directly attributable to OXT or mirror other factors related to peripheral OXT concentrations.For example, it remains controversial whether and to what extent peripheral OXT measurements can effectively mirror the central regulation or dysregulation of the OXT system.In particular, a coordination of peripheral and central OXT levels at baseline conditions has been called into question 91 .The inconclusive picture regarding the informativeness of peripheral OXT levels has prompted researchers to investigate genetic variation in the human OXTR gene.In this context several single nucleotide polymorphisms (SNPs) such as rs53576 and rs2254298 have been linked to ASD 72 .Interestingly, Tost et al. 
reported an association of these variants with structural differences in the HTH in healthy carriers 35,36 .Furthermore, rs2254298, along with other OXTR variants with increased likelihood for ASD, have been reported to be closely linked to peripheral OXT levels 92 .This raises the question of whether changes in the HTH and OXT levels are modulated by common factors such as variations in OXTR.It also remains to be clarified to what extent other genetic variations of the OXT system such as the structural gene for OXT (oxytocin-neurophysin I) and CD38 (associated with OXT release) play a role 93 . A link between the HTH and autistic traits Behaviour, including social behaviour has been shown to affect structural properties of the brain 94,95 .The AQ measures autistic traits in both autistic and non-autistic individuals and has been shown to correlate with overall GM variations in autistic individuals 96 , as well as with multiple metrics of regional GM including volume, cortical thickness, surface area, gyrification and cortical thickness in autistic as well as non-autistic people [97][98][99] .However, none of these previous reports includes the HTH.As mentioned above, for reasons specific to the technique of investigation, the absence of reports in these studies on a structure as small as the HTH does not automatically imply an absence of effects in this region.Expanding upon these prior reports, our initial analysis involved examining correlations of hypothalamic GMV and autistic traits across autistic and non-autistic participants.This analysis did not yield statistically significant outcomes.Subsequent group-specific analysis, however, showed a positive correlation in the ASD group, whereas no such correlation was evident in the CG.This finding suggests a potential link between the HTH and autistic traits in ASD, while also hinting at individual differences in this regard.It is important to note that the interpretation of this finding is somewhat constrained by the fact that the subsequent exploratory whole-brain analysis did not yield significant results in the hypothalamic region.This raises questions about the regional specificity of this finding, highlighting the necessity for further in-depth investigations.As is generally the case in VBM association studies, a causal interpretation of the nature of this association (increase in GMV alongside heightened autistic traits) is only possible to a limited extent.Ecker et al. 
(2012) have previously emphasized the need for extensive longitudinal studies to differentiate neuroanatomical changes primarily associated with the disorder from those that might occur as secondary, possibly compensatory mechanisms 68 .Given the composition of our sample of adults with HFA, it is conceivable that the observations made here reflect the result of an atypical brain development rather than representing primary neuropathological characteristics of ASD.Assuming a secondary causation in these findings, the observed correlation could imply a compensatory growth of the HTH in response to increased autistic traits.Consistent with this hypothesis, one might speculate that there exists a intricate interplay between OXT release and behavior, with potential repercussions on observable brain structural changes in the HTH 100 .Building upon this idea, we tested whether the hypothalamic area associated with autistic traits (as indicated in model 3) might also show a link to OXT (as per model 2).However, subsequent exploratory analyses did not uncover any computational or visual correspondence between the two analyses.Furthermore, there was no significant correlation between peripheral OXT levels and AQ scores.The inconsistent findings in previous research in this regard underscore the limitations of a simplistic OXT deficit or excess model for ASD.For example some studies have shown no correlation between OXT levels and autistic traits 79,[101][102][103] while others have even reported on a negative correlation between higher OXT levels and social skills in the autistic population 76,[104][105][106] , indicating greater social impairment with elevated OXT levels in autistic individuals.This counterintuitive finding has been hypothesized to reflect broader abnormalities at multiple levels of the endocrine OXT system, including the OXT gene, disruptions in the processing of the OXT molecule, and OXTR abnormalities that result in compensatory but insufficient increases in OXT levels 104 .Based on this hypothesis, it seems plausible that the observed positive correlations within the ASD group between hypothalamic GMV and both OXT levels and AQ scores could be construed as a compensatory phenomenon.Once more, additional research is required to elucidate the intricate interrelationships, particularly concerning the variations in OXTR and their association with OXT levels in ASD. 
Implications for clinical practice and research Presently, no approved pharmaceutical treatments exist for the core symptoms of ASD, and reported findings regarding the efficacy of OXT are inconsistent in both child and adult participants 107 .The shortcomings of clinical trials in OXT pharmacotherapy can be attributed to various factors, including a limited understanding of the biological basis of ASD, the absence of clinically meaningful markers to identify homogeneous patient subgroups, and the consequent absence of targeted therapeutic options 16 .The results of this study point to a potentially important role of the HTH as a neurobiological correlate of ASD.However, further investigation is needed to assess the clinical relevance of these findings, particularly with regard to how structural and functional changes in the HTH manifest in ASD over a lifespan and whether alterations in the HTH can help characterize ASD subtypes.Further exploration of the intricate connections between genetic variations in the OXT system, morphofunctional alterations in the HTH and associated behavior seem essential in this context.Furthermore, obtaining a more comprehensive understanding of the structure and function of the HTH in ASD may have broader implications for other psychiatric conditions characterized by socioemotional impairment.For example, a study conducted by Mielke et al. 112 , using a methodology similar to the one employed here, investigated women with a history of early childhood maltreatment.Their hypothesis-driven approach was centered on the notion that deficits in reward processing in adults who experienced childhood maltreatment might be correlated with OXT levels and structural alterations in the HTH.Intriguingly, compared to our findings, their findings revealed a contrasting relationship between peripheral OXT levels and HTH volume in patients versus controls, suggesting the possibility of a distinct association between OXT and the HTH compared to ASD.Although it is premature to draw conclusions from that, conducting comparative analyses of the relationship between OXT and the HTH across various psychiatric conditions presents a promising avenue for future research.In addition to craniopharyngioma patients mentioned above 108 , hypothalamic abnormalities have also been reported in schizophrenia 109 and mood-disorders 60 .Given the heightened prevalence of depression in individuals with ASD, a comparative analysis between depressive and autistic people would be particularly relevant to assess the specificity of structural effects in the HTH. 
Limitations

The results presented here should be interpreted cautiously and in the context of some methodological considerations and limitations. Given the well-established role of the HTH for the OXT system and the potentially important role of OXT in ASD, a surprisingly small number of studies have focused on this brain structure. This is partly due to the methodological difficulties in properly identifying the HTH in MRI images. Here, we took a relatively straightforward approach by using an HTH mask created on the basis of 168 typical adults 53 . Since this mask was not created specifically for our sample, it can only be considered a rough regional reference. Manual delineation remains the gold standard, but comes at the cost of a high degree of expertise and time investment 52,110 . Other studies have used an HTH mask based on the WFU PickAtlas 111,112 , which we decided against since it covered the HTH much less accurately based on visual comparison. Currently, none of the atlases implemented in CAT12, such as AAL3 or Neuromorphometrics, includes the HTH, which prevented us from using automated ROI analysis in the native subject space 113 . The implementation and improvement of such atlases, as well as the use of deep-learning approaches 114 , are promising developments towards accurate and user-friendly volumetry in this brain region for future studies. Further limitations concern sample characteristics: In our study we included autistic adults with HFA. This naturally limits the generalizability of the findings to the entire autism spectrum. However, this limitation may also prove advantageous, as there are reports suggesting a strong link between hypothalamic abnormalities and intellectual impairment 115 . This association could pose challenges in attributing findings in the HTH solely to the autistic phenotype in autistic individuals with intellectual impairment. Although the groups were not significantly different, they were not optimally balanced in terms of age and sex distribution. Particularly in light of reported sex-dependent differences of the HTH in neurotypicals 116 and in associations with OXTR 35,117 , this might warrant sex-specific analyses in a larger sample. Furthermore, while we corrected for sex and age in all analyses, we did not account for possible differences in OXT levels with respect to post-partum effects in women aside from the exclusion of pregnancy and breastfeeding. While large increases in OXT levels around parturition are known, the temporal extent to which these post-partum changes are detectable is poorly studied 118 . In this sample, 45% of autistic participants received psychiatric medication on a regular basis. This adequately represents the high degree of comorbidities in autistic adults in terms of a naturalistic study design 4 . Due to the various pharmaceutical substances (antipsychotics, antidepressants, stimulants) and different dosages, it did not seem feasible to include medication as a confounding variable. While the impact of psychopharmaceuticals on brain structure has been shown in a range of studies, studies to date have not reported specific structural alterations in the HTH due to medication 119,120 . Given these limitations, the results reported here should be interpreted with caution and warrant further validation in a larger and ideally unmedicated sample.
Conclusion

Bearing in mind the limitations, this study provides evidence that hypothalamic GMV does not differ between autistic and non-autistic adults. Although this study does not provide insight into a causal relationship, the findings further suggest a potentially important role of the HTH in relation to OXT and autistic traits in ASD. Moreover, our results underscore the relevance of individual variations in this context. Taking previous research into account, our findings raise new questions about possible developmental changes in the structure of the HTH and its link to OXT in ASD, encouraging further exploration. Specifically, a better understanding of the interplay between genetic variations in the OXT system, OXT levels, and brain structure could significantly enhance our understanding of OXT's role in ASD, both as a pathophysiological factor and as a potential therapeutic agent.

Figure 1. Group comparison of associations between GMV and OXT within the HTH revealed a significant cluster (at peak MNI coordinates [5, −9, −2], FWE corr. p = 0.017, T = 3.85, Z = 3.57, k = 46) positive for autistic adults and negative for non-autistic adults. Left: T-score overlay on the mean structural normalized image of all participants. Right: Scatterplot of the extracted GMV cluster illustrates the association with peripheral OXT in this region. Regression lines show a negative association of GMV and OXT in the CG as opposed to a positive association in the ASD group. ASD Autism spectrum disorder; GMV Gray matter volume; OXT Oxytocin; CG Comparison group.

Figure 2. Left: Illustration of clusters from both association analyses and HTH outlines (grey). Green: association between GMV and AQ scores in the ASD group. Red: Cluster from the previous analysis using OXT levels. Right: Scatterplot of the extracted GMV estimate within the cluster illustrates the positive association with AQ scores in the ASD group. ASD Autism spectrum disorder; AQ Autism spectrum quotient; GMV Gray matter volume.
v3-fos-license
2022-04-24T15:22:14.694Z
2022-04-22T00:00:00.000
248351376
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1660-4601/19/9/5099/pdf?version=1650612171", "pdf_hash": "5d6b3620eea82c695ec149f7d788f67b41b531be", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44075", "s2fieldsofstudy": [ "Medicine", "Computer Science", "Mathematics" ], "sha1": "835327d45dc3e8fba981a4aec216e357e4549707", "year": 2022 }
pes2o/s2orc
Machine Learning, Deep Learning, and Mathematical Models to Analyze Forecasting and Epidemiology of COVID-19: A Systematic Literature Review COVID-19 is a disease caused by SARS-CoV-2 and has been declared a worldwide pandemic by the World Health Organization due to its rapid spread. Since the first case was identified in Wuhan, China, the battle against this deadly disease started and has disrupted almost every field of life. Medical staff and laboratories are leading from the front, but researchers from various fields and governmental agencies have also proposed useful ideas to protect the public. In this article, a Systematic Literature Review (SLR) is presented to highlight the latest developments in analyzing COVID-19 data using machine learning and deep learning algorithms. The studies related to Machine Learning (ML), Deep Learning (DL), and mathematical models discussed in this research have shown a significant impact on forecasting and the spread of COVID-19. The results and discussion presented in this study are based on the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Out of 218 articles selected at the first stage, 57 met the criteria and were included in the review process. The findings are therefore associated with those 57 studies, which recorded that CNN (DL) and SVM (ML) are the most used algorithms for forecasting, classification, and automatic detection. The importance of the compartmental models discussed is that these models are useful for measuring the epidemiological features of COVID-19. Current findings suggest that it will take around 1.7 to 140 days for the epidemic to double in size, based on the selected studies. The 12 estimates for the basic reproduction number range from 0 to 7.1. The main purpose of this research is to illustrate the use of ML, DL, and mathematical models that can help researchers generate valuable solutions for higher authorities and the healthcare industry to reduce the impact of this epidemic. Introduction The outbreak of a deadly disease called coronavirus (COVID-19) has had a significant global impact. As such, the World Health Organization (WHO) has declared it a pandemic [1]. It has affected all spheres of life; moreover, people from poor nations to developed nations have been trapped indoors by the pandemic. In this situation, information and communication technologies (ICT) play an important part in connecting communities, implementing policies, and guiding communities by analyzing the large datasets generated from COVID-19. Within a few months after the first COVID-19 case was discovered in Wuhan, China, several researchers published articles discussing this virus and its impact on society [2][3][4][5]. Moreover, the use of computing technologies has generated substantial support to deal with the virus. Current technological developments such as smart applications [6], Artificial Intelligence (AI) [7], Machine Learning (ML) [8], Deep Learning (DL) [9], and big data analytics [10] have led to numerous solutions, epidemiology analyses, and other clinical findings from the collected data sets. These computing technologies are also assisting healthcare and governmental agencies in controlling the spread of the virus, creating social-distancing awareness, and predicting potential growth, positive cases, and mortality rates. To understand the current situation, this study mainly focused on reviewing the published papers related to ML and DL techniques.
In addition, we integrated some other factors such as epidemiology, reproduction number, and virus doubling time in this study, which distinguish this SLR from those presented previously [11]. Researchers are trying to make good use of the datasets related to COVID-19 patients, such as patients' demographic data, clinical information, chest X-rays (CXR), and Computed Tomography (CT) images. For example, ML techniques assisted in preparing learning systems and predicting future concerns about COVID-19, using a training data set to acquire knowledge from the collected dataset [12]. They are also helpful for estimating the future trend and potential infection rate [13]. On the other hand, DL implementations provide further support by predicting clinical findings from CXR and CT scan images [14,15]. For instance, analyzing medical images can reveal irregularities by highlighting suspicious regions and distinguishing infected from normal patients [16]. Therefore, these computing strategies are assisting medical and governmental agencies in generating multiple findings from COVID-19 datasets, for example, severity detection, virus spreading and control, creating policies and guidelines for communities, and helping in medicine and vaccine development. Previously, computing scholars proposed productive health solutions to deal with different diseases and treatments [17][18][19][20][21][22]. Similarly, integration of the computing and health industries led to ideas for controlling the spread of the virus, suggestions for future virus containment, and pattern identification from real-world data. In addition, the COVID-19 pandemic has also opened many challenges that have ultimately triggered further development and integration of the medical and technology fields. ML and DL techniques, in turn, helped to overcome those challenges by providing various solutions to assist the medical industry and higher authorities. This research provides a systematic literature review and analysis of ML, DL, and mathematical models for different purposes such as predicting future cases, analyzing previously infected cases, and estimating basic reproduction numbers and virus doubling time. This research discussed a number of developments and solutions provided by multiple scholars around the world. Furthermore, we discussed a number of common datasets, statistical models, and techniques used to understand different factors such as infection growth rates, reproduction rates, and doubling time. The main motivation for this paper is to present a comprehensive review for the research and medical community on the current developments and future challenges of ML and DL approaches for COVID-19. A summary of ML and DL techniques for prediction, detection, and treatment of COVID-19 is one of the major findings of this study. Overall, this study reviewed selected studies and contributed in the following ways:
• Identification of the main research categories in this area of study;
• Review of machine learning and deep learning techniques for understanding previous data and predicting future cases;
• Review of different mathematical models for time series analysis and estimating epidemiological factors;
• Identification of the validation strategies and evaluation metrics used for measuring model performance.
Accordingly, the paper is organized as follows: Section 2 discusses the methodology and search strategy applied in this study.
Comprehensive analysis of ML, DL, and mathematical models applied to COVID-19 datasets is presented in Section 3. Finally, Section 4 concludes this study by highlighting future work. Methodology and Search Strategy This research is mainly focused on SLR methodology. SLR is a systematic approach to organize, present, and synthesize previously published papers that can help readers to understand the current situation and potential developments in a specific field of research. Therefore, this research identified published papers that describe COVID-19 epidemiology, the use of ML and DL approaches for prediction and identification, the basic reproduction rate, and virus doubling time in different regions. The subsequent sections further describe the step-wise approach used in this article. Protocol and Registration The systematic approach used in this study is based on the PRISMA guidelines [23]. The paper title and abstract are written as per the pre-defined guidelines. The review objectives in the introduction section were defined accordingly. The main inclusion and exclusion criteria are also discussed in Section 2.2, whereas the representation of the SLR used in this study is depicted in Figure 1. Search Strategy We performed the searching process using different digital libraries, such as: (i) Web of Science; (ii) Scopus; (iii) Google Scholar; and (iv) Medline, up to the beginning of April 2022. This process was mainly applied under the supervision of one researcher and one clinician. Both researchers performed this task together to carry out the initial screening process from computing and medical perspectives. In the first step, the following keywords were used: "COVID-19", "novel coronavirus", "epidemiological features", "ML or DL model prediction for COVID-19". An enormous number of articles are available in these databases due to the large interest of researchers in this area of study. Therefore, papers were selected on the basis of the inclusion and exclusion criteria explained below. In the next step, the papers were refined by excluding out-of-scope topics, for example, papers focused on social network analysis, virtual education, or working from home.
Inclusion and Exclusion Criteria We included a number of studies using specific inclusion criteria. As this research area has recorded an enormous list of publications, the inclusion criteria are important to define, and they are also mentioned in the PRISMA guidelines document. The inclusion criteria were applied as follows: (1) the selected studies should be published in English; (2) the article must have applied and measured any of the epidemiological factors (i.e., size of estimation, epidemic doubling time, basic reproduction number, demographic features, clinical characteristics); and (3) the article must have implemented an ML or DL approach to identify and analyze previous cases, and to predict future rates of infection and recovery. In addition, some articles were excluded for several reasons as follows: (1) duplicate entities; (2) title, keywords, and abstract screening; (3) non-peer-reviewed articles; and (4) opinion or conceptual framework focused articles. Identified Research Questions As per the above discussion, this SLR will answer the following research questions:
• What are the main research categories that can be identified in this area of study?
• Which machine learning and deep learning techniques were proposed for predicting future COVID-19 cases?
• Which mathematical models were used for time series analysis and for calculating different epidemiological factors?
• What validation strategies and evaluation metrics were used for measuring the model performance?
Quality Assessment Finally, the quality check process was applied by two researchers to assess the quality of the contents presented in the selected studies. The main purpose of this step was to measure the quality of the papers and their impact on this SLR. We used eight quality evaluation questions [24] to evaluate each article as follows: (i) objective relevance; (ii) usefulness; (iii) experimental procedure; (iv) model validation and efficiency; (v) dataset importance; (vi) availability of research limitations; (vii) discussion on future aspects; and (viii) presentation of model evaluation metrics. Results and Discussion After reviewing and analyzing the selected case studies, this section describes the major findings and discussion, as presented in different sub-sections. Characteristics of Selected Articles The first section elaborates on the major characteristics of the reviewed articles. After going through the long procedure, we short-listed 57 studies out of 218 (first search) based on their relevance to the main objectives of this study. Prior to answering the main research questions, the following are some highlights of the selected articles. Journal-Wise Categorization Given the large number of publications in this area of research, the selection process was not dependent on the journal venue; rather, it was based on the inclusion criteria. Therefore, the researchers' main focus was to include articles on the basis of the defined rules without considering the journal venue. However, all searched databases are well-known for academic and applied research publications. Figure 2 illustrates the selected papers' publishing venues. Most of the selected papers were published by Elsevier (20), which is one of the prominent venues for publishing quality papers. Furthermore, 10 selected articles belong to MDPI, which is one of the largest publishing venues in academic research. In the other category, we put the remaining journals such as Frontiers, Wiley, IEEE, and others.
Country-Wise Statistics We usually selected papers that proposed, implemented, and validated a prediction model using ML, DL, mathematical, or regression techniques and applied the model to real datasets. The populations of the selected case studies belonged to 19 different countries, where COVID-19 datasets had in particular been collected and applied for different purposes, as depicted in Figure 3. Most of the studies were associated with the population of China (22%), which has been the focal point of this disease. The researchers from that region have published a number of articles related to prediction techniques [25], estimation of disease-related factors [26], and the impact of prevention strategies [27]. The studies selected from the United States of America (USA) and India constituted 15% and 6%, respectively. In addition, we put some studies under the public dataset category. This category represents datasets that either belong to multiple regions or were collected from an online portal (i.e., Kaggle, GitHub, and others). The large number of countries and real-world data provided suitable ground to review the current scenario and future aspects in this area of research. Research Domain Most of the selected studies applied prediction strategies using different kinds of models.
In brief, we avoided putting most of them under a single prediction category and instead presented them in five categories based on the main research questions mentioned in those articles. Table 1 represents the five-domain classification of the selected articles as follows: (i) Automated Detection; (ii) Estimation of Disease-Related Factors; (iii) Impact of Quarantine and Traveling; (iv) Reporting on COVID-19 Numbers; and (v) Virus Reproduction and Doubling Time. For instance, the "Automated Detection" category combines different prediction models implemented for automating the process of diagnosis and treatment [28]. In addition, the studies that belong to this category are helpful for automatic feature extraction and improving the learning process. For the most part, those articles used CT and CXR images that played a vital role in the early diagnosis and treatment of COVID-19 disease [29]. Furthermore, the category "Estimation of Disease-Related Factors" comprises multiple studies that demonstrated other factors and their correlation with COVID-19 disease. For example, a study defined the prevalence of depression and anxiety and their associated risk factors in patients already infected by COVID-19 [30]. High temperature and humidity [31] and geo-location [26] are some other external factors used in the selected studies to measure their impact on COVID-19 spread or control. This classification table is useful for researchers to find a group of research papers associated with the mentioned domain. Predicting COVID-19 is handled in different ways and from different perspectives; from detection to prevention, there are many areas where researchers have proposed computing solutions. The categories shown in Figure 4 portray the percentage of selected articles in the different domains. "Virus Reproduction and Doubling Time" is the third largest category in this SLR and comprises 20% of the 57 articles. These articles reported epidemic doubling time and basic reproduction rate using previous data [73]. Overall, these estimates were useful for governmental authorities to prepare a number of guidelines for breaking the chain of COVID-19 infection.
Types of Modeling Applied for Modeling COVID-19 Cases The research domains discussed above applied ML, DL, mathematical, or regression models. For the medical image classification task, DL techniques are considered feasible and suitable for automatic feature extraction and finding hidden patterns in the images. On the other hand, a large number of ML algorithms were applied for the classification, identification, and analysis of COVID-19 cases.
Figure 5 represents that 28% of the selected papers applied ML techniques, whereas 36% implemented DL and 36% used other mathematical models. The mapping of each article with modeling techniques is shown in Table 2. It is evident from this table that all kinds of models are almost equally important and proposed several solutions for dealing with COVID-19. It summarizes that 21 out of the 57 selected articles used DL approaches, 16 out of the 57 employed ML, and the final 21 articles used other regression or mathematical models. Whilst the regression model is one of the ML techniques, we put regression models in the "Others" category due to their dynamics, variety, and association with mathematical and statistical approaches. A detailed review of each type of modeling is presented in the subsequent sections. Of the selected studies, 28% implemented ML techniques to propose learning procedures or to develop prediction models. As shown in Figure 6, over 23% of the articles employed support vector machines (SVM), whereas 17% used Decision Trees (DT), 15% Boosting, 12% Naïve Bayes (NB) and Random Forest (RF), 9% Artificial Neural Networks (ANN) and K-Nearest Neighbors (KNN), and MLP recorded the lowest share, at 3%. Previous studies highlighted the importance of ML algorithms for multi-purpose solution building, which was further justified through the measured accuracy of the models. For instance, research was applied to multi-region datasets for (i) predicting the spread of the virus in different regions; (ii) virus transmission rate; (iii) ending point; and (iv) weather conditions and their association with the virus [55]. Early assessment and identification of COVID-19 is helpful for effective treatment and can also reduce healthcare costs. A study used multiple ML models for predicting the infection status in different states of India [67]. Overall, 5004 patients were recorded, with a cross-validation approach used for model implementation.
For this, an ensemble model was proposed using different classifiers such as SVM, DT, and NB. The model outperformed other studies, with an accuracy of 0.94 compared to 0.85 [79] and 0.91 [80]. The use of ML approaches for COVID-19 disease has produced several frameworks. One study analyzed multiple symptoms to identify risk factors for the clinical evaluation of COVID-19 patients [49]. The study used 166 patients of different age groups, including demographic features, disease history, and other test information. The study applied a multi-model (ANN, SVM, and Boosting) approach, in which ANN outperformed the other classifiers with 96% accuracy. Moreover, ML is also useful for real-time forecasting purposes, as discussed in a study applied to the time series data collected from Johns Hopkins [56]. The model provided predictions for the next 3 weeks and the results were suitable for the higher authorities to plan resources and prepare policy accordingly. In the same way, another study proposed a model using SVM and DT that forecasted the next six months in Algeria [57]. Scholars suggested ideas to support the government by predicting numbers on potential virus growth using different variables. In one study, factors such as weather, temperature, pollution, gross domestic product, and population density were used to develop a prediction model [25]. The collected dataset was associated with the different states of the USA. SVM, DT, and regression-based models were applied in this study to forecast the spread of the virus. SVM performance showed 95% more variation than the other models. The study further suggested that population density can be a critical factor to analyze the size of the spread. The author explored a good factor, but comparing this factor between high- and low-population regions could provide better results. In addition, the impact of quarantine was measured using data collected from three countries (Italy, South Korea, and the USA) [51]. The study recommended that strict government policies for isolation played a significant role in halting the spread of the virus. The review process in this study identified several facts about ML techniques. According to the studies selected in this paper, the most used model is SVM, which appears in 23% of articles. DT (17%) and Boosting (15%) stand in second and third place. Based on the review performed on the selected case studies, the ML approach is useful for predicting future growth [55], severity detection [47], analyzing CT radiomic features [63], CT image classification [37], measuring the impact of social restrictions on virus spread [27], assessing the importance of travel restrictions in reducing virus spread [52], measuring depression and anxiety in COVID-19-infected people [30], and using population density as the main factor for prediction [25].
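None of the reviewed studies is reproduced in code here, but to make the typical ML workflow concrete, the following is a minimal, illustrative sketch (Python with scikit-learn) of an SVM classifier evaluated with k-fold cross-validation on tabular clinical data. The file name and feature columns are hypothetical placeholders rather than variables taken from any cited paper.

```python
# Minimal sketch of an SVM pipeline with 5-fold cross-validation, of the kind
# described in several reviewed studies; the CSV file and feature columns are
# hypothetical placeholders, not taken from any cited paper.
import pandas as pd
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("covid_clinical_records.csv")          # hypothetical dataset
X = df[["age", "temperature", "oxygen_saturation"]]     # hypothetical features
y = df["infected"]                                      # 0 = negative, 1 = positive

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```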
The model evaluations have shown strong performance in different studies, such as severity detection (classifier: SVM, accuracy: 81%, China) [47], CT image classification (classifier: SVM, accuracy: 99.68%, China) [37] and (classifier: SVM, accuracy: 92.1%, multi-region) [55], and spatial visualization (Boosting, R²: 0.72, China) [26]. Deep Learning Models Another major development presented in this SLR is the review of published papers that applied DL techniques to automate the COVID-19 detection process and predict the number of cases. Fast diagnostic methods and deep analysis can help control COVID-19 spread, and that is strongly supported by DL methods. In this SLR, based on the review performed on the selected cases, Figure 7 elaborates on the DL models and the number of times they were used in the selected studies. The figure shows the usefulness of the Convolutional Neural Network (CNN) model, as it has been used in 10 different articles from the selected studies. Although LSTM is a modified version of the Recurrent Neural Network (RNN), to be more specific, we kept them separate and used the same names as mentioned in the studies. Altogether, LSTM and RNN were used in nine different articles. The use of CNN-based deep neural systems for medical image classification is known for its better feature extraction capabilities [15,29]. A research team proposed and used 10 different types of CNN-based models to classify images into infected and non-infected groups [32]. For this, 1020 CT images and 108 patients' records were used for the model implementation and validation process. ResNet-101 and Xception showed the best performance, with accuracy measured as 99.51% and 99.02%, respectively, although this high accuracy could be tested further by adding more images from different classes. In addition, research applied the CNN technique to distinguish infected and non-infected persons using their CXR images. For better accuracy and automatic feature detection, transfer learning with a CNN approach was applied, which helped to achieve accuracy, sensitivity, and specificity of 96.78%, 98.66%, and 96.46%, respectively [28]. As per the recommendations collected from different studies, DL approaches could be helpful in several situations. Commonly, different studies used CNN methods to classify CT and CXR images (classes: COVID-19 infected, viral pneumonia patients, normal patients) [34,36,44], with model accuracy recorded at more than 90%. In addition, these strategies most of the time used a split validation approach. Another study proposed a CNN-based architecture (STM-RENet) to analyze and identify radiographic patterns and textural variations in CXR images of COVID-19-infected people [39]. The proposed model achieved an accuracy of 96.53%, which can be adapted for detecting COVID-19-infected patients. COVID-Net, a CNN-based network system for automation in clinical decisions [35], detection of COVID-19 using SVM classifiers [42], and predicting severe and critical cases based on clinical data of patients using SVM classifiers [33] are some other valuable research contributions that can provide potential feedback to the medical and higher authorities.
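The cited papers do not publish their architectures in this review, so the following is only a minimal transfer-learning sketch of the general CNN-based image classification approach they describe: a pretrained ResNet is fine-tuned on chest radiographs arranged in class folders. The directory layout, class set, and training schedule are illustrative assumptions, not the configuration of COVID-Net, STM-RENet, or any other reviewed model.

```python
# Minimal transfer-learning sketch (not the architecture of any cited paper):
# fine-tune a pretrained ResNet to separate e.g. COVID-19, pneumonia and normal
# CXR images; the layout "cxr_dataset/train/<class>/<image>.png" is assumed.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("cxr_dataset/train", transform=tfm)
loader = DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():                 # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # new head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # update head only

model.train()
for epoch in range(3):                       # short schedule, for illustration
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```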
The idea of providing a more robust forecast is presented in a research paper with the help of the LSTM framework and a mathematical epidemic model [64]. The paper proposed a model that can predict the number of cases on a daily basis for the next 15 days with reasonable interpretation. Similarly, another integration was presented using LSTM and Auto-Regressive Integrated Moving Average (ARIMA) techniques, which can forecast the next 60 days [65]. LSTM was applied in another study that used time series analysis, evaluated the model, and forecast the number of cases for the next 15 days, applied to the Moscow dataset [59].
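As an illustration of how such LSTM forecasters are typically structured, the sketch below trains a small univariate LSTM on sliding windows of a daily case series and predicts the next day. The synthetic series, window length, and network size are assumptions for demonstration only and do not reflect the settings of the cited studies.

```python
# Minimal sketch of univariate LSTM forecasting on a daily case series
# (synthetic data; window length, layer sizes and training schedule are
# illustrative, not the settings of any reviewed study).
import numpy as np
import torch
import torch.nn as nn

cases = np.cumsum(np.random.poisson(50, size=200)).astype(np.float32)  # synthetic
cases = (cases - cases.mean()) / cases.std()                            # normalise

window = 14
X = np.stack([cases[i:i + window] for i in range(len(cases) - window)])
y = cases[window:]
X = torch.tensor(X).unsqueeze(-1)            # shape: (samples, window, 1)
y = torch.tensor(y).unsqueeze(-1)

class Forecaster(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])         # predict the next day from the window

model = Forecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for epoch in range(100):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
```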
The implementation of DL models assisted positively in this epidemic situation in addressing issues related to automatic infection detection using CT or CXR [43], finding hidden features [48], forecasting the next few days [68], and correlating external factors with COVID-19, such as social restrictions [27] or spatiotemporal data [50]. According to the selected studies, the range of forecasting provided was from 15 to 60 days. The most common evaluation metrics used were RMSE and MAPE. In addition, for classification tasks the common evaluation metrics used were sensitivity, specificity, and accuracy, which most of the time measured more than 90% [38,41]. Others (Regression and Mathematical Models) This category combines different mathematical, statistical, regression, and compartmental models that provided a number of solutions in this epidemic situation. These compartmental models divide the population into groups and employ mathematical equations using different disease-related factors [24]. These models are also helpful for early prediction of the growth rate, number of deaths, and recoveries, which ultimately can provide assistance to higher authorities in controlling the situation. Figure 8 represents the number of models covered in this category and used in the selected case studies. Regression analysis (15) is at the top, having been used several times for time series analysis and forecasting future infections. In addition, the exponential growth model, the SIR model (Susceptible, Infectious, Recovered), and its extended versions such as SEIR (Susceptible, Exposed, Infectious, Recovered), SIRF (Susceptible, Infectious, Recovered, Fatalities), and SIMLR (Susceptible, Infected, Machine Learning, Recovered) are used in selected cases.
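To make the structure of these compartmental models concrete, the following sketch integrates the basic SIR equations numerically; the transmission rate, recovery rate, population size, and initial conditions are illustrative values only, not parameters estimated in any of the reviewed papers.

```python
# Minimal sketch of the basic SIR compartmental model solved numerically;
# beta, gamma, population size and initial conditions are illustrative values,
# not estimates reported in any reviewed study.
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    S, I, R = y
    N = S + I + R
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return dS, dI, dR

N = 1_000_000
I0, R_init = 100, 0
S0 = N - I0 - R_init
beta, gamma = 0.35, 0.1          # basic reproduction number R0 = beta / gamma = 3.5
t = np.linspace(0, 180, 181)     # days

S, I, R = odeint(sir, (S0, I0, R_init), t, args=(beta, gamma)).T
print(f"Peak infections: {I.max():.0f} on day {t[I.argmax()]:.0f}")
```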
SIMLR is an extension of the basic epidemiological SIR model that is integrated with an ML approach and applied to track the changes in policies and guidelines applied by governmental authorities [58]. The main purpose of this model was to forecast one to four weeks in advance in Canada and the United States. The generated results presented a comparison of MAPE in different states. Using a dataset up to 6 July 2021 (India and Israel), the SIRF model was proposed, which extended the basic SIR model by adding fatalities data and can forecast the next 100 days [60]. In addition, the third extended version found in the selected studies is SEIR, which integrates an "exposed" compartment. This study proposed a simulation-based approach applied to the past 300 days' data from China to see the impact of prevention strategies [53]. Multiple regression models were applied in a study to predict the number of positive cases in the next few days [25,30]. The idea was to strengthen government policies in order to reduce the number of infected people [66]. For forecasting purposes, the study collected data (22 January 2020 to 12 July 2021) and suggested that if the current number of cases is 5000, it can double in the next 5 days. Similarly, the linear regression method was applied to estimate the basic reproduction rate based on data (1 March-18 May 2020) collected from different regions of the United States [46]. The main idea of this study was to analyze the impact of face coverings in different states. The result estimated that the total number of infections at the end of May could reach up to 252,000, which shows the positive impact of face coverings. The regression model was applied in different studies and highlighted multiple factors, such as higher temperature, which would help to reduce the transmission rate in China and the USA [31], while the study conducted in Brazil did not support the same idea [45]. Some other time series forecasting models, such as FB Prophet applied in Bangladesh (estimation size: 8 March 2020 to 14 October 2021) [61] and in India and Israel (estimation size: up to 6 July 2021) [60], and ARIMA in China (estimation size: 22 January 2020 to 7 April 2020) [69], are useful models that can help their countries' representatives to prepare guidelines and prevention strategies.
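A minimal example of this kind of classical time series forecasting is sketched below using the ARIMA implementation in statsmodels; the synthetic case series and the (p, d, q) order are placeholders and are not taken from the cited Bangladeshi, Indian, or Chinese studies.

```python
# Minimal sketch of ARIMA-based short-term case forecasting with statsmodels;
# the series is synthetic and the (p, d, q) order is illustrative, not the
# configuration used in any cited study.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

dates = pd.date_range("2020-01-22", periods=120, freq="D")
cases = pd.Series(np.cumsum(np.random.poisson(40, size=120)), index=dates)

fit = ARIMA(cases, order=(2, 1, 2)).fit()
forecast = fit.forecast(steps=14)            # two-week ahead forecast
print(forecast.tail())
```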
Model Validation Strategy In this section, we elaborate on the validation strategies applied in the selected case studies and their ratio, to understand the most favorable validation method in the current situation. As shown in Figure 9, most of the selected studies employed split validation (77%) strategies. One of the reasons behind split validation could be the availability of smaller datasets. Given the importance and quality of the cross-validation strategy discussed in previous studies [81], it could be a critical point for future researchers to: (i) encourage dataset availability on public platforms; and (ii) assess the difference between both validation strategies. Quality Evaluation Metrics Used in Selected Studies The evaluation metrics allowed researchers to quantify the work presented in any study. They also allowed the authors to present the results in an efficient manner. However, the selection of the evaluation metrics is an important aspect, which is based on the type of model employed in the study. The list of quality metrics used for model evaluation in the selected studies is depicted in Figure 10. The important thing to mention here is that these numbers do not represent the best or worst evaluation metric; they are presented to highlight the number of potential metrics that could be used, based on the type of forecasting model. Commonly, after reviewing all papers, we can say that growth rate, doubling time, R0, R², MAPE, MAE, MSE, and RMSE are evaluation metrics that are useful (but not limited to) for time series, regression, compartmental, or other mathematical models. The remaining are possible evaluation metrics when other ML or DL methods are employed.
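The two validation strategies and several of the metrics listed above can be illustrated with a short, self-contained sketch on a synthetic classification problem; the model, data, and example forecast values are placeholders chosen only to show how the quantities are computed.

```python
# Minimal sketch of the two validation strategies and a few of the metrics
# listed above, using a synthetic classification problem as a placeholder.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, mean_squared_error
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
clf = RandomForestClassifier(random_state=0)

# Split validation (the approach used by ~77% of the reviewed studies)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
pred = clf.fit(X_tr, y_tr).predict(X_te)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"accuracy={accuracy_score(y_te, pred):.3f} "
      f"sensitivity={sensitivity:.3f} specificity={specificity:.3f}")

# Cross-validation (the approach used by ~23% of the reviewed studies)
print("5-fold accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# Regression-style errors for a forecast (RMSE and MAPE), with toy numbers
actual, forecast = np.array([100, 120, 150]), np.array([90, 125, 160])
rmse = np.sqrt(mean_squared_error(actual, forecast))
mape = np.mean(np.abs((actual - forecast) / actual)) * 100
print(f"RMSE={rmse:.1f} MAPE={mape:.1f}%")
```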
Epidemiologic Characteristics and Transmission Factors This section describes the epidemiological and transmission factors reviewed from the selected case studies. We present the major findings in two sub-sections: (i) Epidemic Doubling Time; and (ii) Basic Reproduction Number, as presented in the subsequent sections. Estimated Period and Doubling Time The epidemic's exponential growth within a short period has been reported from all over the world. Different studies proposed solutions to reduce, control, and mitigate the impact of COVID-19. The main purpose of those studies was to provide some useful numbers to the higher authorities for preparing control strategies, as illustrated in Table 3. Research conducted in India using data collected from February 2020 to March 2021 estimated that the epidemic doubled in size every 1.7 to 46.2 days. The minimum and maximum numbers were calculated based on the infected cases in different districts [70]. Using linear regression and SVM approaches, an analysis was conducted on multi-region data, where the mathematical model estimated that if the number of positive cases is 5000, it will double in size every 5 days, whereas 163,840,000 cases would double in 140 days. The equation presented multiple scenarios using different datasets, to make the government aware of the severity level of the epidemic [66]. Using a similar strategy (the exponential growth model), the estimated doubling time in China was every 3.6 days [75], whereas another Chinese study concluded that the doubling time was every 4.2 days [74]. The studies presented and reviewed here, conducted in different regions, highlighted multiple factors for the governmental agencies. According to the selected cases, the interval for doubling time lies between 1.7 and 140 days, based on the number of infected people and the estimation size. The recommendations collected from different articles are compiled and presented in the following table.
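For readers unfamiliar with how such doubling-time figures are typically derived, the sketch below fits an exponential growth rate to a synthetic early-phase case series and converts it into a doubling time, together with a crude reproduction number under a simple SIR-style assumption with an illustrative 5-day infectious period; none of these numbers correspond to the estimates reported in the cited studies.

```python
# Minimal sketch of how an epidemic doubling time can be estimated from a
# log-linear fit to early case counts; the series, the fitting window and the
# 5-day infectious period used for the crude R0 approximation are illustrative.
import numpy as np

days = np.arange(30)
cases = 50 * np.exp(0.18 * days) * np.random.lognormal(0, 0.05, size=30)  # synthetic

r, intercept = np.polyfit(days, np.log(cases), 1)   # exponential growth rate per day
doubling_time = np.log(2) / r
r0_crude = 1 + r * 5.0                               # SIR-style approximation, D = 5 days

print(f"growth rate r = {r:.3f}/day, doubling time = {doubling_time:.1f} days, "
      f"crude R0 = {r0_crude:.2f}")
```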
Basic Reproduction Number (R0) Basic reproduction number estimation plays a significant role and directly impacts different factors such as procedures, guidelines, travel restrictions, the quarantine process, and other related factors. Table 4 represents the R0 values identified in the selected case studies. Generally, a larger reproduction rate implies a larger number of infected people in the future. Mainly, the exponential growth model, SIR, ARIMA, and other mathematical models were used for measuring the reproduction number. In addition, the interval of ranges based on the given studies lies between 0 and 7.1, in which 0 is the ideal case discussed in the paper related to some districts in India, which recorded fewer than 40 isolated cases and no reported local transmission of infection [70]. The highest R0 estimate of 7.1 was measured for New Jersey, USA, in a recently published study [72], which indicates that virus transmission varies between states. Another recent study used SIR and applied it to a dataset collected from Spain, with R0 ranging from 0.48 to 5.89. As mentioned in the study, the minimum value clearly identifies the impact of lockdown, as the R0 dropped from 5.89 (before lockdown) to 0.48 (after lockdown) [73]. Conclusions As we are aware, the pandemic has had an impact on the entire world. This research discussed the role of ML and DL techniques that can assist medical and governmental agencies. This SLR reviewed a number of papers to identify ML, DL, and mathematical models that can predict the potential impact, the transmission growth rate, and virus identification. The research identifies that understanding epidemiology and forecasting models is important to mitigate the impact of this epidemic situation. As of now, virus transmission is continuing to spread around the world, and the integration of multiple strategies can help to control the situation. In the future, we need to select the most recent papers, while presenting the work using different SLR tools. We discussed a number of key findings that can be helpful for policymakers and future researchers. This type of study should be conducted in the future to understand, analyze, and collect the recent advancements in this area of research.
v3-fos-license
2023-07-12T08:25:44.654Z
2023-06-22T00:00:00.000
259667470
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.35765/forphil.2023.2801.05", "pdf_hash": "a4280f873a1999593716faf6605a8cb45f9a1207", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44078", "s2fieldsofstudy": [ "Philosophy" ], "sha1": "314bab87398d3f840affdba3eaefe63e42037f70", "year": 2023 }
pes2o/s2orc
Greek Philosophy as a Religious Quest for the Divine 1 Abstract Philosophy has always been parasitic on other bodies of knowledge, especially religious thought. Greek philosophy in Italy emerged as a purification of Orphic religious traditions. Orphic votaries adopted various disciplines in the attempt to become divine, which led Pythagoras and Empedocles to define philosophy as a path to divinity. According to Plato and Aristotle, the goal of philosophy is to become "as much like a god as is humanly possible." Classical Greek philosophy is not the study of the divine but the project of becoming divine, a project which it shares with Christianity. Greek philosophy and Christianity have different paths to the divine, but they share a common aspiration. Even eminent philosophers have claimed that philosophy has always been parasitic on other disciplines. As Nelson Goodman memorably put it: "Scientists run the business, but philosophers keep the books." The "business" is the generation of ideas, hypotheses, empirical findings, concepts, and theories by mathematicians, physicists, and biologists, but also literary theorists, historians, and theologians. Only afterwards do philosophers arrive to "keep the books" by certifying whether the claims of the other disciplines actually constitute knowledge. Philosophers, like accountants, do not generate revenue; they only attempt to determine whether the business is profitable. All of this is rather obvious in the realm of what might be called "applied philosophy," such as the philosophy of physics, the philosophy of religion, the philosophy of history, and so on. But what about the core fields of "pure philosophy," such as logic, epistemology, and metaphysics? Surely these fields of inquiry are autonomous? Yet some leading historians of philosophy, including R.G. Collingwood, have argued, I think persuasively, that even the core of philosophy is indirectly parasitic on the other disciplines: they claim that developments in logic track developments in mathematics, developments in epistemology track developments in sciences ranging from physics to history, and developments in metaphysics track developments in many sciences, especially theology. Without the raw material provided by inquiry in the other disciplines, philosophy deteriorates into sterile logomachy. There are almost as many definitions of philosophy as there are philosophers, and a whole branch of the discipline is devoted to this question: "metaphilosophy." 2 Perhaps the least controversial definition was offered by the Catholic apologist, G.K. Chesterton: "Philosophy is thought that has been thought through." In every other field of inquiry, ideas are presupposed but not examined. What R.G. Collingwood calls "absolute presuppositions" are assumptions so basic that they cannot be themselves proven because they make possible all scientific inquiry. They are the lenses by which scholars and scientists see the world. In Collingwood's example, if you ask a pathologist "Why do you assume that every disease has a cause?," the pathologist "will probably blow up right in your face, because you have put your finger on one of his absolute presuppositions, and people are apt
A follower of the later Wittgenstein, Morris Lazerowitz (1970), defines it thus: "The investigation of the nature of philosophy, with the central aim of arriving at a satisfactory explanation of the absence of uncontested philosophical claims and arguments." On Plato and the origins of metaphilosophy, see (Griswold 1988, 144).

to be ticklish in their absolute presuppositions" (Collingwood 1998, 31). Biology, like every other science, makes progress by taking some ideas for granted. Scientific thought, in Chesterton's expression, is not fully thought through. Only philosophers ask whether all things have causes and, by the way, what is a cause, anyway? Whether or not one finds these sweeping claims about philosophy in general to be plausible, there is no doubt that philosophy has always had a special relationship to religious thought. Whether in ancient Greece, China, or India, philosophy always grows out of religious speculation. We see this evolution in the famous theories of the stages of history: philosophy always succeeds religion. Hegel's theory of the development of absolute mind proceeds from art to religion to philosophy; in his view, philosophy transcends in form but also includes the content of art and religion. Auguste Comte describes an evolutionary progress from the theological to the metaphysical to the positive sciences; that is, from religion to philosophy to natural sciences and mathematics. These epic theories of history are certainly correct that philosophy arose from earlier religious modes of thought. Comte was wrong to suppose that philosophy would ever replace religion or that the sciences would ever replace philosophy. Even today, philosophical speculation about possible worlds or about free will reveals the continuing influence of religious ideas. Philosophical inquiries about the infinite, about possible worlds, and about freedom of the will are all developments of religious ideas. Philosophy can help us clarify and sharpen our religious speculation on these ultimate matters. Immanuel Kant concludes his Critique of Pure Reason by posing the fundamental questions his whole philosophy aims to answer: "What can I know? What should I do? What may I hope?" (A805/B833). I think it self-evident that these are the basic presuppositions of religious life and thought. 3 The liminal philosophical question is: why is there something rather than nothing? Here philosophy becomes a disciplined kind of religious speculation. Philosophy and religion could both be said to be oriented to matters of ultimate human concern, matters about the meaning of life and death, time and eternity, origins and destiny. That is why bookstores usually lump together philosophy and religion, or even philosophy, religion, and the occult! To those philosophers with scientific aspirations, this association of philosophy with religion is an embarrassment, which is why the early logical positivists, such as Rudolf Carnap, rejected the name "philosophy" for their logical and linguistic inquiries. One leading contemporary philosopher, Colin McGinn, wants to rename philosophy "ontics"-that is, the science of being, just as "physics" is the science of nature. No one would confuse "ontics" with religion, which is the point of the new name. In the wake of modern ideas of progress, scholars often tell the story of Greek thought as an evolution from myth to reason, from mythos to logos.
4 On this view, Greek philosophy replaced the bizarre tales of the gods in Homer and Hesiod with a rational and secular science of nature. 5 But this view of the development of philosophy cannot be squared with actual history. If we take myth to mean stories expressing beliefs about ultimate questions, then myth is a permanent part of human thought and culture. We shall always rely upon myths to make sense of these liminal matters. But, like all modes of thought, including modern physics, mythic ideas need to be thought through before we can assess their validity. As Werner Jaeger rightly observes, without philosophy, myth is blind, but without myth, philosophy is empty. 6 The greatest works of speculative philosophy-think especially of Hegel-elaborate a powerful mythical narrative in the language of abstract conceptual argument. Plato alerts us to his uses of myth, whereas modern scientific and philosophical discourse disguises its myths under a highly technical terminology. The danger of ignoring the mythic elements of modern thought is that we are at risk of swallowing myths whole, as is so often the case with readers of Marx, Freud, and Heidegger. 7

4. See, for example, John Burnet's Early Greek Philosophy, which warns us not to "fall into the error of deriving science from mythology" (1892, 14); similarly, Jonathan Barnes's Early Greek Philosophy (2001, xviii-xxv) contrasts the "rationality" of the philosophers to the "arbitrary caprice" and "fantasy" of mythology; see Wilhelm Nestle's Vom Mythos zum Logos: "Mythisches Vorstellen und logisches Denken sind Gegensätze" (1975, 1); "There is no real continuity between myth and philosophy" (Vernant 1982, 107). On the tendency of historians of philosophy to equate the religious with the irrational and the secular with the rational, see (Tor 2017, 10-9). 5. Thus, according to Walter Burkert, with the rise of the philosophers "Myth is left behind. The word mythos, obsolete in Attic, is now redefined and devalued as the sort of story that the old poets used to tell and that old women still tell to children" (Burkert 1985, 312). 6. "Mythical thought without the formative logos is blind, and logical theorizing without living mythical thought is empty" (Jaeger 1939, 150). 7. "The danger begins when men believe they have left all that behind [namely, myth] and are relying on a scientific method based solely on a combination of observation and logical inference . . . Today it [myth] is even more heavily overlaid than in ancient Greece with the terminology of rational disciplines. This makes it more difficult to detect and therefore more dangerous" (Guthrie 1962, 1:2).

If philosophy is the logically rigorous exploration of the presuppositions of religious thought, then what are those basic presuppositions? To answer this, I would have to define religion, about which there is no agreement. I will attempt to be modest and uncontroversial. The etymology of the word "religion," in its Latin root, is disputed but certainly makes no reference to anything supernatural. 8 When we say that someone practices yoga "religiously" we mean they practice yoga assiduously, conscientiously, and rigorously. Within Christianity, a "religious" vocation traditionally meant joining an order of monks, friars, or nuns, so that one's whole life would be unified around Christian ideals. A religious life is at least a disciplined life.
According to some philosophers, this task of unifying all the major pursuits in a life around an ideal of the good is sufficient to make a doctrine religious, no matter how otherwise secular. 9 A religious life may or may not be oriented to a god, but it cannot be a mere hobby or temporary fancy. In this sense, the great philosophical systems are religious in the sense that they aspire to unify the pursuit of knowledge, virtue, and aesthetic experience-in short, the true, the good, and the beautiful. In our age of hyper-specialization, it seems ludicrous to attempt to treat so many areas of inquiry. Why do the great philosophers attempt to theorize logic, nature, beauty, ethics, politics, and god? 10 Are they merely attempting to cover all topics, to be encyclopedic? No, philosophy aspires to be comprehensive for practical, not merely theoretical, reasons. Since a complete human life includes thinking logically, understanding nature, appreciating beauty, acting ethically, being a good citizen, and knowing God, philosophy cannot lead us to live our lives well unless it shows us how to integrate all the major goods into a coherent whole. The French historian of Hellenistic philosophy, Pierre Hadot, argues that all the great ancient thinkers saw philosophy as a distinctive and unified way of life rather than a mere body of knowledge. 11 The first person to be called a "philosopher" was Pythagoras, and he is said to have founded

8. Ever since Cicero, scholars have debated whether the Latin noun religio stems from the verb religare "to bind or obligate" or the verb relegere "to go over again" (in thought, word, or deed). 9. For the argument that religion essentially unifies a human life by giving it a focus but need not involve anything supernatural, see (Dewey 2013; Dworkin 2013). 10. I capitalize "God" only when referring to the biblical divinity-not to honor the biblical God but because "God" is a proper name only of the biblical God (Yahweh). 11. For the argument that the ancient schools of philosophy were each devoted to a distinctive way of life, see (Hadot 2002). About ancient philosophy, he says: "The real problem is therefore not the problem of knowing this or that, but of being in this or that way" (2002, 29). Plato describes philosophy as a way of living at Theaetetus 174a.

a religious cult, with its own diet, rituals, and god. 12 Socrates is a better-known exemplar of philosophy as a coherent, integrated way of life. The latter's aim was never to teach a doctrine, but always to turn around someone's life. Philosophy, for him, was a divine mission literally to save souls. 13 Because the heroic virtues of Socrates were consistent with differing philosophical interpretations, he became the ideal sage not only for Platonism but also for Stoicism, Skepticism, and Cynicism. According to Hadot and other scholars, the various schools of ancient philosophy resembled different religious orders, each with its own characteristic customs, disciplines, and styles of living. 14 Hadot is certainly right about the practical orientation of the great philosophers, ancient, medieval, and modern, who aspired not merely to change minds but to change lives. It is no accident that Spinoza calls his great metaphysical study of God and nature the Ethics. Philosophical inquiry into physics, cosmology, and logic was always in the service of the acquisition of the virtues, both moral and intellectual. The goal of philosophy was less the perfection of knowledge than the perfection of the knower.
Plato says we cannot be certain that philosophy will save us-but believing in philosophy is well worth the risk. 15 Religion means more than a life integrated around the pursuit of some ideal; a religious life is integrated around some transcendent or divine ideal. Hadot is unduly reticent about the ultimate goal of what he calls "philosophy as a way of life." 16 The reason why Plato and Aristotle aim at the perfection of the moral and intellectual virtues is so that human beings might become like a god-or at least as much like a god as is humanly possible. 17

12. For the argument and evidence that Pythagoras (or, at least, a Pythagorean) was the first person to be called philosophos, see (Moore 2020, chaps. 2-4). David S. du Toit concurs: "Dadurch wird Pythagoras zum ersten richtigen Philosophen gemacht" (1997, 237). 13. Hadot describes Socratic philosophy as "a way of life, intended to ensure a good life and thereby the salvation of the soul" (2002, 65). 14. "Ancient philosophy was also a way of life, an exercise in self-discipline, a process of self-transformation which expressed itself not only in the theories one propounded but also in the clothes one wore, the food one ate, and the way one behaved with regard to gods, animals, and other men" (Most 2003, 305). 15. "No sensible man would insist that these things [heaven and hell] are as I have described them, but I think it is fitting for a man to risk the belief-for the risk is a noble one-that this, or something like this, is true about our souls and their dwelling places" (Phaedo, 114d). 16. Hadot focuses on the disciplines of the philosophical way of life rather than on the goal, on the means rather than on the end; but he does mention the Platonic goal of "becoming like god" (2002, 262). 17. "The goal of the philosopher is to become as much like this god as a human possibly can: by devoting himself to the study of all that is divine" (Most 2003, 311).

Ancient Greek philosophy clearly reveals its origins in religious thought and practice. Ever since Aristotle, historians have distinguished an Ionian from an Italian tradition of Greek thought. 18 During the sixth century, Ionian cosmologists from Asia Minor-Thales, Anaximander, and Anaximenes-developed their thought in relation to the speculations about the origins of the cosmos in Homer and Hesiod. 19 Aristotle sometimes calls these early Ionian thinkers "philosophers"-but they did not call themselves philosophers. The first Greek thinkers to call themselves "philosophers" seem to have been Pythagoras and Empedocles, who lived in southern Italy. 20 If the Ionian physicists take Apollonian religion in Homer and Hesiod as their starting point, then Pythagoras and Empedocles were inspired by Bacchic and Orphic mystery cults active in southern Italy. If the Ionian physicists respected the gulf between gods and humans, the Italian philosophers, by contrast, claimed to have transformed themselves into gods. The Ionian physicists aspired to understand the divine causes of the cosmos; the Italian philosophers aspired to themselves become gods. By calling themselves "philosophers," Pythagoras and Empedocles created the image of a philosopher as a sage with wisdom about the meaning of life and death. Pythagoras, a follower of Orpheus, is often credited with the Orphic belief in the transmigration of souls through plant, animal, and human bodies.
21 In the case of Pythagoras and Empedocles, the souls that migrate from life to life retain their personal memories: indeed, Pythagoras was famous for remembering his prior incarnations. 22 In reaction to the

18. According to Diogenes, the Ionian tradition extends from Thales to Theophrastus while the Italian tradition extends from Pythagoras to Epicurus. See Diogenes, Lives of the Eminent Philosophers, I (Introduction), X. 19. On the Homeric and Hesiodic texts relevant to Ionian natural philosophy, see (Kirk et al. 1983, chapter 1). On the reliance of the Ionian thinkers on Homer and Hesiod, see (Kahn 1960, 119-65). 20. Philosophy arrived in southern Italy when Xenophanes and Pythagoras emigrated from Ionia; they flourished at the end of the sixth century. On Pythagoras as the first person to call himself a philosopher, see Diogenes Laertius, The Lives of the Philosophers, 1.12 and 1.13. And see Iamblichus, On the Pythagorean Way of Life, 12.58. "There are good grounds for thinking that Pythagoras introduced and made familiar a new meaning of the words philosophos and philosophia" (Guthrie 1962, 1:204). According to Leonid Zhmud, many modern scholars do credit the Pythagoreans with coining the word "philosophy" (see Zhmud 2012, 18). For a book-length argument that philosophos emerged as an accusation against the Pythagoreans, only to be adopted by them, see (Moore 2020). On whether Heraclitus (fragment 35) claimed that Pythagoras called himself a "philosopher," see (Kirk, Raven, and Schofield 1983, 218). 21. Xenophanes reports that Pythagoras stopped someone from beating a puppy on the grounds that he recognized a friend's voice in the dog's yelp; see Pythagoras, fragment 260. 22. "Pythagoras commands a unique ability to recall facts about his earlier incarnations (as probably reflected in Empedocles DK 31 B129)" (Tor 2017, 275).

complacent story of Greek philosophy as the triumph of secular reason over religious myth, some contemporary scholars have described Pythagoras and Empedocles as magicians, healers, and shamans. 23 Greek philosophy in Italy is not mysticism but something much weirder: a blend of mysticism and science, like alchemy. 24 What are we to make of a figure like Empedocles, who is the pioneer of physical chemistry but also a magician and healer? 25 In the same poem, Empedocles sets forth his theory of the elements and then proceeds to claim that he will provide his students with the powers to control the winds and resurrect the dead. 26 No wonder that he also claimed to be immortal himself. 27 The first philosophers in Italy resemble medieval alchemists more than modern scientists. Pythagoras, Empedocles, and Parmenides were not only the first thinkers to describe themselves as philosophers, but Pythagoras and Empedocles actually claimed to be gods, while Parmenides claimed to have become god-like. 28 Here we see the influence of the mystery cults and the Dionysian aspiration for union with a god. 29 Empedocles was an early follower of Pythagoras, and he insisted that Pythagoras was a divine being. 30 Aristotle tells us that Pythagoras claimed to be either a god or, at least, a being between gods and men. 31 Empedocles also claimed to be

23. See the work of Peter Kingsley: Ancient Philosophy, Mystery, and Magic (1995); In the Dark Places of Wisdom (1999). 24. Shaul Tor shows how scholars continue to assume that Greek philosophy must rest either on reason or on revelation but not both (2017, 10-8). 25.
According to Aristotle, Empedocles developed the immensely influential theory that all matter could be analyzed into the four basic elements of earth, air, fire, and water; see Empedocles's fragments 346 and 347. Compare these elements to the modern theory of the four possible states of matter: solids, gases, plasma, and liquids. According to Charles Kahn, Empedocles's elements are more abstract and general than the familiar cosmic masses; his elements are the "roots" of earth, air, fire, and water (see 1960, 124-5). 26. See Empedocles, fragment 345. 27. "Empedocles-who was plainly a magician, who considered his immortalization a fundamental prerequisite for his effectiveness as a magician, and who in his description of his own immortality comes closer than any other person or text to the references to ritual immortalization preserved on the gold plates [i.e. tablets]" (Kingsley 1995, 314). 29. "One aspect of the earliest Greek philosophy may be described as a revolt against the privileges of the gods" (Eriksen 1976, 120). 30. According to Empedocles, Pythagoras "easily saw everything of all the things that are, in ten, nay twenty lifetimes of men" (fragment 259; Empedocles fragment 129). 31. Aristotle is said to have written a treatise, "On the Pythagorean Philosophy," of which we only have fragments; see fragments 191 and 192 (Rose). Iamblichus agrees that Pythagoras was seen as an incarnation of Apollo in his On the Pythagorean Way of Life, 27.133 and 28.135.

a god. 32 Yes, the first Greeks to call themselves philosophers claimed to be divine beings. Pythagoras himself brought Ionian natural science to Italy during the late sixth century; the new Italian philosophers continued the cosmological inquiries of the Ionians but subordinated knowledge to the practical goal of becoming like a god. The Ionians, one could say, were pure scientists, while the Italians were also charismatic sages. Yet the contrast between the Ionians and the Italians is not a contrast between secular science and religious alchemy. Far from being materialists, these Ionian physicists identified their first principles with a god. 33 Aristotle noted the continuity from myth to reason in Ionian natural science: just as Homer had identified Oceanus as the origin of the gods and all terrestrial waters, so Thales identified water as the origin of all things. 34 According to Aristotle, these early physicists were right to identify their first principles with the divine. 35 Ionians and Italians differed not about the essential causal role of divinity but about our relation to the divine. I emphasize the contrast between the Ionians and the Italians because only the Italians called themselves "philosophers," and only the Italians aspired to become divine. Yes, the Ionians pioneered rational inquiry into the origins of the cosmos; but the Italians pioneered what they called

"Pythagoras himself quickly achieved the status of a daimon, intermediate between man and god, or even an incarnation of the Hyperborean Apollo" (Guthrie 1962, 1:231). 32. "Friends . . . I give you greetings. An immortal god, mortal no more, I go about honoured by all . . . by men and women, I am revered," Empedocles (fragment 399). "In Empedocles, being immortal means not existing forever, but detachment from the cycles of deaths and births and living, for a long but finite time, as a god" (Long 2019, 31). 34.
On Oceanus as the origin of all the gods, see Homer, Iliad 14.201 and 14.246; as the origin of all terrestrial waters, see Thales, fragment 85; Met., 983b 20. In other places, however, Aristotle sees less continuity between Homer and the philosophers: Homer and Empedocles, he says, have nothing in common apart from their meter (Poetics 1447b 18-19). 35. Aristotle explains why first principles in physics are reasonably described as divine: "as it is a principle, it is both uncreatable and indestructible . . . they [natural philosophers] identify it with the Divine, for it is deathless and imperishable" (Aristotle, Physics, 14-15). "Now all causes must be eternal" (Met., 1026a 17). "For the Greek philosophers, a god frequently functions as a hypothetical entity, analogous to the hypothetical entities of modern science such as black holes, neutrinos, or the unconscious" (Gerson 1990, 2). A first cause must have a wholly different nature from what it causes, otherwise it must itself have a cause. In Aristotle's terms, the first cause must be uncaused, the first mover must be unmoved-otherwise we have an infinite regress of causes and movers. If the cosmos rests on a turtle, what does the turtle rest on? Aristotle wants to avoid the answer that it is turtles all the way down.

"philosophy" as a path to salvation through knowledge of the cosmos. Socratic philosophy, as we shall see, blends Ionian cosmology with Italian aspiration to divinity. According to Aristotle, Plato was primarily a disciple of the Italian philosophers, especially the Pythagoreans. 36 That would explain why Plato was so centrally concerned with "becoming like a god." 37 Plato and Aristotle promoted this religious conception of philosophy when they both asserted that the goal of human life is "to become as much like a god as is possible." The idea that becoming like a god (homoiōsis theōi) is the goal of the philosophical life for Plato and Aristotle was a commonplace among the Platonists of antiquity, but much less often asserted by modern scholars. 38 Because of the recent revival of interest in ancient commentators on Plato and Aristotle, several scholars have noted the striking neglect of this theme, especially in English-language scholarship. 39 So unfamiliar today is this idea of becoming like a god, that some scholars even deny that it is Platonic. 40 This neglect has seriously distorted our modern understanding of the Socratic Greek philosophers. That the Socratic philosophers see happiness as a goal is well known; what is not well known is that happiness was understood by them as becoming godlike. 41

36. See Met., 987a 29-31. 37. According to the Neoplatonist Arius Didymus, Pythagoras was the first thinker to propose homoiōsis theōi as the telos of all human striving; Plato's contribution was to restrict this ambition kata to dynaton. See (Merki 1952, 1; Roloff 1970, 1). 38. On the theme of becoming like God in Middle Platonism (and beyond), see (Torri 2017a, 2017b). For Plotinus, see (Zovko 2018). For the most up-to-date bibliography of scholarship on this theme in ancient philosophy, see the dissertation of Paoli Torri (2017a, 232-48). 39. Speaking of the Platonic doctrine of "becoming as much like a god as possible," Julia Annas observes: "Given its fame in the ancient world, the almost total absence of this idea from modern interpretations and discussions of Plato is noteworthy" (Annas 1999, 53).
David Sedley agrees: "Homoiōsis theōi, universally accepted in antiquity as the official Platonic goal, does not even appear in the index to any modern study of Plato known to me . . . [yet] its influence on Plato's successors, above all Aristotle, is so far-reaching that we risk seriously misunderstanding them if we do not make due allowance for it" (2000, 309). Finally, John M. Armstrong builds on both Annas and Sedley in his "After the Ascent: Plato on Becoming Like God" (2004). A pioneer of English-language attention to this theme is Culbert Rutenber (1946). 40. Sandra Peterson claims that when Socrates says in the Theaetetus (176b-c) that we should "become as much like a god as humanly possible," he is not speaking for Plato, in part because "the recommendation to aim at becoming like God strikes me as the worst idea I have ever heard in philosophy" (2011, 74-85)-which is saying a lot! Even setting the Theaetetus aside, the idea of becoming like a god appears in several Platonic dialogues and in different contexts. 41. European scholarship never lost sight of this central Platonic theme: "Es besteht also kein Zweifel, dass die homoiōsis theōi als ein wichtiges Stück platonischen Lehre galt" (Merki 1952, 2). Dietrich Roloff concurs in his chapter "Ausblick auf die platonische Angleichung an Gott" (1970, 198-206). Salvatore Lavecchia concurs that "La homoiōsis theōi costituisce il centro e la sostanza della filosofia platonica" (2006, 1). He sees Plato's thought as culminating in a mystical union with the divine: "Il telos della filosofia platonica consiste nella piena e cosciente esperienza del divino. Il rapporto diretto con il divino pervade il pensiero e l'azione del filosofo" (2006, 287). Lavecchia's study is the only book-length treatment of our theme in Plato; his splendid book ranges from minute semantic analysis to the speculative flights of Neoplatonism. Lavecchia focuses resolutely on the metaphysical ascent to the good, drawing on thinkers ranging from Proclus to F.W.J. Schelling. 42. "One might say that the first principle of Platonic ethics is that one must 'become like a god'" (Gerson 2005, 34).

Ever since Plato and Aristotle entered the medieval universities, their overarching visions of human life were obscured when their writings were divided into separate bodies of knowledge, such as logic, metaphysics, ethics, politics, and theology. Twentieth-century analytic philosophers have remade Plato and Aristotle in their own image and likeness by dissolving their thought into a miscellaneous array of conceptual puzzles. When it comes to the philosophy of the Socratics, truly we murder to dissertate. For, as we shall see, in the thought of these philosophers-what we call metaphysics, ethics, politics, and theology-are all merely aspects or phases of one aim: to become like a god. 42 The quest to become divine helps to explain the curiously ascetic character of most philosophical ethics. Socrates insisted that we should care for our souls more than for our bodies. Plato's philosophical rulers possess no private property and renounce family life: they live like soldiers in common barracks. Aristotle argues that the supreme pleasure in life is contemplation. The Stoics were famously stoic and advised us to escape the grip of the passions. Even the Epicureans, in theory devoted to pleasure, advocated an abstemious regime designed to avoid all pain: the pleasure of wine, they insisted, is not worth the hangover. There is something downright inhuman about much philosophical ethics, which may explain why so many of the great philosophers were unmarried and childless. The whole philosophical tradition, as Nietzsche observed, seems bent on the denial of the body and the suppression of mere life. All of this makes sense only if the goal is to cultivate what is most divine in ourselves: namely, our intellects. Once we see that ancient Greek philosophy was oriented toward the question of how to become as much like a god as is humanly possible, then we see the possibility for an illuminating encounter with Christianity, which, according to the Bible, promises to make the followers of Jesus into "partakers of the divine nature." 43 In the words of Athanasius, "God became man so that we might become like God." No doubt, the Christian path to divinity is quite different from that of Greek philosophy, but the shared goal reveals a striking commonality between Athens and Jerusalem. When we compare deification in classical Greek philosophy to deification in the Bible, here are some contrasts that emerge. For the Greek philosophers, a god is an object we seek to know; for the Bible, God is a person whom we seek to encounter. For the Greek philosophers, a god is a concept of the divine; for the Bible, God is a proper name (Yahweh). For the Greek philosophers, we become like a god by assimilating our thoughts to the timeless rationality of divine order; for the Bible, we become like God by surrendering to loving union with God. In the Bible, we become gods by becoming God's (see Meconi 2008). For the Greek philosophers, the cosmos is the only image of a god; for the Bible, a human being is the only image of God. For the Greek philosophers, we become divine by contemplating the heavens; for the Bible, we become divine by hoping for salvation in the future.
v3-fos-license
2018-04-03T03:18:34.697Z
2016-06-08T00:00:00.000
2444012
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://royalsocietypublishing.org/doi/pdf/10.1098/rsos.160404", "pdf_hash": "97cb8c47c1c2981a19b5db27804ab3ae0821c4bf", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44080", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "b1c3df00a43b51ad9efba5ce25a2a9cc66a1fc87", "year": 2016 }
pes2o/s2orc
The secret life of ground squirrels: accelerometry reveals sex-dependent plasticity in above-ground activity The sexes differ in how and when they allocate energy towards reproduction, but how this influences phenotypic plasticity in daily activity patterns is unclear. Here, we use collar-mounted light loggers and triaxial accelerometers to examine factors that affect time spent above ground and overall dynamic body acceleration (ODBA), an index of activity-specific energy expenditure, across the active season of free-living, semi-fossorial arctic ground squirrels (Urocitellus parryii). We found high day-to-day variability in time spent above ground and ODBA with most of the variance explained by environmental conditions known to affect thermal exchange. In both years, females spent more time below ground compared with males during parturition and early lactation; however, this difference was fourfold larger in the second year, possibly because females were in better body condition. Daily ODBA positively correlated with time spent above ground in both sexes, but females were more active per unit time above ground. Consequently, daily ODBA did not differ between the sexes when females were early in lactation, even though females were above ground three to six fewer hours each day. Further, on top of having the additional burden of milk production, ODBA data indicate females also had fragmented rest patterns and were more active during late lactation. Our results indicate that sex differences in reproductive requirements can have a substantial influence on activity patterns, but the size of this effect may be dependent on capital resources accrued during gestation.

Figure 1. Male and female arctic ground squirrels differ in their timing of seasonally recurring life cycle events. Males (a) terminate heterothermy earlier than females, and (b) spend an average of 15-25 days below ground undergoing gonadal growth and spermatogenesis while they consume a food cache. Following emergence, (c) males establish territories and exhibit agonistic interactions with other males as they compete to (d) mate with females. Following the mating season, (e) males undergo testicular regression and have several months during which they do not exhibit agonistic interactions before they (f,g) fatten in preparation for (h) resuming hibernation; a second interval of male-male aggression occurs in late summer/autumn. In contrast, females (i) hibernate longer but (j) mate within a few days of emergence; (k) gestation lasts approximately 25 days and (l) lactation lasts approximately 28 days. Females, but not males, (m) exhibit vigilance for predators while their newly emergent young are foraging but also (n,o) fatten rapidly prior to (p) initiating hibernation in mid- to late-August. Diagram based on data from [21-23].

Females fatten and immerge into hibernation in mid- to late-August; in contrast, males fatten and cache food in September and do not immerge into their hibernacula until early- to mid-October (figure 1 [21,22]). Based on these differences in timing, we predicted males would spend more time on the surface than females in April and early May as they defend territories and search for mating opportunities, whereas the high cost of gestation, lactation and pre-hibernation fattening would result in females being more active and spending more time above ground between mid-May and mid-August.
Males, in contrast, might spend less time above ground when females are lactating and fattening as their own energy demands are low given they do not begin caching food or fattening until after females have initiated hibernation. Our predictions are based on the underlying assumption that when reproductive opportunities (for males) and energy demands (for both sexes) are low, individuals will decrease their risk of predation by spending more time below ground [3,6]. Alternatively, if being above ground serves functions beyond that of energy acquisition and seeking mating opportunities, then differences in activity levels between the sexes might be observed without a concomitant change in time above ground. Study species and area We investigated above-ground activity in two nearby populations of arctic ground squirrels living above 68°N, north of the Brooks Range, Alaska. The first site, located near Atigun River (hereafter: Atigun; 68°27 N, 149°21 W; elevation 812 m), lies approximately 20 km south of the second site, located adjacent to Toolik Lake (hereafter: Toolik; 68°38 N, 149°38 W; elevation 719 m). Population density is higher at Atigun, owing, in part, to the sandy substrate that is well-drained and suited to burrowing; suitable substrate for burrowing is more dispersed at Toolik. Earlier loss of snow cover at Atigun typically results in earlier (9-13 days) timing of spring emergence from hibernation and reproduction, relative to Toolik [22,24]. Risk of predation for adult ground squirrels at our study sites is higher on the surface where predators include red fox (Vulpes vulpes), grey wolf (Canis lupus), golden eagles (Aquila chrysaetos), northern harrier (Circus cyaneus) and short-eared owls (Asio flammeus); common ravens (Corvus corax) will predate juveniles but no attempted predation of adults has been observed. Ermine (Mustela erminea) will attack ground squirrels on the surface but may also consume juveniles, and possibly adults, in their burrows. Other ecological site characteristics are described in [22]. Following the termination of heterothermy in spring, male arctic ground squirrels remain below ground for a three to five week interval during which they consume a food cache to regain body mass lost during hibernation and undergo testicular growth and maturation (figure 1 [21,25]). Males emerge in mid-April in anticipation of female emergence, which typically occurs 11-14 days later [21,22,26]. Male-male aggression, physical confrontations and wounding are common during the mating season [27]. Males intercept and mate-guard newly emergent females that become pregnant within a few days of emergence; gestation lasts for approximately 25 days, and lactation is another approximately 28-35 days [23,28]. Unlike males, females do not cache food and, with the exception of early gestation when they continue to lose body fat, they appear to fuel their reproduction using energy gained concurrently through foraging [21]. Although DEE has not been measured in arctic ground squirrels, DEE of females in other ground squirrel species peaks during late lactation and exceeds DEE of males at any time during the active season [1]. In addition, females are delivering energy to pups as milk during lactation, but this energy is not included in the DEE measurement. Once their young have been weaned, females undergo a moult and fatten; autumn immergence occurs in August (figure 1 [21,22]). 
Fattening in arctic ground squirrels is not associated with a decrease in lean mass-specific resting metabolic rate (RMR) [29] which suggests their foraging effort is likely to be higher at this time of year relative to males that fatten and cache food later in the autumn and immerge in early-to mid-October [21,22]. Light loggers We used the light loggers to determine whether individuals were above ground (light) versus below ground (dark) from first emergence in the spring of 2014 until immediately prior to when females begin to immerge in autumn. In the second year of our study, we have data from first emergence until early August; however, we do not include data beyond 9 June in our models (see below) owing to low sample sizes. Two types of loggers were deployed; BAS model MK7290 light loggers (Biotrack Ltd, Dorset, UK), which record light levels every 2 min, and Intigeo-C56 light loggers (1 g; Migrate Technology Ltd, Cambridge, UK), which record light every minute and then save the highest measured value per 5 min interval. For further details on methods using light loggers, see [16]. We measured above-ground versus below-ground activity patterns in 23 females (13 Atigun; 10 Toolik) and 15 males (nine Atigun; six Toolik) in 2014 and 24 females (19 Atigun; five Toolik) and 17 males (10 Atigun; seven Toolik) in 2015. Not all animals were tracked throughout the entire interval, as we continuously deployed and downloaded collars; on average, we obtained 52 ± 25 (s.d.) days of data per individual during the active season in 2014 and 39 ± 12 days in 2015. Data on timing of emergence were obtained from 16 females (10 Atigun and six Toolik) and nine males (five Atigun and four Toolik) in 2014; these individuals were equipped with either collars or implanted body temperature (T b ) loggers (for details on assessing timing using T b loggers, see [23]). In 2015, we obtained emergence data for 14 females (nine Atigun and five Toolik) and nine males (four Atigun and five Toolik). We captured animals every four to six weeks throughout the active season to download light collars. Following capture, animals were anaesthetized by a 3-5 min exposure to isoflurane vapours, identified using unique ear and passive-integrated transponder tags, weighed and assessed for sex and reproductive status. Blood samples were also collected to investigate the relationship between activity and thyroid hormone level, which we report elsewhere [30]. In 2014, we measured body mass at four different stages of the breeding season including emergence (20 April-3 May), early-to mid-lactation (2 June-16 June), post-lactation (1 July-7 July) and late in female fattening (8 August-14 August). In 2015, we obtained measurements at the same time of year during emergence and mid-lactation, but not during the post-lactation and female fattening intervals. To guard against effects of handling/anaesthesia on behaviour, we excluded above/below-ground data from individuals on the days when their collars were deployed or downloaded. Accelerometers In 2014, we found little difference between the sexes in time spent above ground each day across the active season, with the exception of parturition/early lactation when females spent less time above ground (see Results). This lack of sex differences in time above ground might indicate that differences in reproductive requirements do not influence daily above-ground activity or it may indicate that time above ground is not reflective of movement or foraging activity on the surface. 
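Before turning to the accelerometer data, the following is a minimal sketch (not the authors' code) of how daily time above ground can be derived from light-logger records of the kind described above; the lux threshold, column names and 2 min sampling interval are illustrative assumptions.

```python
# Hypothetical sketch: classify each light reading as above ground (light) or
# below ground (dark) and sum the minutes spent above ground per calendar day.
import pandas as pd

def daily_minutes_above_ground(light: pd.DataFrame,
                               lux_threshold: float = 10.0,
                               interval_min: float = 2.0) -> pd.Series:
    """light: columns ['timestamp', 'lux'], one row per logged reading."""
    df = light.copy()
    df["timestamp"] = pd.to_datetime(df["timestamp"])
    df["above"] = df["lux"] > lux_threshold          # light = on the surface
    # Count above-ground readings per day and convert to minutes.
    return df.set_index("timestamp")["above"].resample("D").sum() * interval_min
```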
To differentiate between these two possibilities, we deployed collar-mounted accelerometers (less than 3 g, axy-3 loggers, TechnoSmart Europe srl., Rome, Italy) on squirrels at the Atigun site in 2015; we successfully recaptured and obtained data from six males and six females, all of which were also equipped with light loggers (collar with epoxy-mounted light logger and shrink-wrapped accelerometer: approx. 8 g, less than 5% of body mass). Accelerometers were deployed from 29 April (early gestation) to 10 June (late lactation) and were programmed to record in the X-, Y-and Z-axis once per second. For each axis, the static effect of gravity on acceleration was removed from the acceleration data by subtracting the 11 s running mean. We then calculated ODBA using the method of [18] that involves summing of the absolute values of the calculated dynamic acceleration for each axis. Although ODBA is typically calculated using measurements of acceleration at a frequency of 10 Hz or higher, sampling frequencies as low as 1 Hz provide reasonable estimates of energy expenditure, even in small animals [31]. Environmental data We measured a variety of environmental parameters known to influence operative temperature, which includes the effects of convective and radiant heat transfer, at both of our study sites using weather stations. At Atigun, we collected environmental data (incident solar radiation, ambient temperature, wind speed and rainfall), using a Hobo U30-NRC weather station (Onset Computer Corporation, Bourne, MA). For Toolik, we acquired the data on these same parameters from the Toolik Field Station Environmental Data Center (http://toolik.alaska.edu/edc/index.php) weather station. Precipitation data were collected using tipping buckets at both sites that do not record precipitation that falls as snow. However, we included a categorical variable (yes/no) to account for major snowfall events that occurred during the daylight hours; observers were in place to record such events throughout the study. For our statistical models, we calculated the average values for each environmental parameter (or sum for precipitation) between 08.00 and 21.00 each day, the timeframe when squirrels are active on the surface [16]; use of daily averages produced essentially the same results. Statistical analyses Statistical analyses were performed using SAS v. 9.4 (SAS Institute, Cary, NC); for all models, we examined normality and assessed goodness of fit, using the QQ plots. We compared body mass at different stages of the active season separately within each sex, using linear mixed models that included individual (animal identification) as a random effect. Having found a significant effect of stage, we subsequently made all pairwise comparisons using post hoc Tukey-Kramer tests. We also used mixed models to investigate whether there were differences between years within the same sex at each time of year (life-history stage); results for these analyses were the same, regardless of whether a Bonferroni correction for multiple comparisons was applied. We compared environmental conditions between sites using a paired (by day) non-parametric test, the Wilcoxon signed-rank test. For each year and site, we examined the effects of sex and environmental conditions on time spent above ground each day and mean daily ODBA using mixed models with ID included as a random effect. 
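For concreteness, here is a minimal sketch of the ODBA computation described in the accelerometer methods above (static acceleration removed from each axis with an 11 s running mean, then the absolute dynamic accelerations summed); the 1 Hz sampling rate follows the text, while the data-frame layout and the use of a centred running mean are illustrative assumptions rather than the authors' implementation.

```python
# Hypothetical sketch of the ODBA calculation outlined above.
import pandas as pd

def odba(acc: pd.DataFrame, window_s: int = 11, hz: int = 1) -> pd.Series:
    """acc: columns ['x', 'y', 'z'] with raw acceleration sampled at `hz` Hz."""
    window = window_s * hz
    static = acc[["x", "y", "z"]].rolling(window, center=True, min_periods=1).mean()
    dynamic = acc[["x", "y", "z"]] - static          # remove the gravity component
    return dynamic.abs().sum(axis=1)                 # per-sample ODBA

# Mean daily ODBA (the response used in the mixed models) could then be obtained
# by averaging the per-sample values within each calendar day.
```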
To account for nonlinearity across the breeding season, we applied penalized B-splines (hereafter: p-splines) using mixed model methodology and allowed the splines to vary by group (sex). Environmental parameters used in the models included average daily wind speed (m s−1), incident solar radiation (J cm−2 h−1) and ambient temperature (°C); we also included an interaction between wind speed and ambient temperature (i.e. a wind chill effect). Rainfall was included as a categorical variable (either 0 mm, 0-2 mm, or more than 2 mm rain per day), as were snowfall events (yes/no). For 2014, we included data from 3 May-31 July in our models, which includes gestation, lactation, moult and fattening of reproductive females; we truncated the season to include only the timeframe when no animals were hibernating and when sample sizes were sufficient for parameter estimation. For our Atigun River site, we also analysed the data between 21 April and 29 April (2014) separately, using a linear mixed model (no spline), excluding individuals that had not yet terminated hibernation; this timeframe includes the part of the season when males establish territories, seek out females for mating and mate-guard; sample sizes for non-hibernating Toolik animals during this interval were too small for a separate analysis. In 2015, we modelled only data for Atigun because sample sizes for Toolik were small across the entire active season; qualitative results from Toolik were similar to those at Atigun despite the small sample size (n = 3 and 4 for females and males, respectively). For 2015, our models for time spent above ground each day (light loggers) and mean daily ODBA (accelerometers) included data between 27 April and 9 June; sample sizes later in the season were too low. Body mass Body mass varied among life-history stages in females for both years of study (2014: F3,58 = 77.4, p < 0.0001; 2015: F1,15 = 49.98, p < 0.0001) such that females gained weight across the season (table 1).

Table 1. Mean body mass (±s.d.) of female and male arctic ground squirrels at sites located adjacent to Atigun River and Toolik Lake in 2014 and 2015; sample sizes are shown in brackets. Different letters indicate significant differences between months within sex-year groupings. Female body mass data for July and August 2015 were not included in statistical analyses owing to low sample size. Asterisks indicate differences between years within sex-month groupings.

Body mass of males also varied across the season (table 1) but did not differ significantly between years for any life-history stage. In both years, body mass of males was similar to morphometrically smaller females in early August, indicating they had not yet begun to fatten for hibernation (table 1). Mass of juveniles in August differed between years (F43,1 = 8.11; p = 0.007) but not between the sexes (F1,43 = 2.01; p = 0.16); juveniles were 51 g lighter in 2014, relative to 2015, despite being weighed 7 days later in the year, on average. Time above ground in 2014 At both sites, time spent above ground each day varied widely from one day to the next (figure 2). Although males appear to be above ground more than females in late April, the mean above-ground activity of females during this interval is influenced by females that have not yet terminated hibernation and emerged to the surface.
When hibernating individuals are removed from the dataset, there is no significant difference between males and females in their durations spent above ground from 21 to 29 April.

Figure 2. Mean duration (minutes) spent above ground each day in 2014 for female (purple) and male (green) arctic ground squirrels at our field sites at (a) Atigun River and (b) Toolik Lake. Time above ground varies substantially from one day to the next owing to the effects of environmental conditions known to affect thermal exchange including mean ambient temperature (red line), total daily precipitation (blue line), blizzard events (blue asterisks), solar radiation (not shown) and wind speed (not shown). The vertical dashed lines indicate the start and end dates for the data used in the mixed model, which excludes intervals when any individuals were hibernating.

Our model for Atigun animals that encompassed the timeframe when both sexes were active indicated that environmental conditions significantly affected time spent above ground (table 2); similar results were obtained for the smaller dataset from Toolik (electronic supplementary material, table S1). Parameter estimates for environmental variables were consistent with the prediction that squirrels reduce their time spent above ground when operative temperatures decrease; above-ground activity increased with increasing temperature, increasing solar radiation and decreasing wind speed, and was negatively affected by precipitation (table 2 and electronic supplementary material, S1). Our models also predicted that time spent above ground each day differed between the sexes, with females spending less time above ground during parturition and early lactation, but more time above ground during late lactation (Atigun p-spline: p = 0.0001, sex: p = 0.49, sex × p-spline: p < 0.0001; Toolik p-spline: p = 0.001, sex: p = 0.0001, sex × p-spline: p < 0.0001; figure 3 and electronic supplementary material, S1). The timeframe during which females at Toolik decreased their activity was later than at Atigun, consistent with the differences in spring emergence phenology between sites. However, unlike Atigun, females from Toolik spent less time above ground than males shortly after hibernation had terminated (1-5 May; electronic supplementary material, figure S1). Time above ground and overall dynamic body acceleration in 2015 Similar to 2014, day-to-day variation in time above ground at Atigun in 2015 was high, with reduced activity on wet and/or snowy days (figure 4a and table 3). Environmental conditions likewise explained much of the day-to-day variation in mean daily ODBA (table 4). However, sex differences in p-splines for ODBA across the season did not parallel differences seen in activity; female ODBA was not different from that of males during early lactation, but female ODBA was higher between 26 May and 9 June, which corresponds with late lactation (figures 4b and 5b). The difference in patterns occurs because females have higher mean daily ODBA for a given amount of time spent above ground compared with males (figure 6). Examination of within-day variation in time spent above ground and ODBA revealed that, during early lactation, females make frequent forays below ground throughout the day, presumably to warm and nurse young (figure 7). Time spent above ground by females during early lactation of the prior year (2014) was also interrupted by short intervals spent below ground (not shown); however, these intervals were much shorter compared with 2015, resulting in a smaller difference between the sexes in terms of time spent above ground per day.
Examination of ODBA indicates lactating females also exhibited sporadic bouts of below-ground activity between the hours of approximately 22.00 and approximately 08.00, when they were not on the surface.

Figure 7. Time spent above/below ground (grey line; 1, above ground; 0, below ground) and ODBA (averaged in 10 min blocks; colour line) on 19-21 May 2015 in two representative females (top panels) and two representative males (bottom panels). Bouts spent below ground (darkness) that were less than 5 min are not shown.

Discussion Life-history theory predicts that sex-based differences in the timing of reproduction and in when energy is allocated towards reproduction should be manifest in sex-specific differences in activity patterns. We predicted that trade-offs between above-ground foraging activity and risk of predation would lead to intraspecific differences within arctic ground squirrels in daily surface activity across the active season, with females spending more time above ground than males during lactation, which is energetically expensive. Although we found some evidence for this in late lactation, females tended to spend less time, not more, above ground between parturition and early lactation, presumably so they could provide maternal care to their offspring, which are born hairless and incapable of independent thermoregulation; soil surrounding burrows in the Arctic remains frozen until late summer [32]. Our results contrast with previous studies of semi-fossorial mammals which suggested that high energy demands during lactation will drive a concomitant increase in time spent above ground by females [3,6]. Instead, we found that thermal exchange conditions played the most important roles in determining time spent above ground each day, and the energy demands of lactation were offset to a large degree by greater movement (ODBA) of females while on the surface, which we assume is indicative of greater foraging effort. Although females spent less time on the surface than males during early lactation in both years, the size of this effect was fourfold higher in 2015 when conditions were drier (less rain and fewer snowfall events). Females were 12% heavier at this stage of the breeding season in 2015, relative to 2014, indicating they probably had greater lipid and/or protein stores, and we suggest they used this stored capital to increase time spent below ground, nursing and warming their newborn offspring. Thermoregulatory costs Predation has long been considered a major selective force in the evolution of behaviour and behavioural plasticity [33], yet our study demonstrates that it is not appropriate to assume behaviour necessarily reflects a simple trade-off between risk of predation, as estimated by time spent above ground, and energy acquisition. Thermoregulatory conditions are clearly an important driver of activity patterns throughout the active season in arctic ground squirrels, with more than 50% of the variation in time spent above ground and ODBA explained by day-to-day differences in weather variables. This finding is surprising in the light of a recent study on red squirrels that indicated the effects of reproductive stage can overwhelm environmental effects on DEE, as measured using the doubly labelled water approach [2]. Studies of other ground squirrels also indicate that reproductive stage is an important driver of energy expenditure, with DEE peaking during late lactation [1].
This is consistent with the higher levels of ODBA we measured for females in late lactation, given that activity-specific energy expenditure makes up a substantial portion of DEE [31]. However, it is important to note that this pattern (higher energy expenditure during late lactation) could become obscured by effects associated with daily variation in weather conditions if DEE is measured over short, 2-4 day intervals, as is typically the case for the doubly labelled water approach. The importance of environmental conditions in our study may be owing, in part, to the high variability in weather-related differences in thermal exchange that characterize the arctic summer; more work is needed to better document the impacts of weather conditions on temperate and tropical species. Our findings are consistent with recent studies indicating thermoregulatory costs can play a much greater role in driving the timing and duration of daily behaviours than previously appreciated. For example, nocturnal mice will become diurnal when challenged by cold, because activity during the warmer daytime combined with a period of rest in a buffered environment during the colder night reduces energy expenditure [34]. Similarly, we have hypothesized that it is this thermoregulatory advantage of 'daytime' activity that leads to persistent, entrained daily rhythms of physiology and behaviour during the polar day in arctic ground squirrels [15,35]. For much of our study, we found that both sexes spent similar amounts of time above ground, but females were consistently more active, as indicated by higher levels of ODBA. The difference between the sexes in ODBA may indicate that males are using risk-aversive behaviours to a greater degree than females during the interval in which their energetic demand is low. There is widespread evidence in ungulates that males will have higher vigilance at the expense of reduced foraging effort relative to females during lactation [36,37]. Small mammals are also known to engage in behaviours that mitigate the risk of predation while foraging. For example, Thorson et al. [38] found that thirteen-lined ground squirrels (Ictidomys tridecemlineatus) will abandon foraging effort at much higher food densities when the patch is located further away from escape burrows. Similarly, van der Merwe & Brown [39] showed that use of food patches by Cape ground squirrels (Xerus inauris) was governed primarily by proximity to burrows and open sight lines. In one of the few studies to investigate seasonal sex-related changes in the time budgets of a small mammal across the season, however, Ebensperger & Hurtado [40] found no evidence for seasonal changes or sex differences in time spent vigilant or foraging in degus (Octodon degus). We suggest that fine-grained data from increased use of biologging technology, combined with experimental manipulations at the level of the individual, will help shed light on the proximate and ultimate drivers of behaviour and energy expenditure in free-living small mammals [41]. This study indicates that male ground squirrels do not reduce their risk of predation outside of the mating season by spending more time below ground. However, it is not clear what these animals are doing while above ground, particularly given their ODBA is lower than females during gestation and late lactation. It is possible that time above ground serves some sort of social function, such as the establishment and/or persistence of territories [42]. 
However, arctic ground squirrels exhibit territorial behaviour and male/male aggression only in the early spring and late autumn [27], which suggests this is likely to be unimportant during the interval when females are lactating and fattening. The additional time spent above ground may be simply to loaf/bask in the sun. In small mammals, basking behaviour has been commonly reported as a means of passive rewarming from torpor [43] but it is not often considered as a thermoregulatory mechanism despite the high thermoneutral zone of many small mammals. Basking has, however, previously been reported in antelope ground squirrels [44] (Ammospermophilus leucurus) and yellow-bellied marmots [45] (Marmota flaviventris). Examination of ODBA data, however, suggests that males are continually active throughout the day while above ground (figure 5). Even if males are not basking they may still benefit from being on the surface because, at our study site, the active soil layer typically does not thaw until late July and therefore burrow temperatures remain at or below freezing for most of the summer months [32]. Thus, while a below-ground nest may provide a thermal refuge when it is cold or raining on the surface, it may be advantageous to escape the cool burrow and remain above ground when thermoneutral conditions prevail on the surface. Our results suggest day-to-day variability in time above ground and activity levels in arctic ground squirrels, in large part, reflects strategies related to behavioural thermoregulation rather than being associated with attempts to mitigate the risk of predation. However, we did find rather substantial (approx. 1 h) differences between our study sites in the amount of time spent above ground each day in 2014, and site differences in thermoregulatory conditions explained only approximately 25% of this difference. This suggests that time above ground is also likely to be affected by other factors such as population density, forage availability/quality or risk of predation. Establishing the relative importance of these other factors is likely to require field experiments designed to manipulate these parameters. Annual differences Although males and females differed in their timing of emergence from hibernation, we found relatively small sex-based differences in time spent above ground each day across life-history stages in the first year of our study. This finding was not consistent across years, however, as females spent substantially less time above ground during early lactation in the second year of our study. The drop in activity of individual females did not all occur on the same day and was not linked to changes in weather events, suggesting that it may have been driven by the need to give birth, nurse and provide thermoregulation for young during the early stages of lactation; ground squirrels are born hairless and the onset of endothermy is a gradual process during development [46]. Within-day patterns of above-ground activity during this interval were consistent with this hypothesis; females had regular intervals of above-ground activity during which movement was high, interrupted by below-ground episodes with little movement. These below-ground episodes also occurred in the first year of our study (2014; not shown), but were much shorter, such that there was a fourfold smaller difference between males and females in total time spent above ground each day. 
Although the cause of the between-year differences in above-ground activity is unknown, females were in better body condition in early lactation during the second year of our study, and we propose this may have allowed them to allocate more time to maternal behaviours, including warming and nursing their young. Juveniles born in 2015 had higher body mass in early August compared to the 2014 cohort, which may have been owing to interannual differences in maternal care. However, we cannot rule out the possibility that this difference may have been associated with better post-weaning foraging conditions or other disparities between years.

Conclusion

In hibernators, differences between the sexes in how and when energy is allocated towards reproduction have led to substantial sex-related variation in the seasonal onset and termination of heterothermy [47]. We anticipated these sex differences in reproductive requirements would also influence time spent above ground each day across the active season. Interestingly, the observed effects were context-dependent; females reduced their time above ground during early lactation, but this effect was much larger in the second year, possibly because they were in better body condition. We also found that although activity levels were correlated with time spent above ground, the relationship differed between the sexes, with females being more active per unit time above ground. Thus, in addition to the energetic costs of producing milk during late lactation, females also have higher activity-specific energy expenditure than males during this interval. The sex-dependent plasticity we observed suggests males and females may react differently to changing environmental conditions, with females less able to absorb changes that result in a further increase in foraging effort. Finally, we found that time spent above ground and ODBA (activity-specific energy expenditure) varied substantially from one day to the next for both sexes, and most of this variation was attributable to weather-driven changes in thermal exchange conditions. Our results highlight the need to consider elements beyond predator avoidance and energy acquisition in studies focused on understanding how animals use space and time.
v3-fos-license
2020-10-30T05:12:28.367Z
2020-09-30T00:00:00.000
225117152
{ "extfieldsofstudy": [ "Psychology" ], "oa_license": "CCBYSA", "oa_status": "GOLD", "oa_url": "https://ijstm.inarah.co.id/index.php/ijstm/article/download/51/36", "pdf_hash": "c2affa6a5d916ed093cfdd68a8ffec97fbafe604", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44081", "s2fieldsofstudy": [ "Education" ], "sha1": "6003d898cfc8354f87a8373d0fe2675f8f66f334", "year": 2020 }
pes2o/s2orc
Project Based Learning Model in Improving The Ability and Trust

This research is focused on providing experience to teachers, specifically in developing teaching materials using project based learning. The purpose of this study is to assist teachers in improving the effectiveness and creativity of students and in developing student confidence. Research and development was continued with a quasi-experiment in collaboration with classroom teachers. The stages of the research were: (1) preparation, (2) development, (3) experimental phase, and (4) evaluation. The subjects of this study were teachers and students from SMKN 01, SMKN 02, and SMKN 03 in Bengkulu City. Furthermore, at the implementation stage a collaborative experiment was carried out in which the implementation of learning was done in collaboration with the teacher and was designed online. The results of this study show that learning with the project model can develop students' confidence and improve students' self-creativity.

INTRODUCTION

Student mastery of economics at the school level needs to be improved. Such mastery can provide sufficient provision for life and the world of work, enabling students to solve problems relating to the economy. One such effort is the implementation of the 2013 curriculum, which emphasizes student thinking with the aim of creating Indonesian individuals who are productive, creative, innovative, and affective through the strengthening of attitudes. Accounting lessons are very important in equipping students for real life. The importance of accounting lessons requires all parties to make improvements, especially those directly related to learning activities. In addition, accounting subjects are used as a benchmark for graduation in high school. Furthermore, in the selection of an accounting college, it is one of the subjects that determines the graduation prerequisites for the social and humanities track. This shows that accounting is important for students to master. However, the reality shows that student learning outcomes in accounting subjects need to be improved. Data from the Ministry of Education and Culture (2019) show that the average score of the Computer-Based National Examination (UNBK) at the Social Sciences Department level in the 2018/2019 academic year was 46.86 on the 0-100 scale. This shows that performance in the accounting subjects tested was classified as low. An initial survey in one of the high schools in the city of Bengkulu, through an interview with one of the high school teachers, obtained several findings. These findings include: (1) students were not accustomed to finding their own concepts, (2) students still had difficulty in conveying the results of group discussions, (3) students were not accustomed to using real cases in learning, and (4) economics learning materials that can encourage students' thinking abilities and skills were not widely available. One effort that can be made to improve the quality of learning is to design lessons that facilitate students in developing their abilities and confidence in learning. These efforts use learning media in the form of teaching materials specifically designed to develop students' ability to understand the concepts of the material. However, the reality in schools shows that learning tools that can be used by teachers directly for learning, especially in developing students' abilities and confidence, are rarely found.
In this view, Project Based Learning (PBL) is one of the recommended methods to use. PBL refers to a method that allows "students to design, plan, and implement extended projects that produce outputs that are publicly exhibited such as products, publications, or presentations" (Patton, 2012: 13). Through PBL, students engage in communication aimed at completing authentic activities (project work), so that they have the opportunity to use language in a relatively natural context (Haines, 1989, as cited in Fragoulis, 2009) and to participate in meaningful activities that require the use of native languages (Fragoulis, 2009). The successful implementation of PBL has been reported by Gaer (1998), who taught speaking skills to a Southeast Asian refugee population already enrolled in early-grade ESOL (English for Speakers of Other Languages) classes; their speaking skills were enhanced through PBL. This study seeks to find out whether PBL can improve students' speaking skills or not, which aspects of speech are improved, and which speaking activities are used to improve students' speaking skills. The scope of this study covers the use of PBL in developing student confidence and improving student creativity during and after the Covid-19 pandemic. During and after the pandemic, students used online methods to communicate with teachers. The transition from face-to-face methods to online learning is not easily accepted by students, especially in remote areas of Bengkulu province, where internet access is still constrained by weak networks or signals. The researchers of this study analyzed how the project-based learning model could be accepted and could improve students' creativity in applying learning theory both during and after the Covid-19 pandemic.

II. METHODS

The research design used was quasi-experimental research. The purpose of this study is to test the practicality of the learning model given by the teacher, namely the project based learning model; with this PBL model, teachers can improve the quality of learning of senior high school students in the Accounting and Finance department. In this case, the researchers were directly involved in the classroom, from diagnosing difficulties and obstacles encountered in the learning process to formulating an action plan, implementing learning, monitoring the action process, reflecting on and refining the action process, and evaluating the results of the actions and the effectiveness of the model. This research activity was carried out through the following stages:
a. Preparation (conducting material and curriculum studies)
b. Development, at which stage three main activities were carried out, namely (1) validation, (2) practicality testing, and (3) field trial activities
c. Collaborative experiments, conducting learning trials using the teaching materials that were designed, both face-to-face and online.

Data Collecting Technique

Data collection techniques were divided into two stages. In the development research stage, data were collected by observation and by distributing validity and practicality evaluation sheets. Validity data collection was done online by contacting experts, namely economics lecturers, while the practicality test was conducted with a small group by visiting students and teachers of high schools in Bengkulu City while applying the health protocol. In the experimental stage, data collection was done by giving tests to students after the treatment was given.
In addition, observations were made to observe the implementation of the learning stages in accordance with the project based learning model.

Development Research

1. Validity Analysis

The validity estimation used in this study applies the item validity index proposed by Aiken with the following formula:

V = Σs / [n(c − 1)], with s = r − lo (1)

where V is the validity index; s is the score assigned by a rater minus the lowest score in the category (s = r − lo); r is the category score chosen by the rater; lo is the lowest score in the scoring category; c is the number of categories the rater can choose from; and n is the number of raters (Retnawati, 2014: 3).

2. Practicality Analysis

The trial data that had been obtained were then converted into qualitative data on a five-point scale. The conversion to a five-point scale was adapted from Widoyoko (2009).

a. Expert Trial Result Data

Teaching materials based on the project based learning model that had been prepared were assessed by experts in order to see the quality of the product in terms of content. The trial results show that the teaching materials in the form of Student Activity Sheets (LKS) met the validity criteria. The validity test results of the teaching materials are as follows. The results of the validators' assessment of the teaching materials show a valid category. This shows that, in theory, economics teaching materials based on the project based learning model met the validity criteria.

Results of Collaborative Experiments

Description of student learning outcomes: the schools selected for the implementation of the collaborative experiments consisted of three schools, namely (1) SMK Negeri 1 Bengkulu City, (2) SMK Negeri 2 Bengkulu City, and (3) SMA Negeri 3 Bengkulu City. In each school, one class was chosen as the experimental class, namely class X majoring in social studies. A description of student learning outcomes after learning using the project-based learning teaching materials is shown in the following table. Based on the table, it can be seen that the percentage of students classically achieving mastery (reaching the KKM) is more than 65%. In addition, the average value of the two trial classes reached the KKM value. This shows that the learning tools developed met the effectiveness criteria.

IV. CONCLUSION

After applying the project based learning model, the learning ability and confidence of students of SMKN 1, SMKN 2 and SMKN 3 Bengkulu City could be improved. This can be seen in the results of the t-test, where the calculated t value is greater than the t table value. The t-test shows the students' knowledge after the use of the teaching materials with t-count > t-table and significance < 0.005. The research, by developing teaching materials using project based learning, could also improve students' independence in the process of absorbing economics subject materials. The students were even more creative in developing innovation in the field of science being studied. Thus the results of this study accept HO and reject Ha, meaning that after the project based learning model was applied there was an improvement in the knowledge and confidence of students of SMKN 1, SMKN 2 and SMKN 3 Bengkulu City. This study is in line with the research of Susanti et al. (2020), which shows that the project based learning model is effective in terms of conceptual knowledge aspects.
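Referring back to the Aiken validity index in Equation (1) above, a minimal sketch of the calculation is given here for illustration. This is not the authors' analysis code; the function name, scale bounds, and rater scores are hypothetical.

# Hypothetical helper implementing Aiken's V from Equation (1): V = sum(s) / (n * (c - 1)), s = r - lo
def aiken_v(ratings, lo=1, c=5):
    """ratings: category scores given by each rater for one item;
    lo: lowest possible score; c: number of rating categories."""
    s = [r - lo for r in ratings]
    return sum(s) / (len(ratings) * (c - 1))

# Illustrative values only: three raters scoring one worksheet item on a 1-5 scale
print(aiken_v([4, 5, 4]))  # about 0.83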
This research has weaknesses, namely the lack of tools used in the development of teaching materials such as student worksheets (LKS), the still very poor internet connectivity in the schools, and the limited props used in developing project-based learning models to improve the abilities of high school students in Bengkulu City.

V. ACKNOWLEDGMENTS

With the completion of this research, I would like to thank RESTIDIKTI, Muhammadiyah Bengkulu University, and my team, who have helped greatly in completing all of the outcomes of this research.
v3-fos-license
2021-10-18T17:09:35.679Z
2021-10-04T00:00:00.000
244205023
{ "extfieldsofstudy": [ "Geology" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.escubed.org/articles/10.3389/esss.2021.10039/pdf", "pdf_hash": "63995ef9519c68c88215fc0f4236a870f1cf8d6d", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44082", "s2fieldsofstudy": [ "Geology" ], "sha1": "af776b2c6cea025f89eeaaca28d17d5659c49c7a", "year": 2021 }
pes2o/s2orc
From Continent to Ocean: Investigating the Multi-Element and Precious Metal Geochemistry of the Paraná-Etendeka Large Igneous Province Using Machine Learning Tools Large Igneous Provinces, and by extension the mantle plumes that generate them, are frequently associated with platinum-group element (PGE) ore deposits, yet the processes controlling the metal budget in plume-derived magmas remains debated. In this paper, we present a new whole-rock geochemical data set from the 135 Ma Paraná-Etendeka Large Igneous Province (PELIP) in the South Atlantic, which includes major and trace elements, PGE, and Au concentrations for onshore and offshore lavas from different developmental stages in the province, which underwent significant syn-magmatic continental rifting from 134 Ma onwards. The PELIP presents an opportunity to observe magma geochemistry as the continent and sub-continental lithospheric mantle (SCLM) are progressively removed from a melting environment. Here, we use an unsupervised machine learning approach (featuring the PCA, t-SNE and k-means clustering algorithms) to investigate the geochemistry of a set of (primarily basaltic) onshore and offshore PELIP lavas. We test the hypothesis that plume-derived magmas can scavenge precious metals including PGE from the SCLM and explore how metal concentrations might change the metal content in intraplate magmas throughout rifting. Onshore lavas on the Etendeka side of the PELIP are classified as the products of deep partial melts of the mantle below the African craton but without significant PGE enrichment. Offshore lavas on both continents exhibit similarities through the multi-element space to their onshore equivalents, but they again lack PGE enrichment. Of the four onshore lava types on the Paraná side of the PELIP, the Type 1 (Southern) and Type 1 (Central-Northern) localities exhibit separate PGE-enriched assemblages (Ir-Ru-Rh and Pd-Au-Cu, respectively). It follows that there is a significant asymmetry to the metallogenic character of the PELIP, with enrichment focused specifically on lavas from the South American continent edge in Paraná. This asymmetry contrasts with the North Atlantic Igneous Province (NAIP), a similar geodynamic environment in which continent-edge lavas are also PGE-enriched, albeit on both sides of the plume-rift system. We conclude that, given the similarities in PGE studies of plume-rift environments, SCLM incorporation under progressively shallowing (i.e., rifting) asthenospheric conditions promotes the acquisition of metasomatic and residual PGE-bearing minerals, boosting the magma metal budget. INTRODUCTION Mantle Plumes and Precious Metals Near-surface intrusive Ni-Cu-PGE deposits are famously located in the Noril'sk Talnakh intrusion in Siberia (e.g., Lightfoot, 2007), the Skaergaard Complex in Greenland (e.g., Andersen et al., 1998), and the Bushveld Complex in South Africa (e.g., Maier and Groves, 2011), as reviewed by Naldrett (1997) and Barnes et al. (2016). These locations share a similar geodynamic setting, with intraplate magmas interacting with thick, stable (i.e., older) continental crust, and this setting may directly influence the precious metal budget of magmas in this geodynamic context (e.g., Zhang et al., 2008;Maier and Groves, 2011). 
Mantle plumes, buoyant diapirs made from the material of the mantle, can rise to the base of the lithosphere, introducing a thermal anomaly and inducing partial melting at the base of the asthenosphere, generating intraplate magmas (e.g., Morgan, 1971;Morgan, 1972;Griffiths and Campbell, 1990;Kellogg and King, 1993;Shannon and Agee, 1998;Jellinek and Manga, 2004). Intraplate magmas are thought to be sourced from a range of asthenospheric mantle reservoirs, many of which are relatively undepleted in incompatible elements (Zindler and Hart, 1986;Hawkesworth et al., 1988;Stracke et al., 2005;Hawkesworth and Scherstén, 2007). Furthermore, studies highlight the sub-continental lithospheric mantle (SCLM) below cratons as a potential source of metals, indicating that plume-derived magmas that ascend through the Archaean lithosphere are enriched in precious metals compared to those that do not feature significant lithospheric interaction (e.g., Hawkesworth and Scherstén, 2007;Zhang et al., 2008;Bierlein et al., 2009;Begg et al., 2010;Griffin et al., 2013;Barnes et al., 2015). This is particularly evident in regions in which the SCLM has been significantly pre-enriched by successive metasomatic events throughout their tectonic development (e.g., Wilson et al., 1996;Handler and Bennett, 1999;Powell and O'Reilly, 2007;Tassara et al., 2017;Rielli et al., 2018;Holwell et al., 2019;Wang Z. et al., 2020). Our study looks at the nature of precious metal (PGE and Au) variations in plume-derived magmas, generated by the Southern Atlantic Tristan plume, alongside a large suite of major and trace element concentrations. For the past ∼135 Myr, the environment above the Tristan plume has transitioned from a continental to oceanic setting, effectively removing the availability of the SCLM reservoir during melting and/or contamination. Herein, we investigate the effects this substantial geodynamic shift had on the relative concentrations of precious metals (i.e., the metal basket) in magmas generated in the region with a focus on correlations with major and trace element variability and the geochemical processes implied therein. Flood basalts can often share chalcophile, siderophile, and incompatible trace element signatures with underlying Ni-Cu-PGE mineralised magmatic intrusions, as exhibited in the PGE-rich East Greenlandic lavas near Skaergaard (e.g., Momme et al., 2002;) and S-saturated Siberian Traps near Noril'sk (e.g., Ripley et al., 2003;Lightfoot and Keays, 2005). This connection has not been explicitly investigated in the flood basalts associated with the Tristan plume in this context, and our study aims to provide insight into the PGE mineralisation potential of magmas in the region. We use an integrated machine learning approach based on work from Lindsay et al. (2021a) to analyse our new whole-rock geochemical data suite from onshore and offshore Tristan plume lavas, which comprise the Paraná-Etendeka Large Igneous Province (PELIP). Bulk geochemical data sets are excellent candidates for exploration using machine learning algorithms (MLAs), often comprising upwards of 50 measured elemental concentrations and hundreds of samples. This method allows us to explore multi-element trends and relationships that typically go undetected using traditional geochemical two-variable plots. By comparing and contrasting our new data to the North Atlantic Igneous Province (NAIP), which shares a similar plume-rift geodynamic setting to the Tristan plume, and was the focus of the workflow in Lindsay et al. 
(2021a), we work towards a multi-element variability model for plume metallogenesis in the Atlantic Ocean. The Paraná-Etendeka Large Igneous Province The PELIP is preserved asymmetrically between (primarily) Brazil in South America, and (to a lesser extent) Namibia in Africa, with a 15 times greater volume of lavas found in South America (around 1.2 × 10 6 km 2 ; Fodor et al., 1989). The lavas range in composition from basalts through to rhyolites, and, even amongst the mafic rocks, there is geochemical evidence for multiple mantle sources (e.g., Le Roux et al., 2002;Gibson et al., 2005;Beccaluva et al., 2020). After the Tristan plume arrived at the Gondwanan lithosphere under the modern-day central Paraná region in Brazil ca. 135 Ma, it induced partial melting of the asthenosphere and erupted lava at a rate of 0.8 km 3 /year (Renne et al., 1996). These early Tristan lavas were primarily basaltic and were compositionally zoned into High-Ti in central and north-west Paraná, and Low-Ti in the south-east, the latter of which eventually progressed into bimodal mafic-felsic volcanism (Peate et al., 1992;Polo et al., 2018). Currently found at the centre of the southern Atlantic Ocean, the Tristan plume lay beneath the South American and African Plate throughout the Cretaceous and Cenozoic eras (e.g., Douglass et al., 1999;Gibson et al., 2005;Fromm et al., 2015). The plume is responsible for the eruption of the lavas of PELIP, one of the largest continental flood basalt (CFB) provinces in the world (e.g., Stewart et al., 1996;Courtillot et al., 2003;Gibson et al., 2005). After Gondwana rifted (e.g., de Wit et al., 2008;Jokat and Reents, 2017;Martins-Ferreira et al., 2020), the plume produced volcanics in the newly opened ocean, creating the seafloor Rio Grande Rise and Walvis Ridge topographic features (e.g., O'Connor and Duncan, 1990;Ussami et al., 2013) (as shown in Figure 1A). FIGURE 1 | (A) Schematic two-part cross-section of the formation of the southern Atlantic Ocean, detailing the arrival of the Tristan plume beneath Gondwanan lithosphere 135-128 Ma, melting of the asthenosphere and SCLM, incipient continental rifting beginning after 128 Ma, and the eventual formation of oceanic hotspot trails connected to the plume until 0 Ma. The sample range for this study is indicated by the red boxes (refer to Table 1). lithospheric mantle (oceanic). (B) Schematic timeline for the PELIP, showing the eruption scales for each major locality from 135 Ma to present and major tectonic events such as rifting at 128 Ma and the ridge-jump at 70 Ma. Also included are the plate position of the plume focus and the head-to-tail plume transition. (* although dated primarily between 135 and 128 Ma, the large majority of onshore lavas erupted between 135 and 134 Ma on the Paraná side). From 134 to 128 Ma , rifting initiated in the thermally thinned and weakened lithosphere (McKenzie and White, 1989;Turner et al., 1996), like north-westerly plate movement, migrated the magmatic activity to the Etendeka-Angola margin, synchronous with the eruption of the south-eastern Low-Ti lavas in Paraná (Beccaluva et al., 2020). Etendeka lavas are also sub-divided into High-Ti and Low-Ti groups and are bimodal throughout (Milner et al., 1995a;Marsh et al., 2001). 
Following significant extension and basin formation, as the South American and African continents drifted further apart, the Tristan plume continued to erupt lava on the newly-formed seafloor throughout the Cretaceous at a less productive volcanic flux given the transition from plume head to tail (Camboa and Rabinowitz, 1984;Gibson et al., 2005;Fromm et al., 2015). The plume-derived lavas formed ridges on either side of the mid-ocean ridge, the Rio Grande Rise on the Southern American plate, and the Walvis Ridge on the African plate, which connect the PELIP to the active hotspot focus (O'Connor and Duncan, 1990;O'Connor and Jokat, 2015;Homrighausen et al., 2019). Figure 1B summarises the PELIP timeline. Paraná The predominantly basaltic and basaltic-andesitic lavas of the Paraná CFB sequence erupted onto the Botucatu Formation and Proterozoic basement of the late Gondwanan continent, peaking between 135 and 134 Ma but continuing to 128 Ma (e.g., Gordon, 1947;Leinz, 1949;Leinz et al., 1966;Peate et al., 1990;Marques et al., 1999;Thiede and Vasconcelos, 2010;Rocha-Júnior et al., 2012;Hartmann et al., 2019). Previous studies on the geochemistry of the Paraná lavas, referred to more formally as the Serra Geral Formation in Brazil, classified them based on major and trace element concentrations, and isotopic variations (e.g., Fodor 1987;Peate et al., 1992). The primarily tholeiitic High-Ti, Low-Ti, and Silicic groups present throughout the creation of the CFB were initially proposed by Fodor (1987) but later given more distinct magma-type subdivisions. The High-Ti group lavas are divided into three classifications: Urubici, the highest-Ti lavas in the Serra Geral Formation, and the Pitanga and Paranapanema lavas (Peate et al., 1992). Gramado and Esmeralda lavas, the two major Low-Ti group classifications, are often associated with the silicic Chapecó and Palmas members within the bimodality of south-eastern Low-Ti lavas in Paraná (Bellieni et al., 1984;Peate et al., 1990;Peate et al., 1992;Peate, 1997). Relatively scarce in the region compared to the other main lava types, the Ribeira lavas are primarily distinguished isotopically, forming intermediate lava with properties of both High-Ti and Low-Ti sub-groups (Peate et al., 1992;Peate and Hawkesworth, 1996). Licht (2018) further classified all mafic and silicic lavas in the region into 16 types based on Si, Ti, Zr, and P concentrations. In this new scheme, Gramado and Esmeralda comprise Type 1 (Southern), Ribeira and Paranapanema comprise Type 1 (Central-Northern), Pitanga and Urubici are Type 4, and Palmas and Chapecó belong to Types 9 and 14, respectively (Licht, 2018). The remaining 12 types are significantly less common volumetrically (Licht, 2018) and do not feature in this current study. The literature supports the theory that the magma-types erupted in a synchronous manner from separate subsurface melt systems (Peate, 1997;Rämö et al., 2016;Beccaluva et al., 2020), with each system being subject to distinct differentiation processes and geochemical development (Turner et al., 1999;Rossetti et al., 2018). Etendeka The African equivalents of the Serra Geral lavas are dated consistently ∼130 ± 2 Ma (by 40 Ar/ 39 Ar mineral dating as per Renne et al., 1996;Stewart et al., 1996), coinciding with the late-stage bimodal Paraná lavas and earliest Atlantic opening (Milner et al., 1995c). 
The Etendeka Group is distributed roughly parallel to the coastline, split into two mafic regions separated into northern and southern sections by Möwe Bay (Ewart et al., 1998a). The lavas collectively contribute a much smaller volume to the total preserved throughout the PELIP, covering ∼80,000 km 2 (Erlank et al., 1984) or 6% of the preserved CFB area (Miller, 2008). The Etendeka Group Lavas, comprising basalts, basaltic-andesites, and andesites with quartz latites (Erlank et al., 1984;Marsh et al., 2001), overlie the mixed sedimentary/ volcanic rocks of the Neoproterozoic Damara Sequence and the Mesozoic Karoo sediments (Miller, 2008 and references therein). Similar to Paraná, the northern region is dominated by High-Ti signatures and the southern region by Low-Ti lavas. Silicic lavas are typically located in the south, such as in the Goboboseb Mountains (Ewart et al., 1998b), and the province has many intrusive dolerite complexes (Milner et al., 1995c). The Etendeka Group features two main geochemical subgroups in the Tafelberg and Khumib, which represent the bulk of the Type 1 and Type 4 Serra Geral magma classifications, respectively (Erlank et al., 1984;Marsh et al., 2001;Ewart, 2004;Miller, 2008). Minor Esmeralda and Kuidas groups complete the Low-Ti components of the Etendeka Group Lavas (Ewart, 2004), showing slight trace element and isotopic differences to Tafelberg. Most noteworthy is the observation that the lavas and intrusive complexes in Etendeka generally have markedly higher MgO (15-25 wt.%;Teklay et al., 2020) than their equivalents in Serra Geral (typically <8 wt.%; Peate, 1997), representing a higher degree of partial melting and/or higher temperatures during melting in the former (Teklay et al., 2020). Consequently, the eastern part of the onshore PELIP is considered a more accurate record of the Tristan plume mantle source(s) signature through the Cretaceous (Hoernle et al., 2015;Jennings et al., 2019). Rio Grande Rise and the Walvis Ridge The formation of the age-progressive seafloor volcanic ridges in the developing South Atlantic Ocean ca. 128 Ma onwards resulted from a combination of mid-ocean ridge and intraplate magmatism following the breakup of Gondwana (O'Connor and Jokat, 2015;Homrighausen et al., 2019). The submarine ridges present the unusual situation in which OIB and MORB of the same age exist adjacent to each other, as normally plume lavas intersect much older oceanic crust in hotspots (e.g., Morgan, 1971;O'Connor and Duncan, 1990;Fromm et al., 2017). The Rio Grande Rise on the South American Plate has average bathymetries of 3,000 m in the west to 1,000 m in the central region (Camboa and Rabinowitz, 1984). Drill core samples from the plateau comprise mainly tholeiitic and alkali basalts (O'Connor and Duncan, 1990;Gibson et al., 2005). Working eastwards from the continent, the trail becomes significantly less prominent after 34°S 29°W (Figure 2). At this stage, the plume focus abruptly transitioned from the centre of the mid-ocean ridge to on top of the African Plate ca. 70 Ma ( Figure 1B) (e.g., Camboa and Rabinowitz, 1984;Gibson et al., 2005;Fromm et al., 2015;O'Connor and Jokat, 2015;Graça et al., 2019). The Walvis Ridge, on the African Plate, is of a similar age to the Rio Grande Rise (Milner et al., 1995c) and forms a narrow trail westwards from the High-Ti Khumib lava exposures in Namibia (Homrighausen et al., 2019). 
Bathymetric data places it at between 2,500 and 1,600 m below sea level, with the ridge increasing in height from east to west (Der Verfasser et al., 2011). At around 29°S 2°E, the trail bifurcates into two younger ridges, the Tristan and Gough tracks (O'Connor and Duncan, 1990; Rohde J. K. et al., 2013; Rohde J. et al., 2013), eponymous with the islands associated with the current plume focus (Figure 2). A significant jump in tectonic plate movement at ∼70 Ma resulted in the splitting of the Walvis hotspot trail and the cessation of Rio Grande Rise volcanism, after which the Tristan plume focus lay to the east of the ridge for the first time, and volcanism occurred exclusively on the African plate (Rohde J. K. et al., 2013; Graça et al., 2019). Prior to the jumps, it is suggested that the two >70 Ma ridges would have been analogous with Iceland, as a contiguous volcanic ocean island formed by sustained ridge opening and addition of plume-derived lavas (Graça et al., 2019). The seafloor Walvis-Tristan-Gough hotspot track thus represents all stages of the transition from continental to modern-day oceanic plume.

Samples

Table 1 displays sample information for our new PELIP suite (n = 116). Whole rock samples were collected for onshore fossil lavas in Paraná (n = 83). Etendeka samples (n = 10) were acquired from a collaborator and represent the Low-Ti end-members of the Namibian continental flood basalts (the Tafelberg Formation). Drill core samples (n = 23) were collected from the International Ocean Discovery Program (IODP) repository in Bremen, Germany, for offshore sites along the Rio Grande Rise (n = 4, 4, and 8, respectively) and the Walvis Ridge (DSDP Leg 75-530; n = 7). Given their higher degree of geochemical variability (e.g., Peate, 1997), volumetric abundance, and their potential to reflect sensitive changes in geodynamics, the Paraná rocks formed the bulk of the sample set in this study. For the present paper, we have classified Paraná samples based on their Si-Ti-Zr-P concentrations as suggested in Licht (2018), resulting in Serra Geral Type 1 (Southern), Type 1 (Central-Northern), Type 4, and Silicic groups. To account for sample size differences (e.g., ∼1 kg for Paraná lavas to ∼50 g for IODP core sections) and potential geochemical heterogeneity within rock samples, randomly selected larger samples were cut into pieces equivalent in weight to core samples and processed individually as separate runs, i.e., samples A and B belonging to a single large original rock. The geochemistry of each was compared as a check on the processing methodology and as a comparison between samples of significantly different sizes from different localities. These A and B splits were then processed and analysed as separate samples, and their geochemistry was compared for quality control. Sample split details are available in Supplementary Material A1. Figure 2 shows the general sample localities within the context of the Southern Atlantic, and Table 1 gives a summary of sample localities, nomenclature and quantities. A full sample database with rock descriptions and physical properties is presented in Supplementary Material A2.

Laboratory Techniques

Rock samples were crushed to 1-2 mm chips using a jaw crusher following removal of weathered material and amygdales. They were then milled in a chrome-steel TEMA mill for use in bulk geochemical analysis techniques. Major elements were measured using X-ray fluorescence (XRF) (using methods from Kystol and Larsen, 1999; Tegner et al., 2009).
Dried and ignited samples were fused in a furnace in platinum crucibles with lithium borate flux, and cooled samples were moulded into beads using ammonium iodide as a wetting agent. Fused beads were analysed using a Bruker S4 Pioneer XRF spectrometer at Camborne School of Mines, alongside AGV-1, BHVO-2, BIR and DNC-1 standards. Trace elements were measured using inductively coupled plasma mass spectrometry (ICP-MS) at Camborne School of Mines (following the methodology of McDonald and Viljoen, 2006). This method used the 4-acid sample dissolution technique before element detection using the Agilent 7700 Series mass spectrometer, alongside BCR-2 and Bir-1A standards. Finally, Os, Ir, Ru, Rh, Pt, Pd, and Au were measured at Cardiff University using the Ni-S fire assay and tellurium coprecipitation technique for PGE analysis outlined by Huber et al. (2001) and McDonald and Viljoen (2006). For this method, 15 g of sample was mixed with 12 g of borax flux, 6 g Na2CO3, 0.9 g solid sulfur, 1.08 g Ni, and 1 g silica, and the mixture was melted in a furnace at 1,000°C for 1.5 h. The sulfide bead segregated from the quenched silicate matrix was dissolved in hydrochloric acid, re-precipitated with Te, filtered and diluted. The final solutions were then analysed for precious metal concentrations using ICP-MS, alongside the TBD1 and WPR1 standards. Full (anhydrous) major, trace and PGE+Au data are provided in Supplementary Material A1, with standard measurements, blank measurements (analytical and procedural), and detection limits where applicable.

TABLE 1 | Samples gathered in this study with the locality, formation (or drill core for offshore samples), rock type, and quantity (excluding duplicates). These samples are classified purely by geographic location at this stage, prior to geochemical analyses. Samples are listed west to east as per Figure 2. This represents the complete list of samples for the study, and some are omitted from final analyses as per Machine Learning Workflow. (Columns: Locality; Formation or drill core; Rock type; Quantity.)

Machine Learning Workflow

We have implemented the machine learning workflow introduced by Lindsay et al. (2021a) to examine the multi-element geochemistry of the PELIP and the Southern Atlantic plume-rift system. The workflow combines Principal Component Analysis (PCA) and t-Distributed Stochastic Neighbour Embedding (t-SNE) feature extraction methods with k-means clustering. All methods used algorithms from the scikit-learn v0.21.3 package for Python 3.7.4 (Pedregosa et al., 2011) in addition to the NumPy v1.16.5 (Harris et al., 2020) and pandas v0.25.1 (McKinney, 2010) packages. All code used is available in Supplementary Material B. Both PCA and t-SNE are effective dimensionality-reduction techniques that allow for the management of data sets with high dimensionality (i.e., a large number of variables/features). These techniques provide a means for digesting and presenting substantial amounts of variability information contained within a large geochemical suite. By reducing the dimensionality of our data down to more manageable low-dimensional components we can effectively discuss complex multi-elemental trends.
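As an illustration of this step, the following is a minimal sketch of z-score standardisation followed by PCA using scikit-learn. It is not the authors' Supplementary Material B code; the input file name and variable names are hypothetical, and only the leading principal components would be retained, as in the analyses described later in the text.

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical input table: rows = samples, columns = the 51 measured elements/oxides
df = pd.read_csv("pelip_geochemistry.csv", index_col=0)

# z-score standardisation: (x - mean) / standard deviation, computed per element
z = StandardScaler().fit_transform(df.values)

pca = PCA()                    # keep all components initially
scores = pca.fit_transform(z)  # sample coordinates in PC space (used for biplots and clustering)
loadings = pca.components_     # element loadings (eigenvectors) for each PC

# Cumulative explained variance; the leading PCs (roughly 90% of variability) are retained
print(pca.explained_variance_ratio_.cumsum()[:8])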
Principal components (PCs) generated during PCA, features that describe the directions of largest variability trends throughout the entire data structure (Pearson, 1901;Hotelling, 1933;Chang, 1983), can highlight inter-element associations, identify key drivers behind fluctuations in variable associations, and recognise multi-element enrichment signatures (Hyvärinen et al., 2001;Davis, 2002;Jolliffe, 2002). This information is generally displayed in "biplots," which illustrate both sample and element variability through the entire data set. The PCs can then be utilised in k-means clustering optimisation. In comparison to PCA, the slightly more advanced, nonlinear t-SNE method summarises the variability of all elements in a high dimensional data set by visualising this information in a single newly-generated low-dimensional space referred to as an embedding (van der Maaten and Hinton, 2008;van der Maaten, 2014). This technique uses the Kullback-Leibler Divergence (Kullback and Leibler, 1959) to preserve similarity between data points within highdimensional space and transform this information into an embedding. This creates a summary "map" of cumulative sample variability across all included elements. The positions of data points within an embedding space are strongly indicative of how geochemically similar or different they are to other data points, although it should be noted that the high-dimensional similarity of proximal points is preserved better than dissimilarity between distal points (Baramurali et al., 2019). Individual element variability, sample classifications, and multi-element clusters recognised through PCA can be contextualised within the overall data structure in embeddings to quickly and efficiently describe a wealth of information for a complete data set. We work with three user-determined "hyperparameters" in our models: perplexity (the balance between portraying local and global trends in the data), learning rate (the increments taken to find the optimal embedding layout), and maximum iterations (the number of times the algorithm will try to optimise) (van der Maaten and Hinton, 2008). We also use k-means clustering, an unsupervised MLA that clusters data into a pre-determined number of classification groups (k), based on an iterative density function that seeks out the optimum similarity between the data set across high-dimensional space (MacQueen, 1967;Howarth, 1983;Michie et al., 1994;Hastie et al., 2009;Marsland, 2009). This allows for objective classification using a multitude of variables, identification of multi-dimensional trends in the sample space and tests the similarity of previous classification clusters through the data structure. We can retrospectively assess clustering performance using the Davies-Bouldin Index (DBI) calculation (Davies and Bouldin, 1979), which highlights the statistically-optimum cluster number as a function of intra-cluster density. A low DBI is desirable and denotes a high degree of similarity between points in each cluster, and k-means models with the lowest DBI are prioritised in the study. Original data and Principal Components from PCA are used as inputs for k-means in this study, and the results are compared using DBI in pursuit of the most robust classification set-up. For an in-depth account of the MLA methodology, algorithm processes and review of the techniques within, refer to Lindsay et al. (2021a). 
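To make the t-SNE and k-means steps concrete, the following is a minimal sketch (not the authors' code) using the hyperparameters reported in the text. The variables z and scores are assumed to come from the standardisation and PCA sketch above, and the fixed random seed is an assumption added for reproducibility rather than a value stated in the study.

from sklearn.manifold import TSNE
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

# Two-dimensional embedding of the standardised data (z), using the reported hyperparameters
embedding = TSNE(
    n_components=2,
    perplexity=50,      # balance between local and global structure
    learning_rate=200,
    n_iter=5000,        # maximum iterations before convergence
    random_state=0,     # fixed seed (an assumption; not stated in the text)
).fit_transform(z)

# k-means on the retained principal components, scored with the Davies-Bouldin Index (DBI)
X = scores[:, :8]
for k in range(2, 8):
    labels = KMeans(n_clusters=k, random_state=0).fit_predict(X)
    print(k, round(davies_bouldin_score(X, labels), 3))  # lower DBI = denser, better-separated clusters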
The techniques herein require data sets without zeroes or missing (blank) or non-numerical (e.g., below detection limit) entries. For this reason, despite being measured in some localities in our study, Te, W and Os were omitted from the working data set given the high volume of missing values. Further, BaO and Cr2O3 were removed from the XRF-derived data set as the concentrations of Ba and Cr were measured separately using the higher-precision ICP-MS technique, rendering the equivalent oxide values unnecessary. Rounded-zero imputation, as recommended by Horrocks et al. (2019), was not utilised in this instance to fill gaps in the data sheet. Finally, two Rio Grande Rise samples were removed from the data set due to anomalously high (∼19 wt.%) loss-on-ignition (LOI) values, likely indicating a high degree of alteration of these basaltic samples. The final data set used in all MLA stages consisted of 116 lines (including additional data from A/B sample splits and duplicate analyses, minus omissions for the above conditions), with concentrations for 51 major element oxides and trace elements measured for each sample. The final processed data set is available in Supplementary Material A3, alongside the new PCA, t-SNE and k-means information discussed herein. Samples excluded from data analyses are not discussed further herein. Since this data set is compositional in nature, standardisation of all measurements is required before any statistical analyses (Chayes, 1960; Pawlowsky-Glahn and Egozcue, 2006; Buccianti and Grunsky, 2014). Standardisation of data via the z-score calculation (Kreyszig, 1979) was performed on the data set prior to use in algorithms. The calculation presents all measured concentrations as a function of standard deviations from the mean value for each element. This validates the use of compositional data in this workflow and allows for a more accurate comparison of mixed-unit concentrations across the different methods used by eliminating bias from, for example, high numerical values for elements measured in ppm (e.g., Ni and Cu) compared to small values measured in ppb (e.g., PGE). This process of standardisation, either using z-scores or other similar calculations, is recommended for data treatment prior to most unsupervised learning MLAs (Taylor et al., 2010; Wang J. et al., 2020), and is used prior to all analyses in this paper. Element standard deviations (with other basic statistics) and z-scores are given next to raw concentrations in Supplementary Materials A1 and A3, respectively.

TABLE 2 | Minimum, maximum, and mean (x̄) values for elemental concentrations measured for each locality in the PELIP, excluding Cr2O3, BaO, Te, and W. "-" = not measured. * = summary stats use all sample data, including duplicates, (A) and (B) splits, and samples excluded from later MLA models for reasons discussed in Machine Learning Workflow.

A selection of important anhydrous major element plots is shown in Figure 3, including an alkali-Fe2O3-MgO (AFM) ternary diagram (Figure 3A) and Harker plots with mineral fractionation vectors (Figures 3B-D); further Harker plots and classic multi-element plots normalised to McDonough and Sun (1995) are provided in Supplementary Figures C1-C3 for reference.
Supplementary Figure C4 provides alteration plots for our samples, indicating low degrees of alteration throughout the set and establishing all geochemistry herein as primary signatures (Mathieu, 2018). In Figure 3A, Etendeka and Walvis Ridge samples represent the least evolved samples (i.e., high MgO and low SiO 2 ), followed by Rio Grande Rise and Serra Geral mafic samples, which are plotted tightly in the tholeiitic peak on the AFM triangle. Silicic lavas and a single mafic lava plot in the most fractionated sector. Serra Geral lavas, classified according to Licht (2018), have MgO concentrations consistently <7 wt.% while samples from Etendeka range from 5 to 15 wt.%. Rio Grande Rise lavas are subdivided into a majority ∼5 wt.% (similar to Serra Geral) and Walvis Ridge samples are clustered at 7-8 wt.% MgO. All Serra Geral samples form a positive linear array in the MgO-CaO plot in Figure 3B, while the other localities have more subtle trends, with a mean CaO of ∼10 wt.% ( Figure 3B). Concentrations of TiO 2 groups Serra Geral lavas effectively as seen in Figure 3C one group at >3 wt.% (Type 4), one at 2-2.5 wt.% (Type 1 Central-Northern), and one at <2 wt.% (Type 1 Southern and Silicic). Etendeka, Rio Grande Rise, and Walvis Ridge have generally <2.5 wt.% TiO 2 . In Figure 3D, samples show a similar MgO-Al 2 O 3 spread to MgO-CaO, with Serra Geral lavas forming a roughly positive array with Walvis Ridge samples, and the other localities a more diffuse and slightly negative trend. An inflection in the dominant trend direction is observed in all three bivariate plots at 6-7 wt.% MgO. Principal Component Analysis Dimensionality-reduction via PCA was performed using the concentrations of the full suite of 51 elements measured for 116 samples in the PELIP data set; the method standardises concentrations using z-scores within the script (see Supplementary Material B). In this variable set-up, PC1 to PC8 account for 89.06% of all data set variability ( Figure 4A); information in higher-order PCs (i.e., PC9 and upwards above the ∼90% mark in Figure 4A) have a minimal overall contribution to variability, hence the focus on the more significant PC1-8. A summary of PCA cumulative variability and eigenvalues is provided in Figure 4A, showing PC1-2 alone accounting for 56% of data set variability. Across the biplots in Figures 4B-E, data point and eigenvector interactions indicate how strongly a sample or element varies throughout the data set as a whole, respectively. Given the high number of variables, eigenvectors with strong similarities in direction and length are grouped together to reduce redundancy; original biplots are displayed in Supplementary Material A3. Although all elements were resolved individually during the PCA process, we present eigenvectors with similar lengths and directions as groups for visual clarity in Figures 4B-E. Overall, PC7-8 relationships ( Figure 4E) do not depict strong or consistent trends compared to the other PC biplots described below, working west to east across the PELIP. Sample numbers reflect the 116 data lines carried through to PCA, t-SNE and k-means (Machine Learning Workflow; Supplementary Material A3), not those given for sample gathering or geochemical statistics in Table 1 or Table 2. t-SNE Dimensionality-reduction using t-SNE was performed using z-scores of the 51 variables. 
A parameter set-up of 5,000 maximum iterations (until convergence), learning rate of 200, and perplexity of 50 was selected through a heuristic approach and provided the most well-defined, evenly spaced cluster structure: Figure 5 displays the chosen data set embedding classified by sample locality. Embeddings produced from different perplexity set-ups that were run alongside the model ultimately chosen are shown in Supplementary D for reference. The t-SNE algorithm discriminates the PELIP geographical groups in Figure 5, with all sample types excluding the Rio Grande Rise occupying a unique sector of the embedding; the Rio Grande Rise forms three nearby sub-clusters. Serra Geral Type 4 and Silicic samples plot distinctly from the other lavas, which appear to form a continuum of multi-element compositions with smaller physical separation between each type. Etendeka and Walvis Ridge plot at the bottom of the Embedding 2. Both Serra Geral Type 1 localities occupy the centre of the embedding, with Type 1 (Southern) displaying a larger spread across the space compared to other groups. The three sub-clusters for Rio Grande Rise samples bracket Etendeka, Walvis, and Serra Geral Type 1 samples, and this appears to be a borehole control on geochemistry (i.e., Figure 2). Figure 6 displays PELIP sample data arranged in the embedding space from Figure 5, with points coloured by z-scores for 45 of the measured major and trace elements (excluding PGE and Au), to give an account of how individual elements vary through the new embedding (as per Horrocks et al., 2019;Lindsay et al., 2021a). Elements with strong bimodal distribution (i.e., with clear, distinct zones of high and low z-scores) contribute significantly to the overall data structure, whilst those with more even or scattered z-score distributions do not. For the major elements, MgO, CaO, TiO 2 , K 2 O, P 2 O 5 , and BaO concentrations exhibit strong bimodality across the data set, whilst SiO 2 , Al 2 O 3 , Fe 2 O 3 , SO 3 , Na 2 O, and MnO are more uniform or do not appear to have distinct variability trends apart from occasional outliers. Trace elements, Sc, V, Cr, Co, Ni, Cu, Se, Rb, Y, Zr, Hf, Th, and U exhibit bimodality of varying strength. Conversely, As, Sr, Nb, Ta, W, Pb, and Bi do not exhibit bimodal distribution. For the REE, the high z-scores progressively shift location from the top to the centre of Embedding 2 (y-axis) with increasing atomic number. This mimics a shift from Serra Geral Type 4 to Type 1 (Central-Northern) in Figure 5. Figure 7 displays PELIP sample data arranged in the embedding space from Figure 5, with points coloured by z-scores for the PGE (excluding Os) and Au. Compared to major and trace element embedding plots, these metals are much more "nuggety," with high concentrations being much rarer than low concentrations. A small sub-cluster of Serra Geral Type 1 (Southern) samples in the far right ( Figure 5) host very high Ir, Ru, and Rh concentrations (Figure 7). The highest concentrations of Pt, Pd, and Au are strongly concentrated with Serra Geral Type 1 (Central-Northern) on the left-hand side, particularly Pd and Au (again, comparing Figures 5, 7). 
Overall, the Serra Geral Type 4 and Silicic, Etendeka, Walvis Ridge, and Rio Grande Rise zones from Figure 5 do not host high PGE concentrations, with the two Serra Geral Type 1 lavas appearing to dominate in this aspect, reflecting their summary statistics.

k-Means Clustering

Clustering using the k-means algorithm was performed on (i) the PCs and (ii) the z-scores of the 51 variables for the 116 samples. Models were run for k-values of 2-7. By comparing the DBI for all model parameter set-ups, we find that the most robust model was clustering samples using PC1-8 (up to ∼90% of variability; Figure 4A) for all variables (as suggested by Lindsay et al., 2021a), with k = 7. However, on considering cluster formations, k = 7 only produces a further localised two-point outlier cluster with otherwise identical clusters to k = 6; hence we use the k = 6 model going forward, as it best describes global trends in the data set. All models from all parameter set-ups and their corresponding DBI are displayed in Supplementary Figures E1-E3, showing PCs to be preferential as input data. Figure 8A displays the t-SNE layout from Figure 5, again classified by locality, with the same embedding classified by the k-means clusters from the optimal set-up displayed adjacently in Figure 8B. A histogram describing the assignment of the new cluster classifications (Groups 1-6) in terms of their prevalence in each sample locality is given in Figure 8C. These six clusters define the main multi-element end-members in the lavas analysed, independent of geographic classifiers. Overall, Group 1 is exclusively found in Serra Geral Type 1 (Southern), Group 2 is exclusively found in Serra Geral Silicic, Group 4 is exclusively found in Serra Geral Type 4, and Group 6 is exclusively found in Etendeka. Group 3 appears primarily in Serra Geral Type 1 (Central-Northern) (making up the vast majority of the locality's new classification), but some Group 3 samples are also found in Type 1 (Southern) and Rio Grande Rise samples. Group 5 is found in all localities except Serra Geral Type 4 and Silicic, i.e., the lower half of Embedding 2 from Figures 5, 8B. A box-and-whisker plot detailing PGE concentrations (excluding Os due to its absence from some Serra Geral samples) per MLA-based cluster is provided in Figure 9 (box plot conventions: black circle = mean; horizontal line = median; box = interquartile range, Q1-Q3; whiskers = Q1 or Q3 ± 1.5 × interquartile range; coloured circles = outliers beyond 1.5 × interquartile range). Group 1 hosts the highest mean, median, and maximum values in the data set for Ir, Ru, and Rh, and has a comparable spread of Pt concentrations to Group 3. Mean, median and maximum Pd concentrations are significantly higher in Group 3 than in all other groups. Group 5 often hosts intermediate concentrations of each PGE between Groups 1 and 3. Groups 2, 4, and 6 are generally the least PGE-enriched clusters.

High-Dimensional Geochemistry Using Reduced Variables

An important application of this study is comparing results to the test study from Lindsay et al. (2021a) using the NAIP data set, both in terms of MLA workflow performance and in terms of the geochemical and geodynamic implications of their respective results. By running aspects of the MLA workflow using a reduced array of input variables, matching the eleven variables established in the NAIP data set from Lindsay et al.
In the original study, these eleven elements were measured in each NAIP locality and are synonymous with mafic-ultramafic magmas (e.g., Barnes et al., 2015). It would be inappropriate to directly compare interpretations from a 51-variable MLA investigation with those from an 11-variable MLA investigation. By running the PELIP data through the workflow a second time with the same 11 variables used in the NAIP, we can more accurately compare and contrast geochemical variability in the two settings. Accordingly, we re-run PCA and t-SNE for the PELIP data set using only MgO, Fe2O3, TiO2, Ni, Cu, Cr, Ir, Ru, Rh, Pt, and Pd as variables (the eleven elements included in the NAIP study). In this reduced-variable set-up, PC1-6 account for 93.10% of cumulative variability (Figure 10A) and PC7-11 are disregarded. Across the biplots in Figures 10B-D, localities are governed by the nearby loading scores and the elements tied to them:
- Serra Geral Type 4 variability is governed by TiO2 in PC1-2; TiO2, Ir, Ru, and Rh in PC3-4; and TiO2, Ir, and Rh in PC5-6.
- Serra Geral Type 1 (Central-Northern) variability is governed by Fe2O3, Cu, and Pd in PC1-2; Pt and Pd in PC3-4; and MgO, Cr, Cu, and Pd in PC5-6.
- Serra Geral Type 1 (Southern) variability is governed by Ir, Ru, Rh, and Pt in PC1-2; Ir, Rh, and Ru in PC3-4; and Fe2O3, Ni, Ru, and Pt in PC5-6.
- Serra Geral Silicic variability is governed by no particular elements in PC1-2 or PC3-4, and by TiO2 in PC5-6.
- Rio Grande Rise variability is governed by Ir, Ru, Rh, Pt, and Cu in PC1-2; TiO2, Ir, and Rh in PC3-4; and MgO, TiO2, Cu, Cr, and Pd in PC5-6.
- Walvis Ridge variability is governed by MgO, Cr, Ni, and Ir in PC1-2; Ru in PC3-4; and MgO, Cr, and Pd in PC5-6.
- Etendeka variability is governed by MgO, Cr, and Ni in PC1-2; Fe2O3, TiO2, MgO, Cr, Ni, and Cu in PC3-4; and Fe2O3 and Ni in PC5-6.
Figure 11 displays the reduced-variable embedding: (A) classified by locality (for reference) and (B) coloured by z-scored elemental concentrations for the eleven included variables. A parameter set-up of 5,000 maximum iterations (until convergence), a learning rate of 200, and a perplexity of 50 was selected as the best embedding; extra models are again provided in the Supplementary Material.

(Box-and-whisker plot key: black circle = mean; horizontal line = median; box = interquartile range (Q1-Q3); whiskers = "maximum and minimum," i.e., Q1 or Q3 ± 1.5 × interquartile range; coloured circles = outliers beyond 1.5 × interquartile range.)

MLA-Based Geochemical Interpretation
Magmatic Differentiation According to Major Elements
Figure 3 suggests that Etendeka and Walvis Ridge lavas are the most primitive in the sample suite, with the highest MgO contents (Table 2). This is particularly notable in the AFM diagram in Figure 3A, in which Etendeka/Walvis samples plot earlier in the tholeiitic fractionation trend (Kuno, 1968), indicating fractionation of minerals containing these elements from the parent magma, like olivine, pyroxene, plagioclase, and/or spinel-group minerals (in agreement with Peate, 1997). Mineral fractionation vectors (e.g., Richter and Moore, 1966; Cox et al., 1979; Rollinson, 1993 and references therein) in Figures 3B-D and Supplementary Figure C1 indicate stronger removal of olivine in Rio Grande Rise, Walvis Ridge, and Etendeka parent magmas compared to Serra Geral magmas. Serra Geral samples trend more closely with pyroxene and plagioclase removal (i.e., more advanced fractionation).
This is corroborated by Serra Geral samples plotting further along the AFM fractionation vector in Figure 3A. Table 3 summarises the major and trace element associations captured by the six MLA-based clusters by comparing their positions in Figure 8B with Figures 6, 7. These classifications reinforce, and allow us to succinctly describe, the multi-element signatures (i.e., geochemical end-members) that pervade the PCA (Figure 4) and t-SNE (Figures 5-7) interpretations for PELIP samples. We interpret the enrichment (or lack thereof) exhibited by each locality, working from west to east, herein.

Reconciling PELIP Localities With MLA-Defined Groupings
Serra Geral Type 4 samples are defined exclusively by the Group 4 end-member in Figure 8C and Table 3. They are the highest-Ti mafic lavas, show an expectedly strong affinity for TiO2 eigenvectors in Figures 4A-D, and show high-TiO2 regions in the embedding space in Figures 5, 6. This end-member also features enrichment in incompatible elements like Zr, Hf, Rb, and LREE. High-Ti Paraná basalts are typically interpreted in the literature to be from a more enriched mantle source, often connected to OIB Tristan plume signatures more than the lower-Ti South American CFBs (e.g., Peate, 1997; Rämö et al., 2016; Weit et al., 2017; Beccaluva et al., 2020; Zhou et al., 2020). Serra Geral Silicic samples are defined by the Group 2 end-member (Figure 8C; Table 3), enriched in SiO2, alkalis, and, like Type 4, incompatible elements such as Zr and Rb (reflected in Figures 4-6). Silicic magmas likely evolved and differentiated in the crust over time to generate their more silica- and alkali-rich compositions (e.g., Bellieni et al., 1984; Milner et al., 1995b; Garland et al., 1995) and may have acquired incompatible elements from the crust via extensive contamination (as per Simões et al., 2019).

FIGURE 11 | Embedding generated for the reduced PELIP data set (11 variables), coloured by z-scores for all elements used in the model. The embedding is also classified by sample locality in the top left for reference; the key is the same as used in all other plots for localities (e.g., Figures 3-5, 8, 10).

Precious (PGE and Au) and base metals (Ni, Cu, and Co) are notably depleted in both Type 4 and Silicic samples (Figures 6, 7; Table 3). Both Southern and Central-Northern Serra Geral Type 1 samples occupy adjacent spaces within the centre of the embedding from Figure 5, which supports their more recent classification as one major magma-type (e.g., Gomes et al., 2018; Licht, 2018), despite historically being separated into Low-Ti and High-Ti groups, respectively, in prior studies (Fodor, 1987; Peate et al., 1992; Peate, 1997). However, their detailed multi-element geochemistry illustrates differences within their data structure, including a moderate difference in TiO2 (Figure 3C; Table 2). MLA-based clustering assigned the Group 1 and Group 5 end-members in roughly equal proportions to Type 1 (Southern) (Figure 8C; Table 3), the former designation being exclusive to the locality. Group 1 occupies an embedding space defined by very high concentrations of U, Th, As, Pb, and Bi (Figures 6, 7), elements all commonly concentrated in the continental crust.
Of the Serra Geral mafic lavas, Type 1 (Southern) is the most crust-contaminated magma-type and acquires much of its isotopic and lithophile geochemical characteristics from mid-upper crustal differentiation processes (e.g., Peate et al., 1992; Peate and Hawkesworth, 1996; Rocha-Júnior et al., 2013; Licht, 2018; Marques et al., 2018). Group 1 also represents the lavas with the highest Ir, Ru and Rh concentrations. Serra Geral Type 1 (Central-Northern) lavas express a consistent character through the MLA workflow, associated with Fe2O3, Sc, V, Cu, Pd, and Au eigenvectors (Figure 4), and high concentrations of these elements in the far left of the t-SNE plots (on comparing Figures 6, 8B). The Group 3 end-member, found primarily in Type 1 (Central-Northern) lavas, captures the majority of the multi-element signature from this association (Figures 8A-C; Table 3). Evidently, the main difference between the two Type 1 varieties is the metal basket associated with their dominant end-members, given that their petrological definitions (Figures 3A-D) are reasonably consistent. Rio Grande Rise and Walvis Ridge samples are generally less tightly defined in PCA and t-SNE spaces, but overall plot similarly to Serra Geral Type 1, appearing between clusters of Type 1 (Southern) and Type 1 (Central-Northern) in Figure 4B and below them as linear arrays in the embedding space (Figure 5). The two ridge lava types were classified primarily into Group 5 (Figure 8C), which is found in most localities in some amount and could thus be described as a background end-member, occupying the central embedding zone of high CaO and moderate Fe2O3, Na2O, Al2O3, MnO, Sc, and V (i.e., z-score ≈ 0; Figures 4B-D, 6, 7, Table 3). This end-member is poorer in REE than those of other localities (especially Serra Geral Type 4 and Silicic), denoting a less enriched mantle source (i.e., shallower melts beneath a thinner lithosphere) and/or minimal crustal involvement (e.g., Le Roex et al., 1990; Gibson et al., 2005; Willbold and Stracke, 2006; Homrighausen et al., 2019). It should be noted that, although the Rio Grande Rise fits within similar multi-element clusters to the Walvis Ridge, there is a degree of geochemical variability between the three IODP drillholes from which the lavas were sampled (Figure 2), with t-SNE and k-means clustering clearly segregating them in Figures 5, 8A-C. Rio Grande Rise also features a significant number of points classified as Group 3, which otherwise defines Serra Geral Type 1 (Central-Northern). However, these points plot as a separate sub-cluster of Group 3 in the centre of the embedding (Figures 8A,B), indicating variability between localities within the classification (Figure 8C). Finally, Etendeka samples show a strong trend towards MgO, Ni and Cr eigenvectors (Figure 4B) and embedding sectors (Figures 5, 6), a multi-element association also identified in NAIP lavas (Hughes et al., 2015; Lindsay et al., 2021a). Etendeka is the only locality to be classified in Group 6 (Figure 8C; Table 3), a multi-element end-member that likely represents high-degree partial melts with predominant asthenospheric signatures (e.g., Gibson et al., 2005; Zhou et al., 2020).
The literature suggests that many basalts from the Etendeka CFBs were particularly enriched in MgO due to higher-temperature melting (e.g., Jennings et al., 2017; Natali et al., 2017; Jennings et al., 2019; Beccaluva et al., 2020), producing different magma compositions to the entire Paraná CFB sequence, which is less primitive in terms of MgO concentrations in comparison (Figure 3). The exclusivity of Group 6 to Etendeka, and of Groups 1-4 to Paraná and the Rio Grande Rise, suggests a strong asymmetry to geochemistry on either side of the Tristan plume-rift system, explored further in the following sections.

TABLE 3 | Summary of the defining multi-element associations in each of the six clusters generated for the data set by the k-means MLA, including precious metal contents and localities displaying these end-members, cross-referenced with Figures 4-9. SG, Serra Geral; C-N, Central-Northern; S, Southern; RGR, Rio Grande Rise; WR, Walvis Ridge; ET, Etendeka.

Onshore vs. Offshore Trends in the PELIP
The clustering of localities into distinct k-means groups indicates that, despite all PELIP lavas being attributed to melting imposed by the temperature anomaly from the Tristan plume, mutable source components exist in the corresponding parent magmas. This may reflect the rapidly shifting geodynamic conditions in the (now) PELIP during the Cretaceous period and the knock-on effect this has on melting processes. For the onshore portions, the High-/Low-Ti split is commonly thought to represent geochemical zonation in the plume (i.e., lavas produced in line with the plume head focus, and lavas that are more peripheral, respectively) (e.g., Gibson et al., 1999; Zhou et al., 2020). This led to localised heterogeneous melting processes producing distinct multi-element signatures as a net effect (as per Rämö et al., 2016). In general, Low-Ti content represents crust-contaminated magmas, whereas High-Ti content represents less contaminated intracontinental magmas (e.g., Peate et al., 1992; Rocha-Júnior et al., 2013; Natali et al., 2017; Licht, 2018; Marques et al., 2018). However, there are many other processes affecting the final signatures, including melt regime differences, mantle source composition and fractional crystallisation. All Type 1 (Southern) and Silicic onshore PELIP lavas exhibit geochemical evidence for crustal contamination, such as elevated heavy REE, Rb, U and Th, and low Nb and Ta (Pearce, 2008) (Figure 6). These signatures are not present in the volcanic ridge or active hotspot lavas, suggesting that the processes or mantle components creating these melts were only present during the plume head CFB phase (c. 135 to 128 Ma) (e.g., Ussami et al., 2013; Graça et al., 2019). As the plume focus moved offshore into the new ocean basin, this CFB geochemical signature was no longer generated, reinforcing the link between Low-Ti lavas and the continental lithosphere and SCLM (Thompson and Gibson, 1991; Gibson et al., 2005). There is a well-established asymmetric distribution in isotope and trace element geochemistry across the PELIP (e.g., Piccirillo et al., 1988; Peate and Hawkesworth, 1996; Turner et al., 1996), and this, together with greater preserved volumes of lava in Paraná and different sub-crustal thicknesses under both CFB regions (Gallagher and Hawkesworth, 1994), demands care in reconciling MLA-based interpretations with geodynamic processes.
Despite a common regional rifting event in the opening of the South Atlantic, centred roughly on the Tristan plume, the multi-element geochemistry of PELIP lavas on the South American and African plates is different in terms of the end-members identified using MLA (summarised in Figure 8) and PGE concentrations (Figure 9). In both Paraná and Etendeka, the High-Ti lava types are the products of partial melting of a mantle source enriched in incompatible elements and LREE; Rämö et al. (2016), Jennings et al. (2019), and Zhou et al. (2020) suggest that this likely represents a deeper asthenospheric source undergoing low degrees of partial melting under a thick lithospheric lid. Similar signatures are observed in Walvis Ridge and contemporary Tristan da Cunha lavas (Gibson et al., 2005). Most of these have distinctive high 87Sr/86Sr and low 143Nd/144Nd signatures, interpreted as the Gough mantle component indicative of on-axis plume melts (Le Roex et al., 1990; Weaver, 1991; Willbold and Stracke, 2006; Homrighausen et al., 2019; Zhou et al., 2020), similar to Enriched Mantle I (EMI) OIB signatures (e.g., Zindler and Hart, 1986). In Paraná, the Gough isotopic component is not present in Serra Geral lavas despite major and trace element similarities to OIB. This implies that, although representing plume-derived melts, these magmas may record a more passive plume melting influence compared to Etendeka and must at the very least involve different mantle components to begin with. Serra Geral Type 1 varieties are even further removed from this Gough isotopic component, and some studies argue that these lavas do not contain any isotopic evidence for direct plume involvement. Paraná and Etendeka CFBs could consequently represent geochemical end-members of SCLM and asthenospheric material in intracontinental magmas, respectively (e.g., Stroncik et al., 2017; Beccaluva et al., 2020).

PGE Variations Throughout the PELIP
Throughout the PELIP, the concentrations and multi-element signatures of PGE and Au are distinctive in each locality (Figure 9; Tables 2, 3). In PCA biplots (Figures 4B-E) and embeddings (Figure 7), eigenvectors and/or high concentrations for individual PGE and Au generally pair with each other, as described herein. The MLA-based multi-element end-members defined by precious metal concentrations in Reconciling PELIP Localities With MLA-Defined Groupings are: Group 1 (high Ir, Ru, Rh, and Pt), found in Serra Geral Type 1 (Southern); Group 3 (high Pt, Pd, Au, and Cu), found in Serra Geral Type 1 (Central-Northern); and, to a lesser extent, Group 5 (moderate concentrations of Ir, Ru, Rh, Pt, and Pd), found throughout the PELIP. This is further illustrated by the box-and-whisker plots in Figure 9 and the classic PGE plots in Supplementary C2-3. Groups 2, 4, and 6 have the lowest mean and minimum concentrations of Ir, Ru, Rh, and Pd and comparable Pt concentrations to Group 5. They therefore represent geochemical end-members defined by PGE depletion, further to their major/trace element patterns in Table 3. The balance between Ir-group PGE (IPGE: Os, Ir, Ru) and Pd-group PGE (PPGE: Rh, Pt, Pd) in a mafic rock generally reveals melting and mineralogical information about the parental magma (e.g., Barnes et al., 1985). We have established that Serra Geral Type 1 (Southern) lavas are enriched in IPGE and Rh (i.e., a Group 1 signature).
These metals are much more compatible during melting and require higher degrees of partial melting in order to be extracted from host phases (typically PGE alloys and platinum-group minerals; PGM) into a silicate magma (e.g., Barnes and Picard, 1993; Rehkämper et al., 1997; Alard et al., 2000; Helmy and Bragagni, 2017). PPGE (especially Pd) are more incompatible, so can be assimilated into melts from mantle phases at lower degrees of partial melting and at lower temperatures than IPGE (e.g., Holzheid et al., 2000; Maier et al., 2003; Bockrath et al., 2004; Righter et al., 2008). The Group 1 link to As and Bi variability (Table 3) emphasises the segregation of Ir, Ru, and Rh from the other PGE, given that phase relations have been established between Te-As-Bi-Sb-Sn (the TABS series) and Os-Ir-Ru-Rh in pyrrhotite in orthomagmatic sulphide systems (Mansur and Barnes, 2020). Modelling from Lindsay et al. (2021b) revealed that the increase in partial melting in the mantle below the thinnest section of pre-Atlantic Brazilian lithosphere facilitated melting not only of IPGE-bearing sulphides and PGM/TABS phases in the SCLM, but also of spinel-group minerals, in which IPGE have high partition coefficients (e.g., Capobianco and Drake, 1990; Barnes and Picard, 1993; Peach et al., 1994; Pitcher et al., 2009; Park et al., 2017). The Group 3 association of Pd, Au, and Cu found in Serra Geral Type 1 (Central-Northern) lavas could be indicative of a metasomatically-enriched mantle source. Studies suggest that the SCLM and upper asthenosphere are enriched by fluids and partial melts released from down-going oceanic crust at subduction zones, effectively re-fertilising a region of the sub-continent (e.g., Mitchell and Keays, 1981; Borisov et al., 1994; Righter et al., 2008). The metals typically associated with this process include Pd, Au, and Cu, some of the least compatible precious and base metals, which would preferentially be exsolved from a dehydrating slab (e.g., Woodland et al., 2002; Lorand et al., 2013; Tassara et al., 2017; Rielli et al., 2018; Wade et al., 2019). This process leads to the formation of base metal sulphides and accessory minerals (enriched in Pd, Au and Cu) within metasomatised SCLM. Magmas generated below this SCLM can melt these metasomatic mineral phases, incorporating their metal budget; this mechanism has been highlighted in many cratonic settings, including Brazil (e.g., Zhang et al., 2008; Maier and Groves, 2011; Rocha-Júnior et al., 2013; Holwell et al., 2019; Choi et al., 2020). Data from PELIP CFBs indicate degrees of partial melting of around 22.5% during plume head magmatism under a thinning lithosphere (Gibson et al., 2005), suitable for exhausting sulphides in melting sources. The presence of a metasomatic component in the melting SCLM has been identified as a key driver of the near-surface precious metal content of intraplate magmas (e.g., Powell and O'Reilly, 2007; Tassara et al., 2017). Shallower melting imposed by the thinning Brazilian landmass may have allowed for higher degrees of partial melting of the SCLM and thus elevated Pd, Au, and Cu in Serra Geral Type 1 (Central-Northern), in accordance with Rämö et al. (2016).
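The dependence on degree of melting invoked above can be made explicit with the standard batch-melting relation, a textbook expression (e.g., Rollinson, 1993, cited earlier) rather than a calculation from the present study; here $C_L$ and $C_0$ are the concentrations of a metal in the melt and in the source, $D$ is its bulk partition coefficient, and $F$ is the melt fraction:

\[ \frac{C_L}{C_0} \;=\; \frac{1}{D + F\,(1 - D)} \]

In this simplified framing, strongly compatible metals such as the IPGE ($D \gg 1$) remain dilute in the melt at small $F$ and only become available at high degrees of melting, or once their host sulphides, PGM and spinel are consumed, whereas the more readily mobilised Pd, Au and Cu can enter melts at lower $F$, consistent with the contrast drawn here between the Type 1 (Southern) and Type 1 (Central-Northern) metal baskets.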
The link to Sc and V variability (Table 3) may further suggest significant metasomatic activity in rocks bearing a Group 3 signature, given the sensitivity of these elements to fluid interaction and redox conditions in the mantle (e.g., Chassé et al., 2018; Woodland et al., 2018 and references therein). The more comparable Pt concentration levels across all localities (Figure 7; Table 2) and k-means clusters (Figure 9) are likely a reflection of the absence of Pt anomalies. This may indicate that there are no Pt-enriched domains within the melting environment and therefore no increased abundance of Pt-bearing PGM. Whereas Os, Ir, Ru, Rh, and Pd (plus Au) each tend to form more element-specific mineralisation phases (Lorand et al., 2008 and references therein), Pt can be found in a wider range of hosts (i.e., Mss, Cu-sulphides, and Pt-alloys), largely on account of its complex partitioning behaviour under upper mantle conditions (Lorand and Alard, 2001). Therefore, low levels of Pt could be introduced to the magma from a wide variety of locations, contrasting with the spiked Ir-Ru-Rh and Pd-Au-Cu signatures from SCLM-derived Type 1 melts. The fact that other localities in the PELIP do not have significant PGE enrichments further reinforces the ideas of Gibson et al. (2005), Rämö et al. (2016), and Beccaluva et al. (2020), who suggest that variable melting processes under Paraná and the rest of the PELIP, in response to lithospheric thinning, drive variable incorporation of SCLM metals. The PGE signatures are unlikely to be inherited from the plume; otherwise, they would be ubiquitous in melts in the region, especially in the Rio Grande Rise and Walvis Ridge, which are distal to the SCLM. Modelling from Lindsay et al. (2021b) directly attributed PGE enrichment in Serra Geral Type 1 lavas to enhanced degrees of partial melting exhausting SCLM sulphides and spinel. It should be noted that Rio Grande Rise samples were mainly clustered into Group 3 (Figures 8A-C), the end-member otherwise typifying Serra Geral Type 1 (Central-Northern) lavas and their distinctive enrichment in Pd, Au, and Cu. The Rio Grande Rise samples occupy a small population of Group 3 points in the very centre of the embedding space (Figure 5), where Pt, Pd, Au, and Cu concentrations are moderate to low (Figure 7). As such, whilst the majority of Rio Grande Rise lava major and trace element concentrations are similar enough to Type 1 (Central-Northern) to merit their classification in a single end-member cluster, the high Pd-Au-Cu association is unique to the continental lavas alone. This is an important feature in the argument for SCLM involvement in metallogenesis: whilst magmas on both sides were generated with similar multi-element signatures, the connection to the Brazilian and African cratons must drive the individual precious metal deviations expressed. This may also potentially reflect different metal inheritance or "preconditioning" on either side of the (current) Atlantic, as described by Lindsay et al. (2021b) for the Brazilian craton and by Hughes et al. (2015) across the North Atlantic craton and NAIP.

Implications for Geodynamics and Metallogenesis
With a focus on PGE, Cu, and Au, it is evident that specific stages in PELIP development are prone to higher metal concentrations than others.
The Serra Geral Type 1 (Southern) and Type 1 (Central-Northern) samples, in particular, are the major hosts for elevated PGE contents through the region: Ir-Ru-Rh-Pt for the Southern magma-type and Pt-Pd-Cu-Au for the Central-Northern magma-type. In terms of the processes controlling these differences, by combining information from all aspects of the MLA workflow and data from the associated literature, we can suggest a working model for metallogenesis in the PELIP, centred around the temporal changes to regional multi-element geochemistry and PGE concentrations in this new data set. There have been three geochemical/geodynamic developmental stages in the PELIP: 1) continental lavas in Paraná and Etendeka, featuring the zoned plume head and High-/Low-Ti bimodality; 2) Rio Grande Rise and Walvis Ridge plume tail hotspot trails, featuring the absence of crust-contaminated components; and 3) separated Tristan and Gough trails, representing two "flavours" of concurrent mantle plume magma and the cessation of South American plate volcanism (Figures 2A,B). The relative timing of eruptive sequences in Paraná is complicated given the high flux rates (Bellieni et al., 1984; Thiede and Vasconcelos, 2010; Licht, 2018; Rossetti et al., 2018), but overall, Type 4 progresses stratigraphically in most sequences to Type 1 (Central-Northern) in the north-west, whilst Type 1 (Southern) and Silicic lavas erupted in the south-east of the basin. Their geochemical characteristics (Figures 3A-D, 4B-E, 5) and the cluster designation of each magma type (Figure 8C) are distinct enough to be considered as four individual but synchronous geochemical processes between 135 and 134 Ma. Etendeka lavas erupted at a similar time to Type 1 (Southern), at around 134 Ma, and Rio Grande Rise and Walvis Ridge lavas erupted once the rifting had progressed to an oceanic setting, from 128 Ma until the Tristan-Gough bifurcation at ∼70 Ma (Figures 1A,B). The overall transition from thick to thin continental lithosphere, and finally to oceanic lithosphere, is the control on all magma geochemical variability from a single plume source, and we suggest that the PGE distribution varies as a function of this. The High-Ti lavas of Paraná were generated by small degrees of melting as the plume head decompressed beneath the thickest part of the Brazilian lithosphere, and despite being enriched in REE as per Enriched Mantle 1 (EM-1) OIB geochemistry, this melting regime (deeper asthenosphere, low-degree partial melting) did not allow PGE acquisition on Type 4 magma ascent (Rämö et al., 2016). Geographically closer to the rift zone, and on the axial sections of the plume head, Type 1 magmas were generated at shallower asthenospheric/SCLM depths and higher degrees of partial melting (Beccaluva et al., 2020), which allowed for the melting and integration of SCLM PGE-bearing sulphides into the magmas. Isotopic signatures from such melts can often be matched to cratonic xenolith compositions representative of the local SCLM (e.g., Gibson et al., 2005 and references therein). Type 1 (Central-Northern), from a slightly thicker lithosphere, exceeded melting conditions for PPGE sulphides (including Au and Cu).
Type 1 (Southern), adjacent to the incipient ocean opening and thus under a thinner lithosphere, incorporated even shallower and higher-degree partial melts and IPGE-bearing sulphides and/or spinel-group minerals to boost their concentrations, in line with the Group 1 end-member (Lindsay et al., 2021b). On the African plate, Etendeka parent magmas were extracted under similar plume and asthenospheric conditions, but evidence suggests that (similar to Serra Geral Type 4 magmas) the involvement of deeper, hotter, and more plume-derived melts in their genesis (e.g., Marsh et al., 2001; Rämö et al., 2016; Jennings et al., 2019) did not facilitate incorporation of PGE from the shallower SCLM. An alternative explanation could simply be that the SCLM components involved in Type 4 and Etendeka melts were not significantly pre-enriched, leading the resulting net multi-element geochemistry to be dominated by the asthenospheric components. From their similar chalcophile concentrations, the sulphur saturation histories of each region appear broadly equivalent (i.e., Supplementary Figure C1), so it is unlikely that this process differentiated the precious metal signatures. Rio Grande Rise and Walvis Ridge lavas feature the background/non-enriched signatures also common in the Type 1 lavas (i.e., Groups 2 and 5; Figure 8), with the lack of PGE-rich end-members reflecting the absence of SCLM input in offshore melting (e.g., Gibson et al., 1999; Hoernle et al., 2015; Zhou et al., 2020). Whilst early oceanic PELIP lavas show isotopic evidence for a small component of SCLM present in melts (e.g., high (La/Nb)n, low εNd and 206Pb/204Pb), this has been attributed to contamination by delaminated cratonic slivers from continental break-up in the sub-oceanic melt column (Douglass et al., 1999; Le Roux et al., 2002; Gibson et al., 2005). More recent enriched oceanic ridge lavas with isotopic compositions unlike Brazilian or African SCLM components (high 206Pb/204Pb and 87Sr/86Sr; Gibson et al., 2005) have instead been linked to a deep mantle source and are thus more akin to classic EM-1 signatures (e.g., Wilson, 1992). The incorporation of SCLM remnants in westernmost Rio Grande Rise magmas is potentially reflected in their multi-element resemblance to onshore Type 1 lavas in terms of k-means cluster distribution (i.e., the ubiquitous Group 5; Figure 8C), but this is restricted purely to major and trace element similarities, not precious metals. In summary, the manner in which different melts are generated directly affects the resultant magma multi-element geochemistry and, in particular, PGE concentrations, as demonstrated across the PELIP. The "sweet spot" for PGE enrichment in magmas across the region appears to be linked to the late-stage Serra Geral Type 1 CFBs. This is demonstrated to be a product of the imminent separation of South America and Africa, where lithospheric un-lidding promoted higher degrees of partial melting and therefore incorporation of a greater proportion of SCLM-derived metals (whether through direct partial melting of the SCLM itself and/or through contamination) (as per Lindsay et al., 2021b). The geochemical signature increasingly found in PELIP hotspot trail lavas through time denotes a more prevalent involvement of the Tristan-Gough plume component, accompanied by the waning of SCLM-derived components as the South Atlantic opened.
This is similar to the observation that the metal basket of the NAIP changed spatially and temporally as the North Atlantic rifted (Hughes et al., 2015; Lindsay et al., 2021a).

Comparison With the Icelandic Plume
The extra step of recreating the input parameters from the NAIP for the new PELIP set has enabled us to directly compare the two plume-rift systems. Lavas from the British sector of the NAIP (the northern geographic equivalents of Etendeka onshore lavas) have multi-element geochemistry similar to Greenlandic lavas (Lindsay et al., 2021a) and also host PGE-bearing intrusions in the Isles of Rum, Mull, and Skye (e.g., Andersen et al., 2002). Evidently, the subtle differences in plume-rift architecture significantly influence regional geochemistry, despite contemporary lavas having erupted very close together before drifting to their current geographic positions. The notable absence of the EM-1 isotopic component through much of the Serra Geral lavas plays a key role in PELIP heterogeneity by emphasising the more passive plume melting regime in comparison to the rest of the region. This may be a result of i) changes in mantle sources as a response to differential lithospheric un-lidding; ii) markedly different cratonic composition or structure in the melting environment; or iii) heterogeneous melting sources. The marked decrease in precious metal content as the plume progressed further from the continent, in both the PELIP and the NAIP, serves as substantial evidence that metal baskets are inherently linked to SCLM input in transitional continent-ocean plume settings.

CONCLUSION
A widespread account of PGE concentrations across the PELIP using our geochemical MLA workflow has illustrated the importance of continent-edge geodynamic conditions in controlling the metal basket of plume-derived intraplate melts.
1) Dimensionality reduction and multi-element analyses describe distinct signatures for each of the PELIP sample localities. PGEs are focused strongly in the Serra Geral Type 1 (Southern) and Type 1 (Central-Northern) lavas in Paraná, with each having its own set of associated metals: Ir-Ru-Rh-Pt for the former, Pt-Pd-Au-Cu for the latter. These enrichments are driven by enhanced degrees of partial melting attributed to shallowing melt foci in response to lithospheric thinning between 134 and 128 Ma.
2) PGE enrichment is asymmetrical across the PELIP, as the Etendeka equivalents of the Type 1 lavas are notably unenriched in all PGE analysed, which is attributed to the dominance of deeper, hotter but lower-degree plume-derived partial melts instead of the passive melting of SCLM material under Paraná.
3) The western and eastern oceanic ridges show multi-element similarities to their onshore equivalents when considering major and trace elements, but they lack high concentrations of precious metals, which reinforces the hypothesis that PGEs are being supplied to Serra Geral Type 1 magmas by the melting of SCLM under a progressively thinning Brazilian lithosphere.
4) On comparing the Tristan plume system to the similar Icelandic plume system (on which the test of the MLA workflow was conducted), despite recognising some multi-element associations common to both regions, PGE enrichment is different in each. The Icelandic system does not feature asymmetry in PGE enrichment on either side of the Atlantic Ridge.
Therefore, we conclude that, whilst SCLM enrichment is complex and variable in a plume-rift mineral system, there are common processes for how these metals are acquired.
5) By using unsupervised MLA, we were able to: characterise PELIP sample multi-element variability quickly, objectively, and effectively; validate our MLA-based major and trace element geochemical interpretations against the region's geodynamic history; and then overlay new PGE interpretations on top of location-specific magmatic processes.

DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

AUTHOR CONTRIBUTIONS
JL undertook sample acquisition and preparation, all lab work, machine learning analyses, plotting and writing of the manuscript. HH supervised the project and provided substantial edits to manuscript drafts. CY helped formulate the machine learning workflow via Python and provided substantial edits to manuscript drafts. JA supervised the project, assisted on sample acquisition fieldwork and provided feedback on final draft edits. IM supervised PGE lab work and provided feedback on final draft edits.

FUNDING
Funding was provided solely by the University of Exeter's Vice-Chancellor Scholarship for Post-Graduate Research, an internal funding programme for early career post-graduate researchers. We would also like to thank Holger Kuhlmann and all at the IODP Repository in Bremen for helping sort, select, and ship offshore core samples, and we also thank Matthew Head for assistance using MatLab and GMT for figure preparation.
v3-fos-license
2022-12-27T15:42:08.146Z
2016-04-07T00:00:00.000
255133848
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11306-016-1024-7.pdf", "pdf_hash": "5cb1514986ad7384b45dabc2ae7ed8ab6ad8b550", "pdf_src": "SpringerNature", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44083", "s2fieldsofstudy": [ "Medicine", "Biology" ], "sha1": "5cb1514986ad7384b45dabc2ae7ed8ab6ad8b550", "year": 2016 }
pes2o/s2orc
Metabolic fingerprints of human primary endothelial and fibroblast cells

Human primary cells originating from different locations within the body could differ greatly in their metabolic phenotypes, influencing both how they act during physiological/pathological processes and how susceptible/resistant they are to a variety of disease risk factors. A novel way to monitor cellular metabolism is through cell energetics assays, so we explored this approach with human primary cell types, as models of sclerotic disorders. In order to better understand pathophysiological processes at the cellular level, our goals were to measure metabolic pathway activities of endothelial cells and fibroblasts, and to determine their metabolic phenotype profiles. Biolog Phenotype MicroArray™ technology was used for the first time to characterize metabolic phenotypes of diverse primary cells. These colorimetric assays enable detection of utilization of 367 specific biochemical substrates by human endothelial cells from the coronary artery (HCAEC), umbilical vein (HUVEC) and normal, healthy lung fibroblasts (NHLF). Adenosine, inosine, d-mannose and dextrin were strongly utilized by all three cell types, comparable to glucose. Substrates metabolized solely by HCAEC were mannan, pectin, gelatin and prevalently tricarballylic acid. HUVEC did not show any uniquely metabolized substrates, whereas NHLF exhibited strong utilization of sugars and carboxylic acids along with amino acids and peptides. Taken together, we show for the first time that this simple energetics assay platform enables metabolic characterization of primary cells and that each of the three human cell types examined gives a unique and distinguishable profile.

Introduction
Primary cells cultured from ex vivo isolated tissues originating from different spatial sources of the same organism may show specific phenotypes, including distinct fingerprints of cellular metabolism (Pearson 2007). One way to monitor cellular metabolism is through multiplexed cell energetics assays. A new approach explored in this study was to use Biolog Phenotype MicroArray™ technology to characterize cells for metabolically and bioenergetically-linked phenotypes (Bochner et al. 2011; Putluri et al. 2011). By simultaneously measuring utilization of hundreds of substrates, this assay system can reveal unique and insightful information on metabolic pathway activities and cellular responses to nutrients, hormones, cytokines, ions, and anti-cancer agents. Systemic sclerosis (SSc) or scleroderma is a chronic, multiorgan autoimmune disease complicated by early vasculopathy and heart involvement (Boueiz et al. 2010; Hettema et al. 2008; Meune et al. 2010; Ngian et al. 2011) as well as progressive fibrosis of the skin and internal organs, including the lungs (Kuwana et al. 2004; Reiseter et al. 2015). Thus, endothelial cells and fibroblasts are among the earliest and most affected cell types in SSc. The endothelium is heterogeneous depending on its vascular bed and tissue/embryological origin. Different endothelial cell phenotypes exhibit differential responses to changes in their environment and signaling (Rosenberg and Aird 1999). Thus, human endothelial cells derived from the umbilical vein (HUVEC) have a specific and intrinsic expression pattern of inflammatory molecules as compared to human coronary artery endothelial cells (HCAEC).
Cardiovascular cells that contribute directly to atherosclerosis and cardiac dysfunction are known to exhibit metabolic flexibility, characterized by the ability to switch from generating ATP primarily through oxidative phosphorylation to using glycolysis as the predominant energy source, as well as to shift from one fuel source to another (Vallerie and Bornfeldt 2015). This flexibility occurs in endothelial cells, myeloid cells, and cardiomyocytes during normal development and physiology, and is thought to have evolved to protect cells with heightened energy demand from the increased oxidative stress that can be a result of elevated rates of oxidative phosphorylation (Galvan-Pena and O'Neill 2014). The cells shunt glucose to oxidative side branches of glycolysis (Eelen et al. 2015), to provide energy more rapidly (Galvan-Pena and O'Neill 2014), or to use the most abundant fuel available (Kolwicz et al. 2013). Studies support the concept that metabolic flexibility confers the advantage of ensuring ATP supplies for continual cardiac contraction under a variety of physiological conditions (Goodwin and Taegtmeyer 2000; Kaijser and Berglund 1992; Schonekess 1997; Wentz et al. 2010). Cardiac metabolism also undergoes a reprogramming in response to pathological hypertrophy, characterized by increased reliance on glucose metabolism and decreased fatty acid oxidation (Kolwicz et al. 2013). The question then arises whether metabolic flexibility and dysfunction in vascular and cardiac cells themselves could contribute to cardiovascular pathologies. Since patients with SSc have an elevated risk for vasculopathy, our aim was to determine what kind of metabolism HCAEC exhibit in comparison to the standardly used, though less appropriate, HUVEC model. In the past, we have shown that HCAEC have a higher responsiveness and susceptibility to cytokines, and can, as such, represent an excellent gene and protein expression model for evaluating the effects of vascular stress (Lakota et al. 2007, 2009, 2013). Therefore, it is of interest whether HCAEC have a different metabolic phenotype at baseline levels than HUVEC. SSc is also a complex chronic connective tissue disease characterized by progressive fibrosis (Laar and Varga 2015), which can lead to irreversible damage of external and internal organs, such as the skin and lungs, respectively. The major cells implicated in fibrosis are fibroblasts, whose main physiological function is in wound healing and tissue remodeling. The fibrotic component is dominant in SSc, as compared to other autoimmune diseases, and determines its prognosis and therapeutic refractoriness. Studies in cultured SSc skin fibroblasts have facilitated the identification of potential pathways involved in their profibrotic phenotype. Profibrotic fibroblasts characterized by abnormal growth and extracellular matrix synthesis may differentiate or expand from normal resident fibroblasts, and multiple factors, including signaling pathways, appear to be involved in the development and/or persistence of the SSc fibroblast phenotype (Usategui et al. 2011). We postulate that fibroblasts, when activated/stimulated under stress microenvironment conditions, could function much like cancer cells, with unbridled proliferation, due to a redirection of their metabolism by the reverse Warburg effect (Pavlides et al. 2010).
In cancer, fibroblasts/stromal cells can convert to myofibroblasts, which (using glycolysis) can produce large amounts of lactate and pyruvate, thus feeding the faster growing cancer cells (Pavlides et al. 2009). It is interesting that in SSc, where fibroblasts are also highly activated and responsible for abnormal extracellular matrix accumulation (Usategui et al. 2011), inhibition of autophagy and aerobic glycolysis was suggested as a primary strategy to reverse fibrosis (Castello-Cros et al. 2011). Thus, studying metabolic utilization of substrates is relevant in disease development, as well as for providing innovative strategies to combat disease activity. No data to our knowledge are currently available on the metabolic activity of human primary cell types using Biolog Phenotype MicroArray™ technology, and specifically on fibroblast or endothelial cells utilizing multiple substrates. So, we aimed to measure the metabolic activity of primary human endothelial and fibroblast cells on multiple carbon and energy sources, as well as to determine their specific metabolic fingerprints using the Biolog Phenotype MicroArray™ and OmniLog assay system.

2 Materials and methods
2.1 Cell culture
HCAEC, HUVEC and NHLF (normal human lung fibroblasts) are all primary cell types generated ex vivo from isolated tissue (Lonza, Inc. Basel, Switzerland). Cell characteristics as reported by the manufacturer are summarized in Supplementary Table 1. The cells were plated onto six-well plates or 75 cm2 flasks (TPP, Trasadigen, Switzerland) at 37°C in a humidified atmosphere with 5% CO2. HCAEC were grown in EGM-2 M medium containing 5% fetal bovine serum (FBS); for HUVEC we used EBM-2 medium containing 2% fetal bovine serum (both from Lonza Inc., Basel, Switzerland). NHLF were cultivated in FGM-2 medium containing 2% FBS. All media components were from Lonza Inc. (Basel, Switzerland). Prior to the experiments, cells were incubated in serum-free medium for 30 min.

Phenotype MicroArray and OmniLog optimization experiments
HCAEC and NHLF, specifically, were used for the optimization experiments. Optimal cell medium, reagent dye and seeding cell densities were tested. Two different cell media were also tested, specifically IF-M1 (with amino acids) and IF-M2 (without amino acids), with two different Redox Dye Mixes MA and MB, using initially three different seeding densities. All Phenotype MicroArray assay components were from Biolog Inc. (Hayward, CA, USA). Specifically, cells were seeded into 96-well plates (TPP, Trasadigen, Switzerland) at seeding densities of 20,000, 10,000, 5,000, 2,500, 1,250 and 625 cells/well in a complete medium MC-0 (50 µL/well). The MC-0 medium was prepared using either IF-M1 or IF-M2 medium with added Pen/Strep, L-glutamine (0.3 mM) and dialyzed FBS (at a final concentration of 5%) (all from Lonza Inc., Basel, Switzerland). Cells were then incubated for 1 h at 37°C under 5% CO2, before adding Biolog Redox Dye Mix MA (10 µL) or Biolog Redox Dye Mix MB (10 µL) and measuring tetrazolium reduction, resulting in the formation of a purple colour.

Tetrazolium colorimetric measurements
All cell types (HCAEC, HUVEC and NHLF) for experiments on Biolog Phenotype MicroArray plates PM-M1 to PM-M4 (Biolog Inc., Hayward, CA, USA) were used between passages 4 and 5. A total of 367 different carbon and energy substrates were tested, as shown in Table 1.
Cells were initially grown in 75 cm2 culture flasks to confluency, before they were resuspended at a density of 200,000 cells/ml in Biolog IF-M1 medium with Pen/Strep, L-glutamine and dialyzed FBS. Cells were then washed 2× with PBS and detached with trypsin at 37°C for 2 min. Trypsinization neutralizing solution was added and the cells were centrifuged at 350×g for 5 min at room temperature. MC-0 assay medium was added and the cell count determined on a Countess cell counter (Thermo Fisher Scientific, USA). Additional MC-0 was added to achieve a cell density of 10,000 cells/50 µL, which was seeded into the plates and incubated at 37°C under 5% CO2 for 18 h. Following the incubation, 10 µL Redox Dye Mix MB/well was added, the plate covered with sealing tape to prevent CO2 loss, and incubated at 37°C in the OmniLog (Biolog Inc., Hayward, CA, USA) for 24 h to kinetically measure tetrazolium reduction. The OmniLog measures the intensity of purple colour formation using a CCD camera to record digital images every 15 min. All experiments on PM-M1 to PM-M4 plates were done in triplicate (48 plates total).

Statistical analysis
The OmniLog PMM Data Analyses Software was used to analyze metabolic phenotypes. The data were adjusted by subtracting the average values of three negative control wells from all other samples at the end time point. Metabolic responses were normalized due to the differences in doubling times of the different cell types. OmniLog exportable data parameters were exported to Microsoft Excel and bar graphs were made. Venn diagrams and digital images were processed using CorelDraw.

Primary human cell types and growth conditions
In order to investigate cell morphology and growth of human primary endothelial (HCAEC, HUVEC) as well as fibroblast (NHLF) cells, inverted microscopy was used. Cells appeared phenotypically healthy, with endothelial cells exhibiting "cobblestone" morphology, while fibroblasts appeared typically elongated (Fig. 1). For measurement of cellular metabolism, the Biolog Phenotype MicroArray was used. The kinetics of purple formazan accumulation was measured with the OmniLog PMM System, operating as an incubator/reader that holds up to 50 microplates and reads/quantifies colour density in each well every 15 min over the user-specified time period. Prior to all measurements of cell metabolic activity, it is useful to first optimize which of the two dye mixes will work better for the cell type under study and how many cells will be needed per well to produce a given amount of reduced formazan product within an appropriate time frame. Dye preference and the optimal number of cells for determining growth in complete medium have previously been determined for 15 different human, mouse and rat cell lines (http://biolog.com/pdf/pmmlit/00P%20133rC%20Redox%20Dye%20Mix%20Brochure%20JUL07.pdf); however, no such data are currently available on optimal conditions for primary human endothelial and fibroblast cells. We therefore performed experiments which determined the specific media (M1 or M2), cell densities and Biolog redox dye mixes (MA or MB) to be used (Fig. 2). No reaction was detected using Biolog Redox Dye Mix MA. M1 medium enabled slightly stronger reactions compared to M2 medium.
The cell density of 10,000 cells/well was the first dilution to give maximal readings, while 5,000 cells/well showed considerably decreased responses. The optimal conditions for growth of HCAEC and NHLF were therefore set at 10,000 cells/well in Biolog IF-M1 medium with 1× Pen/Strep, 0.3 mM L-glutamine and 5% dialysed FBS, with Biolog Redox Dye Mix MB (10 µl/well) (Fig. 2). These conditions were used in all subsequent experiments determining cell growth on different substrates.

Utilization of substrates on PM-M1, PM-M2, PM-M3 and PM-M4 microplates
The extent to which human primary cell types from various tissues use different carbon substrates for energy has, to our knowledge, not been systematically investigated. It is known that, in addition to glucose, animal cells can metabolize and grow on other substrates (Bochner et al. 2011). In 1976, a survey of nutrient metabolism (Burns et al. 1976) examined 93 carbohydrates, of which (a) 15 supported mammalian cell proliferation and (b) 42 were toxic or growth inhibitory, and concluded that the carbohydrate preferences of cells can be utilized to biochemically distinguish between different mammalian cell lines. In the following years, culture media were developed using glucose, pyruvate, and glutamine as energy sources, which were shown to support growth of most cell types, and the interest to investigate the diversity of possible nutrients metabolized by different cell types waned. Recently, there has been renewed interest in understanding how the metabolism of different cell types could contribute to pathological pathways and help develop new approaches in nutritional therapy to support and improve treatment of a wide range of systemic disorders. Using the optimized assay conditions, we measured metabolic utilization of different substrates by HCAEC, HUVEC and NHLF in four microplates, PM-M1 to PM-M4, containing 367 substrates. Representative plates out of three independent experiments are shown (Fig. 3). Glucose is widely accepted as the primary nutrient for the maintenance and promotion of cell function, and all cell types tested produced a strong response in wells containing glucose (Fig. 3, black boxes). Background, the average absorbance measured in negative controls, was subtracted from the data used for further analysis (Fig. 3, blue boxes). Exclusive substrates are indicated in HCAEC (Fig. 3, green boxes) and NHLF (yellow boxes), while HUVEC showed no exclusively utilized substrates.

Data analysis of substrate utilization
All three cell types highly utilized adenosine, inosine, D-mannose and dextrin as substrates (to tetrazolium reduction levels above 100 mOD), comparable to glucose. Using plates PM-M1 through PM-M4, patterns of substrate utilization were observed for HCAEC, HUVEC and NHLF, as indicated in green, red and yellow bars, respectively (Fig. 4). HCAEC (green bars) prevalently utilized substrates on PM-M1 and, to a lesser extent, on PM-M3, with only a few substrates utilized on PM-M2 and PM-M4. NHLF, on the other hand, showed high and exclusive utilization of certain glutamine-associated peptides on PM-M2, PM-M3 and PM-M4, not observed in either HCAEC or HUVEC (Figs. 3, 4).
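The endpoint analysis described above (background subtraction against negative-control wells, followed by comparison of which substrates each cell type utilizes) can be illustrated with a short script. This is a schematic reconstruction: the input file, column names and the utilization cut-off are placeholders, not the exported OmniLog format or the authors' exact criteria.

```python
# Illustrative sketch of the plate analysis described above: subtract the mean
# of the negative-control wells and compare substrate sets between cell types.
import pandas as pd

# Assumed layout: one row per well with cell_type, substrate and endpoint mOD
data = pd.read_csv("omnilog_endpoints.csv")

def utilized_substrates(wells, threshold=100.0):
    """Return substrates whose background-corrected signal exceeds a cut-off."""
    background = wells.loc[wells["substrate"] == "negative control", "mOD"].mean()
    corrected = wells.groupby("substrate")["mOD"].mean() - background
    hits = set(corrected[corrected > threshold].index)
    return hits - {"negative control"}

per_type = {ct: utilized_substrates(g) for ct, g in data.groupby("cell_type")}

shared = per_type["HCAEC"] & per_type["HUVEC"] & per_type["NHLF"]
hcaec_only = per_type["HCAEC"] - per_type["HUVEC"] - per_type["NHLF"]
nhlf_only = per_type["NHLF"] - per_type["HCAEC"] - per_type["HUVEC"]

print(len(shared), "substrates utilized by all three cell types")
print("HCAEC-exclusive:", sorted(hcaec_only))
print("NHLF-exclusive:", sorted(nhlf_only))
```

With triplicate plates the same comparison would be run on averaged, normalized values; the 100 mOD figure simply echoes the "strong utilization" level mentioned above and is not the threshold used in the study.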
HCAEC were able to use a larger number of substrate nutrients for energy production, with higher tetrazolium reduction (as measured in mOD), than HUVEC (such as dextrin, glycogen, maltotriose, D-maltose, α-D-glucose-6-phosphate, mannan, D-mannose, D-turanose, D-fructose-6-phosphate, D-galactose, pectin, uridine, adenosine, inosine, D,L-α-glycerol-phosphate, tricarballylic acid, D,L-lactic acid, pyruvic acid, α-keto-glutaric acid, succinamic acid, among others) (Fig. 4).

(Figure legend, in part: ...were assayed according to the standard protocol and data collected after 24 h using the OmniLog and PM software, with subtraction of the background. Average height (mOD) of tetrazolium reduction was measured in triplicate. HCAEC, human coronary artery endothelial cells; HUVEC, human umbilical vein endothelial cells; NHLF, normal human lung fibroblasts.)

The specific substrates metabolized only by HCAEC, and not by either HUVEC or NHLF, were the polymeric substrates mannan, pectin and gelatin, the dipeptide Val-Val, and prevalently tricarballylic acid, as shown in Figs. 3 and 4. HUVEC did not show exclusive metabolism of any substrates, but they were differentiated from HCAEC by notably stronger metabolism of D-glucuronic acid. The assays indicate that cardiovascular cells, such as HCAEC, exhibit extensive metabolic flexibility, enabling them to produce energy more rapidly and to use the most abundant fuel available. These characteristics might also contribute directly to atherosclerosis and cardiac dysfunction, which are both known to be associated with changes in metabolism and obesity. NHLF produced saturation of tetrazolium reduction in glucose wells during the 24 h kinetic measurements and utilized many substrates. Specifically, dextrin, glycogen, maltotriose, D-maltose, D-mannose, D-fructose-6-phosphate, inosine, D,L-lactic acid, D-glucose-6-phosphate, α-D-glucose-6-phosphate and adenosine produced strong reductive responses. Strikingly, NHLF also showed higher metabolism of the polyols D-sorbitol and xylitol, along with L-glutamic acid, L-glutamine and all of the glutamine-containing dipeptides (Fig. 4, PM-M2, PM-M3 and PM-M4, yellow bars). The most relevant glutamine-producing tissue is the muscle, accounting for about 90% of all glutamine synthesized (Newsholme et al. 2003). Glutamine is also released, in small amounts, by the lung and the brain. In contrast, the biggest consumers of glutamine are the cells of the intestine (Brosnan 2003), the kidney cells for the acid-base balance, activated immune cells (Newsholme 2001), and many cancer cells (Yuneva et al. 2007). We show for the first time that normal healthy primary lung fibroblasts (in contrast to either HCAEC or HUVEC) have higher utilization of glutamine and glutamine-associated dipeptides. Glutamine has been shown in the past to be essential for growth of human embryonic diploid lung fibroblasts, when 10% undialysed calf serum was used as a medium (Litwin 1979). Interstitial fibroblasts within a biomatrix are exposed to varying levels of amino acids (Rishikof et al. 1998), and it was previously found that expression of α1(I) collagen mRNA was directly dependent on amino acid availability (Krupsky et al. 1997). Since type I collagen is a major structural protein in the lung known to participate in tissue fibrosis during systemic sclerosis, the effects of glutamine on its levels would be crucial to study. Rishikof et al.
reported in 1998 on the regulation of type I collagen mRNA in human embryonic lung fibroblasts and found that the addition of the combination of glutamine and cysteine increased α1(I) collagen mRNA levels 6.3-fold (Rishikof et al. 1998). Importantly, glutamine, utilized for nucleotide synthesis (Engstrom and Zetterberg 1984), also increased α1(I) collagen mRNA in dermal fibroblasts by elevating gene transcription (Bellon et al. 1995). Tetrazolium reduction in L-tryptophan wells during the 24 h measurements was detected at low levels only in NHLF. Neither HCAEC nor HUVEC metabolized L-tryptophan during this time frame. It is of interest that the metabolism of L-tryptophan is involved, via the kynurenine pathway, in patients with the eosinophilia-myalgia syndrome (Silver et al. 1992). Ingesting tryptophan may lead to a syndrome characterized by scleroderma-like skin abnormalities and fasciitis, in addition to eosinophilia. Plasma concentrations of L-kynurenine and quinolinic acid, both metabolites of tryptophan, were significantly higher in patients with active disease, as compared to patients studied after eosinophilia had resolved, or normal subjects (Silver et al. 1990). Since the lungs can be prevalently affected in late, diffuse scleroderma, it could be hypothesized that this could be due to changed metabolism of tryptophan pathways in lung fibroblasts from patients. This would be relevant to test in the future.

Comparison of exclusive and overlapping substrates
Comparison of all three cell types together revealed that 49 of the 367 tested substrates were utilized by all three cell types; however, both HCAEC and NHLF utilized several substrates uniquely, with a distinct pattern. On the other hand, HUVEC did not exhibit any exclusively utilized substrates. The specific substrates metabolized only by HCAEC, and not by HUVEC or NHLF, were mannan, pectin, gelatin, Val-Val and prevalently tricarballylic acid, as shown in the Venn diagram (Fig. 5). NHLF were the only cells utilizing D-sorbitol, xylitol, L-glutamic acid, D-glutamic acid, L-tryptophan and the dipeptides Ala-Glu, Arg-Gln, Arg-Phe (b), Asp-Gln, Asp-Trp, Ile-Trp, Leu-Trp, Phe-Gly, Pro-Arg (b), Pro-Glu, Pro-Gln, Pro-Trp, Trp-Ala and Trp-Glu for their energy production (Fig. 5). Metabolism of substrates was measured within a 24 h timeframe. In the future, it could be advantageous to measure the metabolic activity of different human primary cell lots, passages and preparations at different time periods with altered CO2 and oxygen levels in the incubator. This could be especially important for the utilization of certain substrates, such as L-glutamine, L-tryptophan, and Met- and Val-associated dipeptides, which indicated a firm delineation between the fibroblast and endothelial cells. Our results confirm that human primary cells can exhibit a unique and distinct metabolic fingerprint, which could make them differentially susceptible to environmental changes that could subsequently lead to differential pathological processes.

Concluding remarks
Taken together, this is the first report to date describing specific utilization of different carbon and energy sources by human primary endothelial and fibroblast cells on Biolog PM-M1 to PM-M4 plates. We showed that HCAEC have a higher overall metabolic rate than HUVEC, and that mannan, pectin, gelatin, Val-Val and prevalently tricarballylic acid were utilized exclusively by HCAEC.
NHLF exhibited high metabolic activity by utilizing many substrate nutrients for energy production, especially glutamine-associated dipeptides (Ala-Gln, Arg-Gln, Asp-Gln, Gln-Gly, Gln-Gln, Gln-Glu, Ile-Gln, Met-Gln, Pro-Gln, Ser-Gln, Thr-Gln, Tyr-Gln and Val-Gln). Also, the lack of utilization of mannan, pectin and gelatin in NHLF, coupled with their unique utilization of D-sorbitol, xylitol, L-tryptophan and tryptophan-associated dipeptides (Asp-Trp, Ile-Trp, Leu-Trp, Pro-Trp, Trp-Ala and Trp-Glu), points to a distinct lung fibroblast metabolic phenotype. Taken together, substrate utilization patterns on PM-M1 to PM-M4 plates provide a baseline metabolic characterization of different human primary cell types, which could be important in distinguishing early and cell-type-specific metabolic changes that could initiate pathophysiological processes. In the future, it would be crucial to perform further experiments on activated cells or cells isolated from patient tissues, in order to elucidate early cellular changes relevant to cardiac complications, as well as the process of lung fibrosis in systemic sclerosis. Compliance with ethical standards Conflict of interest The authors state no conflict of interest. Ethical approval The work was performed within the National Research Program P3-0314, with approval from the Slovene Ethical committee #99/04/15. Human and Animal Rights This article does not contain any studies with human participants or animals performed by any of the authors.
v3-fos-license
2023-09-17T15:06:57.426Z
2023-09-15T00:00:00.000
261985182
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": null, "oa_url": null, "pdf_hash": "c3fa465fdb952046f0ee9fe6520cd861357e327c", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44085", "s2fieldsofstudy": [ "Biology" ], "sha1": "bdbf718403eade8588f39d3cf5e4068d18df1c95", "year": 2023 }
pes2o/s2orc
Sulforaphane and bladder cancer: a potential novel antitumor compound Bladder cancer (BC) is a common form of urinary tract tumor, and its incidence is increasing annually. Unfortunately, an increasing number of newly diagnosed BC patients are found to have advanced or metastatic BC. Although current treatment options for BC are diverse and standardized, it is still challenging to achieve ideal curative results. However, Sulforaphane, an isothiocyanate present in cruciferous plants, has emerged as a promising anticancer agent that has shown significant efficacy against various cancers, including bladder cancer. Recent studies have demonstrated that Sulforaphane not only induces apoptosis and cell cycle arrest in BC cells, but also inhibits the growth, invasion, and metastasis of BC cells. Additionally, it can inhibit glucose metabolism in BC and shows definite effects when combined with chemotherapeutic drugs/carcinogens. Sulforaphane has also been found to exert anticancer activity and inhibit bladder cancer stem cells by mediating multiple pathways in BC, including phosphatidylinositol-3-kinase (PI3K)/protein kinase B (Akt)/mammalian target of rapamycin (mTOR), mitogen-activated protein kinase (MAPK), nuclear factor kappa-B (NF-κB), nuclear factor (erythroid-derived 2)-like 2 (Nrf2), zonula occludens-1 (ZO-1)/beta-catenin (β-Catenin), and miR-124/interleukin-6 receptor (IL-6R)/signal transducer and activator of transcription 3 (STAT3). This article provides a comprehensive review of the current evidence and molecular mechanisms of Sulforaphane against BC. Furthermore, we explore the effects of Sulforaphane on potential risk factors for BC, such as bladder outlet obstruction, and investigate the possible targets of Sulforaphane against BC using network pharmacological analysis. This review is expected to provide a new theoretical basis for future research and the development of new drugs to treat BC. Introduction Bladder cancer (BC) is the most common malignancy of the urinary tract and ranks as the 10th most common malignant tumor globally. In 2020, there were 573,278 new diagnoses and 212,536 deaths due to BC (Sung et al., 2021). According to recent estimates, there will be 91,893 new cases of BC in China in 2022, resulting in 42,973 deaths, with a significantly higher incidence rate among males compared to females (Xia et al., 2022). Approximately 90% of BC cases are urothelial carcinomas, which are associated with high incidence and postoperative recurrence rates (Lenis et al., 2020). BC can be categorized according to tumor infiltration depth and staging as non-muscle-invasive bladder cancer (NMIBC, 70%-75%, Ta, T1 and Tis) and muscle-invasive bladder cancer (MIBC, 25%-30%, T2-T4) (Seidl, 2020). However, the recurrence rate after the first surgery for NMIBC is approximately 70%, with 10%-20% of cases progressing to MIBC (Kaufman et al., 2009). Among MIBC patients, 25% have lymph node metastases at diagnosis, and approximately 5% develop distant metastases (Mokdad et al., 2017). In recent years, the proportion of newly diagnosed BC patients with advanced or metastatic disease has been rising, resulting in poor survival and prognosis. The 5-year survival rate for advanced or metastatic BC is only 15% (Mokdad et al., 2017).
Currently, a diverse range of standardized treatment options is available for BC. For all resectable non-metastatic MIBC patients, radical cystectomy and bilateral pelvic lymph node dissection are strongly recommended. However, MIBC patients still face the risk of postoperative recurrence, metastasis, and death (Patel et al., 2020; Slovacek et al., 2021). The first-line treatment for advanced or metastatic BC is platinum-based chemotherapy, which yields a median overall survival (OS) of 15 months but is associated with intolerable side effects. The median OS of advanced or metastatic BC patients who are not eligible for platinum-based chemotherapy is only 9 months (Thomas and Sonpavde, 2022). Therefore, the treatment of BC still faces immense challenges, and there is an urgent need to discover effective new anti-tumor drugs. Interestingly, with the advancement of medical technology, plant extracts have shown significant potential for anti-cancer treatment, and there is increasing interest in identifying specific plant compounds and their mechanisms of action. Due to the complexity of bladder cancer, phytochemicals offer the following advantages in the treatment and prevention of BC: i) they are abundant in food sources, with proven efficacy, safety, tolerability, practicality, low cost, minimal side effects, and easy acceptance; ii) they share a common origin with medicines and foods: many phytochemicals with anticancer activity are derived from vegetables and fruits, and dietary habits are closely linked to cancer; iii) phytochemicals have a wide range of sources, both from extraction from plants and from chemical synthesis; iv) most phytochemicals are metabolized and excreted in urine, allowing for more effective delivery to the bladder (Xia et al., 2021). Excitingly, the tissue uptake of sulforaphane is by far the highest, with urinary excretion ranging from 70% to 90% of the dose, small individual differences, and a tendency to accumulate in the bladder (Yagishita et al., 2019). Therefore, there is increasing interest in its therapeutic effects in BC. Sulforaphane [SFN, C6H11NOS2, 1-isothiocyanato-4-(methylsulfinyl)butane] (Figure 1) is an isothiocyanate found in cruciferous vegetables and is one of the hydrolytic products of glucosinolates generated by myrosinase in plants. It is abundant in cruciferous vegetables such as broccoli, cabbage, cauliflower, kale, and mustard greens and is currently the most effective plant-active substance discovered in vegetables in terms of its anti-cancer effect (Vanduchova et al., 2019). Although SFN was isolated and identified in 1959, it did not gain widespread attention until 1992, when Prochaska et al.
developed a method for screening fruit and vegetable extracts that can induce phase 2 enzymes (Prochaska et al., 1992). Current research has confirmed that sulforaphane not only has detoxifying (Alkharashi et al., 2019), antioxidant (Akbari and Namazian, 2020), anti-inflammatory (Al-Bakheit and Abu-Qatouseh, 2020), antibacterial (Deramaudt et al., 2020), immune-regulating (Mahn and Castillo, 2021), obesity-reducing (Çakır et al., 2022), cardiovascular disease-improving (Zhang et al., 2022), and diabetes-improving (Bose et al., 2020) effects, but also has significant anti-cancer effects in many cancers, such as lung cancer (Wang et al., 2004), breast cancer (Fowke et al., 2003), colon cancer (Lin et al., 1998), and prostate cancer (Joseph et al., 2004). Due to the potent pharmacological effects of SFN, more and more studies are focusing on its impact on BC cells. Therefore, this article summarizes the effects and mechanisms of SFN on bladder cancer to clarify its therapeutic potential in BC. Sulforaphane and its enantiomers and analogues The absolute bioavailability of sulforaphane is about 80%; it is easily absorbed and eliminated, and its biological half-life is only a few hours. SFN is mainly metabolized through the mercapturic acid pathway in vivo. Once ingested, it is absorbed in the jejunum and passively diffuses into the bloodstream. It first binds to the thiol groups of plasma proteins and enters the cell through the plasma membrane, then reacts with glutathione to form a conjugate, followed by a series of sequential transformations catalyzed by γ-glutamyltranspeptidase (γGT), cysteinylglycinase (CGase), and N-acetyltransferase (NAT). The conjugate is then exported by transport proteins and metabolized into mercapturic acid. Finally, these metabolites are transported to the kidneys and selectively delivered to the bladder through urinary excretion (Mennicke et al., 1987; Chung et al., 1998). Therefore, due to the metabolic characteristics of SFN and its metabolites, it tends to accumulate in specific organs, especially the bladder (Bricker et al., 2014). At 1.5, 6, and 24 h after oral administration of SFN, bladder tissue levels in rats were 189.6 ± 49.0, 51.7 ± 10.9, and 6.9 ± 0.9 nmol/g, respectively. The maximum concentration in urine was about 50-fold higher than that in plasma (Veeranki et al., 2013; Bricker et al., 2014).
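To make the "half-life of a few hours" statement concrete, a back-of-the-envelope calculation can be run on the bladder tissue concentrations quoted above (189.6, 51.7 and 6.9 nmol/g at 1.5, 6 and 24 h). The sketch below is purely illustrative: it assumes simple first-order (mono-exponential) elimination between successive time points, which is a simplification of the actual pharmacokinetics reported by Veeranki et al. and Bricker et al.

```python
import math

# Bladder tissue levels in rats after oral SFN (values quoted in the text)
# time in hours -> concentration in nmol/g
timepoints = [(1.5, 189.6), (6.0, 51.7), (24.0, 6.9)]

# Assume first-order elimination between successive time points:
# C(t2) = C(t1) * exp(-k * (t2 - t1))  =>  k = ln(C1/C2) / (t2 - t1)
for (t1, c1), (t2, c2) in zip(timepoints, timepoints[1:]):
    k = math.log(c1 / c2) / (t2 - t1)      # apparent elimination rate constant (1/h)
    half_life = math.log(2) / k            # apparent half-life (h)
    print(f"{t1:>4.1f} h -> {t2:>4.1f} h: k = {k:.3f} /h, t1/2 = {half_life:.1f} h")
```

Running this gives apparent half-lives of roughly 2-6 h depending on the interval, consistent with the statement that the biological half-life of SFN is only a few hours.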
In addition, there are three main reasons why SFN attracts interest: i) it naturally exists in a wide range of vegetables; ii) it is a highly effective phase 2 enzyme inducer; and iii) it is a monofunctional inducer (Prochaska and Talalay, 1988). Phase I enzymes may activate procarcinogens into ultimate carcinogens, whereas the induction of cytoprotective phase 2 enzymes is sufficient for chemoprevention. SFN can induce phase 2 enzyme activity in the human body while inhibiting the production of phase I enzymes, ultimately eliminating carcinogenic and other harmful substances through various enzymatic systems (Tortorella et al., 2015). It is worth mentioning that, among many organs, SFN induces phase 2 enzymes at the highest rate in the bladder (Veeranki et al., 2013); SFN reaches high concentrations in the bladder, and the bladder epithelium is the tissue with the most exposure to SFN and its metabolites, second only to gastric tissue (Zhang, 2004; Veeranki et al., 2013). Therefore, SFN is more effective in the bladder than in other target organs for cancer prevention, and it is of great significance to study the molecular mechanism of SFN against bladder cancer. SFN and related isothiocyanates derive from glucosinolates, whose basic structure consists of β-D-thioglucose, a sulfonated oxime and a variable side chain derived from amino acids, including methionine, tryptophan, phenylalanine, or other amino acids. The side chain can contain alkyl, aryl, or heterocyclic groups, which determine the chemical, physical, and biological properties of the resulting isothiocyanates (Beal, 2009; Klomparens and Ding, 2019). Importantly, existing research data have confirmed that structural characteristics determine the bioactivity of natural isothiocyanates, as even minor changes can have significant effects on their efficacy. Additionally, the oxidation state of sulfur can alter the activity and potency of these compounds; for example, thioether (I) and sulfone (II) derivatives have lower activity than the sulfoxide derivative (sulforaphane) (Zhang et al., 1992; Rose et al., 2000; Juge et al., 2007). Therefore, based on the structural characteristics of SFN, there is great interest in exploring its enantiomers and analogs to obtain a chemically stable and biologically more active compound. However, only a few studies have so far reported on the differences between SFN enantiomers and analogs. SFN exists as two enantiomers, natural (R)-SFN and unnatural (S)-SFN (Figure 2). Natural SFN exists as a single enantiomer with the R absolute configuration at the sulfinyl sulfur (Vergara et al., 2008). In plants, natural SFN is mainly (R)-SFN, but depending on the plant species or location (flower, leaf, or stem), the content of (S)-SFN can account for 1.5%-41.8% of the total SFN (Okada et al., 2017). Abdull Razis et al. (Abdull Razis et al., 2011a) showed that (R)-SFN significantly upregulates the levels and activities of NAD(P)H:quinone oxidoreductase-1 (NQO1) and glutathione S-transferase (GST) in the liver and lungs of rats, while the activity of (S)-SFN is lower or absent. Further studies by Abdull Razis et al. demonstrated that glucuronosyl transferase and epoxide hydrolase are two major carcinogen-metabolizing enzyme systems; (R)-SFN and (S)-SFN can both enhance the activity of epoxide hydrolase in the rat liver, but the upregulation by (R)-SFN is stronger. In addition, (R)-SFN increases the expression and activity of glucuronosyl transferase, while (S)-SFN has the opposite effect (Abdull Razis et al., 2011b).
Although an increasing number of SFN analogues have been analyzed, no analogue with biological activity exceeding that of natural SFN has been found (Elhalem et al., 2014; Kiełbasiński et al., 2014; Janczewski, 2022). The natural SFN analogues contain alkyl chains with 3-5 carbon atoms, with sulfur atoms in the II, IV, or VI oxidation states. These analogues include iberin b) and alyssin c) (Karrer et al., 1950) with a methylsulfinyl group; iberverin d) (Kjaer et al., 1955a), erucin e) (Kjaer and Gmelin, 1955) and berteroin f) (Kjaer et al., 1955b) with a methylthio group; and cheirolin g) (Schneider, 1908) and erysolin h) (Schneider and Kaufmann, 1912) with a methylsulfonyl group. The α,β-unsaturated analogue sulforaphene i) is also known (Balenović et al., 1966) (Figure 3). However, researchers have not achieved satisfactory biological activity, regardless of whether they converted the sulfoxide group of SFN into a sulfone or into a sulfide (methyl mercaptan) group, substituted the sulfoxide with a methylene or carbonyl group, or changed the number of methylene units from 4 to 3 or 5. Although these analogues altered the rigidity of the methylene bridge and the structure of SFN, in all cases the phase 2 enzyme induction activity was not improved, and in certain instances activity even decreased, demonstrating the superiority and importance of the natural SFN structure (Posner et al., 1994; Moriarty et al., 2006; Zhang and Tang, 2007). Sulforaphane and BC There is controversy surrounding epidemiological studies on the association between cruciferous vegetable intake and BC risk. According to the results of two case-control studies and one cohort study, there is a significant negative correlation between cruciferous vegetable intake and BC risk (Michaud et al., 1999; Castelao et al., 2004; Lin et al., 2009). However, some research has shown that there is no significant correlation between cruciferous vegetable intake and BC risk (Park et al., 2013; Nguyen et al., 2021; Yu et al., 2021). In a meta-analysis of prospective studies that included 1,503,016 participants, there was no significant dose-response relationship between cruciferous vegetable intake and BC (Yu et al., 2022). Additionally, in an umbrella review of 41 systematic reviews and meta-analyses covering 303 observational studies that included 13,394,772 patients, the results showed no significant correlation between cruciferous vegetable intake and the health outcome of BC (Li Y. Z. et al., 2022). However, it is worth noting that despite the controversy surrounding the association between cruciferous vegetable intake and BC, an increasing number of studies are exploring the potential anti-BC properties of SFN.
Currently, many studies have reported the study of SFN and BC cell lines and animal models.Although the genetic heterogeneity and frequency of oncogenes and tumor suppressor genes vary among different types of BC, SFN has shown clear therapeutic effects on different subtypes of BC in both in vivo and in vitro experiments (Tables 1, 2).Some of the human bladder cancer cell lines that have been commonly used in in vitro experiments are RT4, T24, RT112, 5637, UM-UC-3, TCCSUP, J82, SW780 and others, some of which are subtyped as NMIBC, while others are subtyped as MIBC (Fujiyama et al., 2001;Arantes-Rodrigues et al., 2013;Conde et al., 2015;Liu et al., 2015).In these in vitro studies, researchers have treated various BC cell lines and animal models with specific concentrations of SFN to reveal the potential of SFN against BCa in vitro.This paper summarizes the existing in vitro studies of SFN and BC (Table 1). Of course, many studies have also reported in vivo studies of SFN and BC animal models.In these in vivo studies, researchers constructed animal tumor models by subcutaneously or orthotopically implanting human bladder cancer cell lines into immunodeficient mice.Finally, researchers confirmed that SFN can effectively inhibit BC growth by treating animal models with different concentrations and methods of SFN.Similarly, this article summarizes the current in vivo research on SFN and BC (Table 2). Sulforaphane and bladder outlet obstruction: bladder protection Partial bladder outlet obstruction (pBOO) is a common chronic disease of the urinary system where blockages usually occur at the bottom and neck of the bladder, preventing normal urination.Currently, pBOO commonly occurs in benign prostatic hyperplasia (BPH), bladder neck spasm, urethral stricture, congenital urethral malformation, and bladder neck tumors, among which BPH is the most common cause of pBOO (MacDonald and McNicholas, 2003;Albisinni et al., 2016;Kai et al., 2020).BOO primarily induces progressive remodeling of bladder tissues through three consecutive stages: hypertrophy, compensation, and decompensation (Levin et al., 2004).In the short term, pBOO can temporarily improve bladder function by inducing bladder mucosal hyperplasia and detrusor smooth muscle hypertrophy.However, long-term BOO can cause loss of bladder smooth muscle, deposition of extracellular matrix, and degradation of neurons, leading to changes in the bladder's tissue structure and decreased function (Kai et al., 2020;Lee et al., 2022). Currently, some studies have confirmed that BOO may be related to the development of BC.Lin et al. 
confirmed, through a study of changes in rabbit bladder mucosal conformation before and after BOO formation using Fourier transform infrared spectroscopy with attenuated total reflection techniques, that BOO induces infrared spectral anomalies indicative of bladder mucosal carcinogenesis (Lin et al., 1994). In addition, a study that analyzed data from 66,782 patients in the National Health Insurance Research Database (NHIRD) found that adverse outcomes after treatment of BOO caused by BPH in elderly patients, whether by drugs or surgery, were significantly associated with a higher incidence of BC (Lin et al., 2019). Furthermore, in rats treated with N-butyl-N-(4-hydroxybutyl) nitrosamine (BBN), pBOO-induced bladder hypertrophy, hyperplasia, angiogenesis, and hypoxia were significantly related to an increased incidence of BC and accelerated bladder carcinogenesis (Matsumoto et al., 2009). Therefore, actively treating BOO is of great significance for the prevention of BC. Fortunately, SFN has shown definite therapeutic effects on BOO. In an in vivo study by Liu et al., SFN prolonged the voiding interval, increased bladder capacity, improved bladder compliance, and inhibited the increase of collagen fibers in BOO rats. In addition, they found that SFN could alleviate pBOO-induced bladder injury by activating the Nrf2-ARE pathway, increasing the activity of antioxidant enzymes such as superoxide dismutase (SOD), glutathione peroxidase (GSH-PX), and catalase (CAT) to reduce oxidative stress, and lowering the B cell lymphoma 2-associated X protein (Bax)/B cell lymphoma 2 (Bcl-2) ratio to inhibit apoptosis (Liu et al., 2016). In Liu et al.'s subsequent in vivo study (Liu et al., 2019), in which BOO rat models were treated with 0.5 mg/kg/day of SFN, bladder compliance in BOO rats was found to be significantly reduced; further analysis showed that SFN improved bladder compliance by upregulating matrix metalloproteinase-1 (MMP-1) and downregulating tissue inhibitor of metalloproteinases-1 (TIMP-1) expression. At the same time, they also found that SFN could counteract the BOO-induced decrease in compliance by reducing the collagen I/III ratio.
5 Effects on bladder cancer Induction of apoptosis in bladder cancer cells Apoptosis is one of the best-known active forms of programmed cell death, and it plays a key role in limiting cell population expansion, eliminating tumor cells, and maintaining tissue homeostasis. There are three main apoptotic pathways: the endogenous mitochondrial pathway, the exogenous death receptor pathway, and the endoplasmic reticulum (ER) pathway (Gupta and Gollapudi, 2007). Among the key regulators, the Bcl-2 protein family is an important controller of cell survival and apoptosis, while the caspases (cysteine-dependent proteases) are key enzymes in the initiation and execution of apoptosis (Czabotar et al., 2014; Boice and Bouchier-Hayes, 2020). Available studies have shown that SFN can induce BC apoptosis through all three of these pathways. Survivin is an important anti-apoptotic protein that is associated with poor prognosis in BC and is also a predictor of BC disease progression (Rosenblatt et al., 2008; Jeon et al., 2013). Research has confirmed that SFN can significantly increase caspase-3/7 activity and poly(ADP-ribose) polymerase (PARP) cleavage while reducing the expression of survivin protein in BC cells (Abbaoui et al., 2012; Wang and Shan, 2012). At the same time, SFN also reduces the expression of the tyrosine kinase receptors EGFR and HER2/neu in aggressive BC cells (Abbaoui et al., 2012). In addition, Nrf2, reactive oxygen species (ROS), and mitochondria play an important role in SFN-induced apoptosis. SFN, alone or in combination, can increase the activation and expression of caspase-3, caspase-8, and caspase-9 and the cleavage of PARP in BC cells (Tang and Zhang, 2005; Jo et al., 2014; Park et al., 2014; Jin et al., 2018). SFN can also induce ROS production and depolarization of the mitochondrial membrane potential, disrupting mitochondrial membrane integrity and thereby inducing apoptosis (Jo et al., 2014; Park et al., 2014; Jin et al., 2018). Notably, tumor necrosis factor-related apoptosis-inducing ligand (TRAIL) induces apoptosis in a variety of cancer cells by binding to the death receptors DR4 and DR5. The SFN/TRAIL combination upregulates DR5 expression and ROS production by inhibiting Nrf2 activation, thereby promoting apoptosis in TRAIL-resistant BC cells (Jin et al., 2018). Secondly, SFN upregulates ROS production; induces mitochondrial oxidative damage, mitochondrial membrane potential depolarization, cytochrome c release, and an imbalance between Bax and Bcl-2; downregulates members of the inhibitor of apoptosis protein (IAP) family; activates caspase-9 and caspase-3 and the cleavage of PARP; upregulates glucose-regulated protein (GRP) 78 and C/EBP-homologous protein (CHOP) expression; and promotes accumulation of p-Nrf2 in the nucleus, thereby inducing apoptosis in BC cells. The regulatory effects of SFN on these proteins may be dependent on the ER and Nrf2-ARE pathways (Jin et al., 2018).
Induction of bladder cancer cells cycle arrest The process of cell cycle progression is strictly dependent on the regulation of cyclins, cell cycle-dependent protein kinases (CDK), and CDK inhibitors (CKI), which ultimately complete DNA replication and cell proliferation.Abnormal expression of cell cycle proteins can cause abnormal cell proliferation, eventually leading to tumor development (Dang et al., 2021).It is worth noting that there are three major cell cycle checkpoints that regulate the cell cycle process, including the G1/S checkpoint (restriction point), the G2/M DNA damage checkpoint, and the spindle assembly checkpoint (Barnum and O'Connell, 2014;Satyanarayana and Kaldis, 2009;Gao et al., 2020).Therefore, targeting the cell cycle checkpoints and disrupting the cell cycle is an important direction for treating cancer.Increasing evidence suggests that SFN can significantly induce G2/M cell cycle arrest in BC cells, thereby inhibiting their proliferation (Shan et al., 2006;Abbaoui et al., 2012;Jin et al., 2018;Xie et al., 2022).Part of the mechanism is due to the upregulation of CDK1, CDK2, cyclin A, and cyclin B by SFN, which alters the CDK-cyclin axis (Xie et al., 2022).Interestingly, histone H3 phosphorylation, a mid-phase marker of mitosis, regulates transcriptional activity in G1 phase and affects chromatin condensation in G2/M phase, and phosphorylation modification of histone H3 serine 10 [H3(Ser10)] is closely related to cell cycle and gene transcriptional regulation (Nowak and Corces, 2004).Previous studies have shown that SFN can significantly increase the levels of cyclin B1, Cdk1, the cyclin B1/Cdk1 complex, and phosphorylated histone H3 (Ser 10) in BC cells, inducing mitotic arrest.However, the mechanism may be mediated by SFN-dependent activation of ROS-dependent cyclin B1/Cdk1 complexes and histone H3 (Ser 10) phosphorylation (Park et al., 2014).In addition, SFN can also block the cell cycle by regulating CKI.p27, a very important CKI, inhibits cell proliferation by binding to cyclin/CDK complexes (Razavipour et al., 2020).Interestingly, SFN can block the G0/G1 checkpoint, thereby inhibiting breast cancer cell proliferation.However, this mechanism seems to be significantly associated with upregulation of p27 (Shan et al., 2006). 
Inhibition of growth, invasion and metastasis of bladder cancer cells Tumor cell growth, migration, and invasion are dynamic and complex processes that contribute to the progression of many diseases.The ability of tumor cells to migrate and invade allows them to change locations within tissues and detach from primary tumors, resulting in disease spread.It also allows tumor cells to enter lymph and blood vessels to spread to distant organs and establish metastases (Duff and Long, 2017).Fortunately, current literature reports have shown that SFN can significantly inhibit the growth, invasion, and metastasis of BC cells through multiple pathways (Tang et al., 2006;Zhang et al., 2006;Wang and Shan, 2012;Bao et al., 2014).The PI3K/Akt/mTOR pathway is involved in many cellular processes, including motility, growth, metabolism, and angiogenesis, and its abnormal activation often promotes BC growth and drug resistance (Shaw and Cantley, 2006;Courtney et al., 2010;Xie et al., 2022).Studies have shown that low doses of SFN promote BC angiogenesis, while high doses significantly inhibit it (Bao et al., 2014).In addition, SFN can reduce the levels of AKT, mTOR, mTOR complex 1 (Raptor), and mTOR complex 2 (Rictor) in most BC cells, thereby inhibiting their growth.This seems to be partially due to the inhibition of the PI3K/Akt/mTOR pathway.Interestingly, increased levels of pAkt and pRictor have been observed in a small subset of BC cells, but their relevance remains unclear (Xie et al., 2022).Additionally, FAT atypical cadherin 1 (FAT1) is highly expressed in breast cancer tissues or cells and is associated with a poor prognosis.SFN can dosedependently reduce FAT1 expression to inhibit the vitality, invasion, and metastasis of BC cells (Wang et al., 2020).The epithelial-to-mesenchymal transition (EMT) is the biological process by which epithelial cells acquire a mesenchymal phenotype, which is mediated by EMT transcription factors (TFs), including SNAIL (Snail1/Snail and SNAIL2/Slug), TWIST (Twist1 and Twist2), and zinc-finger E-box-binding (ZEB).In addition to regulating each other, TF also leads to epithelial gene repression (such as downregulation of E-cadherin, ZO-1, and occludin) and mesenchymal gene induction (such as upregulation of N-cadherin, vimentin, and fibronectin) (Lamouille et al., 2014;Debnath et al., 2022).Notably, SFN has been shown to clearly regulate EMT processes in BC cells.Previous studies have shown that upon treatment of human BC cells with SFN, on the one hand, SFN dose-dependently downregulates cyclooxygenase-2 (COX2), MMP-2, and MMP-9; on the other hand, SFN induces E-cadherin expression through inhibition of Snail and ZEB1.Interestingly, MiR200c is a key regulator of EMT in BC cells, and SFN can also inhibit ZEB1 and induce E-cadherin expression by upregulating MiR200c.These are mechanisms by which SFN inhibits the EMT process in BC cells through the COX-2/MMPs/ ZEB1, Snail and miR-200c/ZEB1 pathways (Shan et al., 2013).Secondly, histone H1 is closely associated with BC development and progression, with common modifications such as acetylation, phosphorylation, methylation, ubiquitination, SUMOlyation, and ADP-ribosylation, where the histone acetylation state is regulated by histone acetyltransferase (HAT) and histone deacetylase (HDAC) (Portela and Esteller, 2010;Telu et al., 2013;Shen et al., 2015).Excitingly, SFN reduces histone H1 phosphorylation by inhibiting HDACs 1, 2, 4, 6 and HATs and enhancing the activity of the phosphatases PP1β and PP2AD (Abbaoui et al., 
2017). Inhibition of glucose metabolism in bladder cancer cells As we all know, tumor cell glucose metabolism is closely related to the occurrence, progression, invasion, metastasis, and treatment of diseases.Tumor cells often achieve sustained growth by reprogramming their glucose metabolism.Even under conditions of sufficient oxygen, cancer cells can still generate adenosine triphosphate (ATP) and lactate by high-speed glycolysis, a phenomenon known as the Warburg effect (WARBURG, 1956).The production of a large amount of ATP and raw materials through the Warburg effect contributes to the proliferation and progression of tumor cells.Therefore, inhibiting glucose metabolism reprogramming is of great significance in BC and even other malignant tumors.According to existing research, SFN can inhibit glucose metabolism in human liver cancer, gastric cancer, and prostate cancer cells by downregulating the expression of serine palmitoyltransferase 3 (SPTLC3), glycolysisrelated enzymes including hexokinase 2 (HK2), pyruvate kinase M2 (PKM2), lactate dehydrogenase A (LDHA), and T-box transcription factor 15 (TBX15), and upregulating the expression of kinesin family member 2 C (KIF2C).These mechanisms are partially due to activation of the insulin receptor substrate 1 (IRS-1)/Akt pathway and the TBX15/KIF2C pathway (Singh et al., 2019;Teng et al., 2019;Gu and Wu, 2023).Therefore, SFN exhibits great potential for antitumor glucose metabolism.Research has shown that although the systemic metabolic spectrum of BC is still unclear, there is a severe glucose metabolism disorder in its urine (Putluri et al., 2011), serum (Bansal et al., 2013), and cell lines (Dettmer et al., 2013).Fortunately, SFN has shown significant inhibitory effects on BC glucose metabolism.According to reports, SFN can significantly reduce a variety of glucose metabolism enzymes, including HK2, PKM2, and pyruvate dehydrogenase (PDH), to downregulate the glycolysis of BC cells and inhibit their proliferation in in vivo and in vitro experiments.However, this seems to be related to blocking the AKT1/HK2 axis (Huang et al., 2022).In addition, SFN significantly inhibits the proliferation of BC cells under hypoxic conditions compared to normoxic conditions.SFN can lower the glycolysis metabolism in the hypoxic microenvironment by downregulating hypoxia-inducible factor-1 alpha (HIF-1α) induced by hypoxia and blocking HIF-1α nuclear translocation in NMIBC cell lines, thereby inhibiting the proliferation of NMIBC cells (Xia et al., 2019) (Figure 4). 
Therapeutic effect of sulforaphane combined with drugs/carcinogens As new anticancer drugs continue to emerge, chemotherapy resistance in traditional cancer treatment regimens, as well as severe side effects under conventional treatment, have prompted many cancer patients to choose complementary or alternative medicine (CAM) treatment regimens.In fact, the combination of traditional Chinese and Western medicine, especially green chemoprevention, has become a very popular option, especially for late-stage tumors and patients with poor prognosis under conventional treatment (Irmak et al., 2019;Schuerger et al., 2019).The combination of these natural compounds and conventional chemotherapy drugs increases the toxicity of cancer cells through synergistic effects, reducing treatment doses and toxicity (Singh et al., 2016;Zhou et al., 2016).It is worth mentioning that SFN has become an important choice for BC chemoprophylaxis and new drug development.Due to its low toxicity, it has been studied jointly with many chemotherapy drugs or carcinogens and has shown significant preclinical BC prevention effects. In fact, the long-term use of the mTOR inhibitor Everolimus to inhibit tumor growth and spread has been unsuccessful due to resistance caused by genomic instability induced by long-term treatment with Everolimus, but the underlying mechanisms are currently unclear (Burrell and Swanton, 2014).It has been reported that SFN can inhibit resistance-related tumor dissemination during treatment of BC with everolimus, demonstrating great potential for treating BC patients who are resistant to mTOR inhibitors.Studies have shown that after treatment with 0.5 nM everolimus, or 2.5 μM SFN, or 0.5 nM everolimus +2.5 μM SFN for 8 weeks, everolimus enhanced the chemotactic movement of RT112 cells, while SFN or the SFN-Everolimus combination significantly inhibited the chemotactic movement of RT112 cells.The mechanism seems to be that the SFN or SFN-Everolimus combination inhibits the mTOR complex pRictor and regulates CD44 receptor variants (upregulates CD44v4 and CD44v7) and integrin α and β subtypes (upregulates α6, αV and β1, and downregulates β4) (Justin et al., 2020a;Justin et al., 2020b).In addition, the combination of the carbonic anhydrase (CA) inhibitor acetazolamide and SFN treatment can significantly inhibit the survival of BC cells.Acetazolamide combined with SFN treatment can significantly inhibit BC growth in vivo and in vitro, produce effective anti-proliferative and anti-cloning effects, induce cell apoptosis through activation of caspase-3 and PARP, and inhibit the EMT process of BC cells by downregulating the levels of CA9, E-cadherin, N-cadherin, and vimentin.However, the reduction of these components may be due to downregulation of the survival-mediated Akt pathway (Islam et al., 2016a).Furthermore, the use of SFN in combination with cisplatin, docetaxel, and chimeric antigen receptor-modified T (CAR-T) cell therapy has also shown good therapeutic effects, even helping to alleviate some toxic side effects of certain chemotherapy drugs (Kerr et al., 2018;Calcabrini et al., 2020;Shen et al., 2021). Furthermore, the main bladder carcinogen is 4-aminobiphenyl (ABP), and its induction of high levels of ABP-DNA adducts is associated with more aggressive tumor behavior, with 80% of ABP-DNA adducts being dG-C8-ABP (Block et al., 1978;Talaska et al., 1991;Bookland et al., 1992).Research has shown that DING et al. 
found that ABP induces BC cell DNA damage in a dose-dependent manner, as measured by dG-C8-ABP levels, while SFN significantly reduces dG-C8-ABP levels in both in vivo and in vitro experiments, thus inhibiting ABP-induced bladder DNA damage. This inhibitory effect appears to be related to activation of the Nrf2 signaling pathway, but the molecular mechanism by which Nrf2 inhibits ABP-induced DNA damage is not known, nor has the exact Nrf2-regulated gene that mediates the anti-ABP activity of SFN been identified (Ding et al., 2010). Secondly, N-butyl-N-(4-hydroxybutyl) nitrosamine (BBN) is the most commonly used carcinogen in bladder cancer research, and its carcinogenicity is limited to the bladder (He et al., 2012). It has been identified as an effective and specific bladder carcinogen in rat studies (Oliveira et al., 2006). Fortunately, in a C57BL/6 mouse bladder cancer model induced by BBN with or without SFN treatment for 23 weeks, SFN significantly improved the BBN-induced abnormal fecal microbiota composition, intestinal epithelial barrier disruption, and inflammatory response in BC mice. The mechanism includes normalization of the gut microbiota imbalance, increased fecal butyric acid levels, and increased expression of tight junction proteins, G protein-coupled receptor 41 (GPR41), and glucagon-like peptide 2 (GLP2), which improve intestinal mucosal damage, as well as reduced levels of cytokines (IL-6) and secretory immunoglobulin A (SIgA), which dampen inflammation and the immune response (He et al., 2018). Network pharmacological analysis To explore and validate the targets and molecular mechanisms of SFN in BC, we conducted a network pharmacology analysis of SFN and BC. First, we identified the structure of sulforaphane through PubChem (https://pubchem.ncbi.nlm.nih.gov), and used the Swiss Target Prediction database (http://www.swisstargetprediction.ch/) and the Traditional Chinese Medicine Systems Pharmacology database (https://www.tcmsp-e.com/) to screen drug targets. We then submitted the collected targets to the UniProt database (https://www.uniprot.org/), with the species limited to "Homo sapiens". We converted the protein targets to official gene names and selected gene targets with probabilities greater than 0 in the Swiss Target Prediction database, excluding duplicate genes, resulting in 102 drug targets for sulforaphane. Next, we searched the GeneCards (https://www.genecards.org/), OMIM (https://www.omim.org/), and DisGeNET (https://www.disgenet.org/) databases using "bladder cancer" as a keyword to obtain disease targets. After removing duplicate targets from the three databases, we obtained 12,423 disease target genes. Finally, we input the drug target genes and disease target genes obtained using the above methods into the online Venny 2.1 plot platform (https://www.bioinformatics.com.cn/) to obtain the target genes shared by "bladder cancer" and the 102 sulforaphane targets (Figure 5).
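The target-screening step described above is essentially a set intersection between compound targets and disease genes. The following sketch illustrates that logic in Python; the small example gene lists are hypothetical placeholders, not the actual exports from Swiss Target Prediction, TCMSP, GeneCards, OMIM, or DisGeNET.

```python
# Illustrative reconstruction of the intersection step (hypothetical inputs).
sfn_targets = {"HSP90AA1", "SRC", "EGFR", "MAPK1", "MAPK14", "NOS3", "PTPRC", "AKT1"}

# Disease genes pooled from several databases (duplicates removed by set union).
genecards = {"EGFR", "TP53", "MAPK1", "AKT1", "PTPRC"}
omim = {"TP53", "FGFR3", "EGFR"}
disgenet = {"SRC", "MAPK14", "HSP90AA1", "TP53"}
bc_genes = genecards | omim | disgenet

# Candidate targets of SFN in bladder cancer = intersection of the two sets.
candidate_targets = sorted(sfn_targets & bc_genes)
print(f"{len(candidate_targets)} shared targets:", candidate_targets)
```

In the actual analysis, the drug-target list (102 genes) and the pooled disease-gene list (12,423 genes) would be read from the database exports before taking the intersection shown in Figure 5.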
These intersecting gene targets are considered potential targets of SFN therapy in BC, and we analyzed them with a series of methods. First, we uploaded these genes to the STRING online database (https://string-db.org/), generating a protein-protein interaction network map. We set the species to "human" and used a combined score > 0.4 as the threshold for inclusion in the network. Then, we used Cytoscape 3.9.1 to exclude irrelevant gene targets and further visualize these results (Figure 6), identifying key targets of sulforaphane. Additionally, we performed Gene Ontology (GO) functional and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analyses. We input the gene data into the DAVID platform (https://david.ncifcrf.gov/tools.jsp), set the species as "Homo sapiens," and analyzed the enrichment of SFN targets in BC-related biological processes (BP), cellular components (CC), molecular functions (MF), and signaling pathways. From the results meeting the p-value < 0.05 requirement, we selected the top 10 enriched terms for BP, CC, and MF, ranked by gene number, as well as the top 20 enriched KEGG pathways, and visualized the results using a bioinformatics online platform (https://www.bioinformatics.com.cn/) (Figure 7). Finally, our results indicate that SFN has very high targeting activity toward BC. Among the targets, heat shock protein 90 alpha family class A member 1 (HSP90AA1), proto-oncogene tyrosine-protein kinase SRC (SRC), epidermal growth factor receptor (EGFR), MAPK1, MAPK14, nitric oxide synthase 3 (NOS3), and protein tyrosine phosphatase receptor type C (PTPRC) play key roles in the anti-BC activity of SFN. In addition, the cellular component terms of the GO enrichment analysis indicated that the targets of SFN interfere with the normal assembly of various cellular components, including the cytoplasm, cytosol, plasma membrane, nucleus, and mitochondria. This further indicates that SFN acts mainly within the cell. Furthermore, the molecular function and biological process terms of the GO enrichment analysis suggest that the targets of SFN are involved in the activation and binding of a range of cellular receptors and their downstream signaling cascades, as well as in the regulation of cell proliferation and apoptosis, and that they modulate the activity of a number of protein kinases and the production of ATP. In addition, the KEGG enrichment analysis showed that metabolic pathways, the PI3K/Akt signaling pathway, the calcium signaling pathway, and the cGMP-PKG signaling pathway play key roles in the anti-BC activity of SFN. These pathways play important roles in the growth, invasion, metastasis, angiogenesis, and energy metabolism of BC. Taken together, our network analysis and available laboratory data suggest that SFN can inhibit BC metastasis, invasion, and progression through multiple targets and pathways.
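The selection rule used for Figure 7 (keep terms with p < 0.05, rank by gene count, then take the top 10 per GO category and the top 20 KEGG pathways) can be expressed compactly. The snippet below is a minimal sketch of that filtering step using made-up enrichment records; it does not reproduce the actual DAVID output.

```python
# Hypothetical enrichment records: (category, term, gene_count, p_value)
records = [
    ("BP", "positive regulation of cell proliferation", 18, 1.2e-6),
    ("BP", "protein phosphorylation", 15, 3.4e-5),
    ("CC", "cytosol", 40, 2.1e-8),
    ("MF", "ATP binding", 25, 7.7e-7),
    ("KEGG", "PI3K-Akt signaling pathway", 20, 4.5e-6),
    ("KEGG", "Calcium signaling pathway", 11, 9.8e-4),
]

TOP_N = {"BP": 10, "CC": 10, "MF": 10, "KEGG": 20}

def top_terms(records, category):
    # Keep significant terms only, then rank by gene count (descending).
    hits = [r for r in records if r[0] == category and r[3] < 0.05]
    hits.sort(key=lambda r: r[2], reverse=True)
    return hits[:TOP_N[category]]

for cat in ("BP", "CC", "MF", "KEGG"):
    for _, term, genes, p in top_terms(records, cat):
        print(f"{cat:5s} {term:45s} genes={genes:3d} p={p:.1e}")
```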
Molecular mechanism and targets of sulforaphane against bladder cancer As we all know, the occurrence and development of BC involve multiple pathways and targets, and changes in the activity of these signaling pathways significantly affect many cell activities, from growth and proliferation to apoptosis, invasion, and metastasis.Previous studies have shown that the occurrence of BC is not only the result of the involvement of multiple oncogenes (ras, p53, RB1, FGFR3, EGFR), but is also related to the abnormal activation of multiple signaling pathways (PI3K/Akt, Wnt/β-catenin, JAK/STAT, Notch, NF-κB, MAPK, Hedgehog) (Zhang et al., 2015;Chestnut et al., 2021).Through network pharmacology analysis, GO enrichment analysis, KEGG enrichment analysis, and a review of a large number of previous studies, we found that SFN has extremely high targeting activity against these targets and pathways.It is worth mentioning that because SFN can inhibit the progression of BC cells through multiple targets and pathways, both domestic and foreign scholars are currently conducting extensive research on its molecular mechanism. It is worth mentioning that SFN can significantly upregulate the expression of Nrf2-dependent enzymes (glutathione transferase and thioredoxin reductase) and downregulate the expression of COX-2, but p38 MAPK inhibitors can reverse this effect.The mechanism seems to be that SFN activates p38 MAPK, leading to the activation of Nrf2 mediated by p38 MAPK (Shan et al., 2010). As we all know, the nuclear factor kappa-B (NF-κB) family is a key regulatory factor for cell survival, and its NF-κB signaling participates in many biological processes, including immune and inflammatory responses, proliferation, apoptosis, and EMT (Liu et al., 2017;Zhang et al., 2017).However, the key step in activating typical NF-κB is the phosphorylation-dependent activation of the IκB kinases (IKKs) complex (Hayden and Ghosh, 2008;Israël, 2010).It is worth noting that some evidence suggests that the abnormal upregulation of NF-κB transcription factors is closely related to poor prognosis in BC patients (Doyle and O'Neill, 2006;Perkins and Gilmore, 2006;Chiang et al., 2019;Walter et al., 2020).In fact, more and more evidence suggests that SFN can inhibit the abnormal activation of NF-κB in various ways.SFN can not only inhibit IκB degradation and phosphorylation (Cancer Genome Atlas Research Network, 2014;Song et al., 2022;Yang et al., 2022), but also inhibit the NF-κB signaling pathway by activating NF-κB nuclear translocation (Jeong et al., 2010;Subedi et al., 2019).Furthermore, a large amount of evidence indicates that COX-2 is overexpressed in human BC and BC animal models, and it is closely related to the progression, prognosis, and recurrence of BC (Shirahama et al., 2001;Klein et al., 2005), and NF-κB is closely related to the expression of COX-2 mRNA (Garg and Aggarwal, 2002).It is worth mentioning that SFN can time and dose-dependently activate NF-κB nuclear translocation in T24 BC cells and inhibit NF-κB DNA binding to the COX-2 promoter, thereby inhibiting COX-2 mRNA and protein levels (Shan et al., 2009) (Figure 8). 
Effect of sulforaphane on bladder cancer stem cells Cancer stem cells (CSCs) are tumor-initiating clonogenic cells capable of maintaining cellular heterogeneity, self-renewal, and differentiation, persisting in the tumor microenvironment long-term, and playing a role in tumor growth, metastasis, chemoresistance, and recurrence (Zhang et al., 2012; Bellmunt, 2018). Bladder CSCs (BCSCs) represent a distinct type of CSC originating from bladder epithelial stem cells and non-stem cells, with autophagy, sphere-forming, and multilineage differentiation capabilities; they were first identified in urothelial carcinoma of the bladder in 2009 (Ohishi et al., 2015; Yang et al., 2017). The stemness of BCSCs is regulated by various targets and pathways, making them an ideal target for intervention therapy in BC. Notably, BCSCs exhibit high heterogeneity, with the stem cell populations and characteristics varying significantly among different subtypes of BC. CD44, CK5, P-cadherin, and CK14 are prominently expressed in CSCs derived from NMIBC, while additional stem cell markers, such as ALDH, Nestin, CD133, CD90, NANOG, OCT4, and SOX, are expressed in CSCs derived from MIBC and are partially associated with invasiveness, chemoresistance, and self-renewal in MIBC (Ooki et al., 2018; Hayashi et al., 2020). Therefore, targeting BCSCs is a promising strategy for treating BC and controlling recurrence and metastasis. Excitingly, SFN has demonstrated significant value against BCSCs. Previous studies have shown that ΔNp63α and TAp63α, two major subtypes of the p63 family of proteins, play essential roles in stemness maintenance and cell proliferation (Yang et al., 1998; Melino et al., 2015). Importantly, SFN has been found to dose-dependently reduce the expression levels of ΔNp63α, TAp63α, and stem cell markers such as NANOG, OCT4, and SOX2, as well as diminish the sphere-forming ability of CSCs, effects that can be reversed by overexpression of ΔNp63α and TAp63α (Chen et al., 2020; Chen et al., 2022a). This mechanism appears to be partially mediated by SFN inhibiting the ΔNp63α/NANOG/OCT4/SOX2 axis (Chen et al., 2022a). Additionally, ZO-1 is an epithelial-mesenchymal marker that is associated with EMT and CSC characteristics in BC (Islam et al., 2016b). SFN has been shown to promote the expression of ZO-1, reducing the expression of stem cell markers (CD133, CD44, NANOG, and OCT4) and the nuclear expression of β-catenin in CSCs; the mechanism involves SFN modulating the ZO-1/β-catenin axis to suppress CSC stemness (Chen et al., 2022b). Furthermore, the Sonic Hedgehog (SHH) signaling pathway also plays an important role in the characteristics of BCSCs, regulating self-renewal, proliferation, and invasiveness (Islam et al., 2016b). Luckily, SFN significantly lowers the expression of key components of the SHH pathway (Shh, Smo, and Gli1) and inhibits tumor sphere formation, thereby suppressing the stemness of cancer cells (Ge et al., 2019; Wang et al., 2021). These findings suggest that SFN can inhibit BCSC self-renewal by targeting the SHH signaling pathway. Moreover, previous research has shown that the combination of acetazolamide (AZ) and SFN significantly inhibits BC growth, progression, and EMT processes (Islam et al., 2016a). Recent studies have revealed that the combination of AZ and SFN can also significantly reduce the sphere-forming ability and the expression of CSC markers such as Oct-4, Sox-2, and Nanog, thus suppressing cancer cell stemness (Bayat Mokhtari et al., 2019). Additionally, miR-124, one of the most
extensively studied microRNAs, has been demonstrated to inhibit BC cell proliferation by targeting CDK4 (Cao et al., 2019). Interestingly, SFN suppresses the expression of stem cell markers (CD44 and EpCAM), IL-6R, STAT3, and p-STAT3 in a dose-dependent manner on the one hand, and on the other hand significantly increases the expression of miR-124 in a dose-dependent manner; in further experiments, knockdown of miR-124 eliminated the effect of SFN on CSC-like characteristics (Wang et al., 2016). These results indicate that SFN can target BCSCs through the miR-124/IL-6R/STAT3 axis. Nevertheless, the clinical application of SFN still faces obstacles. SFN is unstable under heat and alkaline conditions, with a decrease in quantity of 20% after cooking, 36% after frying, and 88% after boiling (Beevi et al., 2010; Baenas et al., 2019). Therefore, SFN is difficult to handle during pharmaceutical processing, which greatly reduces its effectiveness after oral administration. Moreover, the effective dose and lethal dose range of SFN have not been determined. In animal models, the dose range of sulforaphane for reducing tumors is 5-100 mg/kg (Singh et al., 2019; Kaiser et al., 2021). For a person weighing 60 kg, this is equivalent to 300-6,000 mg, which clearly exceeds a feasible human dose. In addition, in some clinical trials, the test dose of SFN cannot be accurately converted to the amount of vegetables consumed. Research has shown that the average concentration of SFN in raw broccoli is 0.38 μmol/g (Yagishita et al., 2019). However, most of the SFN doses used in clinical trials range from 25 to 800 μmol, equivalent to about 65-2,105 g of raw broccoli, which is difficult to consume in practice. In fact, high concentrations of SFN have been shown to exhibit significant toxicity toward normal cells: in vitro studies have shown clear adverse effects with SFN at concentrations of 10-30 μM, including induction of DNA, RNA, and mitochondrial damage (Zhang et al., 2005; Sestili et al., 2010; Fahey et al., 2015). Secondly, how to improve the bioavailability of SFN is also a key issue. It has been reported that the ability of individuals to use gut myrosinase to convert glucoraphanin into SFN varies widely (Fimognari et al., 2012; Sivapalan et al., 2018). However, even when the same SFN and myrosinase are given to subjects at the same time, there is still variability in the conversion and bioavailability of SFN among individuals (Shapiro et al., 2001). In addition, there is also concern about mitigating the toxicity of SFN. SFN has low toxicity, and most in vitro and in vivo experiments have been conducted at concentrations ranging from 0 to 40 μM without significant observed toxicity. In human trials, SFN has been relatively safe at low doses with no adverse reactions, and minimal harm has been observed at high doses (Shapiro et al., 2006). Furthermore, although the US Food and Drug Administration has restricted some clinical trials to SFN doses of 200 μmol, the adverse reactions were still negligible, with only one case of grade 2 constipation reported, and the study suggested that higher doses of SFN may have greater benefits, although further testing is needed (Alumkal et al., 2015). Therefore, the dose-related advantage of SFN in terms of limited adverse reactions is evident: it can exert positive anti-BC effects with relatively low toxic side effects across the doses tested. Based on this, further testing with higher doses is still needed in the future. It is worth mentioning that most of the current research on the impact of SFN on BC is still based on in vitro experiments and animal models. Future work needs to validate in vitro
findings, optimize SFN drug dosage through animal model studies, and conduct more clinical trials in BC patients to improve the bioavailability of SFN. Excitingly, it has been reported that daily oral administration of 200 μmol SFN in melanoma patients can achieve plasma levels of 655 ng/mL with good tolerance (Tahata et al., 2018). Drinking 300 mL of broccoli soup per week can lower gene expression in the prostate, and a negative correlation between cruciferous vegetable intake and prostate cancer progression has been observed (Traka et al., 2019). In addition, pancreatic ductal adenocarcinoma patients under palliative chemotherapy showed improved outcomes after taking 15 capsules per day (90 mg/508 μmol SFN) for 1 year. However, taking 15 capsules per day can be difficult for some patients, and broccoli sprouts can sometimes exacerbate digestive problems such as constipation, nausea, and vomiting (Lozanovski et al., 2020). This also reminds us to develop better-tolerated and more efficient new SFN formulations. Conclusion and prospects As we all know, the occurrence and development of BC involve abnormal regulation of multiple pathways and targets. Therefore, drugs acting on multiple pathways and targets can play a significant role in the treatment of BC, and SFN is expected to become an ideal drug for this purpose. SFN is currently the most effective anti-cancer active substance found in vegetables and has been widely recognized in recent years. As a natural product, it is cheaper, safer, and easier to obtain than other anti-cancer drugs. In addition, SFN has been shown to have low toxicity, is not easily oxidized, and is well tolerated upon administration, making it an effective natural dietary supplement in many clinical trials. Secondly, our review shows that SFN can inhibit the progression of BC cells through various pathways, including inducing cell apoptosis and cell cycle arrest; inhibiting cell growth, migration, and invasion; regulating cell glucose metabolism; inhibiting tumor angiogenesis; and acting as a synergistic agent for chemotherapy drugs. In addition, it can play an anti-BC and anti-BCSC role by regulating multiple signaling pathways, including PI3K/Akt, NF-κB, MAPK, Nrf2, ZO-1/β-Catenin, and miR-124/IL-6R/STAT3. However, due to the low bioavailability of SFN and its unstable biochemical properties, its clinical application encounters many obstacles. Fortunately, more and more researchers are working to improve its bioavailability and absorption by cancer cells; for example, new SFN formulations such as microencapsulation, microspheres, micelles, and nanoparticles are being developed. It is worth mentioning that current research on the effect of SFN on BC is mostly based on in vitro cell experiments and in vivo animal experiments. We still need to carefully design more pharmacokinetic studies and clinical trials to clarify the toxicity, effective dose, and lethal dose range of SFN, which will provide a foundation for the development of new drugs. In summary, the available research indicates that SFN is expected to be used as a new or adjunct drug for the treatment of BC. FIGURE 4 Sulforaphane inhibits the molecular processes of bladder cancer (effects induced by sulforaphane are noted by →, while inhibition is represented by the ⊣ symbol). The occurrence of bladder cancer is closely related to (A) apoptosis, (B) cell cycle, (C) glucose metabolism and (D) progression. FIGURE 5 Venny diagram of sulforaphane and bladder cancer.
FIGURE 6 Protein network of sulforaphane and bladder cancer. (A) Protein interaction network, (B) Protein network analysis. FIGURE 7 Enrichment analysis of sulforaphane and bladder cancer. (A) GO enrichment analysis, (B) KEGG enrichment analysis. FIGURE 8 Effect of sulforaphane on MAPK and NF-κB (effects induced by sulforaphane are noted by →, while inhibition is represented by the ⊣ symbol). The MAPK signaling pathway is correlated with NF-κB. Sulforaphane can block the occurrence, development, invasion and migration of bladder cancer cells by inhibiting common upstream and downstream molecules, multiple receptors and ligands. TABLE 1 Sulforaphane and bladder cancer in vitro. TABLE 2 In vivo study of sulforaphane and bladder cancer.
Strategic Adaptation to Task Characteristics, Incentives, and Individual Differences in Dual-Tasking We investigate how good people are at multitasking by comparing behavior to a prediction of the optimal strategy for dividing attention between two concurrent tasks. In our experiment, 24 participants had to interleave entering digits on a keyboard with controlling a randomly moving cursor with a joystick. The difficulty of the tracking task was systematically varied as a within-subjects factor. Participants were also exposed to different explicit reward functions that varied the relative importance of the tracking task relative to the typing task (between-subjects). Results demonstrate that these changes in task characteristics and monetary incentives, together with individual differences in typing ability, influenced how participants choose to interleave tasks. This change in strategy then affected their performance on each task. A computational cognitive model was used to predict performance for a wide set of alternative strategies for how participants might have possibly interleaved tasks. This allowed for predictions of optimal performance to be derived, given the constraints placed on performance by the task and cognition. A comparison of human behavior with the predicted optimal strategy shows that participants behaved near optimally. Our findings have implications for the design and evaluation of technology for multitasking situations, as consideration should be given to the characteristics of the task, but also to how different users might use technology depending on their individual characteristics and their priorities. Introduction People choose to multitask in many daily settings, as illustrated in a recent special issue on multitasking [1]. For example, office workers frequently self-interrupt themselves throughout a typical day [2,3], switching activities every two to three minutes [4]. This desire to switch between activities remains even when performing activities that really should demand our complete and undivided attention. A topical example of this is driver distraction and the numerous reports of drivers using their phones to write and receive messages while driving (e.g., [5][6][7]). A core question for multitasking research has been to consider whether people are good at multitasking (e.g., [8][9][10][11]). If people are not good at multitasking then maybe this behavior should be discouraged. At one level the answer to this question seems clear cut as there is an abundance of research demonstrating dual-task interference effects: performance on a task is usually worse when it is performed at the same time as another task compared to when that task is performed alone [12]. Such dual-task interference effects often stem from the limits on our basic cognitive and perceptual abilities: we often cannot actively engage in two tasks at the same time, but instead must interleave our efforts between tasks (e.g., [2,[13][14][15][16][17][18][19]) For example, a driver who is writing a text message on a phone must take his or her eyes off the road to perform the text-typing task. However, this gives the driver a strategic choice. Should the driver write the entire text message at once and so take his or her eyes off the road for a long period of time? This might seem like a reckless decision. Alternatively, a few characters might be typed and attention returned to driving before a few more characters are typed. 
The choice of interleaving strategy has implications for how well each task is performed, giving a dual-task tradeoff (e.g., [14,[20][21][22][23][24][25]). The person must decide which task is more important and so prioritize performance of one task over the other. The focus of this paper is on understanding how people make dual-task interleaving tradeoffs. In doing so we seek to understand how good people are at multitasking. To address this question we report the results of an experiment in which participants had to perform two separate tasks at the same time but could only work on one task at a time. Participants therefore had to decide when to switch between tasks. Results show how this decision is systematically influenced by three factors: task characteristics, incentives, and individual differences in skill. Before describing the details of this study, and a computational model that was developed to understand the results, we first review work of each of the primary factors of interest. Task Characteristics Previous work has extensively investigated how task difficulty affects multitasking performance (e.g., [21,[26][27][28]). A theoretical interest has been in understanding the general characteristics that makes tasks hard to perform in multitasking settings. Two characteristics have been identified. First, task characteristics place limitations on performance, as the task in part dictates how fast a participant can complete its components (referred to as data-limitations in [27]). For example, a text message will be faster written on a phone that auto-completes words compared to a phone that does not auto-complete words, as in the first case less time is spent on typing each individual word. How difficult it is to combine tasks in a multitask setting also depends on the amount of overlap between the cognitive resources that are needed for the tasks [29][30][31]; the larger the overlap between resources that are needed for both tasks (e.g., vision, memory), the more difficult it is to perform tasks concurrently. In our previous work investigating multitasking behavior we have used a tracking and typing task [18,32], which is inspired by the dialing-while-driving scenario described in the introduction. In our set-up, participants interleave between a typing task and a tracking task (described in more detail later) in a discretionary way (cf. e.g., [13,16,18,32]). That is, participants can only see and work on one task at a time and need to decide when to switch between tasks. The benefit of this discretionary set-up is that it gives a quick and easy way to directly infer the participant's task interleaving strategy. However, a disadvantage of our discretionary set-up is that explicit switching between windows is relatively costly, requiring the participant to press a button on a joystick. There has been extensive discussion within the literature on how such 'information access-costs' can influence the emergence of interactive behavior (e.g., [33][34][35][36][37][38][39]). Eye-tracking has been successfully used in some previous studies to infer dual-task interleaving strategies, for instance see work by Hornof and colleagues [40,41]. In our analysis of optimality of the chosen strategy, we craft a model which also incurs these switch-costs and which is used to investigate the performance of various discrete interleaving strategies. 
This includes extreme strategies, ranging from a no-interleaving strategy, which does the typing task without checking on the tracking task even once, to a maximum interleaving strategy in which checks are made on the tracking task after entering each and every digit in the typing task. As such, our investigation covers the full range of possible task interleaving strategies and it is expected that performance will fall within these performance 'brackets' (cf. [42,43]). We will now describe the two tasks, tracking and typing, in more detail. Variations in the characteristics of each task can influence how people choose to interleave attention when multitasking. Tracking tasks have been used in various multitasking studies (e.g., [31,[40][41][42][44][45][46][47]), and the difficulty of this task can be easily manipulated. In our tracking task, a moving cursor (10x10 pixels) needs to be kept inside a circular target area. We can manipulate two factors to control the difficulty of the tracking task: the radius of the target area and the function that controls the movement of the cursor. The target area has a radius of either 80 pixels (small radius) or 120 pixels (large radius). Keeping the cursor inside a small radius requires more frequent attention to the task than keeping the cursor inside the large radius. This is comparable with how it might be easier to keep a car inside a wide lane compared to a narrow lane. We also manipulate the speed with which the cursor moves around at times when it is not actively controlled by the participant. The position then updates following a random walk with mean of 0 pixels and a standard deviation of 3 (low noise) or 5 pixels per update (high noise). When the cursor movement has a higher standard deviation, it becomes less predictable and needs more attention. This is comparable with how driving a car at a high speed on a busy multilane highway is far more demanding and will require higher levels of vigilance than driving at a slow speed along a quiet back road. Digit typing tasks have also been used in multitasking research concerned with driver distraction [14,24,25,48]. Previous research has shown that the way in which digits are typed is influenced by how they are represented. If numbers are displayed or memorized in a chunked manner, people tend to interleave at the boundaries between chunks [14,24,25,48]. In our study we control for these patterns by presenting the to-be-typed digits visually, thereby not relying solely on memory of structured representations. In addition, motor actions can also provide cues for the interleaving of digits. Specifically, if a number contains both sequences of repeating digits and sequences of different digits that require a finger movement, then there is a benefit to interleave at points where the finger had to be moved [24]. In the current study we control for this by using a limited set of three digits (1, 2, and 3) and by encouraging participants to use dedicated fingers for typing each digit. We randomized the frequency and positioning of each digit, with the added constraints that each digit occurred at least six times and that each digit did not occur more than three times in sequence. How many digits are dialed in a sequence is also influenced by the priorities that people set [14,24,25]. In our study we manipulate priorities through the use of monetary incentives, which are discussed next. 
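Before turning to those incentives, a brief illustration may help convey why the noise and radius manipulations change how often the tracking task needs attention. The following minimal Python sketch simulates the unattended cursor drift described above (a two-dimensional random walk with a standard deviation of 3 or 5 pixels per 25 ms update, starting from the target center) and estimates how long the cursor typically stays inside a target with a radius of 80 or 120 pixels. The code and names are our own illustration of the stated parameters, not the experiment software.

```python
import random
import statistics

UPDATE_MS = 25  # the cursor position updates roughly every 25 ms

def time_until_exit(radius_px, noise_sd, max_updates=4000):
    """Simulate unattended drift from the target center and return the
    time (in seconds) until the cursor first leaves the target area."""
    x = y = 0.0
    for step in range(1, max_updates + 1):
        x += random.gauss(0, noise_sd)
        y += random.gauss(0, noise_sd)
        if (x ** 2 + y ** 2) ** 0.5 > radius_px:
            return step * UPDATE_MS / 1000.0
    return max_updates * UPDATE_MS / 1000.0

def mean_exit_time(radius_px, noise_sd, runs=500):
    return statistics.mean(time_until_exit(radius_px, noise_sd) for _ in range(runs))

if __name__ == "__main__":
    for radius in (80, 120):      # small vs. large target radius
        for noise in (3, 5):      # low vs. high cursor noise
            t = mean_exit_time(radius, noise)
            print(f"radius={radius}px, noise SD={noise}px: ~{t:.1f}s until exit")
```

Under these assumptions, the high-noise, small-radius combination leaves the cursor inside the target for only a few seconds of unattended drift, whereas the low-noise, large-radius combination tolerates much longer glances away, which is exactly the pressure on checking frequency described above.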
The payoff function translates performance on each of the individual tasks into a single monetary value and combines these values into a single total score that is reported to the participant. The participant can then use this information to try and maximize their score. Payoff functions have been used frequently in empirical studies, particularly as a way to motivate participants [49][50][51]. More recently, payoff functions have been advocated as being useful in combination with computational cognitive models [18,32,[52][53][54][55][56]. In a multitasking setting, the use of a payoff function has three advantages. First, it provides the experimenter and the participant with an objective criterion to assess optimal performance: optimal performance is that which leads to the highest payoff score. Second, performance on individual tasks might be expressed in different units (e.g., speed, accuracy) and it might not be trivial to assess how a 'loss' on one task should be traded-off against a 'gain' on another task (but see [14,24] for one way of doing this). A pay-off function avoids this problem, by explicitly specifying how performance translates into a single unit across tasks. Third, people are known to have difficulties with maintaining internal scales of variance [57], for example to assess the exact brightness of a light. Such internal scales are not required when scores are made explicit by a payoff function. Instead, the objective monetary score can be used to assess whether performance on a current trial was better or worse compared to performance in other trials. We use payoff functions to investigate how flexible performance is. We manipulate the payoff functions between participants, such that more or less value is placed on each of the two tasks, and investigate how well participants adjust their behavior to the payoff functions. This can be seen as a systematic way of changing participants' priorities [18]. If participants are solely driven by the task characteristics and not by incentives, then such changes should have little effect on performance. That is, participants might then be expected to use "default" ways of interleaving tasks [31]. However, we suspect that people are sensitive to incentives. In preceding work, participants experienced one payoff function and we analyzed how well they performed against that payoff function [18]. However, we have not investigated how well participants perform in cases where the payoff function changes. Here, we empirically test whether participants use different strategies, and have different performance, when different payoff functions are being used. We then compare human performance to predictions by a computational cognitive model to see whether participants achieved the best performance that was possible given their characteristics and the payoff function that they experienced. Individual Differences We also investigate how performance is affected by individual differences in task skill. There is a growing appreciation in the multitasking literature that task skill can influence multitasking performance (e.g., [10,11,41,52]). One idea is that the better an individual is at performing a task in isolation, the more able they are to perform that task in a dual-task setting (e.g., see Chapter 6 in [58]). Applying this to our tracking-while-typing task, we might expect individual differences in how quickly and accurately a person can type digits. 
As will be demonstrated in the empirical section, this individual difference in typing skill was found to influence choice of interleaving strategy and dual-task performance. We refer to typing as a "skill" in that typing ability is developed through years of practice (cf. e.g., [59,60]). We did not expect that there would be acquisition or strong improvement of this skill during our experiment. The performance on individual tasks can influence performance in dual-task settings if there is time pressure. For example, in our experiment the cursor cannot be controlled while participants are working on the typing task. During this time the cursor will drift, and participants eventually need to check again whether they need to correct the trajectory of the cursor. Given the same time window, very skilled typists might be able to type more digits per visit to the typing task compared to less skilled typists. This is indeed what we find in our empirical results. The faster typists type more digits per visit to the typing window, however, the average time that is spent in the typing window is not affected by typing skill (see results). Such subtle difference in skill can then also further affect performance. For example, in our task participants need to type a finite string of digits, and faster typists might be faster at completing this task than slower typists-thereby achieving a better score. Other individual differences might also have occurred during our experiment. In particular, there might have been differences in how well participants could control the joystick. However, the experiment did not contain an independent session that could be used to assess joystick control. Although participants practiced with the control of the joystick during the single-task tracking trials, these sessions were not systematic enough to assess participants' general tracking ability. We therefore did not include tracking as a skill factor in the statistical analyses and the model. Overview In the remainder of this paper we first describe a dual-task experiment and demonstrate how participants' performance of these tasks is influenced by task characteristics, incentives, and individual differences in skill. We then describe a computational cognitive model that is used to make performance predictions for the range of plausible dual-task interleaving strategies. The model is calibrated to the constraints of the task, the incentives (payoff function), and the observed (single-task) typing speed of individual participants. The model is used to identify the optimal task interleaving strategy (i.e., the strategy that maximizes reward through the payoff function in each condition for each participant). By taking this approach we were able to directly compare how participants performed in the experiment against the prediction of their optimal dual-task performance. This allows us to ask, in a very precise way, whether people are good multitaskers or not. Typing-while-tracking experiment Building on previous dual-task experiments [18,32], participants were required to divide their efforts between two concurrent tasks. One task was to type a string of twenty digits using a keyboard. In the other task, a randomly moving cursor needs to be kept inside a circular target area using a joystick. Both tasks were presented on the same display, but participants could only see and control one task at a time and so needed to decide when to switch their attention between these tasks. 
For each task it is possible to define a performance metric (i.e., speed at which the typing task is completed and how well the cursor is kept inside the target area). It is then possible to combine these separate task performance metrics into a single payoff function, thereby allowing the relative value of each task to be varied between different experimental conditions. Specifically, in one between-subjects condition ('speed'), the payoff function puts relatively more value on fast completion of the typing task. Whereas, in another condition ('accuracy'), the payoff function puts relatively more value on keeping a randomly moving cursor inside a target area. As we shall describe in detail below, these payoffs were used as a monetary incentive scheme to reward participants in the experiment. Method Participants. Twenty-four students (nine female) from the UCL psychology participant pool took part for monetary compensation. All participants were 18 years of age or older (M = 24.3, SD = 6.6, Max = 46 years). Payment was based on how well each task was performed (details in the Design section). Total payment ranged from £5.00 to £13.03 (M = £8.72). The UCLIC Ethics Committee (the University College London Interaction Centre's ethical committee) approved the study (approval number Staff/0910/005) and written consent was obtained from each participant. Materials. The dual-task setup was identical to that in [18] but differed in the payoff functions used. The experiment required participants to perform a continuous tracking task and a discrete typing task, presented on a 19-inch monitor with a resolution of 1280 x 1024 pixels (see Fig 1). The typing task was presented on the left side and the tracking task on the right. Each task was presented within a 450 x 450 pixels area, with a vertical separation of 127 pixels between the tasks. The tracking task required participants to keep a moving square cursor (10 x 10 pixels) inside a target circle (see Fig 1). The target had a radius of either 80 (small radius) or 120 pixels (large radius). The movement of the cursor was defined by a random walk function. The rate of drift was varied between different experimental conditions. In a low noise condition, the random walk had a mean of zero and standard deviation of 3 pixels per update, while in a high noise condition the random walk had a mean of zero and standard deviation of 5 pixels per update. Updates occurred approximately once every 25 milliseconds. Participants used a Logitech Extreme 3D Pro joystick with their right hand to control the position of the cursor. The drift function of the cursor was suspended whenever the joystick angle was greater than +/-0.08 (the maximum angle was +/-1). The speed at which the cursor could be moved was scaled by the angle, with a maximum of 5 pixels per 25 milliseconds. The typing task required participants to enter a string of twenty digits (chosen from digits 1 to 3) using a numeric keypad with their left-hand. Digits were presented in a randomized order with the constraint that no single digit was presented more than three times in a row in the sequence. A digit was removed from the string when it was entered correctly and all digits moved one position up. In this way, the left-most digit was always the next one to be entered. When an incorrect digit was typed, the string would not progress. The study used a forced interleaving paradigm, in which only one of the two tasks was visible and could be worked on at any moment in time. 
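As a brief aside on the typing stimulus just described, the exact randomization procedure is not specified beyond its constraints (twenty digits drawn from 1 to 3, each digit occurring at least six times, and no digit appearing more than three times in a row), so the following small Python sketch shows one plausible way such strings could be generated.

```python
import random

DIGITS = "123"
STRING_LENGTH = 20

def make_digit_string():
    """Sample a 20-digit string from {1, 2, 3} in which every digit occurs
    at least six times and no digit appears more than three times in a row."""
    while True:
        # Start from six copies of each digit plus two extra random digits,
        # then shuffle and reject strings containing a run of four.
        pool = list(DIGITS * 6) + random.choices(DIGITS, k=STRING_LENGTH - 18)
        random.shuffle(pool)
        candidate = "".join(pool)
        if not any(d * 4 in candidate for d in DIGITS):
            return candidate

if __name__ == "__main__":
    print(make_digit_string())  # e.g. '21312321133122312312'
```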
By default the typing task was visible and the tracking task was covered by a gray square. Holding down the trigger of the joystick made the tracking task visible and covered the typing task. Releasing the trigger covered the tracking task and made the typing task visible once more. Participants could only control the task that was visible (e.g., the cursor would randomly drift and its position could not be corrected when it was not visible).

Design

The experiment followed a 2 x 2 x 2 mixed factorial design. Within subjects, two factors of task characteristics were varied: noise (high or low) and radius size (small or large). Between subjects, the payoff function was manipulated with two levels. Each payoff function adhered to the same basic structure (see below), but had different parameters so as to place different value on the typing and tracking task (see Table 1 for values and Fig 2 for an illustration). In both payoff conditions, both the speed of completing the typing task and the accuracy of performing the tracking task influenced the payoff score. However, between groups the relative weight of these two components differed. For ease of reference we therefore refer to the two groups as "speed" and "accuracy". In the 'speed' payoff condition, the parameters placed more weight on fast completion of the typing task, whereas in the 'accuracy' payoff condition more weight was placed on keeping the cursor inside the target area. Participants were randomly assigned to one or the other payoff condition in the experiment. The payoff function had three components (Eq 1): the gain earned on the typing task, minus a penalty for typing errors, minus a penalty for time that the cursor spent outside the target area. Participants could gain points on the typing task, where faster trial times led exponentially to higher scores (Eq 2). That is, gain had an exponential relationship with the total time that was needed to complete the typing task (variable "TotalTrialTimeInSeconds"). Longer trial times led to lower gain scores. To offset the impact, this score was multiplied with a parameter that could reduce the severity of longer trial times ("severityOfTrialTime"), and the gain value was given a start value ("startValue_gain"). Having a higher start value and a smaller value for the severity of trial time led to higher gain scores. Table 1 provides the parameter values that were used in the two payoff conditions. The top panel of Fig 2 illustrates how the "gain" component of the score changed as a function of the total trial time. It can be seen that in the "speed" condition the decline in gain as a function of trial time is steeper. A digit penalty of -£0.01 was applied for every digit that was typed incorrectly. An exponential tracking penalty was applied when the cursor moved outside of the target area (Eq 3). The tracking penalty had an exponential relationship with the total time that the cursor spent outside of the target area (variable "SecOutside"). Longer times outside of the target area led to stronger penalties. Again, this function was offset by a start value ("startValue_tracking") and multiplied with a parameter to reduce the impact of time outside ("severityOfBeingOutside"). To prevent participants from losing all their money on a given trial, the payoff function had a minimum score of -£0.20. Table 1 provides the parameter values that were used in the two payoff conditions. Fig 2 illustrates how the tracking penalty accumulated as a function of the time that the cursor was outside of the target area. A schematic sketch of this payoff structure is given below.
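Because the bodies of Eqs 1-3 are described only verbally above, the following Python sketch shows one way the payoff could be written down. The exponential forms and the parameter names follow the verbal description, but the exact functional forms and the example parameter values (the real values are in Table 1) are assumptions made for illustration, not the published equations.

```python
import math

def typing_gain(trial_time_s, start_value_gain, severity_of_trial_time):
    # Assumed form of Eq 2: gain decays exponentially with total trial time;
    # a smaller severity parameter softens the penalty for slow trials.
    return start_value_gain * math.exp(-severity_of_trial_time * trial_time_s)

def tracking_penalty(seconds_outside, start_value_tracking, severity_of_being_outside):
    # Assumed form of Eq 3: the penalty grows exponentially with the total
    # time spent outside the target area, and is zero when that time is zero.
    return start_value_tracking * (math.exp(severity_of_being_outside * seconds_outside) - 1)

def trial_payoff(trial_time_s, incorrect_digits, seconds_outside, params):
    # Assumed form of Eq 1: payoff = gain - digit penalty - tracking penalty,
    # floored at -0.20 GBP so one bad trial cannot wipe out earnings.
    gain = typing_gain(trial_time_s, params["start_value_gain"], params["severity_of_trial_time"])
    digit_pen = 0.01 * incorrect_digits
    track_pen = tracking_penalty(seconds_outside, params["start_value_tracking"],
                                 params["severity_of_being_outside"])
    return max(gain - digit_pen - track_pen, -0.20)

# Hypothetical parameter sets: the 'speed' set punishes slow typing more,
# the 'accuracy' set punishes time outside the target more (cf. Table 1).
SPEED = {"start_value_gain": 0.12, "severity_of_trial_time": 0.08,
         "start_value_tracking": 0.01, "severity_of_being_outside": 0.15}
ACCURACY = {"start_value_gain": 0.08, "severity_of_trial_time": 0.04,
            "start_value_tracking": 0.01, "severity_of_being_outside": 0.30}

if __name__ == "__main__":
    print(trial_payoff(trial_time_s=25, incorrect_digits=1, seconds_outside=3, params=SPEED))
    print(trial_payoff(trial_time_s=25, incorrect_digits=1, seconds_outside=3, params=ACCURACY))
```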
In the "accuracy" payoff condition, the penalty increases more rapidly compared to the "speed" payoff condition. Procedure Participants were informed that they would be required to perform a series of dual-task trials and that they would be paid based on their performance. A participant's payment was based on the cumulative payoff over the course of the study, in addition to a base payment of £5 (participants in 'speed' payoff condition) or £3 (participants in 'accuracy' payoff condition). Different base payments were chosen, as the average gain per trial differed between conditions. By choosing a different base-rate, each participant had a guaranteed minimum payment of £5 (the institute's default payment rate per hour). After an explanation of the task, participants performed two single-task training trials for each task and two dual-task practice trials. Participants were instructed that in dual-tasks they could only see and control one task at a time and had to actively switch between tasks by pressing the trigger button on the joystick. Participants then completed four blocks of experimental trials (one for each experimental condition). In the first two blocks, participants experienced a single noise level, either low or high noise. The noise level was randomly assigned and balanced across participants. On the first block a radius size (small or large) was also randomly assigned, on the second block the other radius level was assigned. For the third and fourth block this order of radius conditions was repeated, but with another level for noise. For each block, participants completed five single-task tracking trials, five single-task typing trials, and twenty dual-task trials. The dual-task trials were further grouped into sets of five trials, with a short pause between each set. The total procedure took about one hour to complete. Participants were aware that the payoff that they received was influenced by their performance on the typing task and by their performance on the tracking task. Specifically, in all conditions, participants were told that they could gain points by completing the typing task as quickly as they could and that faster trial completion times would lead to higher scores. All participants were also instructed that they lost points when the cursor went outside of the target area. They were also informed that they lost points when they made typing errors. Or to state differently: all participants were informed that both speed (on the typing task) and accuracy (on the tracking task) mattered. However, they were not informed of the exact equations that underlie their payoff, nor of the relative weight of each component (i.e., whether fast completion or tracking accuracy were more valuable). This allowed us to investigate how well participants adapted their performance to the feedback they received on their performance at the end of each trial. Do people behave differently based on the payoff function, or do they apply "default" interleaving strategies that are independent of the payoff function? For ease of reference, we refer to our two groups of participants as "speed" and "accuracy" to emphasize what task had a relatively stronger weight in the payoff function. However, both aspects mattered in both payoff conditions. Measures In our main analysis we report results only for the last 5 trials of each block. 
The motivation for this is that we are interested in participants' behavior after they had time to become accustomed to the payoff function and have received feedback on their performance. For each metric we calculate a score (e.g., total trial time) per trial and report the average score across the 5 trials. This average score is also used in statistical analyses. Performance is expressed in three metrics: total trial time, maximum deviation of the cursor from the center of the target area, and total time the cursor spent outside of the target area. Total trial time is defined as the time between the start of the trial and the time at which the last digit of the string of digits was pressed. For maximum deviation of the cursor we calculated per trial what the furthest deviation of the cursor from the center of the target was. For each participant we then calculated the average value across trials. This measure is of interest given its similarity to a metric of driver distraction research: how far does a car (here: cursor) drift outside of the lane boundary (here: target area) due to inattention? The third measure is the average total time that the cursor spent outside of the target area. The metric is again related to measures of driver distraction: how long was a car (here: cursor) outside of the lane boundary (here: target area) due to inattention? We also analyzed four related metrics that reflect participants' strategy for interleaving between tasks. The maximum number of digits typed per visit to the typing window reflects how long participants were willing to stay in the typing window while the cursor drifted out of sight. Only correctly typed digits were considered. The second metric is the average time that was spent per visit to the typing window. The third metric is the average number of visits to the tracking window. The fourth metric is the average time that is spent in the tracking window per visit. Taken together, these four metrics describe how frequently participants visit each task and how long they spend on each task before moving on to the next task. This again relates to measures of driver distraction that investigate how frequently and how long participants glance at the road (here: number of visits to tracking task and duration of that visit) and how much time they spend on a distracting task (here expressed as maximum number of steps completed and as average visit time). In our analysis we found that participants differed in their typing speed and that this affected performance and strategy. To incorporate this into our statistical analysis, we split the participants of each payoff condition into two groups using a split mean procedure on the average interkeypress interval times (IKI). This resulted in four equal groups: fast typers in the speed payoff condition (IKIs of 184 For statistical analysis we used a 2 (payoff function: speed/accuracy) x 2 (cursor noise: low/ high) x 2 (target size: small, large) x 2 (typing speed: relatively slow/fast) ANOVA. We only considered main effects and two-way interactions. A significance level of .05 was applied throughout. Table 2 gives an overview of the statistical effects found. These are discussed in more detail in the text. Results and Discussion Overall performance In general, the cursor deviated more for the 'speed' (of typing) payoff condition (Fig 3: black points) than for the 'accuracy' (of tracking) condition (grey points). 
The cursor also deviated more when the noise was high (squares) compared to low (circles), and when the radius was large (open points) compared to small (closed points). For trial time, performance was mostly affected by task difficulty, as trial times were shorter when noise was low (circles), or when the radius was large (open points). Statistical analysis confirmed these findings. The effects are summarized in Table 2, and discussed in more detail below. The raw data are included together with an R analysis script in S1 File. Among the interaction effects, target radius interacted with interkeypress interval group, F(1, 21) = 5.03, p = .036, ηp² = .193. Finally, there was an interaction between cursor noise and target radius, F(1, 23) = 4.893, p = .037, ηp² = .175. There were no other significant effects. Taken together, the analysis shows that the difficulty of the tracking task (i.e., noise and radius) consistently affected performance on each task. Similarly, individual differences in participants' typing speed affected performance on each task. Manipulation of the dual-task payoff function had an effect on how participants performed on the tracking task (i.e., maximum cursor deviation and total time that the cursor was left outside of the target). More specifically, participants tended to allow the cursor to drift further, and let it remain outside of the target area for longer, when the payoff function rewarded faster completion of the typing task compared to accurate tracking performance. However, there was no effect of the payoff manipulation on total trial time. To better understand these results, we next consider metrics related to how participants chose to interleave tasks.

Dual-Task Interleaving Strategies

Fig 4 plots two measures of dual-task interleaving strategy: the maximum number of digits that participants chose to type during a visit to the typing window, and the duration of time that was spent in the tracking window per visit to this window. Again, each experimental condition has a relatively unique point in this strategy space, especially when comparing the two payoff conditions (i.e., compare the black with the grey points in Fig 4). A summary of statistical effects is given in Table 2, and discussed in more detail below. More digits were typed per visit when the payoff condition encouraged fast completion of the typing task (speed payoff condition). The maximum number of digits was also affected by task characteristics, such that more digits were typed per visit to the typing window when the task environment conditions were easier (i.e., low noise, large radius). The average time spent per visit to the typing window showed a similar pattern. There were no other significant effects; specifically, there was no significant effect of typing speed. When comparing these results with the analysis of the maximum number of digits typed per visit, the lack of a significant effect of typing speed on mean visit time to the typing window suggests that participants had set an objective criterion for how long they could spend in the typing task, and that this criterion depended on the payoff condition and the task difficulty. Given this criterion, a participant can type more or fewer digits depending on their typing skill, but still spends roughly the same time per visit independent of typing skill.
These differences in the length of each visit to the typing task and in the maximum number of digits typed per visit to the typing task also affected how often participants visited the tracking task. Participants made more visits to the tracking window when the payoff function prioritized accurate tracking (accuracy payoff condition). Finally, we also analyzed the average time spent in the tracking window per visit to this window. This time was affected by task difficulty, such that more time was spent in difficult situations (e.g., small radius, high noise). More time was also spent in the tracking window when noise was high (M = 1.53 sec, SD = 1.14 sec) compared to when noise was low (M = 0.99 sec). The typing-speed groups also differed slightly in this measure. This result might be due to a floor effect: some fast typers could complete the typing task without ever visiting the tracking window in some of the conditions (e.g., large radius with low noise). In contrast, the slow typers always had to visit the tracking task. This might have made their average time on the tracking task (i.e., the main effect of typing group) slightly higher. Taken together, the above analyses show that participants' dual-task interleaving strategy was affected by the three factors of interest: changes to the payoff function, changes to the difficulty of the tracking task (noise and radius), and individual differences in participants' typing speed. For example, participants dedicated more of their time to the typing task and paid fewer visits to the tracking task when the payoff function rewarded fast completion of the typing task more strongly. Similarly, when the tracking task was easier (i.e., when the cursor moved slower and the target was larger), visit times to the typing task were longer and fewer visits were made to the tracking task. Typing speed only affected some metrics. For example, it did not influence how long each visit to the typing task was, but it did influence how productive each visit was: fast typers completed more of the digit string than slow typers in the same time window.

Discussion of results

The results of this experiment show how performance metrics and dual-task interleaving strategies were affected by our three factors of interest: task characteristics (noise, radius), individual differences in skill (typing speed), and incentives (payoff function). What these data do not reveal is whether participants adopted strategies that would result in the highest possible monetary reward over the trial, given the constraints that these factors place on performance. To better understand this aspect of the data we developed a computational cognitive model of task performance. The model is used to explore the performance of various dual-task interleaving strategies so as to identify the range of strategies that would yield the highest possible reward, given the constraints imposed on performance by the task (e.g., cursor noise, radius size, payoff function) and the individual (e.g., typing speed).

Model development

Our model of dual-task performance is a modification of the model of average performance in [18]. The refinements are that the current model can capture individual differences in typing speed and can account for typing errors. A detailed description of model development and parameter choices is given in [54]. The model is used to predict performance for various strategies for interleaving attention between tasks. The model captures each task (typing, tracking) as a series of discrete steps. This is similar to other procedural models of dual-task performance (e.g., [31,41]).
However, compared to the preceding models, we model actions at the keystroke level (cf. [43,61]) and do not make strong assumptions about actions at the millisecond level. This level of abstraction has been valuable in other dual-task models [14,18,24,32]. We refer to our model as a 'computational cognitive model'. The term "cognitive" is used in reference to Newell's definition of the "cognitive band" of cognition ([62], see also [63]). Newell describes different types of human behavior that take place over different time scales (i.e., ranging from microseconds to months or years). Within this framework the 'cognitive band' covers behavior between a few hundred milliseconds and several seconds. Similarly, our model captures behavior that takes place at this timescale by specifying actions that take several hundreds of milliseconds (the keystroke level, cf. [43,61]). We call our model a computational model, as it is implemented as executable code; this is distinct from Marr's notion of computational explanation [64]. We will now describe the structure of the model in more detail.

Typing model

The typing model types digits according to a pre-specified strategy that is set by the modeler (see the section on Strategy Space below). The typing speed is calibrated to each individual participant's average interkeypress interval as measured in single-task trials. The model also makes typing errors, at the same rate as individual participants in single-task typing trials. Errors are inserted at random positions in the string of digits on each model run. It was assumed that typing an erroneous digit required the same time as a correct digit. In addition, it was assumed that a post-error slowing cost [65] slowed down typing speed on the immediately following correct digit. The mean post-error slowing time was estimated by subtracting the normal interkeypress interval time from the average time observed in the interval for the first correct digit after an erroneous digit. This model captures the core features of interest and is sufficient for making detailed predictions of typing time across a range of different dual-task interleaving strategies.

Tracking model

The tracking model focuses on two core aspects of the experimental task: (1) the cursor can only be controlled when the tracking window is open, and (2) whenever the cursor is not controlled, it drifts according to the drift function of the experiment (see Methods section). At times when the model controlled the cursor movement, this was done as follows. Every 250 msec the position of the cursor relative to the center of the target area was determined. A linear function was then used to determine the angle of the joystick to move the cursor towards the center (this function was determined in [18]):

angle = -0.01 × (current distance from target center), with -1 ≤ angle ≤ 1

Based on the angle, the position of the cursor was updated every 25 msec by multiplying the angle value with 5 pixels. Both the frequency of the update and the angle multiplication were identical to how this was implemented in the experiment.

Dual-task model

On each trial, the dual-task model typed a series of digits using the typing model before switching to the tracking task. The number of to-be-typed digits was specified as an explicit strategy choice. When the model switched from typing to tracking, a switch cost was incurred (250 msec, taken from [18]).
The model then pursued active tracking of the cursor, based on the tracking model, for a pre-determined fixed period of time. After this time had passed, another switch cost was incurred (180 msec, taken from [18]). The higher switch cost for switching from typing to tracking intuitively reflects the need to first locate the cursor on the screen; the digits are always in the same position and therefore require less time to locate. Once the model switched back from tracking to typing, it would continue typing until it was time to switch again. It would continue this pattern until all 20 digits were typed correctly.

Strategy Space

We explored how different explicit strategies for interleaving tasks affected performance. A strategy was determined by two variables: (1) a basic strategy determined how many digits were typed per visit to the typing window before switching to the tracking task, and (2) a strategy alternative determined how much time was spent in the tracking window on each visit before switching back to the typing task. For the basic strategies (number of digits typed per visit), we explored performance for a relatively simple set of twenty strategies in which a consistent number of digits was typed per visit to the typing window. For example, a strategy to always type 1 digit per visit would make twenty visits; a strategy to always type 2 digits per visit would make ten visits; a strategy to always type 8 digits per visit would make two visits in which 8 digits were typed and one in which the remaining 4 digits were typed. For each of these twenty basic strategies, we explored the performance of various strategy alternatives. Strategy alternatives varied in how much time was spent in the tracking window per visit to this window. We explored this for 12 alternatives, between 250 and 3,000 msec, in steps of 250 msec. Within a single simulation we kept the time spent in the tracking window per visit constant (i.e., if the model spent 250 msec during the first visit to the tracking window, the same time was used on the second visit). For each of these distinct strategy variants the model was run multiple times and performance predictions were made. In total this led to 229 strategy alternatives: for 19 strategies (typing 1 to 19 digits per visit), we explored the effect of 12 alternatives for time on the tracking task (giving 12 x 19 = 228 strategy alternatives), and there was one strategy without interleaving (typing all 20 digits in one visit). We ran 50 simulations (i.e., 50 simulated trials) for each individual, each experimental condition (noise, radius, payoff), and each strategy alternative. This gave a total of 12 (participants per payoff function) x 2 (payoff functions) x 2 (noise) x 2 (radius) x 229 (strategy alternatives) x 50 (simulations) = 1,099,200 simulations. For each model simulation we were able to derive performance measures equivalent to those gathered for human participants (i.e., total trial time, maximum deviation of the cursor, total time that the cursor spent outside of the target area). Given these performance measures it was possible to calculate the payoff achieved by the model on each simulated trial, using the same objective function that was used for rating human performance in the experiment (see Eqs 1-3). A schematic sketch of a simulated dual-task trial and of this strategy grid is given below.
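The following Python sketch pulls the components above together into one simulated trial: digits are typed at a participant's single-task interkeypress interval, the cursor drifts while the typing window is open, switch costs of 250 msec (typing to tracking) and 180 msec (tracking to typing) are paid, and during tracking visits the cursor is steered back with angle = -0.01 × distance. It is a simplified illustration of the logic described in the text (typing errors, post-error slowing, drift during switch costs, and maximum-deviation bookkeeping are omitted), not the original model code.

```python
import random

UPDATE_MS = 25                  # cursor position update interval
SWITCH_TO_TRACKING_MS = 250     # switch cost: typing -> tracking
SWITCH_TO_TYPING_MS = 180       # switch cost: tracking -> typing

def steer(x, y, duration_ms):
    """Active tracking: every 250 ms the joystick angle is set to
    -0.01 * distance-to-center (clipped to [-1, 1]); the cursor then moves
    toward the center by up to angle * 5 pixels per 25 ms update."""
    for step in range(duration_ms // UPDATE_MS):
        if step % 10 == 0:                       # re-read position every 250 ms
            dist = (x * x + y * y) ** 0.5
            angle = max(-1.0, min(1.0, -0.01 * dist))
        dist_now = (x * x + y * y) ** 0.5
        if dist_now > 0:
            step_px = min(abs(angle) * 5, dist_now)   # do not overshoot the center
            x -= step_px * x / dist_now
            y -= step_px * y / dist_now
    return x, y

def simulate_trial(digits_per_visit, tracking_ms, iki_ms, noise_sd, radius):
    """One simulated trial for one interleaving strategy; returns the trial
    time and the time the cursor spent outside the target (both in seconds)."""
    x = y = 0.0
    time_ms, time_outside_ms, digits_left = 0, 0, 20
    while digits_left > 0:
        typed = min(digits_per_visit, digits_left)
        typing_ms = typed * iki_ms
        # The cursor drifts while the model is typing; count time spent outside.
        for _ in range(int(typing_ms) // UPDATE_MS):
            x += random.gauss(0, noise_sd)
            y += random.gauss(0, noise_sd)
            if (x * x + y * y) ** 0.5 > radius:
                time_outside_ms += UPDATE_MS
        time_ms += typing_ms
        digits_left -= typed
        if digits_left > 0:                      # pay a visit to the tracking window
            time_ms += SWITCH_TO_TRACKING_MS
            x, y = steer(x, y, tracking_ms)
            time_ms += tracking_ms + SWITCH_TO_TYPING_MS
    return time_ms / 1000.0, time_outside_ms / 1000.0

if __name__ == "__main__":
    # The strategy grid from the text: 1-19 digits per visit x 12 tracking durations,
    # plus the no-interleaving strategy (all 20 digits at once) = 229 alternatives.
    strategies = [(20, 0)] + [(d, t) for d in range(1, 20)
                              for t in range(250, 3001, 250)]
    print(len(strategies))                       # -> 229
    print(simulate_trial(digits_per_visit=3, tracking_ms=750,
                         iki_ms=250, noise_sd=5, radius=80))
```

Running simulate_trial many times per strategy, participant, and condition, and passing the resulting measures through the payoff function sketched earlier, yields the kind of strategy-by-payoff grid that the analysis below searches over.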
Comparison of human performance with predicted optimal performance

The empirical results demonstrated that participants adapted their strategies to the payoff function, the task characteristics, and their individual typing skill. With the model, we now want to ask a different question, namely: were participants good multitaskers? To address this question, we selected for each individual participant, in each experimental condition, the strategy alternative that, on average, was predicted to achieve the highest payoff. We compared the performance of this strategy on various metrics with human performance (as reported above). For some individuals, in some conditions, the model predicted that multiple strategies could achieve the highest score (i.e., no single strategy alternative was better than all other strategy alternatives). In these cases, performance for all measures of interest (e.g., trial time, maximum deviation of the cursor, number of digits typed per visit) was averaged across the set of optimal strategies. This method allowed for a comparison between model and data without additional assumptions about how participants might choose between strategies that are otherwise equivalent in terms of their expected payoff. For example, alternative selection methods might be to 'bracket' the range of good performance [42,43] or to select the strategy that achieved the best mean value on some other measure of performance (e.g., trial time, or maximum deviation of the cursor). This would require additional assumptions about what the most representative or best metric is. Our approach does not require such additional assumptions. A schematic sketch of this selection procedure and of the fit statistics is given below. Table 3 summarizes the fit of these metrics (and four other metrics, see [54] for selected graphs) in terms of R2, RMSE (and RMSE%), and the number of conditions for which the error bars between model and human data overlap. Following [66], an ANOVA was applied to the model data to explore whether the same patterns of statistical effects were present in the model data as observed in the human data. In this ANOVA, the model predictions for the best strategy alternative for each individual and each condition were treated as if generated by a participant. We applied a similar ANOVA structure as was used for the analysis of the empirical data, using a split mean analysis on typing speed to distinguish relatively fast typers from relatively slow typers. Table 4 reports these ANOVA results. In Table 3 we count what proportion of effects in the ANOVA of the model data (Table 4) was similar to the ANOVA results of the empirical data (Table 2). In cases where one data set (i.e., model or human data) showed a marginal effect and the other data set showed no effect or a significant effect, this was counted as explaining "half" of the effect. ANOVAs were not applied to the payoff score data, as the payoff function was an independent variable. An effect was counted as "wrong" in cases where the model predicted an effect that did not occur in the human results. Our analysis shows that on at least two metrics the human performance data was consistent with the performance predictions of the optimal model. First, R2 values were generally high (i.e., six out of nine measures were 0.89 or higher). Second, the ANOVA analysis of the model data produced similar main effects and interaction effects as the human data (e.g., 96% of main effects correct). Perhaps more importantly, the model predicts that on almost all the dependent variables there should be effects of payoff function, task characteristics (noise, radius), and individual differences in typing skill.
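A minimal sketch of the bookkeeping just described, assuming the simulation results are already summarized as per-strategy mean payoffs and mean measures: pick, for each participant and condition, the strategy alternative(s) with the highest mean payoff, average the predictions of tied alternatives, and compute R2 and RMSE against the human means. The data structures and the particular R2 formula (coefficient of determination) are illustrative choices, not the published analysis code.

```python
import statistics

def optimal_prediction(mean_payoff, mean_measure):
    """mean_payoff[strategy] and mean_measure[strategy] are means over the 50
    simulations of one participant in one condition. Returns the predicted
    measure, averaged over all strategies tied for the best mean payoff."""
    best = max(mean_payoff.values())
    tied = [s for s, p in mean_payoff.items() if p == best]
    return statistics.mean(mean_measure[s] for s in tied)

def r_squared(model, human):
    mean_h = statistics.mean(human)
    ss_res = sum((h - m) ** 2 for h, m in zip(human, model))
    ss_tot = sum((h - mean_h) ** 2 for h in human)
    return 1 - ss_res / ss_tot

def rmse(model, human):
    return statistics.mean((h - m) ** 2 for h, m in zip(human, model)) ** 0.5

if __name__ == "__main__":
    # Toy example: three strategy alternatives (digits per visit, tracking ms)
    # for one participant in one condition, with two of them tied for best payoff.
    payoff = {(2, 500): 0.041, (3, 750): 0.043, (4, 1000): 0.043}
    trial_time = {(2, 500): 24.1, (3, 750): 22.5, (4, 1000): 21.9}
    print(optimal_prediction(payoff, trial_time))          # mean of the tied strategies
    print(r_squared(model=[22.2, 25.0], human=[23.0, 26.1]),
          rmse(model=[22.2, 25.0], human=[23.0, 26.1]))
```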
Taken together, this analysis shows that the participants in the study were adopting strategies that were consistent with the predicted optimal performance model. However, the model predictions of optimal performance did not always align perfectly with the human data. First, only in a few conditions did the standardized error bars of the model and human data overlap (on average 2.3 out of 8), suggesting a difference between human and model data. Second, RMSE percentage scores were relatively high. Fig 5 helps in exploring where these differences occurred. For most measures, the largest discrepancy was in the hardest condition: high noise, small radius. Other discrepancies also occurred in the high noise, large radius condition. Inspection of the figures suggests that participants could have spent less time on the tracking task. This discrepancy might be attributed to the relative simplicity of the tracking model. For example, the model immediately started tracking when the tracking window opened, whereas participants might have needed some time to locate the cursor first. The model can be considered a model of idealized tracking performance, as it does not take these effects into account. More fine-grained data on human performance (e.g., eye-tracking data) is needed to model these effects. More detailed assumptions about tracking behavior would go beyond the level of granularity of the measurements in the current experiment.

Exploratory analysis of learning to achieve optimum performance

We also explored how the strategies that participants applied changed over time and how this relates to expected performance as predicted by the model. In Fig 6, rectangular areas behind each participant's data show the model's prediction of relative success for each strategy in a particular condition. The three grey tones show strategies for which the best scoring strategy alternative (i.e., time spent on tracking) had a score that was at most 0.5 pence (black), 1 pence (dark grey), or 2 pence (light grey) from the predicted maximum score for that specific condition and participant. Grey shade was always relative to a specific participant and a specific condition. Hence, a comparison of grey levels should be made within a participant and within a condition. Across conditions, different absolute scores might have been achieved. If participants adopted optimal strategies, then their performance should lie inside the grey rectangular areas, especially inside the dark grey areas. However, the degree of overlap varied between participants and conditions. Some participants (e.g., participant 101 in the speed and participant 203 in the accuracy payoff condition, see Fig 6) adapted very well by almost always applying strategies that fell in the optimum region. Although these participants did not always apply optimal strategies on all trials, in general the trend lines suggest that over time they gradually reached optimal performance. Some participants showed effects of strategy transfer between conditions. For example, participants 106 (speed payoff) and 202 (accuracy payoff) seemed to apply very similar strategies across conditions, which in general led to good performance, but not necessarily optimal performance. Finally, some participants' strategies did not match the predictions of the optimal strategy.
For example, participant 201 consistently applied sub-optimal strategies on three blocks and did not vary strategies between conditions. To quantify these results, we counted on how many trials a participant's chosen strategy fell in a grey area (i.e., where the predicted score was less than 2 pence away from the optimum score) and applied an ANOVA with payoff function, noise, and radius as factors. High scores were achieved on roughly three times as many trials in the low noise condition (M = 15.04 trials) as in the high noise condition. There was no effect of payoff function, F < 1. There was a significant interaction effect between payoff function and noise, F(1, 22) = 7.23, p = .013, ηp² = 0.247. There were no other significant interaction effects. Very similar effects were found when the analysis was performed counting only strategies that achieved a score within 0.5 pence of the maximum strategy (i.e., that fall inside the dark black bars; see [54] for the analysis). These results suggest that how well participants performed in comparison with their own payoff curve (i.e., with the location of the maximum strategy) depended on the task characteristics, but not on the payoff function. When the tasks were relatively easy, due to low noise or a large radius, participants on average achieved a maximum score on more than half of the trials. The absence of a significant effect of payoff function in this analysis is good. It implies that the manipulation of the payoff function did not pose any limitations on participants' ability to adapt performance to the payoff function. Stated differently, if there were a significant effect of payoff function, it would suggest that participants applied more optimal strategies in one payoff condition compared to the other. This was not the case; participants were equally good in both payoff conditions. As a final analysis, we investigated whether there were individual differences in how frequently the optimum strategy was applied on the last five trials of each block (i.e., 20 trials in total). The optimum strategy was defined here as a strategy that fell in the grey zone of Fig 6 (i.e., with a predicted score within 2 pence of the predicted optimal score). The resulting histogram in Fig 7 suggests that, in general, 21 out of the 24 participants applied an optimal strategy on at least half of the trials. Within each bar, the percentage of participants from each payoff condition is highlighted in a different color (accuracy: blue, right-tilted lines; speed: red, left-tilted lines). Participants in the speed payoff condition applied the optimal strategies more frequently. An analysis of the average minimum distance to the best strategy (i.e., the shortest distance between the applied strategy and the black bars in Fig 6) across participants is plotted as a histogram in Fig 8. These data suggest that participants on average were only 2 digits away from a strategy that can be considered optimal given the constraints on performance.

Summary of results

In an empirical study we demonstrated how dual-task interleaving performance is systematically influenced by task characteristics, monetary incentives, and individual differences in skill. People spend longer on a task if this is needed because of the task's difficulty (e.g., when the cursor moved fast), or when this matches their priorities as formalized through an incentive (e.g., when the task is more rewarding). They also calibrate their strategies to their own skill (e.g., typing speed).
Using a computational cognitive model we assessed how well participants chose strategies that were best suited for them given task characteristics, incentives, and individual typing skill. The model analysis suggested that participants adapted their performance in such a way as to achieve a score that was optimal for them, as evidenced by the high correspondence between the trends in the model and human data (e.g., high R2 and correspondence in ANOVA results). However, the exact strategies that participants applied were not always the ones that, on average, achieved the highest mean score, as evident in, for example, relatively high RMSE values. An analysis of the learning path gave three explanations for why performance did not always achieve the best scores. First, participants sometimes were still adapting their performance to the task at hand by the end of the block. Second, some participants transferred strategies from one block to the next and hardly adapted them to the circumstances. For some participants this was because these strategies optimized, or at least satisficed [67], performance (e.g., see the performance of participants 106 and 202 in Fig 6); for others there was no clear explanation for why these strategies were applied. Third, the number of times that a participant applied the optimal strategy was influenced by the task characteristics. On harder tasks (e.g., small radius, high cursor speed), participants were relatively less successful in achieving the optimum score.

Relationship to existing literature

The systematic influence of task characteristics, in particular task difficulty (e.g., [21,[26][27][28]), on dual-task performance has been well documented. Consistent with this work, we show how task characteristics influence performance in our set-up: performance declines when tasks are more demanding. In addition, task characteristics influence the strategies that participants choose to interleave between tasks. More time is spent on the more challenging tasks. Incentives were used here to formalize participants' objective (cf. [18]) and to assess in an objective way whether participants achieved the best scores they could. This provides support for the notion that rational agents optimize their performance so as to maximize their payoff [18,35,[53][54][55][56]68]. We showed that incentives have consequences for the strategies that are selected for interleaving attention and for performance on each of the individual tasks (e.g., total time spent typing, and maximum deviation of the cursor). Although participants adapted their performance towards optimal performance, they did not reach the overall optimum strategy in all cases. The computational models allowed us to identify reasons why this happened: strategy transfer and longer learning times. We also found that individual differences in skill influenced performance, building on recent observations to include these in our understanding of multitasking (e.g., [10,11,41,52]).

Fig 7. Histogram of how frequently participants applied the optimum strategy. Optimum strategies are those that achieved a score that was predicted to fall within 2 pence of the maximum score (i.e., that were highlighted in grey in Fig 6). Within each bar the proportion of participants from each payoff condition group is highlighted. For each participant only the last five trials of each condition are considered.
Relationship to existing literature

The systematic influence of task characteristics, in particular task difficulty (e.g., [21,26-28]), on dual-task performance has been well documented. Consistent with this work, we show how task characteristics influence performance in our set-up: performance declines when tasks are more demanding. In addition, task characteristics influence the strategies that participants choose to interleave between tasks: more time is spent on the more challenging tasks. Incentives were used here to formalize participants' objective (cf. [18]) and to assess in an objective way whether participants achieved the best scores they could. This provides support for the notion that rational agents optimize their performance so as to maximize their payoff [18,35,53-56,68]. We showed that incentives have consequences for the strategies that are selected for interleaving attention and for performance on each of the individual tasks (e.g., total time spent typing, and maximum deviation of the cursor). Although participants adapted their performance towards optimal performance, they did not reach the overall optimum strategy in all cases. The computational models allowed us to identify reasons why this happened: strategy transfer and longer learning times. We also found that individual differences in skill influenced performance, building on recent observations to include these in our understanding of multitasking (e.g., [10,11,41,52]).

Fig 7. Histogram of how frequently participants applied the optimum strategy. Optimum strategies are those that achieved a score that was predicted to fall within 2 pence of the maximum score (i.e., that were highlighted in grey in Fig 6). Within each bar the proportion of participants from each payoff condition group is highlighted. For each participant only the last five trials of each condition are considered.

Our modeling work is among the first efforts to demonstrate how individual skills systematically influence the strategies with which tasks are interleaved, and thereby performance [41,52,54]. It can sometimes be hard to determine "task difficulty" independently from "skill". For example, cooking a steak exactly medium rare is easy for a seasoned chef, but might pose a significant challenge for a novice cook. In the latter case we would perhaps call the preparation of a steak a "difficult" task, relative to the (lower) skill level of the novice cook. In general, experience and training can help to develop skills and can turn a difficult task into a simpler one. Various studies have looked at how the acquisition of new skills can impact performance in dual-task settings (e.g., for recent examples see [69,70]). In our experiment, skill and task difficulty can more easily be distinguished in an objective manner. We manipulated inherent properties of the tracking task that make the task relatively easier (e.g., the low noise and large radius conditions) or relatively harder (e.g., the high noise and small radius conditions). For the typing task, we did not manipulate difficulty (e.g., no strings were harder than others). However, we observed that there are differences in typing skill: some participants type faster than others. As typing is a skill that is acquired over years of practice, we did not expect significant typing skill acquisition during our experiment (cf. e.g., [59,60]).

Limitations and future work

The modeling analysis suggested that participants did not consistently apply strategies that the model predicted to be optimal for them given the constraints on performance. If we assume that the model is correct, this discrepancy might be due to several shortcomings in the experiment. First, some participants needed more trials to learn the optimal strategy. Providing more trials for learning would be especially effective if, during some of these trials, participants could freely explore the value of different strategies without being penalized for this. This can, for example, be done using a no-choice/choice paradigm (e.g., [71-74]), in which participants are first required to apply specific strategies (no-choice) so as to experience how these strategies perform, and are then allowed to choose their own strategies (choice), given their knowledge of the likely success rate. Second, performance feedback was only given at the end of each trial. More feedback might be needed to guide the learning of new strategies. Providing feedback during trials (instead of only at the end) increases the amount of information that is available, as in [32]. Such feedback is particularly useful in the high noise condition, where more variability in the position of the cursor makes the outcome of specific strategies more variable from one trial to the next. More generally, the timing, objective function, and magnitude of rewards can influence a model's predictions of optimal behavior [33] and influence whether participants can find the optimum (as studied, for example, in the context of melioration and maximization of performance, see e.g., [75,76]). Stated differently, different performance might occur when rewards are only a couple of cents (as in our study) versus hundreds of dollars (i.e., a difference in magnitude).
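To make the role of the payoff function concrete, here is a small, purely illustrative R sketch of how an objective function over interleaving strategies determines an optimum, and how re-weighting its components (as a speed- versus accuracy-oriented payoff might) shifts that optimum. The functions and constants are assumptions for illustration only; they are not the payoff used in the experiment.

# k = number of digits typed per visit to the typing window (a candidate strategy)
typing_gain   <- function(k) 0.50 * k        # assumed reward for typed digits, in pence
tracking_cost <- function(k) 0.02 * k^2      # assumed penalty for letting the cursor drift

payoff_speed    <- function(k) typing_gain(k) - 0.5 * tracking_cost(k)  # tracking weighted lightly
payoff_accuracy <- function(k) typing_gain(k) - 2.0 * tracking_cost(k)  # tracking weighted heavily

k <- 1:20
k[which.max(payoff_speed(k))]     # optimum lies at longer visits (here: 20 digits)
k[which.max(payoff_accuracy(k))]  # optimum lies at shorter visits (here: 6 digits)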
To reduce ambiguity for the participant and the modeler about what should be optimized, we provided explicit numeric feedback, so as to have a "gold standard" (cf. [18,32,52-56]). One conclusion from our analysis is that human participants do not always seem to perform optimally. However, it might also be that human performance was optimal, but that our model was not accurate. For example, although we assumed that participants optimized the objective payoff function, perhaps internally other factors (e.g., motivation, interest) were optimized. Following this line of reasoning, our model can be seen as a method of capturing important aspects of the task environment, individual differences, and the payoff, and of providing a detailed, normative assessment of what should constitute "rationally bounded behavior" given these constraints. The deviations from the optimal predictions are interesting, as they pose new questions for the study of human multitasking behavior.

The above consideration reflects a broader concern within the cognitive science community of identifying the appropriate normative theory (or, in Marr's parlance, the computational level of explanation [64]). Take, for example, the classic problem of the Wason selection task [77,78]. In this task, participants need to turn around a set of cards to test a logical rule that is provided by the experimenter (e.g., "All cards that have a vowel on one side have an even number on the other side"). A consistent finding is that participants do not follow the rules of logic in this task. Although this could be interpreted as a deviation from rational behavior, later analyses using a different model and theory demonstrated that behavior in the selection task can actually be cast as optimal data sampling behavior ([79,80]; for a more recent version see [56]). That is, this work demonstrated that behavior that was initially believed to show (and was modeled as) a deviation from optimality could in fact be seen as optimal. In a similar vein, rational explanations have recently been developed for other tasks where the assumption has been that people act suboptimally (e.g., the gambler's fallacy [81] and anchoring [82]). It is possible that behavior in our task is also more frequently optimal when judged by a different criterion than the one used in our analysis.

To avoid strong assumptions about human behavior, the components of the model were grounded in measurements taken in single-task conditions (e.g., for interkeypress intervals) or specified in preceding models of this task setting in which a different payoff function was used (e.g., parameters for the control of the joystick and for switch costs [18]). In this way, we attempted to craft a model that did not go beyond the empirical data. That said, more detailed insights might be gained when the model is refined further. Depending on the nature of the revision, alternative predictions regarding optimality might arise. We see four general ways in which the model can be refined. First, more details of the underlying psychological processes and the moment-to-moment performance could be given for most components of the model. Such theories can provide an account of performance at different levels of abstraction [62]. For example, Zhang and Hornof [41] have developed models that predict performance of various 'microstrategies' for dual-tasking (i.e., systematic combinations of cognitive processes at the millisecond to second level [83]).
Similarly, our model does not incorporate a theory of effort or motivation. It provided a normative account of what performance might look like for different strategies for interleaving between tasks. It did not account for the different effort levels that can be applied given the choice of a specific strategy. It is possible that participants adhered to general principles such as a minimization of effort [84,85], and a richer model, with more assumptions, is needed to account for this. Second, the model could be calibrated to take more variability of performance into account. For example, most of the model's parameters are set to a mean value (e.g., mean typing speed). This can be changed to take trial-to-trial variability into account (e.g., by sampling values from a distribution). Third, the strategy space might be broadened in two ways. First, the model was only used to explore simple strategies in which a consistent number of digits was typed during each visit. However, participants might have used more complicated strategies. For example, they might have varied the number of digits they typed per visit, or they might have changed the number of digits they typed based on the occurrence of "structure" in the number (e.g., see [24] for an example where task structure influences interleaving). More fine-grained measurements (e.g., eye-tracking) are needed to accurately model such strategies. As the current model explored the performance of extreme strategies (e.g., no interleaving, and interleaving after every digit), as well as many strategies in between these extremes, it is expected that the performance of more "complex" strategies falls in the same range as the current model predicted (cf. the bracketing approach; see [43,86]). Fourth, the model could be improved by incorporating a formal theory of how people learn to adapt to constraints over time. Although some theories of learning in multitasking have been proposed (e.g., [58,87]), these theories are not yet at a level of sophistication such that they can be applied directly to the current context. In particular, it is unclear at what level of granularity feedback on performance is cognitively processed, and how experience with one strategy is generalized to other strategies. Insights from hierarchical reinforcement learning might prove valuable here, as such models learn the utility of small, consistent action units while at the same time learning the utility of larger units (e.g., strategies) that are formed out of these smaller units [88].

Conclusion

We provided a detailed analysis of how people adapt their interleaving strategies in a dual-task setting to three factors: task characteristics (noise, radius), individual differences in skill (e.g., typing speed), and incentives (a formal way of capturing objective or priority). The modeling analysis suggests that people adapt their performance in such a way as to try to maximize the payoff value. This is not to say that performance was optimal on every trial. Several explanations have been given for this. Some are related to the learning process (e.g., strategy transfer and exploitation of successful strategies); others might have to do with the difficulty of the task (e.g., the noise in the feedback).

Supporting Information

S1 File. Script and data for analysis of the empirical data. The zip-file contains an R script and a .Rdata file that can be used to analyze the empirical data. The script explains the structure of the data file. (ZIP)
Comparative analysis of antigen coding genes in 15 red cell blood group systems of Yunnan Yi nationality in China: A cross-sectional study

Abstract

Introduction: There are few analyses of the antigen coding genes of 15 red blood cell group systems in the Yunnan Yi nationality. This gap poses potential risks for clinical blood transfusion. In this report, the coding genes and distribution of antigens from 15 blood group systems in the Yi nationality were tested and compared with those of the Han nationality and other ethnic minorities.

Methods: The samples came from healthy subjects at the First People's Hospital of Qujing, Yunnan Province. Two hundred and three Yunnan Yi and 197 Han nationality individuals were included. Thirty-three low-frequency blood group antigens from the 15 blood group systems of Yunnan Yi blood donors were genotyped and analyzed by PCR-SSP. Sanger sequencing was used to detect A4GALT in the Yunnan Yi nationality. The χ² test was used to compare observed and expected genotype distributions to verify conformance to Hardy-Weinberg equilibrium. Fisher's exact test was used to analyze gene frequency distributions, and statistical significance was set at p < 0.05.

Results: The ABO blood group results for the Yi nationality and the local Han nationality in Qujing City, Yunnan Province, showed that the majority were type A and type O, while the least prevalent was type AB. RhD+ accounted for more than 98% of both the Yi and Han populations. There was a significant difference in ABO blood group antigen distribution between these two nationalities (p < 0.05), but there was no significant difference in the composition ratio of the D antigen in the Rh blood group system (p > 0.05). Compared with the Tibetan (Tibet), Zhuang (Nanning), and Dong (Guangxi) populations, the gene distribution frequency of the Rh blood group phenotype CC was significantly lower in the Yunnan Yi nationality (p < 0.05). There were significant differences in six erythrocyte phenotypic antigens in the Yi nationality in Yunnan compared with the Han nationality, namely LW(a−b−), JK(a−b+), MMSs, Di(a−b+), Wr(a−b−), and Kp(a−b+) (p < 0.05). Low-frequency gene phenotypes were found in four rare blood group systems: LW, MNS, Wright, and Colton. Several different mutation types occurred in the P1PK blood group system's A4GALT gene.

Conclusion: The Yunnan Yi nationality has a unique genetic background. There are significantly different distributions of low-frequency blood group system genes among different regions and groups in China. Multiple mutations in the A4GALT gene of the P1PK blood group system may be related to their environment and ethnic evolution.

1 | INTRODUCTION

The red cell blood group systems are of great importance to clinical transfusion medicine; to date, 43 have been recognized by the International Society of Blood Transfusion (ISBT). 1 The genetic background of the erythrocyte blood group systems is polymorphic, and gene frequency distributions are related to ethnicity and region. 2
Erythrocyte blood group system antibodies cause hemolytic disease of the fetus and newborn (HDFN) and hemolytic transfusion reactions (HTRs). The ABO blood group system is one of the most important human erythrocyte blood group systems. 3 It consists of four antigens (A, B, A,B, and A1). These antigens, called oligosaccharide antigens, are widely expressed on erythrocyte membranes and tissue cell membranes, and in saliva and other body fluids. 4 The system is important for cross-matching, the diagnosis and treatment of neonatal hemolysis, and organ transplantation. The Yi of Yunnan Province have a unique genetic background; the distribution of their low-frequency blood group antigens has not been fully reported, and there has been no large-sample study of these gene polymorphisms in the Yunnan Yi population. We found two cases of the p phenotype in a previous study, and gene sequencing identified a new A4GALT allele carrying a homozygous c.456_457insACACCCC mutation (NCBI number: MG812384), which is the molecular mechanism underlying the p phenotype. 5 A4GALT polymorphisms and rare Au(a−b+) individuals were also found in this pedigree. Therefore, it is speculated that A4GALT and low-frequency blood group antigen genes have multiple polymorphisms in the Yunnan Yi population.

The antigens of the P1PK blood group system were not confirmed by the ISBT until 2011. The system has three antigens: P1, Pk, and NOR. 6 The A4GALT locus maps to the long arm of chromosome 22 (22q13.2). 7,8 Its product, α1,4-galactosyltransferase, consists of 353 amino acids. The A4GALT gene is polymorphic, with 52 alleles, and most mutations occur in exon 3. 9-11 The encoded glycosyltransferase adds the terminal α1-4-linked galactose of the Pk (Gb3/CD77) glycosphingolipid, which plays an important role in transfusion medicine, obstetrics, and pathogen susceptibility. 12 Anti-P1 antibodies are associated with hemolytic transfusion reactions, whereas antibodies against P and Pk are associated with hemolytic transfusion reactions, hemolytic disease of the newborn, and spontaneous abortion. 13,14 Beyond the P1PK blood group system, whether the distribution of other blood group systems is unique in the Yunnan Yi population also deserves study. For example, the Rh blood group system contains two genes, RHD and RHCE, and the expressed antigens are 12-transmembrane glycoproteins. RhD blood group incompatibility can cause neonatal hemolysis. 15,16 The Yi nationality is the largest ethnic minority in Yunnan, China, and has a unique genetic background due to ethnic migration and integration. In this study, PCR-SSP genotyping of the P1PK blood group gene A4GALT and analysis of the distribution characteristics of these gene polymorphisms provide baseline data and assist in establishing a comprehensive rare blood group database.

2 | MATERIALS AND METHODS

2.1 | Blood samples

2.3 | PCR amplification

A PCR reaction system was prepared, and bidirectional primers were added at a concentration of 10 pmol/L (see Supporting Information: Table S1 for the primer sequences). The PCR products were subjected to agarose gel electrophoresis; the target fragments were excised into 2.0 ml centrifuge tubes and recovered using a gel extraction kit. The recovered products were sequenced.

| Genotyping detection

Twelve clinically important erythrocyte antigen genes were detected by RT-qPCR with a human erythrocyte rare blood group genotyping kit. Specific primers (Supporting Information: Table S1) were used to amplify the PA and PB genes by PCR, the amplified products were analyzed by Sanger sequencing, and the sequencing results were compared to identify the mutation positions and determine the genotypes.
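The Methods, as summarized in the abstract, state that observed genotype distributions were checked against Hardy-Weinberg expectations with a χ² test. The R sketch below illustrates such a check for a single biallelic locus; the genotype counts are hypothetical and are not taken from the study data.

# Hypothetical genotype counts for one biallelic marker (AA, Aa, aa)
obs <- c(AA = 120, Aa = 70, aa = 13)
n <- sum(obs)

# Estimate the frequency of allele A from the observed genotypes
p <- unname((2 * obs["AA"] + obs["Aa"]) / (2 * n))

# Hardy-Weinberg expected counts: p^2, 2pq, q^2, scaled by sample size
expected <- n * c(p^2, 2 * p * (1 - p), (1 - p)^2)

# Chi-square statistic; df = 1 for a biallelic locus (3 genotype classes - 1 - 1 estimated allele frequency)
chisq <- sum((obs - expected)^2 / expected)
p_value <- pchisq(chisq, df = 1, lower.tail = FALSE)
c(chisq = chisq, p = p_value)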
| Detection of blood groups with a low frequency in the Yunnan population in China

Rare blood group genotypes and gene frequencies in the Yunnan Yi nationality are shown in Table 1.

| Rare blood group phenotype distribution differences between the Yunnan Yi nationality and other ethnic minorities in different regions of China

There was a significant difference between the Yunnan Yi nationality and other ethnic minorities (p < 0.001). Compared with other ethnic minorities, the Yi nationality in Yunnan showed significant differences (p < 0.05) (Table 3).

| DISCUSSION

In this study, the results indicated that the Yi and local Han people in Qujing City, Yunnan Province, are mostly type A and type O. There was a significant difference in the antigen distribution of the ABO blood group system between the two ethnic groups (p < 0.05), while the D antigen composition ratio of the Rh blood group system was not significantly different (p > 0.05). This may be related to the small number of individual samples collected in the clinic, which could introduce some deviation in the statistics. 21 The study of antigen gene polymorphisms in 15 rare erythrocyte blood group systems in the Yunnan population not only provides data for human population genetics, ethnic migration, and transfusion treatment of patients with rare blood groups, but also improves the construction of the Yunnan rare blood group gene database, helps to resolve difficult blood group identification and cross-matching incompatibility, reduces transfusion reactions such as immune hemolysis, and provides a strong guarantee for safe and precise clinical transfusion therapy. 22

The results of this study suggest that, in the LW blood group system, the LW(a−b+) and LW(a+b+) phenotypes were not detected in the Han population, whereas in the Yunnan Yi population the LW(a−b+) phenotype accounted for 0.0099 and the LW(a+b+) phenotype for 0.0099; this distribution differs significantly from that of the Han population. 23,24 Among the 203 Yunnan Yi individuals, the proportion of the MMss phenotype reached 0.3448, significantly higher than the 0.2843 observed in the Han population, while the proportion of the NNss phenotype in the Yunnan Yi population was significantly lower. Within the MNS blood group system, the differences between phenotypes were therefore statistically significant. 25 In the Diego blood group system, the Di(a−b+) phenotype was significantly more frequent in the Yunnan Yi population than in the Han population, while the Di(a+b+) phenotype was less frequent (p < 0.05). In the Wright blood group system, the Wr(a+b−) and Wr(a+b+) phenotypes were not detected in the Han population, whereas frequencies of 0.0099 and 0.0493, respectively, were found in the Yunnan Yi population; this difference between the two populations is statistically significant. 26,27
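The between-group comparisons above rely on Fisher's exact test, as described in the Methods. A minimal R sketch of one such comparison is given below; the 2 × 2 counts are hypothetical and are meant only to show the shape of the calculation, not to reproduce the study data.

# Hypothetical 2 x 2 table: carriers of one phenotype, e.g. Di(a-b+), versus all
# other phenotypes, in the Yi (n = 203) and Han (n = 197) samples
tab <- matrix(c(150,  53,    # Yi: carriers, others (assumed counts)
                120,  77),   # Han: carriers, others (assumed counts)
              nrow = 2, byrow = TRUE,
              dimnames = list(group = c("Yi", "Han"),
                              phenotype = c("carrier", "other")))

fisher.test(tab)           # two-sided Fisher's exact test
fisher.test(tab)$p.value   # p-value only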
In this study, the distribution of low-frequency blood group system genotypes in Yunnan was found to differ significantly from that of ethnic minorities in other regions of China. 28 Subsequently, polymorphisms of the P1PK blood group gene A4GALT were examined. The P1PK blood group system antigens share a common molecular basis and are encoded by the A4GALT gene. 29 Therefore, detecting A4GALT gene polymorphisms is an effective way to understand the distribution of P1PK antigens in the Yi population. It has been reported that Yunnan Province is a region with a high incidence of thalassemia, and that ethnic minorities have a higher carrier rate. 30 This suggests that the unique genetic background of ethnic minorities in Yunnan may predispose them to hematological diseases. In previous studies of two rare p phenotypes in Yi families, 5 it was found that these individuals lack all antigens of the P1PK blood group system. Anti-P1PK (anti-Tja) antibody was present in all individuals with the p phenotype; it agglutinates red blood cells of all phenotypes except p and can result in habitual abortion in women early in pregnancy. 31,32 The A4GALT gene determines the formation of the p phenotype, and variation in its exon 3 that causes amino acid changes may alter the activity of the encoded glycosyltransferase and thus underlie the p phenotype.

CONFLICTS OF INTEREST

The authors have no conflicts of interest to declare. All authors have read and approved the final version of the manuscript. They had full access to all of the data in this study and take complete responsibility for the integrity of the data and the accuracy of the data analysis.

DATA AVAILABILITY STATEMENT

The data supporting the findings of this study are available within the article and its supplementary materials, and are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.

ETHICS STATEMENT

This study was approved by the Medical Ethical Committee of Qujing No.1 Hospital of Yunnan Province; the ethics number is IRB2018-001(S)-01.

TRANSPARENCY STATEMENT

The lead author Kun-Hua He affirms that this manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained.