Land cover change impact on urban flood modeling (case study: Upper Citarum watershed)
This study of the upper Citarum River watershed uses remote sensing technology within a Geographic Information System to provide information on land cover by interpreting objects in imagery. Rivers that pass through urban areas can cause flooding problems that bring losses and disrupt community activities. Increased development in a city is related to population growth and to the rising quality and quantity of life necessities, and the resulting changes in urban lifestyles affect land cover; over time this impact becomes difficult to control. This study aims to analyze flooding in urban areas caused by land cover change in the upper Citarum watershed, comparing conditions in 2001 with those in 2010. The modeling is carried out with the help of HEC-RAS to describe inundated urban areas. Land cover change in the upper Citarum watershed is not very significant; the processed land cover data show that the area that changed is not large. Even so, the runoff response increased markedly: the flow coefficient was 0.65 in 2001 and 0.69 in 2010. The inundation area was about 105,468 hectares in 2001 and about 92,289 hectares in 2010.
Introduction
Land cover change is a dynamic process taking place on biophysical surfaces over time and space and is of enormous importance in natural resource studies. Land cover change dynamics are substantial elements for monitoring, evaluating, protecting and planning earth resources, and land cover changes are major issues and challenges for the eco-friendly and sustainable development underpinning the economic growth of any area. Land cover change, as the conversion of open areas to built-up areas, reduces the capacity of catchment areas. Current efforts to tackle flood incidents are still conventional, directing runoff to flow quickly into the river body through normalization techniques such as diversion channels and water bodies [1]. Social and economic development drives land use and land cover changes, which have potentially enormous impacts on water resources. Changes in land use and land cover affect the partitioning of precipitation through the vegetation and soil into the main water balance components of interception, infiltration, evapotranspiration, surface runoff and groundwater recharge [2]. Land cover is an important determinant of ecohydrologic processes in watershed systems. Continued urbanization changes the very nature of the ecohydrological regimes of watersheds and increases their vulnerability to flooding, soil loss, and water pollution [3]. The catchment hydrologic response is suspected to be affected in multiple ways by urbanization and population concentration. The impact of land cover change originates from a multiplicity of landscape modifications. These modifications may partly compensate for one another, which makes it difficult to synthesize the results of the numerous case studies on the impact of urbanization on catchment hydrologic response. In some cases, the observed data are not in full agreement with the idea that urbanization tends to increase flood occurrence and intensity while decreasing base flow [4]. The research challenges of assessing the impacts of future climate change are enormous. The institutional challenges involved in using that science for making policy are arguably even greater.
Regional development has highlighted the difficulty of reconciling the supply of climate science with the demand for research that is useful to policy makers [5]. Flooding in an urban area can affect all activity in that place, and in the long term it occurs because of land cover change. This study models and investigates the effect of land use change on urban flooding in the upper Citarum watershed using a physically based hydrological simulation model. The objective of this paper is to compare the flow changes and flood inundation area detected by a conceptual modeling approach (the residual approach) in the particular case of urbanized catchments. Urban flood modeling is carried out for the land cover conditions of 2001 and 2010 in the upper Citarum watershed. Comparative analysis of these two periods and conditions informs some conclusions about the tensions between adaptive and risk-based approaches, the role of institutions in adaptation, and the importance of institutional dynamics in shaping the framing of climate uncertainties and the policy response to scientific knowledge.
Study Area
The upper Citarum watershed flows through the south of Bandung Regency up to Saguling Reservoir. It spreads geographically between 6°43'21.8"-7°19'38.1" South Latitude and 107°32'2"-107°53'51.6" East Longitude. Administratively, the upper Citarum watershed passes through several cities and districts, such as Bandung Regency, Sumedang, Bandung, and Cimahi. The upper Citarum watershed is defined as one area that serves to accommodate, store and drain water upstream of Saguling Reservoir (Curug Jompong), with a total area of 1,771 square kilometers.
Data Sources
The image data products used in this study are land cover data for the years 2001 and 2010. The rainfall data used are the records from 2001 to 2010, sourced from the Major River Basin Organization of Citarum, Ministry of Public Works. The discharge records used in this research were obtained from the Research Centre and Development of Water Resources, Ministry of Public Works; they serve as the calibration and verification reference for the modeling. The available records are water level measurements taken three times a day. The upper Citarum watershed study utilizes remote sensing technology to provide information on land cover by interpreting objects in the imagery. The information obtained shows the areas with the most critical and most sensitive land capability.
Land Cover Modeling
Land cover change is a dynamic process taking place on biophysical surfaces over time and space and is of enormous importance in natural resource studies. The processing of the land cover data and its classification system uses geographic information data. Based on the data obtained, there are ten land cover classes, and the area of each was determined for 2001 and 2010. The amounts of change from 2001 to 2010 were then compared in the form of bar charts [6].
Flood Modeling
Forest cover changes can have a great impact on the hydrological characteristics of a watershed. Flood discharge can increase as a result of a change in land use, and flood peak flow may increase after trees are cut down [7].
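The flow coefficients reported in this study (0.65 for 2001 and 0.69 for 2010) are not tied to a named formula in the text; a runoff coefficient of this kind is conventionally interpreted through the rational method, shown below as an illustrative sketch rather than the authors' stated procedure:

```latex
Q_p = 0.278 \, C \, i \, A
```

where $Q_p$ is peak discharge (m$^3$/s), $C$ is the dimensionless runoff coefficient, $i$ is rainfall intensity (mm/h), $A$ is catchment area (km$^2$), and 0.278 is the unit-conversion constant. Under identical design rainfall, raising $C$ from 0.65 to 0.69 increases the computed peak discharge by about 6%.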
A simple modeling framework is provided based on accessible input data and a freely available and widely used hydrological model (HEC-RAS) to check the possible effect of LULC changes in a particular sub-catchment on the hydrograph at the basin outlet [8]. In this flood modeling, design rainfall and design flood discharge are first analyzed for 5- and 10-year return periods. HEC-RAS is a widely used hydraulic software tool developed by the U.S. Army Corps of Engineers, which is combined with the HEC-HMS platform for hydrological simulations. HEC-RAS employs 1D flood routing in both steady and unsteady flow conditions by applying an implicit-forward finite difference scheme between sequent sections of flexible geometry. In all the above models, two boundary conditions are required, usually set at the upstream end of the channel through an imposed inflow together with the assumption of uniform water depths at the upstream and downstream ends (kinematic wave condition). Although an imposed depth would result in more stable solutions than the uniform flow, the latter is chosen since, in practice, it is rare to know the temporal evolution of the water depth at a particular location. The models compute the appropriate time step based on the Courant number stability criterion [9]. This method applied the Hydrologic Engineering Center River Analysis System (HEC-RAS) model to estimate the potential catastrophes for different peak outflow scenarios, with conclusions and recommendations [10]. After incorporating resampling and vertical errors, all resampled raster datasets were used to create a 1D HEC-RAS model. HEC-RAS is the most commonly used flood modeling tool in Indonesia. The HEC-RAS parameters were kept unchanged for the different DEMs because the goal is to investigate the sensitivity of inundation maps to topographic errors rather than to create a newly calibrated model for each topographic dataset [11].
Calibration and Validation Method
The method compares modeled with observed discharge and then analyzes the parameters that influence the modeling results. Optimum parameter values in calibration are found by statistical measures. The strength of the model's predictions is then determined by the Nash-Sutcliffe efficiency coefficient. The Nash-Sutcliffe efficiency (NI) is among the criteria most widely used for calibration and evaluation of hydrological models against observed data [12]. The Nash-Sutcliffe Index can range from -∞ to 1. A value of 1 (NI = 1) corresponds to a perfect match of modeled discharge to the observed data. A value of 0 (NI = 0) indicates that the model predictions are as accurate as the mean of the observed data, whereas a value less than 0 occurs when the observed mean is a better predictor than the model [13]. The verification process is based on the observed flow; the verified parameter is the runoff.
Calibration and Validation Modeling
The calibrated HEC-HMS model was applied to the land cover scenarios to assess the potential land cover impacts. The Snyder and SCS synthetic hydrographs, obtained from the parameters input for the upper Citarum watershed, were analyzed for calibration and verification. The rainfall applied is the total daily rainfall, distributed evenly over the watershed. Calibration and verification for the Citarum watershed were carried out for the maximum flood at the study sites in each of three years, i.e. 2007, 2008 and 2010, with calibration results as shown below.
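To make the criterion above concrete, the following is a minimal sketch of the Nash-Sutcliffe efficiency as defined in the text (1 for a perfect fit, 0 when the model is no better than the observed mean, negative when the mean is the better predictor). The function name and the discharge values are illustrative, not taken from the study:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency (NI): 1 = perfect match, 0 = as accurate
    as the observed mean, < 0 = worse than the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

# Hypothetical daily discharge values (m^3/s), for illustration only:
q_obs = [12.0, 18.5, 25.3, 40.1, 33.7, 22.4]
q_sim = [11.2, 19.8, 24.0, 37.5, 35.9, 23.1]
print(round(nash_sutcliffe(q_obs, q_sim), 3))  # ~0.97, a close fit
```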
Conclusions
This study has shown that land cover change in the upper Citarum watershed is not very significant; based on the results of the land cover data processing, the area that changed is not large. Even so, the runoff response increased markedly, with the flow coefficient rising from 0.65 in 2001 to 0.69 in 2010.
Combined In Situ Chemical and Sr Isotopic Compositions and U–Pb Ages of the Mushgai Khudag Alkaline Complex: Implications of Immiscibility, Fractionation, and Alteration
The Mushgai Khudag complex consists of numerous silicate volcanic-plutonic rocks including melanephelinites, theralites, trachytes, shonkinites, and syenites and also hosts numerous dykes and stocks of magnetite-apatite-enriched rocks and carbonatites. It hosts the second largest REE–Fe–P–F–Sr–Ba deposit in Mongolia, with REE mineralization associated with magnetite-apatite-enriched rocks and carbonatites. The bulk rock REE content of these two rock types varies from 21,929 to 70,852 ppm, which is much higher than that of syenites (716 ± 241 ppm). Among these, the altered magnetite-apatite-enriched rocks are characterized by the greatest level of REE enrichment (58,036 ± 13,313 ppm). Magmatic apatite from magnetite-apatite-enriched rocks is commonly euhedral with purple luminescence, and altered apatite displays variable purple to blue luminescence and shows fissures and hollows with deposition of fine-grained monazite aggregates. Most magmatic apatite within syenite is prismatic and displays oscillatory zoning with variable purple to yellow luminescence. Both magmatic and altered apatite from magnetite-apatite-enriched rocks were dated using in situ U–Pb dating and found to have ages of 139.7 ± 2.6 and 138.0 ± 1.3 Ma, respectively, which supports the presence of late Mesozoic alkaline magmatism. In situ 87Sr/86Sr ratios obtained for all types of apatite and calcite within carbonatite show limited variation (0.70572–0.70648), which indicates derivation from a common mantle source. All apatite displays steeply fractionated chondrite-normalized REE trends with significant LREE enrichment (46,066 ± 71,391 ppm) and high (La/Yb)N ratios ranging from 72.7 to 256. REE contents and (La/Yb)N values are highly variable among different apatite groups, even within the same apatite grains. The variable REE contents and patterns recorded by magmatic apatite from the core to the rim can be explained by the occurrence of melt differentiation and accompanying fractional crystallization. The Y/Ho ratios of altered apatite deviate from the chondritic values, which reflects alteration by hydrothermal fluids. Altered apatite contains a high level of REE (63,912 ± 31,785 ppm), coupled with increased sulfur and/or silica contents, suggesting that sulfate contributes to the mobility and incorporation of REEs into apatite during alteration. Moreover, altered apatite is characterized by higher Zr/Hf, Nb/Ta, and (La/Yb)N ratios (179 ± 48, 19.4 ± 10.3, and 241 ± 40, respectively) and a lack of negative Eu anomalies compared with magmatic apatite. The distinct chemical features combined with consistent Sr isotopes and ages for magmatic and altered apatite suggest that the pervasive hydrothermal alterations at Mushgai Khudag were most probably induced by carbonatite-evolved fluids almost immediately after the alkaline magmatism.
Introduction
Rare earth elements (REEs) are important resources for highly technological applications and are a fundamental component of a range of low-carbon energy production approaches. REEs are included in the recent and current lists of critical metals due to geopolitical controls on their supply [1].
Even though REE mineralization is associated with a range of rocks, including igneous, metamorphic, and sedimentary rocks, alkaline igneous rocks (either carbonatite or syenite) dominate in hosting giant REE deposits [2]. Some examples of these deposits include Bayan Obo (China) [3], Mianning-Dechang (China) [4], Mountain Pass (United States) [5], and Mushgai Khudag (Mongolia) [6]. The Mesoproterozoic Bayan Obo deposit is the largest REE deposit in the world and serves as the main supply for the world's REE market [7]. Compared to the long-discovered and well-investigated Bayan Obo deposit in North China, the petrogenetic and mineralization history of the Mushgai Khudag complex has not been well studied and remains poorly understood. The multi-element REE-Sr-Ba-P-S Mushgai Khudag complex displays complex mineralogical and paragenetic relations, which suggests the occurrence of primary magmatic accumulation modified by hydrothermal processes [2,6,8]. The alkaline complex hosts significant REE-Sr-Ba-P-S mineralization, and REE mineralization is mainly associated with magnetite-apatite-enriched rocks and carbonatites [8][9][10]. The formation history of the Mushgai Khudag complex has attracted considerable attention. Samoilov and Kovalenko [11] were the first to provide a detailed geological and petrographic description of the complex and put forward its formation sequence. Andreeva et al. [12] inferred that the complex was formed by fractional crystallization and silicate-salt liquid immiscibility based on the chemical compositions of fluid and melt inclusions hosted by silicate minerals (e.g., diopside, garnet, and K-feldspar) in alkaline igneous rocks. Nikolenko et al. [13] presented new Sr-Nd-Pb isotopic compositions as well as geochemical data (LILE/HFSE values), which imply that the parental melts of Mushgai Khudag were derived from a lithospheric mantle source affected by a mixture of subducted oceanic crust and its sedimentary components. The major and trace element compositions of alkaline silicate rocks suggest that these rocks were formed by fractional crystallization of the nephelinitic parental magma [13]. Magnetite-apatite-enriched rocks within Mushgai Khudag are unique, with the highest apatite contents reaching 80-90 vol.% and REE2O3 concentrations in apatite of up to 12 wt.% [10]. Magnetite-apatite-enriched rocks are more commonly known as the dominant component of iron oxide-apatite (IOA) deposits, which are of great economic significance as a source of iron and potential sources of REEs [14]. Magnetite-apatite-enriched rocks are characterized by variable concentrations of apatite (1-50 vol.%) within IOA deposits. These are commonly associated with (sub-)volcanic rocks in convergent margins and rift-related environments [15,16]. The processes involved in the formation of IOA deposits continue to be a controversial topic, with both magmatic and hydrothermal origins inferred [17][18][19][20][21][22][23][24][25]. In spite of the development of an equivocal magmatic-hydrothermal model of ore formation for IOA deposits, the potential REE enrichment in these deposits is still poorly understood. As a unique REE-enriched IOA deposit, the REE mineralization processes in Mushgai Khudag have received limited attention [10]. The structure of apatite gives it the ability to incorporate and concentrate trace elements such as Sr, U, and Th, especially REEs [26].
It is sensitive to geochemical changes in magmatic systems and to various fluid-induced chemical and textural changes over a wide range of pressures and temperatures [27][28][29][30]. Thus, apatite has been used to trace the petrogenetic processes of magma evolution and hydrothermal alteration [27][28][29][30]. In this contribution, we present the in situ U-Pb ages of apatite within magnetite-apatite-enriched rocks and chemical and Sr isotopic compositions of apatite in magnetite-apatite-enriched rocks and syenite, as well as calcite in carbonatite, together with the bulk rock chemical compositions, aiming to provide insight into the source and genetic history of the Mushgai Khudag complex. The textural details and the chemical and isotopic compositions of apatite constrain the respective contributions of magmatic and hydrothermal processes to the REE enrichment and mineralization associated with magnetite-apatite-enriched rocks.
Geological Background
The Mongolian collage is separated into northern and southern domains by the Main Mongolian Lineament. The Mushgai Khudag alkaline volcanic-plutonic complex is located in the southern domain of Mongolia (Figure 1). The Mushgai Khudag complex is hosted by Paleozoic sedimentary-volcanic sequences and Carboniferous granitoids [10]. It is associated with late Jurassic to early Cretaceous alkaline magmatic activities, including alkaline and subalkaline extrusive, subvolcanic, and intrusive rocks, which range in composition from melanephelinite and nepheline melaleucitite to trachyte in the extrusive facies and shonkinite to syenite in the plutonic facies [8,12]. The alkaline-carbonatite complex is composed of various volcanic and subvolcanic silicate rocks including melanephelinite, theralites, and alkali feldspar trachytes, which are cross-cut by stocks and dykes of alkaline syenites, shonkinites, and magnetite-apatite-enriched rocks, as well as numerous small dykes of carbonatites [10,11]. The complex displays a central ring structure that is almost 30 km in diameter, and the Mushgai Khudag REE deposit is located in the central part of this ring (Figure 2). Twenty ore bodies have been recognized along the endo- and exocontact parts of syenite and syenite-porphyry [2]. Different ore types have been identified at Mushgai Khudag, including those hosted by carbonatite, mineralized breccia with carbonate cement, magnetite-apatite-enriched rock, and complex phosphate-enriched rocks. Drill core and field investigations showed that carbonatitic and apatite-bearing ores are the two dominant types of REE ore [12]. Carbonatites are ubiquitously associated with fluorite mineralization and contain numerous fluorite veins [13]. K-Ar ages for the Mushgai Khudag complex vary widely between 179 and 121 Ma, which might reflect secondary processes [11]. Newly obtained Ar-Ar dating of the magnetite-apatite-enriched rocks and associated silicate rocks (e.g., melanephelinite and alkaline syenite) narrowed the measured age range to 145-133 Ma [13]. The Rb-Sr age of the syenite was shown by Baatar et al. [6] to be 139.9 ± 5.9 Ma. Apatite-enriched and magnetite-apatite-enriched rocks are exposed in two stocks of 30 × 70 m and 10 × 30 m in size. The former is known as Apatite Hill and is a typical REE mineralized zone (Figure 2) [10]. Magnetite-dominant rocks occur in the very center, with apatite-dominated rocks on the outside and phlogopite-enriched zones in between.
Carbonatite in Apatite Hill occurs as veins and dykes of 0.1 to 10 m in width and is associated with widespread fluorite mineralization. The top of Apatite Hill is usually weathered, with britholite and anhydrite. Samples, including fresh and altered magnetite-apatite-enriched rocks, syenites, and carbonatites, were collected from Apatite Hill during the 2016 and 2017 HiTech AlkCarb Mongolian Expeditions.
Petrographic Analysis
Textures and mineral assemblages of samples prepared in petrographic thin sections were studied using an optical petrographic microscope, an optical microscope coupled with cathodoluminescence (OM-CL), and a scanning electron microscope (SEM) coupled with both energy-dispersive spectrometry (EDS) and back-scattered electron imaging (BSE). Cathodoluminescence analyses were collected using a Leica DM2700P microscope coupled with a CITL MK5-2 system at the State Key Laboratory of Geological Processes and Mineral Resources (GPMR), China University of Geosciences (Wuhan). The system was operated at an accelerating voltage of 12 kV and a current density of about 300 µA for calcite and apatite, with an exposure time of up to 3 s. The CL system was typically operated with a corresponding voltage of 13 kV and a beam current of 400 µA with an exposure time of up to 4 s for feldspar. Back-scattered electron (BSE) images were obtained using a high-definition back-scattered electron detector coupled to a Zeiss Sigma 300 field emission scanning electron microscope (FESEM) at the GPMR. The instrument was operated with a working distance of 8.5 mm, an electronic high tension of 20 kV, and a magnification of 20-100×.
Chemical Analysis
Major element analyses of whole-rock samples were carried out using a Philips PW 2400 XRF at ALS Minerals-ALS Chemex, Guangzhou. The samples were crushed and powdered in an agate ring mill to pass a 200-mesh sieve. About 1 g of sample was mixed with lithium borate flux (Li2B4O7-LiBO2) and fused in an auto fluxer at about 1050 °C to form a flat glass disc for analysis by X-ray fluorescence spectrometry (XRF). Major element compositions were determined with the SARM-4, NCSDC-73510, NCSDC-73303, GBW-7238, and SARM-32 standards, with analytical uncertainties of better than 5%. Trace element analyses of whole-rock samples were carried out using an Agilent 7500a ICP-MS at the GPMR. About 50 mg of powdered sample was dissolved with an HF + HNO3 mixture in high-pressure Teflon capsules. The detailed analytical procedure used for the trace element analyses can be found in Liu et al. [31]. The trace elements were measured together with the AGV-2, BCR-2, BHVO-2, GSP-2, and RGM-2 standards. The analytical precision was estimated to be better than 10% for all trace elements based on the standards and duplicate analyses. The major element compositions of apatite were quantified using a JEOL JXA-8230 Electron Probe Microanalyzer equipped with five wavelength-dispersive spectrometers at the Laboratory of Microscopy and Microanalysis, Wuhan Microbeam Analysis Technology Co., Ltd. (Wuhan, China). All thin sections were carbon-coated prior to the analysis. The electron microprobe (EMP) analyses were conducted using an accelerating potential of 15 kV, an incident current of 5 nA, and a spot size of 20 µm. The peak counting time was 10 s for Na, Ca, P, S, Sr, F, Si, Fe, Cl, La, Ce, Pr, Nd, and Sm. The background counting time was half of the peak counting time in the high- and low-energy background positions.
The following standards were used: Jadeite (Na), Apatite (Ca, P), Barite (S), Strontium fluoride (Sr), Fluoride (F), Olivine (Si), Pyrope Garnet (Fe), Sodium chloride (Cl), Lanthanum metal (La), Cerium metal (Ce), Praseodymium metal (Pr), Neodymium metal (Nd), and Samarium metal (Sm). The formula of each analyzed spot was calculated based on 25 oxygens as suggested in Ketcham [32]. In situ trace element analyses for calcite and apatite were conducted using a RESOlution 193 nm laser ablation system coupled to a Thermo iCAP-Q Inductively Coupled Plasma Mass Spectrometer (ICP-MS) at the GPMR. The NIST SRM 612 international glass standard was used to correct the instrument drift, and USGS reference glasses (BIR-1G, BCR-2G, and BHVO-2G) were adopted as external standards for concentration calibration [33]. Standards and samples were analyzed with a 33 µm spot size, a 10 Hz repetition rate, and a corresponding energy density of approximately 5-7 J/cm2. Each spot analysis incorporated 30 s of background acquisition and 40 s of sample data acquisition. Data reduction elements, including the concentration determination, detection limit, and individual run uncertainty, were calculated using the ICPMSDataCal software [33]. The analytical uncertainty for most trace elements in calcite and apatite was within 10% and was better than 5% for REEs.
In Situ U-Pb Dating of Apatite
In situ U-Pb dating of apatite was carried out using the RESOlution laser ablation system coupled to the iCAP-Q ICP-MS at the GPMR. The details of the analytical procedure and the method of correction used for the common Pb component can be found in Chen and Simonetti [34]. Madagascar apatite (MAD) was utilized as an external standard to monitor instrumental drift and U/Pb fractionation [35]. Standards and samples were ablated using a spot size of 50 µm, a repetition rate of 8 Hz, and an energy density of 5-7 J/cm2. Each spot analysis incorporated 30 s of background acquisition and 40 s of sample data acquisition. The data calculation was carried out using an Excel-based program developed by Chen and Simonetti [34]. Tera-Wasserburg diagrams and weighted mean 206Pb/238U ages were constructed using Isoplot v3.0 [36].
In Situ Sr Isotope Determinations
In situ Sr isotope analyses for apatite and calcite were conducted using the RESOlution laser ablation system coupled to the Nu Plasma II multi-collector (MC) ICP-MS at the GPMR. The measurements involved correction of spectral interference for Kr, Rb, and doubly-charged REE, as described by Chen and Simonetti [37]. Analyses of calcite and apatite were carried out using a spot size of 50 µm, a repetition rate of 10 Hz, and an energy density of approximately 5-7 J/cm2. An in-lab coral standard (Qingdao) was analyzed as the external standard to evaluate the reliability of analytical accuracy. The average 87Sr/86Sr isotopic composition obtained for the coral standard was 0.70917 ± 0.00004 (2σ, n = 24), which is consistent with the recommended value of 0.70923 ± 0.00002, as determined by ID-TIMS at the State Key Laboratory for Mineral Deposits Research at Nanjing University [38].
Magnetite-Apatite-Enriched Rocks and the Dominant Apatite
Paragenesis and textural details for apatite from magnetite-apatite-enriched rocks are presented in Figure 3. Some of the magnetite-apatite-enriched rocks show obvious alterations. The fresh magnetite-apatite-enriched rocks are commonly yellow-green to pale green and porphyritic in texture, with apatite phenocrysts accounting for 90 vol.% (Figure 3a).
Euhedral to subhedral magmatic apatite is commonly identified in these rocks, with grain sizes varying from 100 µm to 10 mm (Figure 4b-d). Coarse-grained apatite (Ap-1) displays heterogeneous purple luminescence with dispersed yellowish zones, accompanied by minor fissures characterized by orange luminescence (Figure 3b). Fine-grained apatite (Ap-2), commonly 100-500 µm in grain size, shows an internal structure characterized by a darker core and a lighter rim in the BSE images (Figure 3c). Some fine-grained apatite occurs as aggregates with phosphosiderite filling the fissures (Figure 3d). Slightly altered apatite (Ap-3) is also present (Figure 3g). Ap-4 displays various levels of blue to purple luminescence with grain sizes of 50-1000 µm and is strongly fractured, with fissures in the rim or altered as hollows with small relict apatite mostly distributed along the rim. Fine-grained magnetite and monazite occur in the altered fissures and hollows (Figure 3h,i). The monazite grains or aggregates in the altered zones, accounting for 3-5 vol.%, are secondary in nature. The abundance of magnetite, monazite, and fluorite in the altered magnetite-apatite-enriched rocks is greater than in fresh rocks, whereas the distributions of phosphosiderite and celestine have sharply declined (Figure 3).
Syenite and Apatite
The paragenesis and texture of apatite from syenite are shown in Figure 4. Mushgai Khudag syenite contains variable amounts of orthoclase and sanidine with minor concentrations of phlogopite, apatite, quartz, magnetite, ilmenite, rutile, and titanite (Figure 4a-d). Orthoclase with grain sizes of up to 1-10 mm is characterized by blue luminescence, whereas sanidine with grain sizes of 100-500 µm displays a red CL color (Figure 4a,b). Most apatite within syenite is euhedral and prismatic, 200 to 500 µm in size (Figure 4b). Oscillatory zoning within the prismatic grains is evident in CL images, ranging from purple to yellow in color (Figure 4b). A small number of ovoid or irregular apatite grains display relatively uniform yellow luminescence (Figure 4c). Acicular apatite with grain sizes of 1-10 µm can be identified as being disseminated within quartz, aegirine, and ilmenite (Figure 4e,f). It displays a green CL color with aspect ratios ranging from 5 to 10.
Carbonatite and the Dominant Calcite
The paragenesis and textural details for calcite and minor minerals from carbonatite are shown in Figure 5. Carbonatite is brown or yellowish-gray in color and mainly consists of calcite, fluorite, celestine, and barite with accessory quartz and REE minerals (e.g., bastnäsite and parisite). Calcite accounts for 70 vol.%, with variable grain sizes ranging from 20 µm to 2 mm. Coarse-grained calcite (1-2 mm) appears relatively subhedral with jagged grain boundaries, suggesting the occurrence of hydrothermal overprinting (Figure 5a). Fine-grained calcite (20-50 µm) is anhedral and is commonly found to surround the coarse-grained calcite (Figure 5a). Fluorite, accounting for 15 vol.%, varies from 10 µm to 2 mm in grain size (Figure 5a,b). Celestine is anhedral with grain sizes of 10 to 20 µm, and it is widely disseminated in carbonatite, making up 5 vol.% (Figure 5c). Barite occurs as very fine-grained crystals smaller than 20 µm in size and is commonly associated with quartz and REE minerals (e.g., bastnäsite and parisite; Figure 5d,e). Parisite shows zonation in the BSE images due to variable levels of REE abundance, and it commonly occurs in association with celestine as inclusions in calcite (Figure 5e,f).
Major and Trace Element Compositions for Magnetite-Apatite-Enriched Rocks, Syenites, and Carbonatites
The major and trace element compositions of magnetite-apatite-enriched rocks, syenites, and carbonatites are listed in Table S1. Fresh and altered magnetite-apatite-enriched rocks show variations in concentrations of major elements, especially Fe2O3, SO3, and SiO2. The contents of Fe2O3 (1.50-2.75 wt.%) and SO3 (1.32-2.34 wt.%) in the fresh magnetite-apatite-enriched rocks are lower than in the altered rocks (15.62-23.88 wt.% and 3.57-5.89 wt.%, respectively). The SiO2 content of the former (7.38-8.48 wt.%) is higher than that of the latter (2.35-5.16 wt.%). As illustrated in the primitive mantle normalized trace element plots (Figure 6a), the Mushgai Khudag magnetite-apatite-enriched rocks are characterized by significant enrichments of REE and U and depletion of HFSE (e.g., Nb, Ta, Zr, and Hf). In addition, the patterns show obvious negative Sr and Pb anomalies, which are consistent with data previously reported by Nikolenko et al. [13]. Magnetite-apatite-enriched rocks display strong REE enrichment (21,660 ppm) compared with the typical magnetite-apatite-enriched rocks found elsewhere in the world (e.g., approximately 120 ppm in the Los Colorados IOA, Chile) [24]. Of note, the altered magnetite-apatite-enriched rocks have distinctly higher REE concentrations (58,036 ± 13,313 ppm) than the fresh magnetite-apatite-enriched rocks (28,681 ± 6752 ppm), a characteristic that was also observed by Nikolenko et al. [10]. Nb/Ta and Zr/Hf values show large variations for both fresh and altered samples, ranging from 22.1 to 35.0 and 37.2 to 127, respectively. The chondrite-normalized REE patterns of magnetite-apatite-enriched rocks are steep and show significant LREE enrichments, with (La/Yb)N ranging from 85 to 257 (Figure 6b). The (La/Yb)N of the altered magnetite-apatite-enriched rocks (225 ± 33) is higher than that of the fresh rocks (117 ± 32) as well. Syenite compositions (Table S1) [41] show primitive mantle normalized trace element patterns similar to those reported by Nikolenko et al. [13], with positive Pb and Sr anomalies and negative Nb and Ta anomalies (Figure 6c). The chondrite-normalized REE patterns of syenite show LREE enrichment (Figure 6d). Syenites display much lower and more limited variation in REE contents (519-956 ppm) and (La/Yb)N values (46.1-52.5) compared with magnetite-apatite-enriched rocks. The CaO/(CaO + MgO + FeO + MnO) value of carbonatite is 0.95, which can be classified as calciocarbonatite [42]. Mushgai Khudag carbonatite shows significant enrichment in REE (26,692 ppm), U (229 ppm), Th (259 ppm), and Sr (177,066 ppm) and depletion in HFSE (<10 ppm), which is generally similar to the composition of carbonatites worldwide [43,44]. Obvious positive Pb and Sr anomalies are displayed in the primitive mantle normalized diagrams (Figure 6e). Carbonatites show chondrite-normalized REE patterns that are highly enriched in LREEs ((La/Yb)N = 191) (Figure 6f). The carbonatite samples are more enriched in trace elements, including REEs, compared to the results reported by Baatar et al. [6] and Nikolenko et al. [13], which suggests heterogeneous chemical distributions for different carbonatite dykes at Mushgai Khudag.
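The (La/Yb)N and Eu/Eu* indices used throughout this section can be sketched as follows. The chondrite reference values are assumed here to be the McDonough & Sun (1995) CI compilation (the paper does not state which normalization it used), the Eu/Eu* geometric-mean convention is one common choice rather than the authors' stated formula, and the apatite analysis is hypothetical:

```python
# Assumed CI chondrite reference values in ppm (McDonough & Sun, 1995).
CHONDRITE = {"La": 0.237, "Sm": 0.148, "Eu": 0.0563, "Gd": 0.199, "Yb": 0.161}

def normalized(sample, element):
    """Chondrite-normalized concentration, e.g. La_N."""
    return sample[element] / CHONDRITE[element]

def la_yb_n(sample):
    """(La/Yb)_N: overall slope of the chondrite-normalized REE pattern."""
    return normalized(sample, "La") / normalized(sample, "Yb")

def eu_anomaly(sample):
    """Eu/Eu*, with Eu* as the geometric mean of the Sm_N and Gd_N neighbours."""
    eu_star = (normalized(sample, "Sm") * normalized(sample, "Gd")) ** 0.5
    return normalized(sample, "Eu") / eu_star

# Hypothetical LREE-enriched apatite analysis (ppm), for illustration only:
apatite = {"La": 15000.0, "Sm": 900.0, "Eu": 180.0, "Gd": 600.0, "Yb": 40.0}
print(round(la_yb_n(apatite), 1))   # ~255, within the 72.7-256 range reported
print(round(eu_anomaly(apatite), 2))  # ~0.75, a negative Eu anomaly
```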
Mineral Compositions for Apatite within Magnetite-Apatite-Enriched Rocks and Syenites
The major and trace element compositions for magmatic and altered apatite from magnetite-apatite-enriched rocks and syenites were determined using an electron microprobe and LA-ICP-MS, respectively, and are listed in Table S2 and illustrated in Figure 7. Slightly altered apatite also contains higher concentrations of SiO2 and LREE2O3 but shows a comparable SO3 concentration compared with magmatic apatite (Figure 7). An increased LREE content in apatite correlates with an increased Si abundance, which suggests a coupled substitution scheme of Si4+ + REE3+ = P5+ + Ca2+ (Figure 7a) [26]. The LREE and Si contents of slightly altered apatite (Ap-3) show the same substitution trend as magmatic apatite (Ap-1 and Ap-2) (Figure 7a). Of note, altered apatite (Ap-4) also displays correlated increases in S and LREE contents (Figure 7b). This suggests that the S6+ + REE3+ = P5+ + 2Ca2+ substitution scheme also contributes to REE incorporation into the apatite structure of Ap-4, in addition to the coupled Si substitution scheme [26]. Apatite with different levels of cathodoluminescence mostly shows variation in the REE content. Magmatic apatite from magnetite-apatite-enriched rocks shows light to dark purple luminescence, which is related to the variable content of Ce3+ [50]. Dark purple apatite grains are characterized by higher Ce contents (up to 60,304 ppm) compared with the light purple ones (as low as 9748 ppm) (Figure 10a,b). Oscillatory-zoned apatite from syenite displays variable yellow to purple zones in the CL images (Figure 10c). From core to rim, the abundances of REE, Zr, U, and Th vary in the ranges of 17,268-27,521, 9.19-23.1, 8.83-21.9, and 67.2-313 ppm, respectively (Table S2). The oscillatory geochemical variation correlates well with the zonation identified with CL (Figure 10c,d). The purple zone is characterized by enriched REE, Zr, U, and Th, whereas the light-yellow zone is relatively depleted in these elements (Figure 10c,d). Both (La/Yb)N and (La/Nd)N decrease in the core and increase in the rim, changes that are decoupled from the oscillatory variation (Figure 10e,f).
Mineral Compositions for Dominant Calcite in Carbonatite
The trace element compositions for calcite from carbonatite are listed in Table S3. Calcite from the Mushgai Khudag carbonatite shows strong enrichment of REE, Sr, Ba, and Pb and depletion of HFSE (e.g., Nb, Ta, Zr, Hf), similar to calcite from carbonatites worldwide (Table S3; Figure 9d). The LREE-enriched trend is a typical characteristic of primary calcite in carbonatites worldwide, such as in the Oka carbonatite (Figure 9d) [34]. (La/Yb)N ratios of calcite vary from 38.4 to 101, with the majority between 50 and 100 (Table S3). Some calcite displays a significant negative Ce anomaly (Ce/Ce* = 0.41 ± 0.29; Figure 9d).
U-Pb Ages of Apatite within the Magnetite-Apatite-Enriched Rocks
Trace element data suggest that both magmatic and altered apatite are characterized by a high U content ranging from 17.4 to 512 ppm, which favors high-quality U-Pb dating (Table S3). U-Pb geochronological data for magmatic apatite (Ap-1 and Ap-2) and altered apatite (Ap-4) from magnetite-apatite-enriched rocks are listed in Table S4 and shown in Figure 11. Both magmatic and altered apatite were dated in situ by LA-ICP-MS, yielding similar U-Pb ages of 140.7 ± 5.4 and 138.0 ± 5.1 Ma, as shown in the Tera-Wasserburg plots (Figure 11a,b).
The y-intercept corresponds to a 207Pb/206Pb ratio that represents the best estimate for the composition of the common Pb component. The common-Pb corrected weighted mean 206Pb/238U ages are 139.7 ± 2.6 and 138.0 ± 1.3 Ma, respectively (Figure 11c,d), which is consistent with the Rb-Sr age of the associated syenite (139.9 ± 5.9 Ma) [6].
Figure 11. Tera-Wasserburg plots and weighted mean 206Pb/238U age diagrams for magmatic and altered apatite within magnetite-apatite-enriched rocks.
In Situ Sr Isotopic Compositions
The Rb/Sr ratios for both calcite and apatite are extremely low (less than 0.001; Table S3); therefore, the measured 87Sr/86Sr ratios obtained for individual grains can be considered to be their initial Sr isotopic compositions due to the negligible radiogenic contribution of 87Sr. In situ Sr isotopic compositions of apatite from magnetite-apatite-enriched rocks and syenite and of calcite from carbonatite are reported in Table S5 and presented in Figure 12 (Table S5; Figure 12a). In addition, calcite from carbonatite shows a similar level of 87Sr/86Sr variation (0.70619-0.70641) compared with apatite from magnetite-apatite-enriched rocks and syenites (Table S5; Figure 12b).
Age and Sources of the Mushgai Khudag Complex
The newly obtained 206Pb/238U age of 139.7 ± 2.6 Ma for apatite within magnetite-apatite-enriched rocks is consistent with the Rb-Sr age of the associated syenite (139.9 ± 5.9 Ma) and is in good agreement with the Ar-Ar dating age range (145-133 Ma) [6,13]. Magnetite-apatite-enriched rocks are considered to be the products of silicate-salt liquid immiscibility from the highly evolved parental alkaline silicate melt based on the melt and fluid inclusion data presented by Andreeva and Kovalenko [9] and Nikolenko et al. [10]. The newly obtained U-Pb ages, which are consistent with those of the alkaline silicate rocks and carbonatite, strongly support the liquid immiscibility model. The Rb-Sr age of syenite (130.6 ± 9.3 Ma) in the Bayan Khoshuu complex, which is not far from the Mushgai Khudag complex, is also similar to the obtained ages for the Mushgai Khudag complex [6]. Other carbonatite complexes in Central Asia include Ulgii Khiid and those in West Transbaikalia and Central Tyva in Russian Siberia [6,11,51]. The age obtained for the Ulgii Khiid complex was 147-158 Ma, and carbonatites and associated alkaline silicate rocks from Western Transbaikalia and Central Tyva yielded ages of 131-118 and 118-117 Ma, respectively [11,[52][53][54]. The similarity in ages (Early Cretaceous) supports the presence of late Mesozoic regional alkaline-carbonatite magmatism in Central Asia, which is attributed to the late Mesozoic global plume activity [51]. The newly obtained in situ 87Sr/86Sr isotopic compositions for dominant minerals within magnetite-apatite-enriched rocks, carbonatites, and alkaline syenites show limited variation, which further implies that they were derived from a common mantle source. The available experimental data provide evidence that partial melting of upper mantle phosphate-bearing peridotite and pyroxenite can generate phosphorus-rich melts, and these melts evolve into immiscible silicate and salt liquids at the early stages of evolution [9,55]. The oxygen isotopic compositions of apatite and phlogopite (δ18OAp = 5.1-5.6‰; δ18OPhl = 7.3‰) from the fresh magnetite-apatite-enriched rocks reported by Nikolenko et al. [10] are typical for mantle-derived igneous rocks, which also supports their origination from the mantle [56].
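The common-Pb correction and weighted-mean step described earlier in this section can be sketched generically as follows. This is a standard 207Pb-based correction using the Jaffey decay constants and the Tera-Wasserburg y-intercept as the common-Pb ratio; the published ages were actually computed with the procedure of Chen and Simonetti [34] and Isoplot, and the common-Pb value and spot data below are hypothetical:

```python
import math

L238, L235, U_RATIO = 1.55125e-10, 9.8485e-10, 137.88  # decay constants (1/yr), 238U/235U

def radiogenic_76(t):
    """Radiogenic 207Pb*/206Pb* ratio for an age t in years."""
    return (math.exp(L235 * t) - 1.0) / (math.exp(L238 * t) - 1.0) / U_RATIO

def pb207_corrected_age(pb206_u238, pb207_pb206, common_76, n_iter=20):
    """207Pb-based common-Pb correction, iterated to convergence.
    common_76 is the 207Pb/206Pb of common Pb, i.e. the Tera-Wasserburg
    y-intercept discussed above."""
    t = math.log(1.0 + pb206_u238) / L238  # uncorrected starting age
    for _ in range(n_iter):
        r = radiogenic_76(t)
        f206 = (pb207_pb206 - r) / (common_76 - r)  # fraction of common 206Pb
        t = math.log(1.0 + pb206_u238 * (1.0 - f206)) / L238
    return t

def weighted_mean(values, errors):
    """Inverse-variance weighted mean and its 1-sigma uncertainty."""
    w = [1.0 / e**2 for e in errors]
    mean = sum(v * wi for v, wi in zip(values, w)) / sum(w)
    return mean, math.sqrt(1.0 / sum(w))

# Hypothetical spot analyses (206Pb/238U, 207Pb/206Pb), illustrative only:
spots = [(0.0222, 0.10), (0.0219, 0.12), (0.0225, 0.09)]
ages_ma = [pb207_corrected_age(r, m, 0.84) / 1e6 for r, m in spots]
print([round(a, 1) for a in ages_ma])          # individual corrected ages (Ma)
print(weighted_mean(ages_ma, [2.0, 2.5, 1.8]))  # weighted mean with assumed errors
```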
Combining our newly obtained in situ 87Sr/86Sr compositions with the isotopic data reported by Baatar et al. [6] and Nikolenko et al. [13], the Mushgai Khudag alkaline-carbonatite complex is believed to have formed from enriched mantle domains involving DMM (Depleted MORB Mantle) and EM2 (Enriched Mantle 2). Bayan Khoshuu and Lugiin Gol are the other two large carbonatite-related multi-element deposits located in Southern Mongolia, and they are also predicted to originate from similarly enriched mantle domains based on Sr-Nd isotope data [6].
Fractional Crystallization and Hydrothermal Alteration Recorded in Apatite
It is widely documented that apatite occurs through early magmatic to late hydrothermal stages and is sensitive to physical-chemical changes. It has been used as a petrogenetic and geochemical indicator for tracing the evolution of alkaline rocks [43,44]. Magmatic apatite with internal chemical variations is commonly characterized by a relatively REE-depleted core (29,963 ± 719 ppm) and an REE-enriched rim (31,963 ± 1205 ppm), which can be explained by fractional crystallization (Table S3). Apatite within syenite shows a variable REE content from the core to the rim, which is correlated with the oscillatory zonation identified in the CL images, whereas (La/Yb)N and (La/Nd)N values decrease in the core and increase in the rim (Figure 10e,f). The transition point might mark the fractional crystallization of an HREE-enriched mineral (e.g., garnet) [11]. The oscillatory zonation commonly recorded in apatite within syenite supports the significant influence of melt differentiation and accompanying fractional crystallization of feldspar and garnet in generating the variable REE features (Figure 10c-f). Magmatic apatite in magnetite-apatite-enriched rocks and syenites is characterized by negative Eu anomalies (Eu/Eu* = 0.437-0.779), similar to those present in apatite in granitic rocks [27]. Preservation of the negative Eu anomaly often indicates fractional crystallization of plagioclase and feldspar at low oxygen fugacity [12,27,57]. Thus, the geochemical features of magmatic apatite recorded in the Mushgai Khudag complex further support the fractionation of the alkaline silicate melt and the associated mineral fractional crystallization. Magmatic apatite within magnetite-apatite-enriched rocks is characterized by near-chondritic Y/Ho ratios similar to those of silicate rocks formed via CHArge-and-RAdius-Controlled (CHARAC) processes (Figure 9d) [45]. The variable Y/Ho ratios of altered apatite are higher than those of magmatic apatite and deviate from the chondritic value, which reflects alteration by hydrothermal fluids (Figure 9d) [49]. Oxidized U6+ is relatively soluble compared with U4+ and can be transported as phosphate or carbonate complexes in neutral and alkaline solutions [58]. The generally correlated increases in REE and U contents in altered apatite suggest that the hydrothermal fluids are possibly oxidized (Figure 8a). HREE enrichment has commonly been observed for hydrothermal apatite in Tundulu, Kangankunde, and Songwe Hill [59][60][61]. The HREE-enriched patterns are mostly due to the mobilization and co-precipitation of LREE minerals such as monazite and bastnäsite and/or the differing stability of REE anion complexes between LREE and HREE [59][60][61]. Altered apatite is commonly associated with monazite precipitation at Mushgai Khudag, as described above and observed in previous studies (Figure 3h,i) [8,10].
Nevertheless, altered apatite is still characterized by higher (La/Yb)N and (La/Sm)N ratios compared with magmatic apatite (Figure 9c,d), which can be generated with the contribution of extremely LREE-enriched hydrothermal fluids. Of note, altered apatite is also characterized by higher Zr/Hf (179 ± 48) and Nb/Ta (19.4 ± 10.3) ratios compared with magmatic apatite, and by relatively weaker Eu anomalies (Figure 8b,c). Carbonatite melts/fluids are known to be characterized by high Zr/Hf, Nb/Ta, and (La/Yb)N ratios without Eu anomalies [6,13,[62][63][64][65]. Thus, the late hydrothermal fluids involved in the pervasive alteration of magnetite-apatite-enriched rocks might be carbonatite-exsolved. This is also supported by the fact that the Mushgai Khudag carbonatite magma originates from the common DMM-EM2 mantle domains together with the magnetite-apatite-enriched rocks, as suggested by the consistent isotopic compositions outlined above. Moreover, the similarity in Sr isotopic compositions between magmatic and altered apatite also confirms that the hydrothermal fluids evolved from a common source, the same as that of the phosphorus melt. Of note, the newly obtained U-Pb ages for both magmatic and altered apatite are similar within error, which indicates that alteration by the carbonatite-exsolved fluids probably took place almost immediately after the emplacement of the magnetite-apatite-enriched rocks.
REE Enrichment Mechanism of the Mushgai Khudag Complex
The multi-element deposit of Mushgai Khudag is believed to have formed through multiphase liquid immiscibility based on melt and fluid inclusion studies [10,12,13]. The model involves high-temperature (1250-1280 °C) carbonate-silicate melt immiscibility and relatively lower-temperature (600-1200 °C) carbonate-phosphate-salt immiscibility [10,12,13,66]. The liquid immiscibility model is also supported by our newly obtained U-Pb ages of magnetite-apatite-enriched rocks, which are consistent with those of the alkaline silicate rocks and carbonatite, and by the similar Sr isotopic compositions of the various rock types, as mentioned above. In an aqueous carbonate-phosphate-silicate melt system, REEs favor the carbonate melt during carbonate-silicate liquid immiscibility and the phosphate melt during phosphate-silicate liquid immiscibility [67,68]. This is evidenced by the lower REE content in syenite (716 ± 241 ppm) compared with magnetite-apatite-enriched rocks (28,681 ± 6752 ppm) and carbonatites (26,692 ppm). Moreover, apatite is one of the predominant minerals controlling the REE budget in these rocks. Apatite within syenite and shonkinite also shows a lower REE concentration compared to apatite within magnetite-apatite-enriched rocks (Table S3) [13,67,68]. Magmatic apatite hosting extremely enriched REE contents (up to 7.0 wt.%; Table S3) in magnetite-apatite-enriched rock suggests that REEs also favor the phosphate melt during phosphate-salt immiscibility. Magmatic apatite within magnetite-apatite-enriched rock and syenite exhibits a positive correlation between LREE and Si abundance, which suggests that the coupled substitution scheme of Si4+ + REE3+ = P5+ + Ca2+ plays the dominant role in REE incorporation within apatite during magmatic evolution [26]. The bulk rock SO3 content of the altered magnetite-apatite-enriched rocks is almost twice that of the fresh ones (Table S1).
Altered apatite in these altered rocks is more abundant in SO3 compared to magmatic apatite, and its sulfur content is also much higher than that of apatite within other magnetite-apatite-enriched rocks, e.g., El Laco (S: 155-4791 ppm; [14]) and Carmen (SO3: 0.01-2.39 wt.%; [25]). The correlated increases in S and Si contents together with LREE enrichment in Ap-4 indicate that both the Si4+ + REE3+ = P5+ + Ca2+ and S6+ + REE3+ = P5+ + 2Ca2+ substitution schemes contribute to the incorporation of REE into altered apatite. In addition, the SO3 content of secondary monazite within the altered Mushgai Khudag magnetite-apatite-enriched rocks (0.56-9.94 wt.%; [8,10]) is much higher than that of other carbonatites and magnetite-apatite-enriched rocks (0.15-1.72 wt.%; [69][70][71][72][73]), which also supports the idea that sulfate plays an important role in REE mobility during alteration. In other words, the unusual sulfur enrichments in altered apatite and deposited monazite indicate that sulfate is an important ligand for REE transportation [74,75]. Of note, experimental work suggests that differences in the stability of LREE and HREE as aqueous chloride complexes can result in REE fractionation, whereas LREE and HREE transported as sulfate complexes show similar levels of stability [74][75][76][77]. Thus, compared with the preferred mobility of LREE as a chloride complex, sulfate-dominated fluids possibly result in relative HREE enrichment during hydrothermal processes, such as those observed in Songwe Hill apatite [27,76,77]. The altered Mushgai Khudag apatite, with depleted HREE and high (La/Yb)N ratios, implies that the REE patterns of altered apatite are dominantly controlled by LREE-enriched carbonatite-evolved fluids, and that the different REE ligands play a limited role in REE fractionation during the pervasive hydrothermal alteration at Mushgai Khudag.
Conclusions
The newly obtained, consistent in situ U-Pb ages of magmatic and altered apatite (139.7 ± 2.6 and 138.0 ± 1.3 Ma, respectively) within the Mushgai Khudag magnetite-apatite-enriched rocks support the presence of late Mesozoic alkaline-carbonatite magmatism and indicate that the pervasive alteration probably took place almost immediately after the magmatism. In situ 87Sr/86Sr isotopic values (0.70572-0.70648), combined with the reported bulk rock Nd and Pb isotope data, suggest that the Mushgai Khudag complex originated from the mantle, involving both DMM and EM2 reservoirs. The variable trace element compositions (especially the REE patterns) and textures of magmatic apatite from both magnetite-apatite-enriched rocks and syenites record melt differentiation and mineral fractional crystallization. Altered apatite is characterized by higher REE, U, Nb/Ta, Zr/Hf, and (La/Yb)N values and a lack of Eu anomalies compared with magmatic apatite, which suggests that the carbonatite-exsolved LREE-bearing fluids overprinting the magnetite-apatite-enriched rocks further contributed to REE enrichment with monazite precipitation. The coupled increases in sulfur and LREE contents in altered apatite (Ap-4), associated with sulfur-enriched secondary monazite, indicate that sulfate plays an important role in REE transportation and mineralization during hydrothermal alteration at the Mushgai Khudag deposit.
Supplementary Materials: Table S4: U-Pb ages for magmatic and altered apatite within magnetite-apatite-enriched rocks from Mushgai Khudag; Table S5: in situ Sr isotope compositions for apatite within magnetite-apatite-enriched rocks and syenite and calcite within carbonatite from Mushgai Khudag.
Data Availability Statement: The data presented in this study are available on request from the corresponding author.
Assessing conditions for inter-firm collaboration as a revenue strategy for politically pressured news media
ABSTRACT The struggle to find resilient journalism revenue models is nowhere starker than for exiled or politically pressured news media operating in fragile markets. One route forward is to explore inter-firm collaborations as a modus operandi to achieve more financial resilience through a collaborative approach amongst themselves. This article presents findings from a multi-stakeholder atelier that assessed operational revenue conditions for such media. It presents a co-created definition of collaborative revenue capture, then addresses the conditions and forms for collaborative structures. It conceptualises opportunities in four areas: technology, revenue-based systems, coordinated actions and journalism production. The article adds new knowledge by assessing collaborations as a revenue strategy within the under-researched media development area through a participatory mode of inquiry.
Introduction
There is broad institutional recognition that diversity in the news media should not be hindered by economic failure, as set out in UNESCO's Mass Media Declaration of 1978 and the Declaration of Talloires in 1981 (Alleyne, 1997). Yet the struggle to find stable and "diversified revenue sources" (UNESCO, 2015, p. 8) is a fundamental challenge, since the loss of control over audience-content-advertising relationships has squeezed journalism revenues globally. The precipitous losses from traditional advertising (Hirst, 2011; Turow, 2011), audience-content relations disrupted by the rise of search engines and social media (Carr, 2009), and news aggregators (Nielsen & Ganter, 2018) have challenged the entire industry. The road to sustainable revenues is longer and more perplexing than anticipated (Rosenstiel & Jurkowitz, 2012). While there is consensus that a "viable economic model for media in the digital age will necessarily rely on a multi-dimensional revenue model" (Pavlik, 2013, p. 192), the conundrum of how to make a profit or grow remains largely unanswered (Lu & Holcomb, 2016). Scholarly calls therefore seek an on-going process of reflection because "there is no panacea for media development, media sustainability and sustainable journalism and no universally applicable solution" (Picard, 2017, p. 253). The revenue question for exiled and politically pressured news media as a specific type of media organisation is a pertinent challenge (Cook, 2016; Deane, 2013; Ismail, 2018), yet one that is often neglected in media business studies. Such media offer lessons in revenue model adaptation for the media management community seeking models outside corporate legacy media, as they survive under the same paradigm shifts in modes of consumption, competition, data and production affecting news media globally, including the dominance of Google, Facebook and Yandex (Gicheru, 2014), and experience restrictions similar to other niche news media in their limited capacity to generate revenues due to a lack of business acumen and skills (Schiffrin, 2017). But they also offer a rich terrain for experimentation as they grapple with short-term shocks, patchy audience data, under-resourced and over-worked teams, and unfavourable political economic factors. Issues of money inevitably fall behind those of safety and security.
Poor infrastructure inhibits distribution, limited viable advertising threatens the viability of conventional revenues, and lack of access to training and talent undermines the quality of editorial content (Robinson et al., 2015). They operate either within or outside weak markets dominated by corruption and media capture (IREX, 2018; Pon et al., 2017), where local populations have limited buying power (Jiménez et al., 2017; Schmidt, 2017). Facing particularly bleak economic prospects, they confront constraints that require "new kinds of business thinking" (Deselaers et al., 2019, p. 3). The extreme operational cases being faced pave the way for transferable knowledge to media in other developing markets seeking alternative understandings of the sustainability question. Given the economic challenge, there has been long-standing state and philanthropic financial support. Exact figures on how much international aid and philanthropy flow to politically pressured media are very uneven, for both pragmatic and bureaucratic reasons (Myers & Juma, 2018), as an ever-growing range of civil society organisations, donor-funded intermediaries and private foundations, as well as those tied to public service broadcasters or similar bodies, have emerged. In all, most media development and media for development literature focusses primarily on the implications of international donor aid: how and what media donors are doing (Deane, 2013; Susman-Pena, 2012); the impact of soft censorship (Podesta, 2009), and so on. There is far less scholarly scrutiny of how these decisions impact on economic practice. Yet the overall trend is that media operating in these contexts are under increasing pressure to move away from grant dependency and demonstrate some steps towards revenue diversification, triggered by concerns about the constraining and enabling functions of donor funding. Reliance not only entraps media in a cycle of alignment with shifting donor agendas but also hinges on a dichotomy: successful applications for donor funding rely on being financially weak (Cook, 2016a). Private grants can also have the effect of content bias, with funds being channelled to topics of importance to the donor (Schiffrin, 2017; Wright et al., 2019). Against this backdrop, empirical evidence on the revenue models of oppositional news outlets identified potential revenue diversification from a partnership approach, finding "revenues may be as likely to emerge from pooling resources and content as they are around discrete media outlets" (Cook, 2016, p. 531). In authoritarian contexts, operating "collaboratively, transparently, and ethically" may have short-term implications for productivity and profitability but lay stronger foundations for sustainability among media teams, as well as with readers and advertisers, in the long run (Sakr, 2016, p. 45).
Industry practice illuminates potential new revenues from inter-firm approaches: pooled content and common publishing platforms such as Nordot in Tokyo or the Dutch online news aggregating platform Blendle; commercial partnerships between media of different sizes, such as Mediacités with Mediapart in France; and advertising reach and scale from shared platforms, such as the publisher-led programmatic advertising marketplace MCIL Multimedia Sdn Bhd for nine participating media (Cook, 2019) or the Krama ad platform, a bespoke classified ads platform developed by and for independent media, especially regional and local outlets, as an alternative to the Russian-based Yandex ad system on which many Belarusian online media are dependent (Cook, 2020). A research gap exists at the intersection of these discussions. Firstly, where there are no universally applicable solutions to the revenue challenge of heterogeneous exiled and politically pressured media, and we know little about moves away from grant dependency towards more diversified revenue models, this prompts inquiries into new approaches. Then, as practitioners lead the way on new collaborative ways of working, they thereby legitimise research that asks how once-regimented journalism economic logics are being negotiated and, potentially, reshaped. Addressing the research gap, the central inquiry here asks in what ways exiled or politically pressured media might be more financially resilient through a collaborative approach amongst themselves. Rather than approaching the issue of income at the individual firm level, it proposes to address similar needs through a collaborative response. Firms interact with each other and across their networks in complex ways. Therefore, this study draws on an action-oriented two-day atelier to explore collaboration between media facing similar revenue challenges. This was a valid approach, as Prenger and Deuze (2017, p. 235) suggest innovation is a moving object, raising the issue of "how to adequately study something so dynamic". The intention was to gain a holistic view of the patterns of behaviours and discourse to further understand collaborations as experimental practice in determining revenue opportunities among media in exile or politically pressured environments beyond donor dependency. The article begins with the contextual literature on economic practice within authoritarian regimes and reviews the gap in knowledge on revenues specifically. Methodological considerations are then addressed so as to critically review the participatory mode of inquiry. The study contributes to the media management field in several ways. Firstly, it assesses the operational revenue conditions for exiled and politically pressured media at the time of investigation. The co-created definition of collaborative revenue capture as a firm-level approach to form an actionable partnership to open new revenue opportunities in the digital economy follows. Then it assesses the conditions that can facilitate inter-firm collaborations for revenue generation. Most pertinently, it details digital technologies for inter-firm collaborations. Then, four main conceptual areas for collaboration are identified: technology, revenue-based systems, coordinating actions and journalism production. This leads to summative discussions on the appetite for collaboration as a strategic tool for inter-firm organising enabled by digital technology.
The findings connect to the broader digital journalism field and provide insights that offer utility to media development programme managers at the national and international level.

Politically pressured news media
Politically pressured environments are identified as such from official development assistance listings, such as eligibility set by the Development Assistance Committee of the Organisation for Economic Co-operation and Development, and from media system rankings and indices including the Media Pluralism Monitor, Reporters Without Borders, Freedom House, IREX, and the Friedrich Ebert Foundation. Exiled media are a subgroup of diaspora media in a temporary and uncertain state, living outside their homeland and providing journalism back in-country. The most common reason to go into exile is the threat of violence, such as from Somalia and Syria. Others flee the threat of prison, especially in Iran, where the government deepened its crackdown ahead of elections (Schilit, 2013). Scholarly scrutiny is limited to studies showing exiled journalists' motivations for societal change (Balasundaram, 2019; O'Loughlin & Schafraad, 2016; Skjerdal, 2011) and examinations of peripheral actors' motivations and conceptualisations of their roles (Belair-Gagnon & Holton, 2018; Eldridge, 2017; Schapals et al., 2019). Ownership structures are often distinct from conglomerate chains, with an unaffiliated output online, complemented by social and mobile, and in some cases with print products (either legacy or new), radio (often shortwave) or broadcast. Teams are typically small and cross-border, relying on some full-time professional journalists. Therefore, while traditional news media structures "are still producing most of the news we consume today" (Domingo et al., 2015, p. 53), this discussion departs from the majority of investigations by moving the lens away from established media organisations and attending to the omission of business concerns and economic practice among exiled and politically pressured news media.

Business and revenue challenges
Scholarly work on the business aspects of independent media in politically pressured or weaker environments is particularly limited. There is no one successful model, and there is a pressing need to better understand media management and data analytics (Foster, 2014, 2017). General economic challenges through a period of political transition, such as in Bosnia (Taylor & Kent, 2000), have been charted, as have barriers to opening media operations (Hughes & Lawson, 2005) and issues in sustaining operations (Requejo-Alemán & Lugo-Ocando, 2014). A tactical and strategic lens finds oppositional Syrian media operating in exile in Turkey deploying a complex system of self-censorship, official registration and adaptable production in order to survive when "outgunned relative to the institutional actors they need to work with" (Badran, 2020, p. 70). This study goes further by drawing together international perspectives from media organisations in multiple contexts. The revenue model is the overall configuration of incomes that make up financial resources, as only one part of the firm's broader business (Linder & Cantrell, 2000; Picard, 2011). The first academic study of the revenues of news media in exile and politically pressured environments empirically detailed grant funding, earned income and donations (Cook, 2016). A taxonomy model advanced understanding of income in free markets compared to repressed markets, detailing the extensive challenges in economic practice.
It found that mixed revenue models, or the cross-subsidising of media businesses with complementary for-profit business activities, are growing in developing countries. Alternative revenue strategies have been developed through experimentation with memberships or subscriptions in Latin America (Breiner, 2017); native and affiliate advertising in India (Sen & Nielsen, 2016); or adjacencies to complementary activities such as consulting services, public relations and book sales (Ismail, 2018). Warner and Iastrebner (2017), reviewing 100 digital native start-ups in Argentina, Brazil, Colombia and Mexico, found revenue diversity is critical to sustainability. More focus has been given to the African context, where the lack of economic viability is a major constraint. Good content, development of media management capacities, and local audience research are needed to develop local advertising markets that serve local media (Madon et al., 2009; Mhlanga, 2017; Spurk & Dingerkus, 2017). Revisiting journalism startups in the Global South, Schiffrin (2019) found financial survival was the biggest worry, followed by political risk and physical safety. After three years, the outlets remained dependent on donors. Together, these drive inquiries to explore the operational revenue conditions for exiled and politically pressured media and potential new and diversified revenue strategies. Programme work has addressed a gap in this field by focussing on capacity building (Veendorp, 2011) and business skills through toolkits or sprints (Deselaers et al., 2019). The FOJO Media Institute drew the lens specifically on media in exile as "an undervalued area" that "often fall between the cracks" (Hughes, in Ristow, 2011, p. 22), with its intervention programme between 2013 and 2015. This responded to a number of challenges of business sustainability faced by journalists in exile (FOJO, 2013) but stopped short of exploring inter-firm collaborations.

Conceptual parameters for collaboration
Given the particularly stubborn and contested challenge of journalism sustainability for exiled or politically pressured news media, the challenge here was to assess how once-regimented economic logics are being reshaped in ways that might make such media more financially resilient through a collaborative approach amongst themselves. In the business literature, scholars have studied inter-firm collaboration through symbiotic strategy for several decades (e.g. Varadarajan & Rajaratnam, 1986). The most important characteristic is to develop and maintain a mutually beneficial symbiotic relationship with external parties where there are neither significant conflicting interests nor fierce competition for common resources (Li et al., 2012). A symbiotic relation in the proximal environment is defined in terms of collaboration, co-evolution and cooperation (Li et al., 2018; Sagarin, 2013). Dimensions of symbiotic relations include time frame, proximity, number, focus, scope (Varadarajan & Rajaratnam, 1986) and trust (Li et al., 2018). In other forms of strategic alliances, partners may still have conflicting interests or power struggles. The phenomenon of coopetition (Bengtsson & Kock, 2000; Brandenburger & Nalebuff, 1997) reflects an increasing awareness of the complexity of relations between economic agents. By drawing from the business literature towards media development, the paper contributes new perspectives.
Within the journalism literature, there has been extensive assessment of partnerships between media for converged route-maps to overcome competitive pressures and organisational difference. Certainly, tactical and structural convergences are rapidly expanding, and there is an appetite in practice to collaborate in ways not seen before by mainstream corporate news (Myllylahti, 2017). Convergence can include cross-promoting content, cloning, coopetition, content sharing, and inter-firm collaborations (Dailey et al., 2005) through partnerships. Dailey and Spillman (2013) examined the level of cooperation that exists between cross-media and the types of newsroom partnerships that have emerged, including those with digital native media. Exploring the willingness of different types of media to collaborate, Hatcher and Thayer (2017) found public broadcasters, community newspapers, and online news startups to have an openness to experiment with collaboration and content sharing, but tensions around intense competition for advertisers. Such collaborations can bring win-win outcomes (Pathania-Jain, 2001) and prompt the inquiry to explore forms of collaborative structures and success factors. Looking for opportunities beyond closed walls draws from the "open" concept. Sill (2011, p. 1) makes the case for open journalism as a way to save journalism as a company, based on a problem-solving mentality through collaboration between media that "once ignored each other's work". Scholars suggest journalists can adapt to newsroom cutbacks by forming symbiotic relationships with non-media news providers, including local police (Carson et al., 2016) and educational institutions (Kim et al., 2016). These works suggest the need to dive more deeply into collaboration in multiple forms, such as how resources could be pooled (Drew & Thomas, 2018) or economic relations nurtured. The potential of these exchanges to spawn open innovation approaches (Chesbrough, 2003, 2010) is based on independent actors possessing diverse knowledge assets that can be used to create novel combinations (Crossan & Inkpen, 1994). Regional cooperation in South Africa between a coalition of media organisations, civil society actors, advocacy platforms, and funders fostered cross-pollination of ideas and enhanced capacities by pooling expertise and resources, while also providing solidarity and fraternity (Wasserman, 2021). Within the broader context of who is involved in news making, scholars have drawn attention to the importance of rethinking such news firm boundaries. Sakr (2017, p. 298) notes in her assessment of sustainable digital news in Egypt that "the implication is that scrutiny of collaborative and innovative practice may tell us more about the sustainability of journalism in a precarious and rapidly changing environment than a focus on any specific institution." Singer (2011, p. 109) also suggests new economic structures will emerge to sustain journalism, asking "what sorts of collaborations will prove valuable and how will they be nurtured, strengthened and extended?" As little is known about how collaborations form or the affordances of such, it is this exploratory gap the paper fills. Through an empirical approach of practitioner-led knowledge creation, the focus drills into four questions:
• What are the operational revenue conditions for exiled and politically pressured media?
• How can collaborative revenue capture be articulated and defined?
• What conditions can facilitate inter-firm collaborations for revenue generation?
• What are the conceptual areas for inter-firm collaborations?

Methodology
A participatory mode of action-oriented inquiry was designed in the form of a two-day atelier. It included a series of facilitated panels, activities and knowledge exchange, held in December 2014. Forms of ateliers hold particular interest because they represent an important type of "strategic episode" (Hendry & Seidl, 2003), as they suspend normal structures and engage in new conversations. The overall aim was to step back from daily operations and institutional priorities. Space to think in this way is rare, particularly for oppositional media facing authoritarian pressures. Given that revenue challenges varied across global regions, participants were invited to the atelier in Preston, UK. Journalists operating media under threat, representing Iran, Belarus, Uzbekistan, Turkmenistan, Syria, Sri Lanka, Jordan, Azerbaijan and Ukraine (11), were included. No specific exclusions on media platform or company were made, as the goal was to enable broad ideation away from their immediate concerns and firm-contained perspectives. All operated on digital platforms, with varied specialisms across broadcast, print, web and radio subject to licence and distribution restrictions. Representatives from the media development community included Rory Peck Trust, Open Society Foundations and Internews Europe (3), alongside journalism experts and innovators (6), business experts (4) and academics (6). In total, there were 30 participants. As such, the multidisciplinary atelier was designed to integrate numerous perspectives on practice, industry, media development, media for development, academia and activism, appropriate for the complex and holistic view needed to stimulate new knowledge on revenue models. The view of Van De Ven (2007) led this inquiry, in that knowledge is produced not only in academia but also in industry. Six genres of activities formed data collection: idea brainstorming via an ideas wall; small focused discussion groups; structured anonymous question and response sessions; mini presentation panels; plenary discussions; and an ideas lab to move forward towards market and development. The process allowed issues that practitioners themselves regarded as problematic to be singled out and then explored, appreciating the call by Küng (2016) to stay in constant contact with the industry in order to co-formulate a research inquiry. Together, this stimulated ideation on what forms and purpose collaborations could take and what would be of most use to exiled media organisations. Participants were encouraged to develop imagined futures quite freely, developing five collaborations as experimental projects for application. These were developed using mixed media visualisations and turned into mini presentations for group feedback, articulating the key proposition and the need met. A further development phase gave teams the opportunity to explore and work up in more detail the strongest ideas, with input from a wider group of participants. Group discussions, constructive critique and next steps drew the atelier to a close. Data was captured by two facilitators and the researcher, and a multimedia practitioner captured photographic evidence of handwritten notes and brainstorming activities. Anonymous digital tablet-activated interactive brainstorming software was used to capture observation discussions.
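Purely as an illustration of how such anonymous structured responses might be tallied into themes and word-cloud counts, a minimal sketch follows. The actual brainstorming software used at the atelier is not specified in this article; the responses and the keyword-to-theme mapping below are hypothetical stand-ins.

```python
# Illustrative sketch only: tallying anonymous free-text responses to a
# structured question into theme counts and word frequencies (e.g. to feed
# a word cloud). Not the atelier's actual software.
from collections import Counter

# Hypothetical responses to "What determines your financial resilience?"
responses = [
    "diverse funding sources and donor grants",
    "safety and anonymity for our reporters",
    "audience scale in-country despite blocking",
    "cash flow without formal banking",
]

# Hypothetical keyword-to-theme mapping, standing in for participant voting.
themes = {
    "funding": "revenue diversification",
    "donor": "revenue diversification",
    "safety": "security",
    "anonymity": "security",
    "audience": "market and audience",
    "banking": "operating conditions",
}

theme_counts = Counter()
word_counts = Counter()
for response in responses:
    for word in response.lower().split():
        word_counts[word] += 1
        if word in themes:
            theme_counts[themes[word]] += 1

print(theme_counts.most_common())   # themes ranked by mentions
print(word_counts.most_common(5))   # top words, as for a word cloud
```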
Findings from this workshop form the primary material analysed. The participatory mode of inquiry also transferred to the early data analysis. The embedded nature of analysis in a social setting allowed for ongoing review through interactive brainstorming software viewed on a digital wall and activated from hand-held tablet devices. This enabled participants to vote in real time on responses to structured questions, analyse responses into themes, connect topics and sub-topics, and see word clouds. During idea brainstorming via the ideas wall or World Café discussions, participants synthesised discussions and clustered action points. Developing visualisations of the data assisted in summarising and categorising to aid understanding and interpretation. These included drawings, mind maps and collages in both physical and digital form. Northcutt and McCoy (2004) refer to a similarly structured process, by which data, analysis and interpretation can be integrated into the analytical work, as interactive qualitative analysis. The second phase of analysis was carried out by the primary researcher after the atelier to offer a systematic reading of the data. An open-access immersive report was produced to offer a first review and collection of the multimedia outputs (Cook, 2015). This atheoretical output included word clouds and images to avoid premature closure and served as a first filter on what was done, how and why. Writing up in this way can assist learnings and prompt further reflection (Coghlan & Brydon-Miller, 2014). This was also a way to advance the research findings into academic lexicon so as to "pack them differently for different audiences" for scholarly conventions, bridging theory and practice (Rohn & Evens, 2020, p. 22). The qualitative data outputs from the atelier were then compiled in list form and treated with applied thematic analysis (Guest et al., 2012) to build a comprehensive, contextualised and integrated understanding of the structured data outputs. Preliminary explorations of the free-form responses were grouped to find higher-level categories appropriate for clustering and emerging interrelationships within responses and discussions. Patterns were noted and developed into "thematic connections" from the data (Bazeley, 2013, p. 192) and further synthesised into table form. While the study is limited due to the time between investigation and publication, revisiting atelier findings is justified as material gathered in this way is rare. It also corresponds with renewed interest in other collaborative journalism business models to which the findings are relevant, including franchise (Arnold & Blackman, 2021).

Operational revenue conditions for exiled and politically pressured news media
An overarching theme was surviving economically rather than sustainability. In the wider landscape, large state-controlled media extracted the largest share of advertising markets, and exiled media were disconnected from valuable domestic advertising opportunities. Participants spoke of challenges around regulation, establishing legal status and cash flow in unbanked situations, exacerbated by restrictions on registering businesses. Often, state-controlled telecommunications companies blocked exiled media websites, which suppressed domestic audiences. They came under serious cyber-attack, needing defences that impact website uptime and performance. Audiences also used proxy servers.
For example, a reader based in Iran visiting an exiled media site using a US-based proxy server appears to advertisers as being in the United States. An advert served in this context is likely to have limited click-through and financial return. Similar challenges existed on major social media platforms such as Facebook, where audience data was flawed. This affected the ability to connect and engage with end users. These challenges existed against a backdrop of shifting wider technological changes that were little understood. There was consensus that politically pressured news media had less support than in other media systems. A deeper understanding of lived revenue experiences was achieved. There was a sense that going beyond "living to fight another day" to develop a longer-term perspective on funding was a challenge, particularly around revenue diversification away from grant dependency. Expertise in revenue models and innovation was found to be lacking. Output quality levels were variable. Internal to their operations, participants spoke of limited time, resource and capacity for running the business under already strained health and welfare circumstances, working long hours alone with limited skills, affecting their quality of life and safety to operate. Many were overworked, under-funded and living hand-to-mouth. Challenges included the need to cover their costs for staffing, website development and hosting, administration and their own living expenses. Concerns were raised around the need to have a "proactive rather than a purely reactive strategy" towards their business. Longevity was a concern due to a lack of strategy, robust media management skills, skills to facilitate or adapt to change, and funds to hire more people. Networks were recognised as a source of potential strength within the target country. Participants were asked what determines their financial resilience. Responses ranged from internal factors (needing diverse funding sources, ability to balance the mission with sustainability, business competence) to external ones (security, anonymity and safety; the scale of the audience and market; cash flow where formal banking was not possible; and challenges around regulation). There was a broad orientation around measures of success. Consensus was around the need to "have an impact - whatever that impact may be". Success definitions oscillated from basic functions such as needing to be kept safe, achieving opportunities to move back in-country and structuring efforts to protect journalists. Many participants had first-hand experience of intimidation or atrocities. Success was getting past government controls with a consistent and regular broadcast, then reaching a well-defined audience or widening visibility. The goal was relevant or credible content, having exclusivity and achieving sharable content focussing on under-reported, misreported or censored stories instead of soft news: "being a real voice that's credible". Broader impact success included influencing in-country and diaspora civic values. Some sought governmental change on democratic processes. For others, influencing Western news agendas and conversations was impact, particularly where it led to policy change for fairer or more just societies.

Defining collaborative revenue capture
A new term was introduced to develop and debate the proposed concept and its principles, to incite self-reflection and changes in practice.
The co-created definition of collaborative revenue capture emerged as a firm-level approach to form an actionable partnership to open new revenue opportunities in the digital economy. Firstly, this served as a sense-maker to summarise a number of initiatives being trialled by journalism sites. As Hautakangas and Ahva (2018, p. 743) note, "new keywords are best understood as lenses through which the journalism profession and its practices can be re-examined." It was also a way to explain emergent approaches to revenue generation by media organisations working together, and served as a starting point for discussions.

Conditions for inter-firm collaborations
It was helpful to categorise factors affecting collaboration in the areas of conduct, values, resources and operations (set out in Table 1), both as sources of success and of tensions. Some were generic to any inter-relational working (such as trust, respect and understanding), while others were specific to cross-territorial working (the need for a shared language, allowing for political and cultural differences). Participants identified the need for shared goals and funding commitments. Several examples of successful collaborations were shared during plenary discussions. These varied from personal relationships such as marriages to creating technical global norms, co-writing industry reports or working with other partner organisations to coordinate a strategy to deliver support to journalists. Knowledge sharing around revenue generation and access to seed or grant money were cited as good starting points for collaboration. Challenges included securing unblocked internet provision, unavailable audience data sets from some regions, divergent supplies of in-country advertising, apathetic audiences, and exiled media competing against one another, particularly for donor funding. Those relating to revenues included having incompatible partnerships or exclusive relationships with other media organisations, or divergent funding strategies. Regarding editorial practice, these covered the need for synergy across editorial values, agendas and content quality. The challenge of collaborating as a unified body of exiled or restricted media was articulated practically (language, country-by-country differences), operationally (shared ethical and legal frames) and ideologically (finding a shared mission and vision).

Conceptual areas for collaboration
Four main conceptual areas for collaboration were then hypothesised: technology, revenue-based systems, coordinated actions and journalism production. Here, the focus was to explore possibilities, with the view of envisaging strategically useful long-term benefits from collaborative approaches to generate revenues (see Table 2). Five ideas were developed. Two involved software-led technology, including aggregating and repurposing content in sub-channels on different platforms. This allowed media providers to pool content and retain insights on behavioural data. The aim was penetration via proxy internet servers, extension of audiences in-country and in the diaspora, filtering out poor content to reduce bounce rates, and developing unique proprietary content. New revenues would be generated from selling behavioural insights to advertisers, new audiences from geotagging, and content production. Issues to overcome included content relevance to different geographic audiences and rights management. Plans for a shared data-insights platform for revenue generation were spearheaded by two participants, who extended their ideas beyond the atelier.
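To make the aggregation concept concrete, a minimal sketch follows. It is illustrative only: the shared platform remained a proposal rather than a specified system, and the class names, outlets and region labels below are hypothetical stand-ins for how pooled content might be routed into sub-channels while retaining per-outlet attribution and simple behavioural counters.

```python
# Illustrative sketch only: pooling content from member outlets into
# geographic sub-channels, keeping attribution and a basic behavioural signal.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Item:
    outlet: str      # contributing member outlet (attribution retained)
    region: str      # geotag used to route the item to a sub-channel
    title: str
    views: int = 0   # behavioural signal that could inform advertising

class SharedPlatform:
    def __init__(self):
        self.channels = defaultdict(list)  # region -> pooled items

    def contribute(self, item: Item):
        self.channels[item.region].append(item)

    def channel(self, region: str):
        return self.channels[region]

platform = SharedPlatform()
platform.contribute(Item("Outlet A", "diaspora-eu", "Election analysis"))
platform.contribute(Item("Outlet B", "in-country", "Local corruption report"))
for item in platform.channel("in-country"):
    print(item.outlet, "->", item.title)
```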
A third proposal focussed on leveraging distribution-led technology that would be too expensive or prohibitive at the individual firm level, whether satellite balloons, proximity broadcasting, sensor-based networks, new secret flows of news, or content in disposable places. The fourth focussed on revenue-led coordinating actions through merchandise, events or products championing impact and freedom of expression. A large internationally supported fundraiser under an umbrella organisation was envisaged. The final idea proposed payment-led, revenue-based solutions: closed-network payment services and non-monetary exchange transactions to support donations and other revenues to exiled and restricted media. Content would be provided for free, with an acknowledgement via SMS triggering a micro-payment opportunity to express support. The specific challenge here was technical, given government control of the mobile network.

Discussion
Faced with internal and external barriers, the assessment of economic operations at the time of investigation fits with previous scholarly work indicating capacity for income generation to be weak, due to limited resources and business knowledge, remote teams and weak distribution, poor purchasing power of audiences and limited viable advertising revenues. Co-existing economic orientations existed amongst exiled and politically pressured media. While one was open to the idea that journalism can still "earn" money because "revenue generation is not a dirty word", the other underlined the need to be free from economic pressures, requiring donor funding. The former orientation approached revenue diversification with potential and highlighted an interest in alternative funding sources. The participatory mode of inquiry at least in part motivated some participants to seek "business models not donor models". Moves away from donor dependency would certainly be hard won. The latter justified donor interventions due to the content: often heavy on politics and corruption, but light on topics that create broader audiences and increased engagement. Such views fit with broader assessments on interventionist funding of non-profit journalism as a public good (Konieczna, 2018). The idealised notion of an apolitical landscape of support does not stand, and media navigate this landscape in their attempts to secure financial survival. These further illuminate the complexities surrounding grant support. Such diverse assessments of operational revenue conditions would have been beyond the outputs of more orthodox qualitative approaches. Practitioners were re-orientated on their shared practical daily challenges and a deeper understanding of revenue opportunities, as the atelier "raised awareness of the challenges exiled media face above the obvious security ones". This included thinking differently about in-country and out-of-country audiences. In resolving or renegotiating different perceptions, participants developed a deeper understanding of the economic situation they faced, and how it might be improved. Frank, transparent and anecdotal sharing in this way can move the sector forward. This fits with research elsewhere in authoritarian regimes, which found a management culture that values collaborative reflection, ethical practice, and editorial innovation helping to embed lasting professional relationships and codes of conduct (Sakr, 2016). Defining collaborative revenue capture was challenging, and the atelier therefore added a cognitive dimension.
Collaborative revenue capture was co-defined as a firm-level approach to form an actionable partnership to open new revenue opportunities in the digital economy. Collaboration as a term was loaded with connotations, particularly in this context, as it was associated with government collaboration for illicit gain, working with government spies or colluding with authorities. The atelier stopped short of challenging the terms or seeking alternatives, such as revenue creation instead of capture. The term capture has more recently been popularised in reference to media capture and may therefore be easily confused. The relevance of symbiosis here was to understand how politically pressured news media could overcome their resource constraints, reduce external threats and uncertainty, and achieve long-term cooperation and some stability through inter-firm organising. There was general motivation that working together was possible, grounded in "a real desire and potential to innovate in this space". Commonalities were born not in their situation as exiled or politically pressured but in the kinds of values and shared struggles being faced, with an "unequivocally ameliorative impulse" (West, 1989, p. 4). We can say that there was an appetite for horizontal approaches which preserve niche and independence while offering strength and resilience. These focussed on leveraging skills and motivations from participating media, as workable exploratory options. The conditions for collaborative approaches were most favourable where interdependence offered editorial, practical and commercial opportunities for mutual support. Success depended as much on conduct and shared values as it did on resources and operational considerations. Participants showed a willingness to collaborate in a way not typically seen in corporately owned mainstream news. The findings build on earlier literature (Cook, 2016; Sakr, 2016) to identify how stronger foundations for sustainability can be created among media teams, as well as with readers and advertisers, since we know little about how to assess the quality of collaboration as an inter-firm dynamic and the merits of collaborative structures for politically pressured news media. Collaboration was not embraced straightforwardly as a guiding notion. Many reflections during the workshop were critical or doubtful. Fitting with the coopetition literature, these experiences reflect an increasing awareness of the complexity of conditions between economic agents. These were heightened for politically pressured news media operating in multiple languages, with inconsistent availability of audience data and patchy internet access. The diverse contexts and varied pressures present barriers. What happens through the paradox of the simultaneous pursuit of competition and cooperation includes felt tension (Gnyawali et al., 2016), raising pertinent challenges for those working in media development. While the partners in this type of alliance cooperate, they also compete for some common resources that they all need, such as the same market or the same supply, not least donor funding. As a regional network to promote solidarity in South Africa also found, cooperation needs clear goals and a decentralised structure that avoids imposing hierarchy or encouraging unhealthy competition (Wasserman, 2021). The four main conceptual areas for collaboration were grouped into technology, revenue-based systems, coordinating actions and journalism production.
In fact, participants were encouraged to be exploratory as to where and with whom collaborations may emerge. Specifically, participants noted learning about possibilities in the post-Soviet space, potential new partners, and cross-border collaboration "that could generate revenues". Digital technologies offered clear routes to facilitate such collaborations. Many collaborative ideas were real-world feasible projects that could take shape in an everyday context. Content aggregation, geo-tagging, digital encryption and algorithms presented rich terrains for inter-firm collaborations. There was striking variety and ambition in these imagined future projects. These were particularly illuminating as increased digital and global connectivity is beginning to challenge exile as a concept. The conceptual collaborations were not intended as a systematic typology; this would be better achieved with methodologies suited to mapping and would be a worthy addition to the literature. Rather, the opportunities pose a powerful argument about the centrality of collaboration in journalism's economic viability in the digital economy, particularly in a time of increased globalisation and the survival of small- to medium-sized news media. These non-linear discussions may not be very helpful in generating concrete or quick-fix knowledge, nor are there opportunities to derive extensive follow-up research, as the atelier was context-specific. While fruitful and meaningful discussions can be generated during intensive episodes of exchange such as this, the risk is that intentions succumb to an action cliff post-event. The scope of collaborative initiatives will also remain limited without some level of coordination or management. Participants expressed the desire for a forum for regular contact and exchange, or to seek project funding. The absence of a coordinating or representative body limits the real-world adoption of proposals. Thus, there are implications for media development actors to support or facilitate the coordination activities needed to unlock collaborative approaches that could strengthen fragile media. As feedback indicated, "ideas are great, but you need action to make a difference". Organisations such as United for News or the Global Forum for Media Development would be well placed to take further steps. This echoes findings from Schiffrin (2019), who calls for an industry-wide body that would assist small civic-minded outlets in building capacity for international fundraising, channel funds from donors, and support other kinds of efforts to generate revenue through peer-to-peer learning and fundraising. While these conclusions add to current knowledge about economic experiences, they also propose new directions for future research into journalism sustainability more generally. A fitting next step would be to systematically review and critique collaborative approaches to revenue creation. This would be timely given the industry-led practices of collaboration amongst small- to medium-sized news media, and particularly pertinent in helping exiled and politically pressured news media avoid fighting the sustainability battle as a singular challenge.
v3-fos-license
2019-08-17T15:59:40.722Z
2019-01-01T00:00:00.000
133793260
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.agronauka-sv.ru/jour/article/download/299/299", "pdf_hash": "1aea08b09ef2785be615d14fe1e3ca1c8e7fb7fe", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44594", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "sha1": "2dc396dcab3a34b73dc9f160ec2cdfce138954b1", "year": 2019 }
pes2o/s2orc
The development of the Fungal Resources Conservation System «One District, One Herbarium and Five Banks» and its application on Qinghai-Tibet Plateau*

Biodiversity is closely related to human health and the ecological environment. It has become very important due to its influence on the economic and political strategies of various countries. Fungal diversity is an important part of biodiversity and is of great significance in maintaining the balance of ecosystems. On the basis of fungal diversity studies, the innovative conservation system "One District, One Herbarium and Five Banks" was proposed and later applied on the Qinghai-Tibet Plateau. In order to protect important fungal resources, a conservation district, the Yajiang Matsutake Reserve, was established in Sichuan Province. To preserve the species, genetics and local natural fungal resources, the herbarium, the spawn bank, the viable tissue bank, the gene bank, the compound bank, and the comprehensive information bank were created, which resulted in the systematic protection of fungal diversity on the Qinghai-Tibet Plateau and provided support for the sustainable use of fungi. Innovation and integration of the protection system of fungal resources and key technologies for sustainable utilization not only restore the ecological environment of important wild fungal resources but also allow screening and cultivating new varieties of edible and medicinal fungi. The system has been promoted and demonstrated and has given significant ecological, social and economic benefits.

As an independent kingdom of eukaryotes, fungi have a large number of species. Hawksworth [1,2,3] estimated the number of fungal species in the world to be 1.5 million. Subsequently, Blackwell [4] established that the actual range of global fungal diversity was between 800,000 and 5.1 million species. With the advance of sequencing of large-scale environmental samples, researchers believe that previous reports overestimated fungal abundance by about 1.5 to 2.5 times [5]. Nonetheless, the global number of fungi remains vast. However, only 100,000 species have been reported worldwide [6], which is less than 5% of the estimated number, and only 1% of those known species have been sequenced [7,8]. At present, there are about 27,900 species recorded in China, belonging to 3534 genera, 585 families, 192 orders, 56 classes and 15 phyla [9]. Fungal diversity is one of the important factors affecting the balance of the ecosystem. In addition to being important decomposers of the earth's ecosystem, fungi also form close relationships with other organisms to realize carbon cycling and energy flow in the ecosystem, promote nutrient absorption by plants, enhance plant stress resistance and improve productivity [5,10]. In addition, the diversity of edible and medicinal fungi is closely related to human health and environmental protection, and has become one of the main factors affecting the economic and political strategies of many countries. Currently, 1789 species of edible fungi and 798 species of medicinal fungi have been reported in China [11]. More than 100 species of fungi have been domesticated and cultivated, and 60% of them have been commercially produced. The industry of edible and medicinal fungi has not only provided people with food and met their nutritional needs, but has also created employment and income opportunities [9].
Due to climate and land-use change, environmental pollution, nitrogen deposition, habitat loss and fragmentation, fungal diversity is under a threat that cannot be ignored, and some species have disappeared from the earth before being discovered [12,13]. However, at the global, regional and local scales, basic data such as fungal species and their gene sequences are still insufficient, and the mechanisms influencing fungal diversity remain to be clarified, which makes work on fungal diversity conservation extremely difficult [14]. Therefore, large-scale systematic collection and rapid, accurate identification of fungi are important prerequisites for the study and protection of fungal diversity. A crucial component of biodiversity protection is to establish a protection system for fungal resources. The important tasks in implementing protection of fungal diversity include establishing fungal conservation centers, the mycological herbarium, the culture preservation center, databases of all kinds of information, and evaluation mechanisms of fungal diversity. The "One District, One Herbarium and Five Banks" protection system of mycological diversity includes the mycological conservation area, the mycological herbarium, the spawn bank, the viable tissue bank, the compound bank, the gene bank and the comprehensive information bank, protecting fungal resources in terms of species, heredity and functions. China has three ecological regions, namely the eastern monsoon, the northwest arid and the Qinghai-Tibet alpine ecological regions. The Qinghai-Tibet plateau is the largest plateau with the highest average altitude in the world and is known as the "World's Third Pole". This region contains several typical ecological types: alpine forests, alpine meadow, alpine desert and semi-desert. The special environment and physiognomy of the Qinghai-Tibet plateau ensure unique biological diversity. However, the extent of fungal resources existing within the Qinghai-Tibet plateau is still uncertain. With the accelerated development and commercial production of Cordyceps sinensis, matsutake and other rare fungi in recent years, fungal resources have been threatened and destroyed on the Qinghai-Tibet plateau. In order to further promote the study and protection of fungal diversity on the Qinghai-Tibet plateau, from 2012 to 2015 our research group conducted dedicated applied research and implemented the "One District, One Herbarium and Five Banks" system for conservation of the fungal resources in Tibet.

Functional positioning of the "One District, One Herbarium and Five Banks" fungal diversity protection system.
After years of mycological research and promotion of the edible and medicinal fungal industry, academician Li Yu took the protection and sustainable utilization of fungal diversity as the basic foundation of the fungal resource protection system and established the operational framework of the "One District, One Herbarium and Five Banks" system.
"One District": In the conservation area important and rare fungi (mainly edible and medicinal fungi) are protected in situ; "One Herbarium": the herbarium organizes, identifies and preserves all fungal specimens collected from the area; Аграрная наука Евро-Северо-Востока/ РАСТЕНИЕВОДСТВО/ Agricultural Science Euro-North-East, 2019; 20(1): [29][30][31][32][33][34][35] PLANT GROWING "Five banks": Spawn bank: separation, identification, succession and preservation of the fresh specimens; Viable tissue bank: the samples of fruiting bodies are dried quickly to be preserved; Gene bank: preservation of genomic DNA, ITS and other gene fragment information; Compound bank: preserves the main chemical components, active components and spectral information of important edible and medicinal fungi; Comprehensive information bank: preserves comprehensive information of specimens, including Latin nomenclature, classification status, collection information, photographic images, viable tissue, strain, geographical information (longitude, latitude, altitude) etc. Based on traditional herbaria, spawn preservation centers and biological gene banks, integrated research has been conducted with the professional perspective of spawn resource classification to increase conservation areas, viable tissue libraries, compound libraries and comprehensive information databases. It should improve the function, the efficiency and the application of the spawn diversity protection system. China will play a positive role in biological protection, scientific and technological research and development, industrial promotion, personnel training, and promote the development and innovation of agriculture, biology, medicine and other industries. Establishment of "One District, One Herbarium and Five Banks " system on Qinghai-Tibet plateau. The Qinghai-Tibet plateau has a complex topography. The population is small, many places such as the Yarlung Zangbo Grand Canyon, have high mountains and deep valleys, the greater part of the areas is almost uninhabited and it provides shelter for rare fungal species. However, the increase in collecting activities in some areas will impact the growth of rare mushrooms on Qinghai-Tibet plateau. Therefore, it is very important to strengthen initiatives to protect macrofungi in these areas. The following areas on Qinghai-Tibet plateau are the main collection areas based on their ecological types and vegetation distributions: Nyingchi, Qamdo, Nagqu, Shigatse, Shannan, Aba Tibetan Autonomous Prefecture, Ganzi Tibetan Autonomous Prefecture, Diqing Tibetan Autonomous Prefecture, Sanjiangyuan and Golmud. Through field investigation, classification and identification of fungal resources in the representative areas of Qinghai-Tibet plateau, nearly 13,000 macrofungal specimens, belonging to 27 classes, 22 orders, 70 families and 273 genera were obtained. Nearly 2,800 myxomycetes specimens belonging to 1 class, 5 orders, 8 families, 25 genera and 94 species were defined. Identification was conducted on cellular slime molds from 17 species, belonging to 1 class, 1 order and 2 families. On the basis of fungi diversity investigation, the conservation system of "One District, One Herbarium and Five Banks" of Qinghai-Tibet plateau flora resources was established for comprehensive protection of fungal species diversity, genetic diversity and ecological diversity. The mycological conservation area. 
The mycological conservation area.
In terms of the selection and division of mycological conservation areas, the priority of our research group was to select conservation areas with the aim of species protection. At the same time, in the management of conservation areas, not only the population and quantity of protected species should be considered, but also other factors, such as local vegetation, ecosystems, endemic species, threatened species, and economic and social conditions [15,16]. From the beginning of the 1980s, Tricholoma matsutake (S. Ito & S. Imai) has been the main source of income for Yajiang farmers. A large number of local people flocked to Songrong Mountain for fungal collection. However, due to the lack of biological knowledge on proper collection protocols, they destroyed the environmental conditions for further growth of T. matsutake. With economic development, the matsutake industry has great potential. For a long time, the development of China's matsutake industry was low, remaining at the level of original acquisition, primary processing and export of semi-finished products. There is a serious lack of deep processing with high added value. Regarding the characteristics of rare fungal resources on the Qinghai-Tibet plateau, Sichuan matsutake is mainly distributed in alpine valleys of 3000-500 m altitude; the main forest types are alpine oak forests, alpine pine forests and alpine mixed forests. Considering the geographical distribution, biological and habitat characteristics of Tricholoma matsutake resources, in August 2013, at the "China Matsutake Industry Development Summit" conference held in Chengdu, academician Yu Li signed a cooperation agreement on the conservation of matsutake resources, the "Wild Edible Mushroom Resource Conservation Cooperation Agreement", with the People's Government of Yajiang County for the Gexigou Nature Reserve. After that, the matsutake conservation area was built in Yajiang, Sichuan. Combined with fungal resource research and market exploration, requirements for collecting, purchasing and standardizing the harvesting of matsutake in the conservation base were put forward, covering collection practices, acquisition standards, acquisition requirements, and transportation and preservation conditions for T. matsutake. Through contractual operation and the closing off of hillsides for matsutake conservation, forest ecosystem benefits have been significantly improved. According to the needs of protection management, through perennial monitoring of major protected objects such as matsutake and their habitats, basic data were obtained, laying the foundation for monographic scientific research and providing a demonstration of sustainable conservation and development of matsutake resources.

The Herbarium.
Biological specimens are the basic materials for scientific research; they are direct documents reflecting the diversity of species. Species are the basic unit that constitutes the diversity of ecosystems, and they are also the main carriers of genetic diversity [17,18]. In the historical development of human society, species have been important resources upon which natural productivity depends, and there is a need to protect them. The Qinghai-Tibet Plateau mycological herbarium is mainly composed of dried fruiting bodies of macrofungi and myxomycetes collected in this region.
At present, from localities such as Tibet, Qinghai, the Diqing Tibetan Autonomous Prefecture of Yunnan, the Ganzi Tibetan Autonomous Prefecture of Sichuan, the Aba Tibetan Autonomous Prefecture of Sichuan and the Gannan Tibetan Autonomous Prefecture of Gansu, nearly 13,000 specimens of macrofungi and 2,800 specimens of myxomycetes have been collected. The Qinghai-Tibet Plateau mycological herbarium was built at Jilin Agricultural University according to the characteristics and technical requirements of the specimens. This mycological herbarium is provided with specimen boxes, specimen cabinets, dryers, low-temperature refrigerators (-80 °C), air conditioners, carbon dioxide fire extinguishers and other equipment. The purpose of these items is to prevent fire, excess moisture and insect penetration, and to maintain constant temperature conditions for the preservation of specimens. The room where the specimens are kept is equipped with air conditioners, liquid nitrogen fire extinguishers and other equipment. The room temperature is kept at 20-23 °C and the humidity at about 40%. The specimens are placed in sterile whirl-pack plastic bags together with a desiccant, and the bags are put into specimen boxes. These specimen boxes are labeled with basic information such as the species name, specimen number, place and date of collection, and the name of the collector. The samples are stored in specimen cabinets chronologically. The status of the specimens is regularly checked.

Spawn bank.
Edible and medicinal fungal spawns are important biological resources, which are the basis for production and scientific research. Good spawns degrade easily, especially after long-term use, which creates the need to preserve them for sustained usage. Wild edible and medicinal fungi of the Qinghai-Tibet plateau with high development potential, high economic value or prominent medicinal efficacy were collected for tissue isolation and spawn production; rare edible and medicinal fungi of this region were thereby obtained for breeding and preservation. The Qinghai-Tibet Plateau spawn bank is equipped with a 4 °C low-temperature warehouse, a -80 °C ultra-low temperature warehouse and a liquid nitrogen storage bank. The fruiting bodies of edible and medicinal fungi were classified in the field, then purified, cultured and identified to species under laboratory conditions. According to the biological characteristics of these spawns, they were inoculated onto suitable media and cultured under appropriate conditions. After growing adequately, the cultures were preserved at 4 °C and sub-cultured regularly. Some mycelia or spores of rare fungi were stored in freezing tubes containing glycerol, in freezers at -80 °C or in liquid nitrogen tanks. At present, there are 426 spawn cultures in the Qinghai-Tibet plateau spawn bank, including many rare edible and medicinal species.
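A minimal sketch of a routine environmental check follows, assuming only the thresholds stated above (herbarium rooms at 20-23 °C and about 40% humidity). The tolerance value and function name are illustrative assumptions, not part of the described system.

```python
# Illustrative sketch: flagging out-of-range herbarium room readings against
# the storage conditions stated in the text.
HERBARIUM_TEMP_RANGE = (20.0, 23.0)   # °C, from the text
HERBARIUM_HUMIDITY_TARGET = 40.0      # %, approximate target from the text
HUMIDITY_TOLERANCE = 5.0              # illustrative tolerance, not from the text

def check_herbarium(temp_c: float, humidity_pct: float) -> list:
    """Return a list of warnings for a herbarium room reading."""
    warnings = []
    low, high = HERBARIUM_TEMP_RANGE
    if not (low <= temp_c <= high):
        warnings.append(f"temperature {temp_c} °C outside {low}-{high} °C")
    if abs(humidity_pct - HERBARIUM_HUMIDITY_TARGET) > HUMIDITY_TOLERANCE:
        warnings.append(f"humidity {humidity_pct}% far from ~{HERBARIUM_HUMIDITY_TARGET}%")
    return warnings

print(check_herbarium(21.5, 41.0))  # [] -> within range
print(check_herbarium(25.0, 55.0))  # two warnings
```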
After fresh fruiting bodies are collected, dissected specimens of 1-2 cm, wrapped in paper with good air access, should be dried in 2 ml centrifuge tubes containing color-indicating silica gel; the silica gel should be replaced according to its color change so that the material dries rapidly. During field collection of fungi on the Qinghai-Tibet Plateau, nearly 3000 living tissues were obtained. The genome information of these samples was effectively preserved, providing basic materials for subsequent research.

Gene bank. Mycological genomic DNA is one of the most important genetic materials. High-quality DNA should meet certain standards of genomic integrity, purity, concentration and content. The length of genomic DNA used for preservation should be greater than 15 kb; the purity (A260/A280) should be between 1.8 and 2.0, with a single bright band in gel electrophoresis; the genomic DNA concentration must be no less than 100 ng·μL⁻¹ and the volume no less than 100 μL. Each sample is stored in 3 tubes in a -80 °C cryogenic refrigerator for extended periods of time; the quality of the samples is checked regularly by random sampling, and replacement samples are updated in time. Nowadays, the DNA barcode is an important reference for taxonomy and molecular biology. ITS rDNA can resolve fungal species with up to 72% success; it is the single DNA fragment with the highest resolution for fungal species. ITS was officially recommended as the preferred DNA barcode for fungi at the 4th International Barcode of Life Conference held in Adelaide, Australia, in 2011. The ITS1-5.8S-ITS2 region was amplified with universal ITS primers [20,21]. For ITS barcode amplification of important fungal species, it is necessary to maximize the coverage of the survey area with collection sites of different individuals of the same species during field investigation. According to the requirements of the International Barcode of Life Project (iBOL Project, http://ibol.org/phase1/) and regional differences in ecological environment [22], 6-12 individuals of each species are selected for DNA extraction and ITS barcode amplification. Preserved voucher specimens are maintained in the herbarium and, together with established species identification reference databases, help to obtain more complete population genetic information [23,24]. Nearly 7000 genomic DNA fragments and 6000 ITS ribosomal fragments were obtained from fungal specimens collected on the Qinghai-Tibet Plateau, and a gene bank of important fungi from this region was established at the population level and above the genus level.
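As a small illustration of the acceptance criteria just listed, here is a minimal sketch (our own; the function name and report format are not from the original) that checks a genomic DNA sample against the stated standards: fragment length above 15 kb, A260/A280 purity between 1.8 and 2.0, concentration of at least 100 ng·μL⁻¹, and volume of at least 100 μL.

```python
# Minimal QC check for gene-bank DNA samples, following the acceptance
# criteria stated in the text. All names here are illustrative.

def dna_sample_passes_qc(length_kb, a260_a280, conc_ng_per_ul, volume_ul):
    """Return (ok, reasons) for a genomic DNA sample."""
    reasons = []
    if length_kb <= 15:
        reasons.append("fragment length must exceed 15 kb")
    if not (1.8 <= a260_a280 <= 2.0):
        reasons.append("A260/A280 purity must be between 1.8 and 2.0")
    if conc_ng_per_ul < 100:
        reasons.append("concentration must be at least 100 ng/uL")
    if volume_ul < 100:
        reasons.append("volume must be at least 100 uL")
    return (not reasons, reasons)

ok, reasons = dna_sample_passes_qc(18.2, 1.85, 120, 150)
print("accept" if ok else "reject", reasons)
```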
Compound bank. Fungi are one of the important sources of active natural products, and their metabolites play an important role in many different drug research and development strategies. The discovery of new species of fungi and the development of new compounds are of great significance to the screening of active ingredients, the discovery of lead compounds and the evaluation of the nutritional quality of edible and medicinal fungi. The compound bank includes pre-isolated active fractions extracted from samples of edible and medicinal fungi, monomer compounds, and the corresponding spectral data. The pre-isolated active fractions were mainly filtrates and crude polar extracts of mycelia from the liquid fermentation of edible and medicinal fungi, and of wild or cultivated fruiting bodies [25]; the monomer compounds included pure natural products with a purity of more than 80% and no repeated structures [26]; the spectral data were mainly the 13C-NMR, 1H-NMR, DEPT, HMBC, HMQC and H-H COSY spectra corresponding to the monomer compounds, together with GC-MS data of the pre-separated lipid-soluble components. In total, 986 lipid-soluble components, more than 70 monomer compounds and their related spectral data were obtained from nearly 20 species of rare fungi collected on the Qinghai-Tibet Plateau, such as F. luteovirens, Sarcodon imbricatus (L.) P. Karst., T. aurantialba and O. sinensis.

Comprehensive information bank. In order to research and utilize fungal resources more effectively, a comprehensive database of fungal resources was established by means of computer database technology. The information in the database includes the following: collection number, date of collection, name of the collector, photographic images, viable tissue, strain, Chinese name, Latin name, geographic information (longitude, latitude, altitude), name of the collecting location, and the corresponding genetic and compound information. Information can be quickly retrieved by the collection number or the Latin name of the specimen. The information bank on the Qinghai-Tibet Plateau uses a Microsoft Office Access database.
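To illustrate how the fields listed above can be organized for quick retrieval by collection number or Latin name, here is a minimal sketch using SQLite; the original bank was built in Microsoft Office Access, so the table and column names below are our own assumptions, as is the sample record.

```python
import sqlite3

# Illustrative schema mirroring the fields listed in the text; the real
# bank was built in Microsoft Access, so all names here are assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE specimen (
    collection_number TEXT PRIMARY KEY,
    collection_date   TEXT,
    collector         TEXT,
    photo_path        TEXT,
    viable_tissue     TEXT,
    strain            TEXT,
    chinese_name      TEXT,
    latin_name        TEXT,
    longitude         REAL,
    latitude          REAL,
    altitude_m        REAL,
    locality          TEXT,
    genetic_info      TEXT,
    compound_info     TEXT
)""")
conn.execute(
    "INSERT INTO specimen VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?)",
    ("QTP-0001", "2013-08-12", "collector A", "img/0001.jpg", "yes",
     "S-426", "松茸", "Tricholoma matsutake", 101.0, 30.0, 3400.0,
     "Yajiang, Sichuan", "ITS: deposited", "none"))

# Quick search by collection number or Latin name, as described in the text.
row = conn.execute(
    "SELECT * FROM specimen WHERE collection_number = ? OR latin_name = ?",
    ("QTP-0001", "Tricholoma matsutake")).fetchone()
print(row)
```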
Prospect. At present, the total number of fungal resources in China is still uncertain. Mushrooms of economic importance, as well as rare mushrooms, are greatly endangered because their living conditions are affected by excessive artificial collection and production. Research on fungal resources in China is in a development stage; there are still many unknown areas to explore, and technical problems need to be addressed and overcome. An authoritative database of fungal resources can provide strong support for the study of fungal resources and promote research in many aspects, including the distribution pattern of fungal resources, planning for their conservation, their response to global changes, the prediction of invasive exotic species, and the effective monitoring of the flow and protection of fungal germplasm resources. In the study of Chinese fungal resources, the implementation of effective conservation mechanisms is still a weak link and needs more infrastructure development. The ecosystems of China are very rich, including the main types of terrestrial ecosystems on earth, such as forests, shrubs, grasslands, meadows, deserts and tundra. By the end of 2015, China had 2740 nature reserves with a total area of about 1.47 million km²: there were 525 nature reserves with wild animals as the main protection type (including 109 national protected areas), covering an area of 387,000 km², and 156 nature reserves with wild plants as the main protection type (including 19 national protected areas), covering 17,000 km². By contrast, nature reserves with fungi as the main object of protection are almost nonexistent.

In addition, research on conservation biology based on biodiversity still lacks informational mechanisms and theoretical systems, and research methods are also controversial [27,28,29]. At present, work on biodiversity assessment and conservation focuses mainly on animals and plants [30,31]; theoretical research on the distribution pattern of fungal biodiversity is lacking, and the implementation of a biodiversity protection system built on it remains to be explored in many ways. The establishment of the "One District, One Herbarium and Five Banks" conservation system for fungal resources differs from traditional research at home and abroad and from the traditional mycological herbarium (collection of specimens for resource investigation, or separation of preserved strains for application): it includes the establishment of the conservation area and herbarium together with the spawn bank, the viable tissue bank, the gene bank, the compound bank and the information bank. Fungal resources are thus protected while species diversity is preserved at the levels of population ecological function, genetic information and chemical information. Meanwhile, metabarcoding technology has been used to improve the survey of fungal species diversity in the forests of the eastern Qinghai-Tibet Plateau. Many potential species have been found, which is not only an important supplement to the specimen information but also fundamental information for the study of fungal diversity on the Qinghai-Tibet Plateau. At the same time, it provides a reference for improving the methods of fungal resource investigation and the construction of the protection system. On the other hand, the results obtained with the "One District, One Herbarium and Five Banks" conservation system for fungal resources on the Qinghai-Tibet Plateau provide not only data on fungal diversity and the factors influencing its changes, but also a firm foundation for a series of key technical investigations such as the domestication of rare strains, the production of active substances and the development of functional genes [32,33,34]. The system provides an advanced theoretical basis, technical means and scientific methods for the development of the edible and medicinal fungal industry on the Qinghai-Tibet Plateau, supporting the sustainable development of a variety of rare and common fungi in this region and setting the direction for the conservation and sustainable utilization of fungal resources.
Spectrally resolved x-ray beam induced current in a single InGaP nanowire

We demonstrate x-ray absorption fine structure spectroscopy (XAFS) detected by x-ray beam induced current (XBIC) in single n+-i-n+ doped nanowire devices. Spatial scans with the 65 nm diameter beam show a peak of the XBIC signal in the middle segment of the nanowire. The XBIC and the x-ray fluorescence signals were detected simultaneously as a function of the excitation energy near the Ga K absorption edge at 10.37 keV. The spectra show similar oscillations around the edge, which shows that the XBIC is limited by the primary absorption. Our results reveal the feasibility of the XBIC detection mode for XAFS investigation in nanostructured devices.

Introduction

X-ray absorption fine structure (XAFS) is an established method for investigating semiconductors, which can give information about local atomic properties [1][2][3]. Recent developments in x-ray optics have made it possible to investigate single nanostructures, whose properties may differ substantially from bulk material [4,5]. The signal from single nanostructures is inherently weak, which makes detection challenging. For instance, traditional transmission measurements suffer from poor contrast due to the weak absorption. One alternative is to measure the electrical conductance, which gives a weak signal but also a low background for single nanostructures [6,7]. The x-ray beam induced current (XBIC) is a more complex process than absorption and x-ray fluorescence (XRF), since the measured signal depends on local carrier transport properties in semiconductors. This makes it possible to use x-ray beams as a local probe, similar to electron beam induced current and scanning photocurrent microscopy [8,9]. In particular, recent studies have demonstrated that x-ray beams can probe the interior of single nanowire (NW) devices [6,7]. Moreover, NWs have shown a strong electrical response under hard x-rays [7]. The advantages of x-ray beams over electron and laser probe beams are a longer penetration depth and a smaller diffraction limit, respectively [10][11][12]. Thus, nanofocusing of x-rays reaching the sub-10 nm regime [10,11] could significantly enhance the spatial resolution of XBIC. The process of generating charge carriers from x-rays starts with the absorption of a primary x-ray photon, which excites an inner core electron and results in a core hole and a photoelectron. The absorption probability, p_abs, depends on the photon energy, the sample composition, and the geometry of the beam and the sample. Near an absorption edge, the absorption probability of the atoms also depends on the local atomic environment. The relaxation of an electron from a higher state to the core hole releases the excess energy in the form of a secondary photon or an electron, through the processes of XRF and Auger electron emission, respectively. Further electrons are excited by these secondary photons and electrons [6], at the same time as the electron-hole pairs thermalize to the band edges. The average number of carriers generated through the absorption of a primary x-ray photon is given by η = E/ε, where E is the photon energy and ε is the ionization energy [13]. For the x-ray energy of 10.37 keV and the sample thickness of 180 nm used in this study, we have η = 1867 and p_abs = 9.7×10⁻³ for bulk In0.56Ga0.44P. This is in contrast to visible light, for which only a single electron-hole pair is created per photon event, η = 1.
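As a quick numerical check of the carrier-generation relation η = E/ε, the following sketch reproduces η = 1867 for a 10.37 keV photon, which implies an effective ionization energy ε of about 5.6 eV (a value inferred from the quoted numbers rather than stated in the text), and estimates the rate of generated carriers at the maximum flux used later in the paper.

```python
# Carrier generation from a single absorbed x-ray photon, eta = E / eps.
E_photon_eV = 10.37e3               # photon energy used in the study
eta_quoted = 1867                   # average carriers per absorbed photon
eps_eV = E_photon_eV / eta_quoted   # implied ionization energy, ~5.55 eV

p_abs = 9.7e-3                      # absorption probability, 180 nm InGaP
flux = 1.6e8                        # incident photons per second (max flux)

carriers_per_s = flux * p_abs * eta_quoted
print(f"eps ~ {eps_eV:.2f} eV, generated carriers ~ {carriers_per_s:.3g} /s")
```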
The non-equilibrium charges will generate a current if there is an electric field, which can be internal, as in p-n junctions, or externally applied as in this work. The charge carriers can also recombine or become trapped in long-lived surface states before being detected, so the signal depends on local carrier lifetimes and mobilities. Thus, XBIC can be used to investigate local carrier transport properties. In addition, the signal from the studied sample might be affected by x-ray interaction with nearby components, such as the substrate and the metal contacts. Note that the method here is distinct from the detection of Auger electrons, often called electron yield [1]. Evidently, the generation of the XBIC signal is more complex than the XRF process. Here, the XBIC and XRF signals are compared under spectrally resolved excitation to attain the XAFS spectrum from single nanowire devices. We find that the spectra are qualitatively similar, despite the underlying differences.

Methods

The sample was In0.56Ga0.44P single NWs with axial n+-i-n+ doping profiles, grown via the vapor-liquid-solid method by the use of Au seed particles in a metal organic vapor phase epitaxy system (supplementary material is available online at stacks.iop.org/NANO/29/454001/mmedia). The NWs also had an InP nucleation segment at the base and a GaP segment right below the Au particle. The NW diameter was 175 nm. The nominal length of each section was monitored by a LayTec EpiR DA UV optical reflectometry system [14], which gives segment lengths of 230 nm for the InP nucleation segment, 290 nm for n+-InGaP, 1200 nm for i-InGaP, 400 nm for n+-InGaP, and 120 nm for GaP. The NWs were transferred to a SiO2-coated Si substrate with predefined bond pads and alignment markers. Electrical contacts were made to single NWs with electron beam lithography and metal evaporation of Ti and Au (10/230 nm). The NWs were excited with the nanofocused x-ray beam (∼65 nm diameter) at beamline ID-16B of the European Synchrotron Radiation Facility, Grenoble, France (figure 1(a)) [15]. The XRF and XBIC signals were collected over the NW at a bias of 0.05 V (figure 1(a)). For the spectrally resolved XRF and conductance measurements, the x-ray photon energy was scanned around the Ga K-edge energy (∼10.37 keV) at the position of the NW where we attained the highest photoconductance signal.

Spatially resolved XBIC and XRF

The I-V characteristics of the device in the dark and under x-ray excitation at a flux Φ = 1.6×10⁸ s⁻¹ are illustrated in figure 1(b). In both cases there is a linear relation between the current and the applied bias, and the fluctuating signal reflects a noise level of a few fA. The linear current-voltage relation makes it possible to calculate an electrical conductance, which represents the XBIC signal in this report. The electrical conductance is about two orders of magnitude higher under x-ray excitation (σ = 1. …). The superpositioned image of the conductance and XRF signals in figure 1(c) was gathered by a two-dimensional scan over the NW with a step size of 50 nm and a collection time of 0.2 s per point. The conductance is shown in green in this image (figure 1(c)). The XRF signal was collected as a spectrum at each position, from which the intensities at the energies corresponding to emission from Au and Ga were extracted (supplementary material). They are displayed in figure 1(c) as blue and red areas for the Au metal contacts and the NW, respectively.
Line plots of the signals in figure 1(c) along the center of the NW are shown in figure 1(d). The strong XRF signal from Ga atoms on the left (figures 1(c) and (d)) indicates the GaP segment near the top of the NW, which is used as the reference position (x = 0). From this we could draw the dashed lines indicating the nominal segments of the NW in figures 1(c) and (d). Apparently, most of the XBIC signal (green area) is collected from the middle segment. We observed no significant conductance peak at the contact edge, which would have indicated a Schottky-like contact between the NW and the metal contacts, as previously reported [16]. The conductance profile in figure 1(d) shows an exponential decay on both sides, which could be fitted by G(x) ∼ exp(−x/L), where L is a characteristic decay length [17]. The decay lengths are Ll = 203 ± 19 nm and Lr = 514 ± 20 nm for the left and right slopes, respectively. Comparing these lengths to the beam diameter (∼65 nm), the decay is not limited by the size of the probe beam, so the characteristic decay lengths revealed here could be used to investigate local carrier transport properties [16,18,19]. The XBIC peak is related to the n-i-n doping profile of the nanowire. In the highly doped n-segments, the electric field is too weak to drive the charge carriers to the contacts before they recombine. Instead, the electric potential falls almost entirely over the middle segment, which gives a strong electric field that efficiently moves the charge carriers.

X-ray photon flux variation of XBIC

Next, the flux dependence of the NW response to the x-rays was investigated. The maximum flux used was Φ = 1.6×10⁸ s⁻¹, which gives the conductance shown as the red trace in figure 2(a). When the flux was reduced by half, to Φ = 0.8×10⁸ s⁻¹, the conductance decreased almost 3-fold (figure 2(a)). Further decreases in flux generated a signal too weak to detect. A superlinear relation was already observed in a similar experiment [7] and was attributed to charge carrier trapping at the NW surface, leading to photogating and photodoping effects [20]. In photogating, trapped charge carriers behave like a wrap-gate on the NW, changing the Fermi level. The trapped charges then induce an excess of the opposite charge carrier in the center of the NW, called photodoping. The expected linear relation between the general photoconductance and the photon flux is G = qηp_abs(μτ/l²)Φ, where q is the elementary charge, μ is the carrier mobility, τ is the carrier recombination lifetime, l is the length of the active region, and Φ is the incident photon flux. With long-lived traps, the photoconductance after switching the beam on becomes time dependent, which can be written as:

G(t) = qηp_abs p_trap (μτ_tr/l²) Φ [1 − exp(−t/τ_tr)]   (1)

where t is time, p_trap is the trapping probability, and τ_tr is the detrapping lifetime. The conductance of this InGaP NW device is assumed to be dominated by electrons with mobility μ = 200 cm²V⁻¹s⁻¹ [21]. To further understand the trapping mechanism, we performed time-resolved measurements where the conductance was measured as a function of time after switching the x-ray beam on (figure 2(b)). At small t, the rate of change of the photoconductance is constant, dG/dt = qηp_abs p_trap(μ/l²)Φ, giving a calculated trapping probability p_trap = 1.45×10⁻¹⁰ from the slope of the plot. We then fitted the time-resolved photoconductance with equation (1) (dashed line in figure 2(b)), which yields a detrapping lifetime of τ_tr = 3.41 s.
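To make the trap-limited turn-on of equation (1) concrete, here is a minimal fitting sketch, entirely our own construction on synthetic data standing in for the measured trace of figure 2(b); it recovers the detrapping lifetime τ_tr and then evaluates the gain expression g = p_trap(μτ_tr/l²) with an assumed active length, since the text does not restate the length used for the reported g = 0.34.

```python
import numpy as np
from scipy.optimize import curve_fit

def g_of_t(t, g_inf, tau_tr):
    """Saturating exponential turn-on of equation (1), up to the prefactor."""
    return g_inf * (1.0 - np.exp(-t / tau_tr))

# Synthetic stand-in for the measured conductance trace in figure 2(b).
tau_true = 3.41                       # s, detrapping lifetime reported above
t = np.linspace(0.0, 20.0, 200)       # s
rng = np.random.default_rng(0)
g_meas = g_of_t(t, 1.0, tau_true) + 0.02 * rng.standard_normal(t.size)

(g_inf, tau_fit), _ = curve_fit(g_of_t, t, g_meas, p0=(0.5, 1.0))
print(f"fitted detrapping lifetime: {tau_fit:.2f} s")

# Photoconductive gain g = p_trap * mu * tau_tr / l^2 with reported values;
# l_active is a placeholder assumption, since the text does not restate the
# length that goes with the reported g = 0.34.
p_trap = 1.45e-10
mu = 200e-4                           # 200 cm^2/(V s) expressed in m^2/(V s)
l_active = 5.4e-6                     # m, hypothetical effective length
print(f"gain ~ {p_trap * mu * tau_fit / l_active**2:.2f}")
```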
In a similar experiment performed on a 100 nm diameter InP NW [7], the trapping probability was much higher, p_trap = 2.3×10⁻⁶. We speculate that the lower trapping probability here is due to the larger NW diameter, 175 nm. Furthermore, the device performance can be quantified by the photoconductive gain, g, defined as the ratio between the conductance from the collected charge carriers and that from the absorbed charge carriers [7,20]. By dividing equation (1) by the carrier absorption term, qηp_abs Φ, and setting t → ∞, the photoconductive gain can be written as g = p_trap(μτ_tr/l²), which yields g = 0.34 for our device. The low gain is due to the low trapping probability.

Spectrally resolved XBIC and XRF

The ability to detect XBIC in a single NW opens up the possibility of using it as a new detection mode for XAFS investigations, which are conventionally measured by XRF or absorption techniques. The weak signal from those techniques is a general problem for spectroscopic investigations of single nanostructures. Besides, comparing the XAFS from the XBIC and XRF signals at the Ga K-edge energy could shed light on the underlying mechanism of XBIC at the atomic level. Each spectrally resolved measurement (supplementary material) shows fluctuations and non-reproducible spikes owing to the sensitivity of the equipment. Those results were averaged and are shown in figure 3, with the conductance (black trace) and XRF (blue trace) spectra. Overall, the oscillation of the conductance follows the XRF signal, although there are some differences. In both spectra, we observed a rapid signal increase at the Ga K-edge energy (figure 3(b)). From this result, we can trace the photoconductance signal back to the interaction between the x-rays and the Ga atoms. However, the pre-edge signal in the conductance measurement is not at the minimum level detected in the post-edge region. The reason is that charge carriers can also be generated by x-ray absorption in In and P atoms. In contrast, the Ga K XRF signal can only result from excitation of the Ga K shell. The signals show post-edge oscillations, typically known as the extended XAFS, which is part of the XAFS spectrum. The oscillations result from interference between the photoelectrons emitted from the target atoms and their backscattered waves from neighboring atoms (figure 3(a)). The change in energy of the electrons, which is coupled to their wavelength, leads to a variation between constructive and destructive interference. Consequently, the interference affects the absorption probability of the target atoms, leading to the oscillations in the detected conductance and XRF. The magnified spectra near the edge (figure 3(c)) reveal two consecutive peaks at the edge energy, labeled peaks A and B, in both the XRF and conductance plots. Peaks A and B are at 10.374 keV and 10.378 keV, respectively. The first distinct peak in the post-edge region, C, is at 10.397 keV. Although the XAFS results could reveal many atomic features of the sample, a quantitative interpretation is beyond the scope of this report; we only qualitatively compare our result with other relevant studies. For peaks A and B, the XAFS of materials with different Ga compositions, e.g., GaAs and Ga2O3 [3,22], shows a similar feature at the Ga K-edge energy. Following these studies, we interpret peaks A and B as transitions of the 1s electron to 4p and to the continuum, respectively.
In the post-edge region, where we observed peak C, similar results were found in measurements of InGaN [23] and GaN [24]. The energy difference between peaks B and C could be related to the distance from the Ga atoms to their neighboring atoms, which is determined by the crystal structure. In our case, where the energy gap between peaks B and C is about 17 eV, the results suggest that the sample has a zinc blende structure [24]. The same result was also obtained from a transmission electron microscopy (TEM) study of NWs grown under similar conditions [25]. Similar measurements performed on a heterostructure p-n junction NW device exhibited a slight inconsistency between these two signals, since the secondary electrons were affected by the heterostructure NW and the existence of the depletion region at the junction [6]. Due to the lack of a built-in electric field within the n+-i-n+ doped NW, we did not observe such an effect on the measured XBIC.

Conclusion

In conclusion, our results from spectrally resolved XRF and XBIC demonstrate the feasibility of using electrical detection for XAFS measurements on nanostructured devices. The two detection modes exhibit similar spectra. We observed a superlinearly increasing XBIC signal as a function of the x-ray photon flux, which is caused by photogating and photodoping effects due to surface trapping. This technique could be used to study the local atomic environment and carrier transport properties of many categories of nanostructured electronic devices [26,27].
CHEMICAL MODIFICATION OF CELLULOSE BY ACYLATION: APPLICATION TO ADSORPTION OF METHYLENE BLUE

Cellulose was modified under mild conditions in order to increase its capability to trap pollutants. Nicotinoyl chloride hydrochloride (NCHC), with its pyridine ring able to adsorb cations, was grafted onto the substrate. The grafting was monitored by infrared spectroscopy and elemental analysis. We studied the adsorption of methylene blue (MB) onto grafted and unmodified cellulose. It was observed that grafting increases the retention capacity of cellulose threefold, and the kinetics of adsorption is well represented by a pseudo-second-order model. The adsorption is well described by a Langmuir-type isotherm, indicating a homogeneous adsorption phenomenon through the formation of a monolayer. Moreover, the reaction is spontaneous and exothermic, suggesting the possibility of recycling the substrate by desorbing the dye at elevated temperature.

INTRODUCTION

The effluents of the textile industry contain dyes and heavy metals which are poisonous for fauna and flora because of their stability and low biodegradability (Guivarch et al. 2003, Kadirvelu et al. 2003, Jain et al. 2003). Many different methods of elimination have been used, such as precipitation, ion exchange, extraction, and physico-chemical or biological treatments. Most of these methods are inefficient because of their weak selectivity and/or their high cost (Bagane and Guiza 2000, Benguella and Yacouta-Nour 2009, Mazet et al. 1990). For example, adsorption on activated carbon is one of the most widely used techniques for water purification (Benturki et al. 2008), but its cost limits its use in developing countries. During the last years, many research teams have taken an interest in cellulose compounds for waste treatment, because they are easily available and renewable. Cellulose is the most abundant polymer on earth (Satge et al. 2002, Chauvelon et al. 1998, Gourson et al. 1999), and derived substrates are suitable for trapping dyes (Marchetti et al. 2000) as well as organic (Maurin et al. 1999, Aloulou et al. 2006, Alila and Boufi 2009) and inorganic (Randall et al. 1976, Li and Bai 2005, Navarro et al. 1999) pollutants. Chemical modification of cellulose (e.g., surface fixation or grafting of groups able to interact with pollutants) makes it possible to improve its adsorption capacity and to enhance its reactivity. In this work, we modified cellulose under mild conditions with nicotinoyl chloride hydrochloride (NCHC). NCHC was chosen because the reaction of an acyl halide with an alcohol function is most often quantitative (formation of the ester is favored) and quite fast. Moreover, the grafting of nitrogenous functions is well documented, because these functions both confer ion-exchange properties and greatly modify the affinity of cellulose towards organic or inorganic pollutants (Alila and Boufi 2009, Zghida et al. 2002). After characterization of the resulting material, we investigated its adsorption capability towards a common dye, 3,7-bis(dimethylamino)-phenothiazin-5-ium chloride (methylene blue, MB), by varying the experimental conditions (residence time, concentration and temperature). The kinetic and thermodynamic parameters of the reaction have been established.
Preparation of modified cellulose

The cellulose used is "Kraft pulp", provided by the "Morocco Cellulose" company. It was crushed using a SIEBTECHNIK crusher and sieved to keep only particles of size 0.5-1 mm. The cellulose was then activated by dipping for one minute in a 3% aqueous solution of sulphuric acid, filtered, washed with distilled water and dried at 40 °C in an oven. It was finally washed with methanol in a Soxhlet apparatus for 6 hours in order to eliminate any residual water and contamination (Krouit et al. 2009). The material obtained was then dried in an oven at 80 °C. Its chemical modification took place in two steps:
- The first consisted in dissolving cellulose in dimethylacetamide/lithium chloride (DMA/LiCl). Cellulose (0.22 g) was put in 10 mL of DMA. The mixture was heated at 120 °C for 2 hours and then cooled to 100 °C. At this temperature, anhydrous LiCl (0.9 g) was added to the mixture and the reaction proceeded for 4 hours.
- The second step consisted in functionalizing the cellulose. Nicotinoyl chloride hydrochloride, NCHC (C6H4ClNO·HCl, 0.9 g), and triethylamine (2 mL) were added to the above mixture. The temperature was then raised to 120 °C, and the mixture was maintained under agitation for 12 hours. The solvent was evaporated to yield a gray solid. This solid was first washed with an aqueous solution saturated with K2CO3, then with distilled water until the rinsing water was neutral. To remove any residual NCHC, the modified cellulose was washed in a Soxhlet apparatus with ethanol for 6 hours, vacuum filtered and dried overnight in an oven at 80 °C.

Characterization of media

Grafting efficiency was checked by characterization of the raw and modified celluloses with the following methods:
- Infrared spectroscopy: the substrate (1 mg) was crushed and mixed with potassium bromide, KBr (99%, ALDRICH; 50 mg). The powder obtained was then pelletized under a pressure of 6 bars. The analysis was carried out using a PERKIN ELMER Spectrum 2000 spectrophotometer.
- Elemental analyses were carried out on a THERMOFINNIGAN EA 1112 elemental analyzer fitted with an automatic sampler and a PORAPAK chromatographic column. The results were given at ±0.2%.

Dye used

Preparation of solutions. Methylene blue, MB (biological analysis grade), was provided by JANSEN CHIMICA. Adsorption isotherms of this dye were established from a set of aqueous solutions with concentrations varying from 4×10⁻⁶ to 6×10⁻⁵ mol·L⁻¹ (i.e., 1.28 mg·L⁻¹ to 19.19 mg·L⁻¹).

Residual dye concentration. After adsorption, the residual MB concentrations were determined by UV-visible spectrophotometry (SHIMADZU UV-2550) via the Beer-Lambert law.

Equilibrium time. Raw or functionalized cellulose (0.1 g) was put in 40 mL of a 6×10⁻⁵ mol·L⁻¹ (19.19 mg·L⁻¹) MB solution. This system was maintained under agitation in a thermostated bath at 25 °C. The residual concentration of MB was determined for different contact times (from 3 to 140 min) so that the equilibrium time could be evaluated. The quantity of dye fixed at time t is given by the following relation:

Qt = (C0 − Cr)·v/m   (1)

with Qt the amount of dye adsorbed in mg·g⁻¹ at time t; C0 the initial concentration of dye in mg·L⁻¹; Cr the residual concentration at time t in mg·L⁻¹; m the mass of adsorbent in g; and v the volume of solution in L.
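As a small worked example of relation (1), the sketch below converts residual concentrations into adsorbed amounts for the stated conditions (m = 0.1 g in v = 0.04 L of a 19.19 mg·L⁻¹ MB solution); the Cr values are invented for illustration, with the last one chosen so that Qt lands near the reported capacity of about 0.35 mg·g⁻¹ for the grafted substrate.

```python
# Relation (1): Qt = (C0 - Cr) * v / m, in mg of dye per g of adsorbent.
def adsorbed_amount(c0_mg_per_L, cr_mg_per_L, v_L, m_g):
    return (c0_mg_per_L - cr_mg_per_L) * v_L / m_g

C0, v, m = 19.19, 0.040, 0.100      # experimental conditions from the text
# Hypothetical residual concentrations at a few contact times (mg/L):
for t_min, Cr in [(3, 18.9), (40, 18.5), (140, 18.3)]:
    print(f"t = {t_min:3d} min  Qt = {adsorbed_amount(C0, Cr, v, m):.3f} mg/g")
```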
Isotherms of adsorption

To establish the adsorption isotherms, aqueous solutions of MB were prepared in a concentration range from 4×10⁻⁶ to 6×10⁻⁵ mol·L⁻¹. Once equilibrium was reached, the quantity of adsorbed dye, as well as the residual concentration of dye in solution, was evaluated; each set of values gives one point of the isotherm.

Thermodynamic parameters

We studied the influence of temperature on the adsorption of MB on both cellulosic materials. For this series of experiments, the maximum quantity adsorbed by 100 mg of substrate in contact with an aqueous solution of MB at 6×10⁻⁵ mol·L⁻¹ (19.19 mg·L⁻¹) was determined over a temperature range from 25 to 60 °C.

Grafting evidence

Infrared spectroscopy. Figure 1 shows the infrared spectra of raw and functionalized cellulose.

Figure 1. IR spectra of unmodified and grafted cellulose.

A broad absorption band corresponding to the valence vibrations of hydroxyl groups can be observed around 3350 cm⁻¹. In the case of treated cellulose, this band is slightly narrowed and shifted to 3460 cm⁻¹. The other characteristic bands of OH groups (1455 and 1205 cm⁻¹) are also slightly affected. These modifications are explained by the disappearance of the alcohol functions and thus of the hydrogen bonds of the starting cellulose. We note that the intensity decrease is correlated with the appearance of a band around 1740 cm⁻¹, assigned to the vibration of the carbonyl bond C=O and characteristic of ester functions. The esterification reaction is also confirmed by the appearance of new bands around 1300 cm⁻¹. Another band, around 1410 cm⁻¹, is related to the elongation of a C=N bond and proves the presence of the pyridine ring on cellulose after reaction. All these experimental observations show that cellulose was modified and that an ester bond was created.

Elemental analysis. Table 1 shows the results of the elemental analyses carried out on the various substrates; we were only interested in the elements C, H and N.

Table 1. Elemental analysis of cellulose before and after reaction.

The analysis of cellulose leads to percentages of carbon and hydrogen in strong correlation with the empirical formula C6H10O5. The carbon content of dissolved cellulose seems identical, whereas the hydrogen content is slightly reduced due to the rupture of intermolecular hydrogen bonds in cellulose (Joly et al. 2004). Raw cellulose is able to trap water molecules between its polymer chains, and during dissolution this adsorbed water is released; a decrease in hydrogen content, confirmed by elemental analysis, is then observed. In the case of dissolved cellulose, the presence of nitrogen traces corresponds to the remaining solvent (dimethylacetamide). The results for modified cellulose enable us to state that the grafting took place. After the reaction, the H/C ratio decreases significantly from 1.67 to roughly 1.08. Moreover, the analyses show the presence of the element nitrogen, which previously appeared only in traces. This highlights the presence of the pyridine ring grafted on cellulose. From these results, we evaluate that the grafting rate is close to 0.7, which could indicate that only 70% of the cellulose units (C6H10O5) are monosubstituted to C12H13O6N. The results obtained by elemental analysis thus confirm the observations from infrared spectroscopy. The reactivity of cellulose towards acyl chlorides being well known (Krouit et al. 2009), one can suppose the reaction pathway for the grafting shown in figure 2.
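As an aside on how such a grafting rate can be estimated, the following sketch inverts the per-monomer mass balance %N = 100·14.007·DS/(162.14 + 105.10·DS) for a nicotinoyl-monosubstituted anhydroglucose unit; the measured nitrogen content used here is hypothetical, since the values of table 1 are not reproduced in this text, but about 4.2% N corresponds to a DS close to the reported 0.7.

```python
# Estimate the degree of substitution (DS) of nicotinoyl-grafted cellulose
# from elemental nitrogen content. Mass balance per anhydroglucose unit:
#   %N = 100 * 14.007 * DS / (162.14 + 105.10 * DS)
# with 162.14 g/mol for the anhydroglucose unit and 105.10 g/mol for the
# net mass added by one nicotinoyl ester group.

M_N, M_AGU, M_GRAFT = 14.007, 162.14, 105.10

def ds_from_nitrogen(percent_n):
    """Invert %N(DS): DS = %N*M_AGU / (100*M_N - %N*M_GRAFT)."""
    return percent_n * M_AGU / (100.0 * M_N - percent_n * M_GRAFT)

# Hypothetical measured nitrogen content (table 1 is not reproduced here);
# ~4.2% N corresponds to a DS close to 0.7.
print(f"DS = {ds_from_nitrogen(4.16):.2f}")
```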
Methylene blue adsorption

Kinetic study. Figure 3 shows the adsorption kinetics of methylene blue on both substrates studied. It appears that the adsorption of MB is faster on modified cellulose than on raw cellulose. The adsorption equilibrium is reached in 80 minutes. After 140 minutes, the adsorbed amount is about 0.35 mg·g⁻¹ for the grafted substrate against 0.11 mg·g⁻¹ for pure cellulose. This can be related to the presence of an electron-rich pyridine ring able to interact with cationic MB. From these experimental results, we applied different kinetic models to derive a reaction mechanism. These mathematical models were chosen because they are quite simple and commonly used when dealing with the adsorption of organic compounds on various adsorbents. The Lagergren kinetic model of adsorption (Lagergren 1898) gives, for a first-order reaction:

log(Qe − Qt) = log(Qe) − (K1/2.303)·t

By plotting log(Qe − Qt) versus t, a straight line should be obtained whose slope gives K1, the rate constant of adsorption (min⁻¹). For a pseudo-second-order reaction, the rate constant K2 is given by the following relation (Ho and Mckay 1999, Ho and Mckay 2000):

t/Qt = 1/(K2·Qe²) + t/Qe

By plotting t/Qt versus t, K2 (g·mg⁻¹·min⁻¹) can be derived. For a second-order reaction, the rate constant K3 results from the following relation (Ho and Mckay 1999, Ho and Mckay 2000):

1/(Qe − Qt) = 1/Qe + K3·t

with, in all cases: Qe the quantity adsorbed at equilibrium (mg·g⁻¹), Qt the quantity adsorbed at time t (mg·g⁻¹), and t the time of contact (min). Even though straight lines are obtained with the three models, the most suitable model to describe the adsorption of MB on both substrates is the pseudo-second-order one (Fig. 4). Table 2 summarizes all these results, which are in good agreement with those of the literature. Indeed, Uddin et al. (2009) showed that the adsorption of MB on cellulose substrates followed a pseudo-second-order law. Moreover, MB is a cationic dye, and the adsorption of cations on such substrates also obeys this law (Shin et al. 2007).

Adsorption isotherms. The adsorption isotherms of MB on both cellulosic materials were obtained by plotting the amount of dye adsorbed by the substrate (Qe) as a function of the residual dye concentration in solution at equilibrium (Ce) at 25 °C. The contact time between dye and adsorbent was fixed at 6 hours, assuming that equilibrium was reached and that there was no further significant variation in the dye concentration (only 80 minutes were necessary with the MB solution at 6×10⁻⁵ mol·L⁻¹ (19.19 mg·L⁻¹), as previously mentioned). Figure 5 shows the isotherms obtained. Whatever the medium, the same trend is observed: the amount of adsorbed dye increases rapidly and then levels off to reach a plateau. The increase observed at low concentration is significantly faster for grafted cellulose. Saturation is reached at a concentration of only 0.15 mg·L⁻¹, versus 0.9 mg·L⁻¹ for unmodified cellulose, demonstrating the efficiency of the grafted substrate for decolorizing weakly concentrated aqueous solutions. The adsorption capacity of the new substrate is about 0.35 mg·g⁻¹, versus 0.112 mg·g⁻¹ for cellulose; grafting therefore improves the adsorption capacity by a factor of 3. The plateau observed can only be explained by the saturation of adsorption sites. The isotherm curves obtained are similar to Langmuir-type isotherms, as generally observed for colored effluents from the textile industry (Khalfaoui et al. 2002).
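For readers wishing to reproduce the kinetic analysis above, here is a minimal sketch (our own, on invented data shaped like the grafted-cellulose experiment) that extracts Qe and K2 by linear regression of t/Qt against t, following the linearized pseudo-second-order form given earlier.

```python
import numpy as np

# Synthetic kinetic data mimicking the grafted-cellulose experiment:
# equilibrium near Qe ~ 0.35 mg/g reached within ~80 min (values invented).
t = np.array([3, 10, 20, 40, 60, 80, 100, 120, 140], dtype=float)   # min
Qt = np.array([0.10, 0.20, 0.26, 0.31, 0.33, 0.345, 0.348, 0.35, 0.35])

# Linearized pseudo-second-order model: t/Qt = 1/(K2*Qe^2) + t/Qe
slope, intercept = np.polyfit(t, t / Qt, 1)
Qe = 1.0 / slope                        # mg/g
K2 = 1.0 / (intercept * Qe**2)          # g mg^-1 min^-1
print(f"Qe = {Qe:.3f} mg/g, K2 = {K2:.3f} g mg^-1 min^-1")
```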
Thermodynamic analysis

Adsorption isotherm modeling. Modeling the adsorption isotherms is essential, as it allows a better understanding of the mechanisms involved in adsorption. The three most commonly used models are those of Langmuir, Freundlich and Jossens, the latter derived from Redlich and Peterson (Kumar and Porkodi 2006). The Langmuir model is based on the following assumptions: the surface is uniform, with no interactions between adsorbed molecules (Langmuir 1916); it presents specific sites for adsorption; and the adsorption occurs through the formation of a monolayer of adsorbate. The Freundlich model is based on an empirical equation which expresses a change in adsorption energy with the amount adsorbed (Freundlich 1906). This distribution is explained by the heterogeneity of the adsorption sites, and this model admits the existence of interactions between adsorbed molecules (Yang 1998). In most cases, the adsorption of a dye does not follow a simple law, and the two previous models are not always suitable. In the case of grafted cellulose, Khalfaoui et al. (2002) showed that the adsorption results from the superposition of two mechanisms: on one hand, adsorption and saturation on homogeneous sites according to a Langmuir-type curve; on the other hand, adsorption on sites of heterogeneous energy according to a Freundlich-type isotherm. The model which best describes such a mechanism is that of Jossens (Redlich and Peterson 1959). Our calculated values are summarized in table 3.

Table 3. Maximum amounts adsorbed and the different constants calculated for both materials according to the studied models, with Qm: maximum adsorption capacity (mg·g⁻¹), Ce: concentration of adsorbate at equilibrium (mg·L⁻¹), KL: equilibrium constant characteristic of the adsorbent (L·mg⁻¹), KF: constant related to the adsorption capacity, KI and KJ: two constants characteristic of the material, and n: heterogeneity factor.

The theoretical curves corresponding to the three models are plotted and compared with the experimental results for cellulose (Fig. 6a) and modified cellulose (Fig. 6b). It appears that the Freundlich model is not suitable for modeling the adsorption of MB on either substrate. The Langmuir model fits the experimental results obtained for grafted cellulose correctly over the whole concentration range. The Jossens model does not give a clear improvement, since the heterogeneity factor n is close to 1. Regarding cellulose, the Jossens model seems the most representative of the adsorption mechanism, even if the heterogeneity factor remains close to 1. Besides, the Langmuir model leads to satisfactory results, and the slight discrepancy between experimental and theoretical values can be explained by the calculated value of Qm (0.140 mg·g⁻¹), which is 25% greater than the experimental value (0.112 mg·g⁻¹). Generally speaking, it seems that dye adsorption occurs through a monolayer on sites of the same energy. These results are in good agreement with those of the literature (Uddin et al. 2009).
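As an illustration of the isotherm fitting discussed above, the following sketch performs a nonlinear fit of the Langmuir form Qe = Qm·KL·Ce/(1 + KL·Ce); the data points are invented to resemble the grafted-cellulose isotherm and are not the values behind table 3.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qm, kl):
    """Langmuir isotherm: monolayer adsorption on uniform sites."""
    return qm * kl * ce / (1.0 + kl * ce)

# Synthetic equilibrium data shaped like the grafted-cellulose isotherm
# (plateau near 0.35 mg/g; concentrations in mg/L, values invented).
ce = np.array([0.01, 0.03, 0.05, 0.08, 0.12, 0.20, 0.40, 0.80])
qe = np.array([0.10, 0.21, 0.26, 0.30, 0.32, 0.34, 0.35, 0.35])

(qm, kl), _ = curve_fit(langmuir, ce, qe, p0=(0.3, 10.0))
print(f"Qm = {qm:.3f} mg/g, KL = {kl:.1f} L/mg")
```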
The thermodynamic parameters. The equilibrium constant for MB adsorption onto a substrate is given by the relationship

Kc = (C0 − Ce)/Ce

where C0 is the initial concentration (mg·L⁻¹) and Ce the equilibrium concentration (mg·L⁻¹). Kc is related to the Gibbs energy of reaction (ΔG°) by ΔG° = −RT·ln(Kc), and therefore

ln(Kc) = −ΔH°/(RT) + ΔS°/R

where ΔH° and ΔS° are respectively the standard enthalpy and entropy of adsorption, which can be assessed by plotting ln(Kc) versus 1/T. The results are reported in table 4. For the modified cellulose, the negative value of the Gibbs free energy ΔG° indicates that the adsorption process is spontaneous and favored at low temperature (Benturki et al. 2008). For this substrate, the standard enthalpy is negative, indicating that the process is exothermic. Consequently, increasing the temperature reduces the adsorption and lowers the maximum adsorbed amount Qm by decreasing the ionic interactions between the electron-rich pyridine ring and the cationic dye. While this temperature effect is unfavorable for adsorption, it can be advantageous for the adsorbent itself: the saturated substrate could be recycled by desorbing the dye at high temperature (by hot washing, for example). The ΔH° value, close to −40 kJ·mol⁻¹, shows that the interactions between adsorbate and adsorbent are strong and that the adsorption is not limited by the diffusional step. The negative sign of ΔS° is in good agreement with the adsorption mechanism, i.e., a passage from a random state (dye in solution) to a more ordered one (dye interacting with the substrate). For cellulose, the adsorption process is endothermic, which is consistent with an increase in the amount adsorbed with temperature. The value of the enthalpy shows that the interactions between the substrate and MB are weak. The value of the entropy change suggests that structural variations take place in the adsorbent and the adsorbate during adsorption.

CONCLUSION

By using infrared spectroscopy, TGA and elemental analysis, we confirmed that cellulose can be grafted with nicotinoyl chloride hydrochloride (NCHC) with a degree of substitution DS lower than 1 (0.7). This support is about three times more efficient in the removal of methylene blue from aqueous solution than raw cellulose (adsorption capacity = 0.35 mg against 0.11 mg per gram of solid). The adsorption process depends on the contact time, and equilibrium is reached in 80 minutes. A comprehensive study has shown that the kinetics is in good agreement with a pseudo-second-order model. The results show that the adsorbed amount depends on the initial concentration of dye and that the adsorption obeys a Langmuir-type law. Thermodynamic analysis showed that adsorption occurs spontaneously on this modified cellulose but is disfavored at elevated temperature, since the process is exothermic. Note that this result shows interesting prospects for recycling: desorption of MB from the saturated substrate is made possible by hot washing, as we observed in a preliminary experiment. Regarding this aspect, further experiments are scheduled to study the kinetics of MB release as a function of temperature. Besides, it would be interesting to investigate the influence of DS on the adsorption capacity and on the kinetics and thermodynamics of this phenomenon. In addition, the synthesized media will be tested for their ability to trap heavy metal cations such as cadmium (Cd) and lead (Pb).
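As a supplement to the van't Hoff analysis above, this sketch shows how ΔH° and ΔS° follow from a linear regression of ln(Kc) on 1/T; the Kc values are invented, since table 4 is not reproduced in this text, but are chosen to mimic an exothermic adsorption with ΔH° near the reported −40 kJ·mol⁻¹.

```python
import numpy as np

R = 8.314  # J mol^-1 K^-1

# Hypothetical equilibrium constants over the studied 25-60 C range;
# invented to mimic an exothermic adsorption (Kc decreasing with T).
T = np.array([298.0, 308.0, 318.0, 333.0])          # K
Kc = np.array([4.0, 2.4, 1.5, 0.8])

# van't Hoff: ln(Kc) = -dH/(R*T) + dS/R
slope, intercept = np.polyfit(1.0 / T, np.log(Kc), 1)
dH = -slope * R / 1000.0         # kJ/mol, ~-40 for these invented values
dS = intercept * R               # J mol^-1 K^-1
dG_298 = -R * 298.0 * np.log(Kc[0]) / 1000.0        # kJ/mol at 25 C
print(f"dH = {dH:.1f} kJ/mol, dS = {dS:.1f} J/(mol K), dG = {dG_298:.1f} kJ/mol")
```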
Figure 2. Reaction pathway proposed for the grafting of cellulose.
Figure 3. Variation of the amount of MB adsorbed versus time.
Figure 4. Determination of the rate constant K2 by applying the pseudo-second-order kinetic model.
Figure 5. Quantity of adsorbed MB (mg·g⁻¹) on unmodified and grafted cellulose versus dye concentration.
Figure 6. Fittings of the different models to the adsorption results obtained for (a) unmodified and (b) grafted cellulose.
Loss of Dna2 fidelity results in decreased Exo1-mediated resection at DNA double-strand breaks

A DNA double-strand break (DSB) is one of the most dangerous types of DNA damage; it is repaired largely by homologous recombination or nonhomologous end joining (NHEJ). The interplay of repair factors at the break directs which pathway is used, and a subset of these factors also function in more mutagenic alternative (alt) repair pathways. Resection is a key event in repair pathway choice, and extensive resection, a hallmark of homologous recombination, is mediated by two nucleases, Exo1 and Dna2. We observed differences in resection and repair outcomes in cells harboring nuclease-dead dna2-1 compared with dna2Δ pif1-m2 that could be attributed to the level of Exo1 recovered at DSBs. Cells harboring dna2-1 showed reduced Exo1 localization, increased NHEJ, and a greater resection defect compared with cells where DNA2 was deleted. Both the resection defect and the increased rate of NHEJ in dna2-1 mutants were reversed upon deletion of KU70 or ectopic expression of Exo1. By contrast, when DNA2 was deleted, Exo1 and Ku70 recovery levels did not change; however, Nej1 increased, as did the frequency of alt-end-joining/microhomology-mediated end-joining repair. Our findings demonstrate that decreased Exo1 at DSBs contributed to the resection defect in cells expressing inactive Dna2 and highlight the complexity of understanding how functionally redundant factors are regulated in vivo to promote genome stability.

Homologous recombination (HR) and nonhomologous end joining (NHEJ) are the canonical pathways of DNA double-strand break (DSB) repair. HR is an error-free pathway requiring extensive 5′ end resection, whereas NHEJ is an error-prone pathway whereby the ends are joined after minimal processing. DNA resection is the major deciding step between these two pathways (1). However, if resection initiates and HR is not possible, then a more mutagenic alternative (alt) repair pathway can be used as a last resort. Microhomology-mediated end joining (MMEJ) is an alt-end-joining (alt-EJ) pathway that occurs at a high frequency in the absence of yKu70/80 (Ku) or when broken ends are not compatible for direct ligation. MMEJ requires 5′ resection; however, in contrast to HR, the extent of resection in MMEJ is believed to be coordinated with the process of scanning for microhomology in regions flanking the DSB. The mechanism remains ill defined; however, the repair product of MMEJ contains a deletion corresponding in size to the fragment between the annealed microhomology sequences revealed during resection.
The first responders to a DSB are Ku and Mre11-Rad50-Xrs2 (MRX), and they are important for recruiting additional NHEJ and HR factors (2)(3)(4)(5)(6). The Ku heterodimer also protects the ends from nucleolytic degradation and aids in the localization of Lif1-Dnl4 and Nej1 (3). Dnl4 ligase completes EJ by ligating the DNA with the help of Lif1 and Nej1 (4,(7)(8)(9). The central role of the MRX complex is to tether the loose DNA ends, mainly through the structural features of Rad50 (10,11), and to initiate resection through the nuclease activity of Mre11 (12). Sae2 interacts with the MRX complex and activates Mre11 nuclease activity, which forces Ku dissociation. Ku disengagement at the DSB coincides with the initiation of 5′ to 3′ end resection by two long-range resection nucleases, Dna2 in complex with Sgs1 helicase, and Exo1 (12)(13)(14). Dna2 and Exo1 show functional redundancy, as Exo1 drives long-range resection in the absence of Dna2 and vice versa (14). The interplay between repair factors in the two canonical pathways regulates the initiation of resection, in part through antagonistic relationships between Ku and Exo1 and between Nej1 and Dna2, wherein Nej1 inhibits the interactions of Dna2 with Sgs1 and with Mre11 and Sae2 (5,6,15,16).

Here, we determined that the dominant negative effects of dna2-1 were caused by decreased localization of Exo1, a nuclease functionally redundant with Dna2 in DSB repair. In dna2-1 mutant cells, Ku-dependent NHEJ increased and Exo1-dependent 5′ resection decreased. By contrast, in dna2Δ pif1-m2 mutants, Ku70 and Exo1 recruitment to the break remained indistinguishable from WT, but EJ repair occurred mainly through MMEJ. These results demonstrate that Exo1 recovery is impacted by the physical presence of Dna2 and that the interplay between these two nucleases regulates key events that drive repair pathway choice, including the ratio of NHEJ to MMEJ.

Results

Nuclease-deficient dna2-1 shows abrogated resection at DSBs

Cells harboring nuclease-dead dna2-1 were more sensitive than dna2Δ pif1-m2 mutants to phleomycin, an agent that causes DNA DSBs, but less sensitive to hydroxyurea, an agent inducing replication stress (Fig. 1A; (26,27)). While sensitivities to various genotoxic stressors have been previously reported for dna2 mutants, there has been little work explaining why dna2-1 mutants show greater sensitivity to DSB-causing agents compared with cells harboring a deletion of DNA2 or of its binding partner, SGS1 (Figs. 1A and S1). This prompted our side-by-side investigation of dna2Δ pif1-m2 and dna2-1 in DSB repair. Our aims were to evaluate DNA resection, a key early step in HR, and then to determine the impact of the dna2 mutations on the functionality of the other DSB repair factors. Dna2 functions in long-range resection and can compensate for Mre11 to initiate resection (14). To this end, resection was determined at two locations, 0.15 and 4.8 kb from the HO-induced DSB, using a quantitative PCR (qPCR)-based approach that relies on RsaI, as previously described (16,28). Resection produces ssDNA, and if resection proceeds beyond the RsaI recognition sequence, then the site is not cleaved and can be amplified by PCR (Fig. 1B, loci in blue and purple). Resection over the time course (0-150 min) was similar at both distances from the break in pif1-m2 and WT, indicating that the loss of PIF1 activity did not impact DNA processing at the DSB (Fig. 1, C and D).
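To illustrate how an RsaI-based qPCR resection assay of this type is typically quantified, here is a minimal sketch. It uses one common published convention, ssDNA fraction = 2/(1 + 2^ΔCt) with ΔCt = Ct(digested) − Ct(mock), optionally normalized by HO cut efficiency; the exact formula and cut-efficiency correction used in this study may differ, so treat this as an assumption.

```python
def ssdna_percent(ct_digested, ct_mock, cut_efficiency=1.0):
    """Percent ssDNA at a locus from mock- vs RsaI-digested qPCR.

    One common convention from the resection-assay literature:
    ssDNA% = 100 * (2 / (1 + 2**dCt)) / cut_efficiency,
    with dCt = Ct_digested - Ct_mock. The formula used in this
    particular study may differ (an assumption here).
    """
    d_ct = ct_digested - ct_mock
    return 100.0 * (2.0 / (1.0 + 2.0 ** d_ct)) / cut_efficiency

# Example: amplification 3 cycles later after digestion, 90% HO cutting.
print(f"{ssdna_percent(27.0, 24.0, cut_efficiency=0.9):.1f}% ssDNA")
```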
Furthermore, when we performed chromatin immunoprecipitation (ChIP) at the HO-induced DSB, Dna2-HA levels in pif1-m2 mutants were indistinguishable from WT (Fig. 1E), reinforcing earlier work showing that disruption of PIF1 did not impact DSB repair (22). Upon deletion of DNA2, resection decreased approximately twofold, with a slightly greater defect at the distance 4.8 kb from the break (Fig. 1, C and D). A more pronounced defect was observed in dna2-1 mutants, as resection was abrogated at both distances (Fig. 1, C and D). Dna2 recovery in dna2-1 mutants was unaltered (Fig. 1E), suggesting that the physical association of this nuclease-dead mutant with the break had a dominant negative impact.

We also observed that dna2-1 mutant cells survived better than dna2Δ pif1-m2 and WT on 2% GAL (Fig. 1A). The genetic background of these cells includes hmlΔ hmrΔ, which prevents HR. Survival on galactose therefore correlates with mutagenic EJ repair, which prevents subsequent HO cutting, as opposed to survival on phleomycin, which creates multiple DSBs throughout the genome that can be repaired by HR.

To complement the survival assays, we performed an EJ ligation experiment where the DSB was induced with galactose for 2 h before cells were washed and released into glucose to prevent further recutting. At the indicated time points, genomic DNA was prepared, and qPCR was performed with primers spanning the HO recognition site, as previously described (Fig. 1B, locus in green; 9). The rate of EJ increased more in dna2-1 than in dna2Δ pif1-m2 mutants (Fig. 1F). Increased EJ might arise naturally because of decreased HR but could also arise from more NHEJ factors at the DSB in dna2-1 mutants.

NHEJ factors at DSBs in dna2 mutants

Prior to comparing the impact of the dna2 mutants on factors driving resection, we determined the localization of proteins essential for NHEJ. Ku70 recovery at the DSB increased in dna2-1 mutant cells but not in dna2Δ pif1-m2 mutants (Fig. 2A). By contrast, the recovery of Nej1 increased significantly in both mutants, with dna2Δ pif1-m2 showing the greater increase (Fig. 2B). These data highlight the antagonistic relationship between Dna2 and Nej1 at DSBs (5). The recovery of the other canonical NHEJ factors, Lif1 and Dnl4, in cells harboring either dna2 mutant was indistinguishable from WT (Fig. S2, A and B).

We wanted to determine whether preventing NHEJ would reverse the resection defect of the dna2 mutants and potentially the dominant negative effect of dna2-1 in HR repair. In addition to their essential functions in NHEJ, both Ku70 and Nej1 inhibit resection (Fig. S2E). In alignment with our earlier work, deletion of KU70, but not of the other core NHEJ factors, reversed the resection defect in both dna2 mutants (Figs. 2, C and D and S2, C and D; (9)). These data indicate that alleviation of the resection defects by ku70Δ was independent of PIF1 status, which differs between the two dna2 mutants. Correlating with the rescue in resection, deletion of KU70, but not of NEJ1, suppressed the phleomycin sensitivities of both dna2Δ pif1-m2 and dna2-1 mutants (Fig. 2E). Altogether, suppression of the dominant negative effect of dna2-1 was specific to the loss of Ku70 rather than to disruption of NHEJ by ku70Δ, as HR-mediated repair was restored in dna2-1 ku70Δ mutants but not in dna2-1 nej1Δ mutants.

To determine the type of EJ that can proceed in these mutants, we utilized a reporter system containing a URA3 marker flanked by two inverted HO recognition sites (Fig. S2F) (29).
If both sites are cleaved simultaneously, noncompatible ends are generated and alt-EJ-MMEJ is used. However, because cutting at both sites is not perfectly coordinated, each single cut can still be repaired by NHEJ, as previously described (29). In WT cells, the relative frequencies of NHEJ and alt-EJ-MMEJ, as determined by growth on -URA, were 37% and 63%, respectively (Fig. 2F). Notably, the relative frequencies of NHEJ and MMEJ differed between the two dna2 mutants. The frequency of NHEJ increased to 55% in dna2-1 mutants (Fig. 2F). By contrast, when DNA2 was deleted, alt-EJ-MMEJ was the preferred EJ pathway, which we found surprising given that Ku70 was similarly recovered at the break site in dna2Δ pif1-m2 and WT cells (Fig. 2, A and F). Consistent with previous work, upon KU70 deletion, EJ occurred through alt-EJ-MMEJ, and the increased NHEJ seen in dna2-1 mutants was reversed by ku70Δ (Fig. 2F).

Nuclease-deficient dna2-1 suppresses Exo1 recruitment to the DSB

To gain further insight into the events underlying the dna2-1 phenotype, we next determined the impact of dna2-1 and DNA2 deletion on the localization of other factors important for resection, namely the Exo1 nuclease and the nuclease-associated factors Sae2 and Sgs1. The recovery of Sgs1 and Sae2 was not altered in either mutant background (Fig. S3, A and B). By contrast, Exo1 was significantly reduced in dna2-1, to almost the level of the nontagged control, whereas its recovery in dna2Δ pif1-m2 was like WT (Fig. 3A). These data indicate that Exo1 localization was inhibited by the presence of nuclease-deficient Dna2 at the break rather than by the intrinsic loss of Dna2 nuclease activity.

We next compared resection and survival when the dna2 mutants were combined with deletion of EXO1. In line with previous observations, short-range resection (0.15 kb) decreased modestly in exo1Δ single mutant cells (Fig. 3B), but there was an approximately twofold decrease in long-range resection (4.8 kb) 80 to 150 min after DSB induction (Fig. 3C) (14,16). We could not determine resection when both nucleases were deleted because of synthetic lethality (SL) (14). However, resection at both distances from the break and phleomycin sensitivity in dna2-1 exo1Δ mutants were indistinguishable from dna2-1 (Fig. 3, B-D). One explanation for the marked decrease in resection in dna2-1 was that expression of this mutant directly blocked the association of Exo1 with DSBs. However, Exo1 was similarly recovered in WT cells, which are DNA2+ PIF1+, and in dna2Δ pif1-m2 mutant cells, suggesting that neither PIF1 status nor the presence of Dna2 per se directly affected Exo1 recovery. A more plausible model, based on previous work showing Exo1 to be negatively regulated by the presence of Ku (15), is that the pronounced resection defect in dna2-1 mutants resulted from decreased Exo1 because of increased Ku, in addition to the loss of Dna2 nuclease activity. Indeed, upon deletion of KU70, Exo1 recovery increased at the DSB in dna2-1 mutants (Fig. 3E). These data were also consistent with suppression of dna2-1 phleomycin sensitivity by KU70 deletion being Exo1 dependent (Figs. 2D and 3D). Resection remained low in dna2-1 exo1Δ ku70Δ triple mutants, as did growth on phleomycin and 2% GAL, because both main DSB repair pathways, HR and NHEJ, were disrupted (Fig. 3, D and F).
However, alt-EJ-MMEJ was still functional, which could provide some insight into how a small percentage of triple mutants survived on phleomycin (Figs. 3D and S3C). Finally, Exo1 recovery also increased when KU70 was deleted in dna2Δ pif1-m2 (Fig. 3E). These data help explain why the resection defect in dna2Δ pif1-m2 mutants was suppressed by ku70Δ but not by nej1Δ (Fig. 2C), as our earlier work showed the inhibitory effect of Nej1 to be unrelated to Exo1 activity (5, 6, 9).

Overexpression of Exo1 rescues the resection defect in dna2-1 cells

Our data thus far support a model whereby the dominant negative effect of dna2-1 on resection stemmed from increased Ku70 at the DSB, which in turn inhibited Exo1 localization. This was supported by genetic analysis in which deletion of KU70 in dna2-1 resulted in increased Exo1 recovery, increased resection, and decreased sensitivity to phleomycin. We next wanted to determine whether increasing the level of Exo1 could rescue the dominant negative effect of dna2-1 mutants. We utilized a 2μ URA3 plasmid encoding Exo1 (pEM-EXO1) that was previously engineered to investigate the inhibition of Exo1 by Ku (15). Of note, expression of Exo1 did not alter Dna2 recovery in WT or dna2-1 mutants (Fig. 4A). Resection at 0.15 kb in dna2-1 + pEM-EXO1 increased to the level seen in dna2Δ pif1-m2 mutants, although resection in both remained lower than in WT (Fig. 4B). At the farther distance, 4.8 kb from the DSB, Exo1 expression in both dna2 mutants resulted in a greater rescue: resection in dna2-1 + pEM-EXO1 was like WT + empty vector, and resection in both dna2Δ pif1-m2 + pEM-EXO1 and WT + pEM-EXO1 was similarly increased (Fig. 4C). Highlighting the link between resection and in vivo DSB repair, phleomycin sensitivity decreased in both dna2 mutants expressing Exo1, most notably in dna2-1 mutants after 3 days of growth (Fig. 4D). Resection and phleomycin sensitivity in pif1-m2 + pEM-EXO1 were indistinguishable from WT (Figs. 4D and S4, A and B).

Finally, we wanted to determine whether ectopic expression of Exo1 in dna2-1 mutants could restore the balance of NHEJ and alt-EJ-MMEJ repair. Again, we utilized the reporter system in which NHEJ and alt-EJ-MMEJ were distinguished by growth on -URA media, and cells were transformed with either pRS425-EXO1, for ectopic expression of Exo1 from a 2μ LEU2 plasmid, or empty vector (Fig. S2F). The increase in NHEJ and the shift in the relative frequency of EJ in dna2-1 mutants were reversed upon Exo1 expression (Figs. 2F and 4E). These findings are consistent with short-range resection and decreased Ku promoting EJ by alt-EJ-MMEJ (Fig. 4F). In WT cells, Exo1 expression also decreased the level of Ku recovered at the DSB; however, neither the frequency of alt-EJ-MMEJ relative to NHEJ nor short-range resection changed (Fig. 4, E and F).

Discussion

We set out to understand why dna2-1 nuclease-dead mutants were more sensitive to DSB-causing agents than dna2Δ mutants. Our results showed that Exo1 localization decreased in cells expressing nuclease-dead Dna2 and that overall resection in dna2-1 was reduced more than in cells where DNA2 or EXO1 was individually deleted. The negative effect of dna2-1 at the DSB was caused by Ku-dependent inhibition of Exo1 localization (Fig. 5, A and C). The dna2-1 dominant-negative phenotype was largely overcome by either deleting KU70 or expressing Exo1, as both rescued the resection defect and the phleomycin sensitivity. By contrast, neither Exo1 recovery nor Ku70 recovery changed in dna2Δ pif1-m2; however, Nej1 levels markedly increased (Fig. 5B).
Our work corroborates previous synthetic genetic array screening in which dna2-1 in combination with exo1Δ was viable (30) and suggests that the SL resulting from deletion of both DNA2 and EXO1 stems from something other than a combined loss of nuclease activity, unless dna2-1 has activity in vivo below the level of detection. One possibility is that the pif1-m2 mutation, which is not present in dna2-1, might contribute to the SL genetic interaction, as Pif1 and Exo1 were previously shown to coordinate checkpoint signaling at uncapped telomeres (22, 31).

The loss of Dna2 nuclease activity by a point mutation had a profound impact on the pathway used in DSB repair, and 5′ resection was central to the whole process. Ku and Nej1 are negative regulators of resection, and previous work by our laboratory and others showed there to be a division of labor for nuclease inhibition by these NHEJ factors (5, 6, 8, 9, 15, 16). Nej1 interacts with the binding partners of Dna2, including Mre11, Sae2, and Sgs1, to inhibit Dna2 activity. However, Nej1 does not inhibit Exo1 recruitment. By contrast, Ku binding at the DSB directly inhibits the accessibility of Exo1 nuclease to DNA ends, and its recovery at the break site, whereas Dna2 can initiate resection in the presence of Ku (24, 25). Thus, the antagonistic relationship between Nej1 and Dna2 is distinct, and independent, from the antagonistic relationship between Ku and Exo1 at DSBs. The data presented here highlight a layer of complexity not previously reported, in which Dna2 fidelity impacts the balance of Ku and Exo1 at the break, a relationship paramount for ensuring that DSBs are repaired through the least mutagenic pathway available.

The dna2 mutants impacted EJ differently in the reporter cell line where HR was precluded, a scenario relevant for G1 of the cell cycle. While NHEJ is error prone, with the formation of small insertions and deletions, a more harmful outcome arises when resection initiates and the more mutagenic alt-EJ-MMEJ proceeds. In dna2-1 mutants, the relative frequency of NHEJ increased and that of alt-EJ-MMEJ decreased, which was consistent with more Ku being recovered at the break. The presence of Ku impacted the type of EJ repair in dna2-1 more than the resection defect did. Both dna2-1 exo1Δ and dna2-1 exo1Δ ku70Δ mutants showed similar resection defects, but the increased frequency of NHEJ was Ku dependent (Fig. S3C). On the contrary, there was a marked increase in alt-EJ-MMEJ in dna2Δ pif1-m2 mutants, which might relate to resection proceeding, albeit at a reduced rate, in cells where DNA2 was deleted. However, resection also decreased to a similar extent when EXO1 was deleted, and repair then occurred predominantly through NHEJ (Figs. 3B and S3C). In all, these data might provide insight into the underlying cause of the SL resulting from deletion of EXO1 and DNA2, as HR and both EJ pathways would be disrupted.

The EJ observations also suggest that the presence of Ku was not the only regulatory factor determining the type of EJ repair. Rather, our data support a model wherein the frequency of NHEJ relative to alt-EJ-MMEJ was altered by the level of Ku in relation to other repair factors, namely Exo1 and Nej1. Exo1 promoted alt-EJ-MMEJ in the presence of Ku in dna2 mutants, and we observed this under two experimental conditions. First, alt-EJ-MMEJ in dna2-1 mutants increased upon Exo1 expression (Fig. 4E), and second, alt-EJ-MMEJ increased in dna2Δ pif1-m2 mutants, where the level of Nej1 increased but the levels of Exo1 and Ku70 were unaltered (Figs. 2, A and B and S3C).
These phenotypic differences between dna2Δ and dna2-1 indicate the possibility of increased Nej1-dependent alt-EJ-MMEJ repair in dna2Δ cells (32, 33). Further work is needed to elucidate whether Nej1 has a role in regulating EJ; however, in support of this model, we recently reported that alt-EJ-MMEJ increased in aging cells as Ku declined and Nej1 persisted at DSBs (34).

Taken together, our data point to the dynamic interplay between Dna2 and Exo1 in DSB repair pathway choice and broaden the understanding of nuclease localization versus nuclease activity in DNA processing at break sites. The characterization of the two dna2 mutants demonstrated that loss of Dna2 nuclease activity through different mutations resulted in different repair outcomes. The work has health relevance: although DNA2 deletion is embryonic lethal, point mutations are observed in diseases like Seckel syndrome and various kinds of cancer, with the P504→S mutation corresponding to yeast dna2-1 seen in human cancers (35, 36).

Media details

All the yeast strains used in this study are listed in Table S1, with new ones obtained by crosses. The strains were grown on various media as described for each experiment. For HO induction of a DSB, YPLG medium was used (1% yeast extract, 2% bacto peptone, 2% lactic acid, 3% glycerol, and 0.05% glucose). For the continuous DSB assay, YPA plates were used (1% yeast extract, 2% bacto peptone, and 0.0025% adenine) supplemented with either 2% glucose or 2% galactose. For the mating type assays, YPAD plates were used (1% yeast extract, 2% bacto peptone, 0.0025% adenine, and 2% dextrose).

ChIP

ChIP assays were performed as previously described (5). Cells were cultured overnight in YPLG at 25 °C. Cells were then diluted to equal levels (5 × 10⁶ cells/ml) and cultured for one doubling (3-4 h) at 30 °C. 2% GAL was added to the YPLG, and cells were harvested and crosslinked at various time points using 3.7% formaldehyde solution. Cut efficiencies for all strains are shown in Table S2. Following crosslinking, the cells were washed with ice-cold PBS, and the pellet was stored at −80 °C. The pellet was resuspended in lysis buffer (50 mM Hepes [pH 7.5], 1 mM EDTA, 80 mM NaCl, 1% Triton, 1 mM PMSF, and protease inhibitor cocktail), and cells were lysed using zirconia beads and a bead beater. Chromatin fractionation was performed to enrich the chromatin-bound nuclear fraction by spinning the cell lysate at 13,200 rpm for 15 min and discarding the supernatant. The pellet was resuspended in lysis buffer and sonicated to yield DNA fragments (~500 bp in length). The sonicated lysate was then incubated with αHA-, αFLAG-, or αMyc-antibody-conjugated beads, or with unconjugated beads (control), for 2 h at 4 °C. The beads were washed using wash buffer (100 mM Tris [pH 8], 250 mM LiCl, 150 mM [αHA and αFLAG] or 500 mM [αMyc] NaCl, 0.5% NP-40, 1 mM EDTA, 1 mM PMSF, and protease inhibitor cocktail), and the protein-DNA complex was eluted by reverse crosslinking using 1% SDS in TE buffer, followed by proteinase K treatment and DNA isolation via phenol-chloroform-isoamyl alcohol extraction. qPCR was performed using an Applied Biosystems QuantStudio 6 Pro machine. PowerUp SYBR Green Master Mix was used to quantify enrichment at MAT1 (0.15 kb from the DSB), and PRE1 was used as an internal control (Table S2). HO cutting was measured in the strains used to perform ChIP (Table S3).
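The ChIP readout above is a qPCR enrichment at MAT1 normalized to the PRE1 internal control. As a minimal sketch of one common way to express such data, the Python snippet below computes a ΔΔCt-style fold enrichment of the IP over a reference sample; whether the reference in the cited protocol (5) is an input sample or the unconjugated-bead control is not restated here, so the scheme and all Ct values are illustrative assumptions.

```python
# Minimal sketch of ChIP-qPCR enrichment, assuming a delta-delta-Ct scheme:
# enrichment = 2^-[(Ct_IP_MAT1 - Ct_IP_PRE1) - (Ct_ref_MAT1 - Ct_ref_PRE1)].
# All Ct values are hypothetical; the published protocol (ref 5) may differ.

def fold_enrichment(ct_ip_target: float, ct_ip_control: float,
                    ct_ref_target: float, ct_ref_control: float) -> float:
    """Fold enrichment of the target locus (MAT1) over the internal control (PRE1)."""
    d_ip = ct_ip_target - ct_ip_control      # IP sample: target vs control locus
    d_ref = ct_ref_target - ct_ref_control   # reference sample: target vs control locus
    return 2.0 ** -(d_ip - d_ref)

if __name__ == "__main__":
    # Hypothetical Ct values for one time point after HO induction
    print(f"MAT1 enrichment: {fold_enrichment(24.1, 26.0, 27.5, 26.9):.2f}-fold")
```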
Continuous DSB assay and identification of mutations in survivors

Cells were grown overnight in YPLG media at 25 °C to saturation. Cells were collected by centrifugation at 2500 rpm for 3 min, and pellets were washed once in ddH2O and resuspended in ddH2O. Cells were counted and spread on YPA plates supplemented with either 2% GLU or 2% GAL. About 1 × 10³ total cells were plated on glucose, and 1 × 10⁵ total cells were plated on galactose. The cells were incubated for 3 to 4 days at room temperature, and colonies were then counted on each plate. Survival was determined by normalizing the number of surviving colonies on the GAL plates to the number of colonies on the GLU plates. About 100 survivors from each strain were scored in the mating type assay as previously described (16), and at least 100 survivors were used to make a master plate, which was later replica-plated on -URA plates. The number of survivors on -URA plates was counted to determine the ratio of NHEJ to alt-EJ repair frequencies.

qPCR-based ligation assay

As described previously (9), cells from each strain were grown overnight in 15 ml YPLG to reach an exponentially growing culture of 1 × 10⁷ cells/ml. Next, 2.5 ml of the cells were pelleted as the "no break" sample, and 2% GAL was added to the remaining cells to induce a DSB. About 2.5 ml of cells were pelleted after a 3 h incubation as the time point 0 sample. After that, the GAL was washed off, the cells were released into YPAD, and the respective time point samples were collected. Genomic DNA was purified using a standard genomic preparation method by isopropanol precipitation and ethanol washing, and the DNA was resuspended in 100 μl ddH2O. qPCR was performed using an Applied Biosystems QuantStudio 6 Flex machine. PowerUp SYBR Green Master Mix was used to quantify ligation at the HO6 (at DSB) locus. The PRE1 locus was used as an internal control for normalization. Signals from the HO6/PRE1 time points were normalized to the "no break" signals, and % Ligation was determined. The primer sequences are listed in Table S2.

qPCR-based resection assay

Cells from each strain were grown overnight in 15 ml YPLG to reach an exponentially growing culture of 1 × 10⁷ cells/ml. Next, 2.5 ml of the cells were pelleted as the time point 0 sample, and 2% GAL was added to the remaining cells to induce a DSB. Following that, the respective time point samples were collected. Genomic DNA was purified using a standard genomic preparation method by isopropanol precipitation and ethanol washing, and the DNA was resuspended in 100 μl ddH2O. Genomic DNA was treated with 0.005 μg/μl RNase A for 45 min at 37 °C. About 2 μl of DNA was added to tubes containing CutSmart buffer with or without the RsaI restriction enzyme and incubated at 37 °C for 2 h. qPCR was performed using an Applied Biosystems QuantStudio 6 Flex machine. PowerUp SYBR Green Master Mix was used to quantify resection at the RsaI cut sites 0.15 kb (in the MAT1 locus) and 4.8 kb from the DSB. PRE1 was used as a negative control, and the primer sequences are listed in Table S2. RsaI-cut DNA was normalized to uncut DNA as previously described to quantify the %ssDNA (28). HO cutting was measured in the strains used for resection (Table S4).
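For readers reconstructing these readouts, the sketch below implements the three normalizations described above. The survival and % Ligation formulas follow directly from the text; the %ssDNA conversion uses a correction commonly applied to restriction-protection resection assays, which accounts for only one strand being resected, and is presumably the formula of ref. (28) — it should be treated as an assumption rather than the authors' exact method.

```python
import math

def survival_percent(gal_colonies: int, gal_plated: float,
                     glu_colonies: int, glu_plated: float) -> float:
    """Survival on galactose normalized to plating efficiency on glucose."""
    return 100.0 * (gal_colonies / gal_plated) / (glu_colonies / glu_plated)

def percent_ligation(ho6_pre1_t: float, ho6_pre1_nobreak: float) -> float:
    """HO6/PRE1 signal at time t normalized to the 'no break' sample."""
    return 100.0 * ho6_pre1_t / ho6_pre1_nobreak

def percent_ssdna(ct_cut: float, ct_uncut: float) -> float:
    """%ssDNA from the RsaI protection assay (assumed correction).

    delta_ct = Ct(RsaI-digested) - Ct(mock); only ssDNA escapes digestion,
    and the factor-of-two correction reflects resection of one strand only.
    """
    delta_ct = ct_cut - ct_uncut
    return 100.0 / ((1.0 + 2.0 ** delta_ct) / 2.0)

if __name__ == "__main__":
    # All numbers hypothetical, for illustration only.
    print(survival_percent(gal_colonies=250, gal_plated=1e5,
                           glu_colonies=500, glu_plated=1e3))   # 0.5% survival
    print(percent_ligation(0.42, 0.60))                          # 70% ligation
    print(f"{percent_ssdna(ct_cut=29.0, ct_uncut=25.0):.1f}% ssDNA")
```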
Figure 5. Model depicting how the dna2 mutants impact DSB repair, where the presence of nuclease-dead Dna2 at the break inhibits Exo1 nuclease through increased Ku. A, the schematic shows the antagonistic relationships between Ku70/80 (Ku) and Exo1, and between Nej1 and Dna2. Ku binds DNA ends at the break, inhibiting the access of Exo1 nuclease to perform 5′ resection. Nej1 is a competitive inhibitor of Dna2, blocking interactions between Dna2 and its binding partners at DSBs (1, 5, 6, 8, 9, 15, 16). B, upon deletion of DNA2, Nej1 increased, but Ku and Exo1 levels did not change. In dna2Δ pif1-m2 mutants, resection decreased approximately twofold, as Exo1 was the only functional long-range nuclease at the DSB, and the frequency of alt-EJ-MMEJ markedly increased. C, in dna2-1 mutants, nuclease-dead Dna2 was recruited to the break site. Under this condition, there was a minor increase in Nej1; however, Ku increased, which in turn resulted in Exo1 inhibition. Therefore, in this mutant background, the functionality of both nucleases was compromised, resection was abrogated, and the frequency of NHEJ increased. The dna2-1 resection defect and phleomycin sensitivity were largely reversible either by ectopic expression of Exo1 or by deletion of Ku70. However, only Exo1 expression restored the balance of NHEJ and alt-EJ-MMEJ frequencies to the levels observed in WT cells. alt-EJ, alternative end joining; DSB, double-strand break; MMEJ, microhomology-mediated end joining; NHEJ, nonhomologous end joining.
PERFORMANCE OF CYLINDRICAL LEAF WETNESS DURATION SENSORS IN A TROPICAL CLIMATE CONDITION

Leaf wetness duration (LWD) measurements are required for disease warning in several agricultural systems, since LWD is an important variable in plant disease epidemiology. The cylindrical sensor is an inexpensive and simple electronic LWD sensor initially designed to measure this variable for onions; however, some studies show that it may be suitable for standard measurements in weather stations and for different crops. Therefore, the objective of this study was to assess its performance under tropical climate conditions, in Brazil, taking as standard the measurements obtained by flat plate sensors, which have shown very good performance when compared with visual observations. Before the field assessments, all LWD sensors used in our study (flat plates and cylinders) were painted with white latex and submitted to a heat treatment. Laboratory tests were performed to determine the resistance threshold at which a sensor is considered wet and the time response of the sensors to wetness. In the field, all cylindrical sensors were initially deployed horizontally 30 cm above a turfgrass surface in order to assess the variability among them with respect to LWD measurements. The variability among the horizontal cylindrical sensors was reduced by using a specific resistance threshold for each sensor. The mean coefficient of variation (CV) of the LWD data measured by the cylindrical sensors was 9.7%. After that, the cylindrical sensors were deployed at five different angles: 0°, 15°, 30°, 45°, and 60°. Measurements made at these angles were compared with the standard measurement, obtained by flat plate sensors at the same height and installed at 45°. The deployment angle had no systematic effect on LWD measurements under the local tropical conditions, since the correlations between flat plate and elevated cylinder measurements were very high (R > 0.91). This differs from results obtained under temperate climatic conditions, where LWD measured by cylinders was two hours longer than by flat plate sensors.

INTRODUCTION

Leaf wetness duration (LWD) is defined as the period during which rain, dew or fog droplets are retained on aerial plant surfaces at a microscopic scale (Wal, 1978). LWD and temperature are the most important environmental variables for the control of the majority of plant diseases, since they affect the infection and sporulation processes of many fungal pathogens (Vale et al., 2004). For this reason, several plant disease-warning systems are based on LWD and temperature measurements (Berton & Melzer, 1989; Carisse & Kushalappa, 1990; Huber & Gillespie, 1992). However, LWD is more difficult to measure or estimate than air temperature, since wetness varies considerably with the weather conditions and also with the type of crop, the position, angle, and geometry of the leaves, and the specific location on the individual leaf (Sutton et al., 1984).

Several instruments have been developed to measure wetness duration (Gillespie & Kidd, 1978; Smith & Gilpatrick, 1980; Weiss & Lukens, 1981; Weiss & Hagen, 1983; Gillespie & Duan, 1987; Giesler et al., 1996). Gillespie & Duan (1987) developed a low-cost cylindrical sensor of easy construction. Its sensing surface is exposed over the full 360° around the cylinder, which better represents some plant organs, such as stems. However, some studies have shown that cylindrical sensors should be used with caution, since there is no protocol for deploying them for standard measurements (Gillespie & Duan, 1987; Sentelhas et al., 2006). Gillespie & Duan (1987) verified that LWD measured by cylindrical sensors vertically deployed in an onion crop was two to three hours shorter, on average, than measurements obtained by flat plate sensors. On the other hand, Sentelhas et al. (2006) used cylindrical sensors deployed horizontally to measure LWD over turfgrass and other crops in a temperate climate, and observed that the mean LWD was about two hours longer than that measured by flat plate sensors, which was adopted as the standard measurement.

Considering that there is no protocol for installing LWD cylindrical sensors for standard measurements, the objective of this study was to assess the performance of cylindrical LWD sensors under tropical climate conditions in Brazil, using flat plate sensors deployed at a standard position as the reference.

MATERIAL AND METHODS

Laboratory assessment

LWD was measured using cylindrical sensors (Weather Innovations Inc., ON, Canada) and flat plate sensors (Model 237, Campbell Sci., Logan, UT, USA) (Figure 1). The cylindrical sensor is made of an acrylic tube, 20 cm in length, outer Ø = 1.3 cm, on which two nickel wires are rolled to obtain two parallel spirals. The distance between spirals is 1 mm. The flat plate sensor consists of a circuit board with interlacing gold-plated copper fingers, 7.6 cm in length and 6.3 cm in width. For both sensors, condensation on their surfaces reduces the resistance between the wires or fingers. In order to measure this resistance, a data logger was used to provide an alternating current input (~5 V) and to record the output signal of the sensor. The use of a low alternating current in the sensor minimizes self-heating and electrolytic deposition on the wires or fingers, as suggested by Gillespie & Kidd (1978).

Both cylindrical and flat plate LWD sensors were painted with white latex in order to increase their sensitivity to microscopic wetness droplets as well as to simulate leaf optical properties, following the recommendation of Gillespie & Kidd (1978) and Sentelhas et al.
(2004a). The sensors were submitted to an oven heat treatment at 65 °C for 12 h in order to remove the hygroscopic components of the paint, following the procedure proposed by Gillespie & Duan (1987). After this, laboratory tests were performed in order to establish a threshold of resistance, expressed by the ratio between the measured voltage and the excitation voltage provided by the data logger (Vs/Vx), at which the sensors were considered initially wet. In addition, the time response of the sensors was assessed to ensure that all sensors had a similar response to water deposition on their surfaces. The time response was defined as the time for a dry sensor to achieve a Vs/Vx value equal to 3 × 10⁻⁴ after receiving a water droplet (Ø ≈ 1 mm), which corresponds to a change of 1% in the signal of the dry cylindrical sensor with the highest value of Vs/Vx. For the time response test, the time spent for each sensor to achieve its wetness resistance threshold (generally about 9,000 kΩ) was recorded using a chronometer. This procedure was replicated four times for each sensor. The average and standard deviation of the time response were calculated for all sensors.
Field assessment

The measurements of LWD and other weather variables were performed during 93 days of the dry season (July to October 2005) in Piracicaba, São Paulo State, Brazil (22°43' S, 47°30' W, 546 m). The LWD sensors were installed facing south over mowed turfgrass (5 cm tall) at a height of 30 cm above the soil surface, following the recommendations of Sentelhas et al. (2004b). These sensors were attached to adjustable-angle clamps, allowing the angle between the sensor and the horizontal position to be changed. During an initial period of 40 days, all cylindrical sensors were kept in the horizontal position in order to verify the variability among the LWD measurements. After this, the following deployment angles were evaluated: 0°, 15°, 30°, 45°, and 60°, with two replicates for each angle (Figure 2). The height of the sensors was adjusted in order to keep their middle point 30 cm above the turfgrass surface.

Two flat plate sensors were mounted on PVC tube sections and deployed at 45° to horizontal, facing south, at the 30-cm height over the turfgrass. Sentelhas et al. (2004b) observed that LWD measurements obtained by flat plate sensors deployed in this position showed good agreement with visual observations of wetness over turfgrass, with errors smaller than 30 min. For this reason, in this study, flat plate LWD measurements were considered the standard against which the cylindrical sensors' performance was assessed.

In addition to the LWD measurements, air temperature (T) and relative humidity (RH) at 30 cm over the turfgrass, as well as rainfall, were also measured. T and RH were obtained by an aspirated copper-constantan thermocouple psychrometer, and rainfall was measured by a tipping bucket rain gauge (TE525WS-L, Texas Electronics, TX, USA). The sensors were connected to data loggers (CR10 and CR23X, Campbell Sci., Logan, UT, USA) programmed to take readings every five seconds. Averages of air temperature, relative humidity, and the signals provided by each LWD sensor; the total precipitation; and a histogram with the proportion of time in which each LWD sensor was wet were recorded at 15-min intervals. LWD was totaled for 24-hour periods, starting at 12h15 of day "n" and finishing at 12h00 of day "n + 1". Using the mean values of the signal recorded for each sensor during dry periods, individual thresholds were established for each cylindrical LWD sensor under field conditions.
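To make the totaling step concrete, the sketch below shows one plausible way to turn 15-min records and per-sensor thresholds into daily LWD totals; the record layout, the wet/dry test, and the handling of the 12h15 day boundary are illustrative assumptions, not the authors' data logger program.

```python
from datetime import datetime, timedelta

# Hypothetical record: (timestamp of a 15-min interval, mean Vs/Vx signal)
def daily_lwd_hours(records, threshold, day_start_hour=12, day_start_min=15):
    """Total LWD (hours) per 24-h period starting at 12h15, assuming an
    interval counts as wet when its mean signal meets the sensor's
    field-derived threshold."""
    totals = {}
    for ts, signal in records:
        # Shift timestamps so each "day" runs 12h15 -> 12h00 of the next day
        shifted = ts - timedelta(hours=day_start_hour, minutes=day_start_min)
        day = shifted.date()
        if signal >= threshold:
            totals[day] = totals.get(day, 0.0) + 0.25  # 15 min = 0.25 h
    return totals

if __name__ == "__main__":
    t0 = datetime(2005, 7, 10, 12, 15)
    recs = [(t0 + timedelta(minutes=15 * i), 5e-4 if 20 <= i <= 60 else 1e-4)
            for i in range(96)]
    print(daily_lwd_hours(recs, threshold=3e-4))  # {date: 10.25 h of wetness}
```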
Data analysis

The variability among the LWD measurements obtained by the cylindrical sensors deployed horizontally was assessed using the standard deviation (SD), the coefficient of variation (CV), the mean absolute difference (MAD), which indicates the absolute magnitude of the mean difference, and the mean difference (MD), which describes the direction of the bias, as follows:

\( \mathrm{MD} = \frac{1}{n}\sum_{i=1}^{n}\left(xc_i - xf_i\right) \)  (1)

\( \mathrm{MAD} = \frac{1}{n}\sum_{i=1}^{n}\left|xc_i - xf_i\right| \)  (2)

where xcᵢ are the LWDs measured by the cylindrical sensors, xfᵢ the LWDs measured by the flat plate sensors, and n the total number of measurements. LWD data provided by cylindrical sensors deployed at different angles and the measurements of the flat plate sensors at 45° were compared by regression analysis. The precision of the measurements obtained by cylindrical sensors in relation to those obtained by flat plate sensors was determined by the coefficient of determination (R²), which expresses data dispersion in relation to the simple linear regression equation, while the accuracy was determined by the agreement index (D) (Willmott et al., 1985), which expresses data dispersion in relation to the 1:1 line:

\( D = 1 - \frac{\sum_{i=1}^{n}\left(xc_i - xf_i\right)^2}{\sum_{i=1}^{n}\left(\left|xc_i - \overline{xf}\right| + \left|xf_i - \overline{xf}\right|\right)^2} \)  (3)

where xcᵢ is the LWD measured by the cylindrical sensors, xfᵢ the LWD measured by the flat plate sensors (reference), and \( \overline{xf} \) the average LWD obtained by the flat plate sensors. D ranges from zero (no agreement or no accuracy) to one (perfect agreement or very high accuracy).

The wetness onset (time of wetness beginning) and dry-off (time of wetness ending) measured by the flat plate sensors and by the cylindrical sensors deployed at different angles were compared using the mean difference (MD) and its standard error (SE).

RESULTS AND DISCUSSION

Variability among LWD sensors

The thresholds of the resistance ratio (Vs/Vx) obtained during the laboratory tests were 1.2 × 10⁻⁴ and 3 × 10⁻⁴ for the flat plate and cylindrical sensors, respectively. Differences among individual sensors were not noticed in the laboratory tests. All LWD sensors showed a short time response to water deposition on their surface under laboratory conditions, always below 3 min (Table 1). The cylindrical sensors showed an average time response of 73 s; the average time response for the flat plate sensors was 12 s. LWD measurements obtained by cylindrical sensors over turfgrass showed higher variability than under laboratory conditions. Using different thresholds obtained from the field data, it was possible to improve the cylindrical sensor measurements (Table 2). Compared to the flat plate sensors, MD values for LWD ranged from −70.3 to 111.2 min when the same threshold was used for all sensors, and from −38.5 to 54.1 min when specific thresholds for each sensor were adopted. The MAD for LWD ranged from 41.1 to 112.4 min for sensors with the same threshold, and from 26.1 to 59.2 min when specific thresholds were used.
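As a numerical companion to the statistics defined in the Data analysis subsection above, the sketch below computes MD, MAD, R², and Willmott's D for paired daily LWD series from a cylinder and a flat plate sensor; the input data are made up for illustration.

```python
import numpy as np

def comparison_stats(xc, xf):
    """MD, MAD, R^2, and Willmott's agreement index D for paired LWD series.

    xc: LWD from the cylindrical sensor; xf: LWD from the flat plate (reference).
    """
    xc, xf = np.asarray(xc, float), np.asarray(xf, float)
    md = np.mean(xc - xf)                # Eq. (1): direction of the bias
    mad = np.mean(np.abs(xc - xf))       # Eq. (2): magnitude of the difference
    r2 = np.corrcoef(xc, xf)[0, 1] ** 2  # precision vs. the regression line
    d = 1 - np.sum((xc - xf) ** 2) / np.sum(
        (np.abs(xc - xf.mean()) + np.abs(xf - xf.mean())) ** 2)  # Eq. (3)
    return md, mad, r2, d

if __name__ == "__main__":
    # Hypothetical daily LWD (hours) for six days
    xf = [8.0, 10.5, 6.0, 12.0, 9.0, 7.5]
    xc = [8.5, 10.0, 6.5, 12.5, 9.5, 7.0]
    print(comparison_stats(xc, xf))
```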
The increase in the variability among the cylindrical sensors when moving from the laboratory to the field can be explained by the fact that under field conditions only a small amount of water initially condenses on the sensor. When one wetness threshold is used for all sensors at wetness onset, in the absence of rain, this results in an increase in variability among the sensors. In the laboratory, on the other hand, a larger amount of water was deposited on a single point of the sensor, and consequently the sensors presented a more uniform response. The sensor variability can be related to a lack of uniformity in construction and paint, which may lead to differences in signal value and time response among the sensors. The use of specific thresholds for each sensor is a way to reduce the variability among sensor readings. Although this has some practical implications, for example an increase in the size of the data logger program, which can lead to errors, the use of different thresholds is advantageous, since it improves the precision and accuracy of the measurements.

The CV of LWD measured by the cylindrical sensors ranged from 3.2 to 33.7%, with an average of 9.7%, during the period in which all sensors were deployed horizontally (Figure 3). The CV changed from day to day, with 35% of the days presenting a CV above the average. Similar results were presented by Sentelhas et al. (2004a), who found CV values ranging from 0 to 31.2% and an average CV of 9.2% for flat plate sensors painted with white latex paint, deployed at 30° to horizontal and 30-cm height in Piracicaba, SP, Brazil. They also observed a decrease in CV on days when the LWD was longer. However, the CV variability does not necessarily mean that an increase in sensor variability occurred, as the CV has an inverse relationship with the average of the observations. On some days when the average LWD was short, the CV tended to be higher even when the standard deviation was low. The variability among sensors may also be expressed by the mean absolute and mean differences between measurements provided by the cylinders (Figure 4). The mean absolute difference was 29.5 min for onset and 13 min for dry-off. The mean difference ranged from −25.7 to 26.2 min for wetness onset and from −10.4 to 21.9 min for wetness dry-off.

Low rates of RH variation before wetness onset led to an increase in the variability among sensors (Figure 5). Although other variables are involved in dew deposition on the sensor surface, and the RH variation rate measured in the free air is not exactly the same as for the air layer close to the sensor, these results show that LWD sensor performance is related to the weather variables controlling dew deposition, as well as to variations in vegetation and soil moisture conditions. Weather conditions leading to rapid deposition of a larger amount of water on the sensors tend to reduce the variability among them. However, the same relationship was not observed between dry-off and the relative humidity variation rate, indicating that other variables, such as solar radiation and wind speed, could have more influence on wetness dry-off than RH.
Effect of the deployment angle on LWD measurements by cylindrical sensors

The difference between the mean LWD obtained by the cylindrical and flat plate sensors was smaller than 1 h, with no difference between the averages (Table 3). The variability of the measurements among the cylindrical sensors deployed at different angles was less than the mean CV (9.7%) shown by the cylindrical sensors deployed horizontally. The comparison between daily LWD measured by the flat plate sensors at 45° and the cylindrical sensors deployed at different angles to horizontal, during the 53 days of the dry season, is shown in Figure 6. The regression analysis coefficients were significant by a t-test (p < 0.05), showing that the intercepts were not different from zero and the slopes were not different from
The deployment angle had no systematic effect on the cylindrical sensor onset and dry-off (Table 4).The cylinders, except for those positioned at 60°, measured onset later than the flat plate sensors at 45º.The onset differences between the elevated and horizontally positioned cylinders were within the mean standard error, except for the cylindrical sensors at 15º.All cylindrical sensors dried later than flat plates; however the dry-off difference between the elevated and horizontal cylinders was within the mean standard error.In this study, observations were not taken to determine if the deployment angle affected the deposition of large droplets on the bottom of the cylindrical sensors.For the conditions in which this study was carried out, the deployment angle did not have a systematic effect on LWD measurements of cylindrical sensors.The wetness onset and dry-off were detected later by the cylindrical sensors than by the flat plates.Sentelhas et al. (2006) also observed that cylindrical sensors positioned horizontally indicated the dry-off later than flat plate sensors installed over turfgrass at 45º.This cylindrical sensor later dry-off may indicate that the cylindrical sensor cools down and warms up slower than the flat plate sensor, due to its higher heat capacity and different radiation geometry, resulting in a delay of the wetness onset and dry-off.However, the mean differences found for wetness onset and dry-off between cylindrical and flat plate sensors did not result in large errors in LWD measurements, as onset and dry-off differences had similar magnitude, so the onset delay was compensated by the late dry-off. For some kinds of sensors, the deployment angle has a strong effect on LWD measurements.Lau et al. (2000) observed that non-painted flat plate sensors deployed at 30° and 45° responded later to the onset than sensors deployed at horizontal.Sentelhas et al. (2004b) reported longer mean LWD for painted flat plate sensors deployed at 0° and 15° than for sensors installed at 30° and 45°.The cylindrical sensor seems to be less sensitive to the deployment angle than the flat plate sensors.It is probably because the cylindrical sensor has its sensitive surface exposed to 360 degrees; therefore angle variation has a smaller effect on the water accumulation on the sensor surface, especially when the amount of water condensed on the sensor surface is not enough to form large droplets along its bottom.Moreover, the variation of the sensors angle may have less influence on the energy balance of cylindrical sensors than flat plate sensors. 
CONCLUSIONS

The variability among the cylindrical sensors was reduced by using resistance thresholds determined in the field to totalize the time during which the sensors were wet. Cylindrical sensors can be used to monitor LWD under tropical dry-season climate conditions, as their measurements presented a high correlation with the reference measurements provided by flat plate sensors deployed at the standard position (30-cm height and deployment angle of 45° to horizontal), with the advantage that this sensor can be easily constructed. Variation in the deployment angle of the cylindrical sensors did not have a systematic effect on the LWD measurements under the local tropical conditions. However, the performance of cylindrical sensors depends on weather conditions, so it may not be desirable to position the sensor horizontally, since previous studies have shown that this position can lead to LWD overestimation. Based on that, it is recommended that, for standard LWD measurements, the cylindrical sensors be deployed at an angle between 15° and 30° in relation to the horizontal plane.

Figure 2 - Cylindrical sensors deployed at different angles to horizontal and flat plate sensors installed at 45°. All LWD sensors were at 30-cm height, facing south, in Piracicaba, SP, Brazil.

Figure 3 - Daily mean leaf wetness duration (Mean LWD - closed bars) and coefficient of variation (CV - open bars), and average (line) coefficient of variation of leaf wetness duration measurements obtained by cylindrical sensors deployed horizontally over turfgrass in Piracicaba, SP, Brazil. The arrows indicate the rainy days.

Figure 4 - Mean absolute difference (MAD) and mean difference (MD) for wetness onset and dry-off, obtained by cylindrical sensors, using the sensor average as reference, when they were horizontally positioned. The solid line represents the mean absolute difference during the period for all sensors, and the symbols on the x-axis represent the sensors used at the different angles of deployment (0°, 15°, 30°, 45°, and 60°) and their replicates (_1 and _2).

Figure 5 - Relationship between the rate of relative humidity (RH) variation in the 30 minutes before the onset and the standard deviation (SD) of the onset time indicated by the average of the cylindrical sensors, in Piracicaba, SP, Brazil.

Table 1 - Mean and standard deviation (SD) of time response for two flat plate sensors (FP) and ten cylindrical sensors (CYL) at different angles of deployment (0°, 15°, 30°, 45°, and 60°) with two replicates (_1 and _2), used to measure the leaf wetness duration.

Table 2 - Mean difference (MD) and mean absolute difference (MAD) between leaf wetness duration (LWD) measured by the flat plate sensor and by cylindrical (CYL) sensors at different angles of deployment (0°, 15°, 30°, 45°, and 60°) with two replicates (_1 and _2), using a single threshold for all cylinders and a specific threshold for each one, in Piracicaba, SP, Brazil.

Table 4 - Differences in wetness onset and dry-off between measurements made by cylindrical and flat plate sensors (CYL - FP) and standard error of the mean (SE) for the periods in which the cylindrical sensors were deployed horizontally and at different angles of deployment (0°, 15°, 30°, 45°, and 60°), in Piracicaba, SP, Brazil.
honeybee, Apis mellifera

The European honeybee, Apis mellifera L. (Hymenoptera: Apidae), has a full set of machinery for functional CpG methylation of its genome. A recent study demonstrated that DNA methylation in the honeybee is involved in caste differentiation. In this study, the expression and methylation of the hexamerin 110 gene (Hex110), which encodes a storage protein, were analyzed. High levels of the Hex110 transcript were expressed in both worker and queen larvae. Low levels of this transcript were also detected in adult fat bodies, and the expression level was higher in the queen than in the worker. Bisulfite sequencing revealed that the Hex110 gene is methylated overall at a low level, with a limited number of CpG sites methylated at relatively high levels. These highly methylated sites were located exclusively in the exon regions. The average methylation rate of the Hex110 gene was higher in the adult stage than in the larval stage. Furthermore, several CpG sites were differentially methylated between worker and queen larvae. These observations suggest that the methylation of the Hex110 gene is regulated in a developmental stage- and caste-dependent manner.

Introduction

A honeybee colony consists of a queen, many non-reproductive females (workers), and male bees (drones). The mechanism of sex determination in honeybees is a haplodiploid system in which males develop from unfertilized eggs and females develop from fertilized eggs. Differentiation of a female larva into a particular caste is not determined genetically, but by environmental factors. A larva hatched in a special cell called a queen cell is fed only royal jelly and differentiates into a queen. Larvae hatched in ordinary bee cells are fed worker jelly containing pollen and develop into workers.

The European honeybee, Apis mellifera L. (Hymenoptera: Apidae), has the de novo cytosine methyltransferase Dnmt3 and the two maintenance methyltransferases Dnmt1a and Dnmt1b, which are involved in CpG methylation (Wang et al. 2006). Because other insects such as Drosophila melanogaster, Anopheles gambiae, and Bombyx mori do not have fully functional machinery for CpG methylation, A. mellifera is expected to be an important model for studying epigenetics in insects (Schaefer and Lyko 2007). Interestingly, it has been reported that the knockdown of Dnmt3 by RNAi alters caste differentiation, so that larvae fed on an artificial worker jelly develop into queens or queen-like adults, suggesting that DNA methylation mediates caste differentiation (Kucharski et al. 2008).

Hexamerins are a family of major storage proteins in insects (Telfer and Kunkel 1991). In holometabolous insects, hexamerins are usually synthesized in fat bodies and secreted into the serum at the larval stage. They are taken back into fat bodies just before metamorphosis, under the regulation of ecdysteroids, and utilized as a source of amino acids during metamorphosis (Ueno et al. 1983; Telfer and Kunkel 1991). Furthermore, hexamerins appear to play an important role in caste differentiation in social insects such as Polistes wasps (Hunt et al. 2007) and the termite Reticulitermes flavipes, in which the simultaneous suppression of hexamerins 1 and 2 promotes differentiation toward the soldier caste (Zhou et al. 2006).
A. mellifera has four hexamerins: hexamerins 70a, 70b, 70c, and 110, of which hexamerin 70a is expressed in both larvae and adults, while high levels of the other three are expressed at the larval stage and decrease drastically thereafter (Cunha et al. 2005; Bitondi et al. 2006). In addition, hexamerin 110 (Hex110) has been implicated in ovary development. Workers are basically sterile, but they can occasionally lay unfertilized eggs, particularly when the queen is absent from their colony. It was reported that Hex110 expression was elevated in workers that developed ovaries in the absence of the queen (Bitondi et al. 2006). This observation in honeybee workers, together with observations of hexamerins in other social insects, implies that Hex110 may also be involved in caste differentiation in honeybees. In this study, cytosine methylation in Hex110 was analyzed to clarify the methylation pattern of A. mellifera genes, and the possibility of epigenetic regulation of Hex110 expression was examined.

Insect materials and nucleic acid isolation

The European honeybee, A. mellifera, was purchased from a local supplier (Nonogaki Apiary in Aichi, Japan) and maintained in an apiary of Tamagawa University in Machida, Japan. The queens were reared by transferring 1st instar larvae from worker cells to artificial plastic queen cell cups. RNA and DNA were extracted from the whole tissues of last instar larvae and from the fat bodies of adults younger than 24 h after emergence. Because isolating fat bodies was not feasible, the entire digestive system was removed from the abdomen, and the resulting abdominal integument was used as the fat body sample (Bitondi et al. 2006). RNA was isolated using ISOGEN (Nippon Gene, www.nippongene.com) following the manufacturer's instructions. DNA was extracted by proteinase K digestion and phenol extraction followed by ethanol precipitation.

Determination of the full-length sequence of the Hex110 transcript

Rapid amplification of cDNA ends (RACE) was performed using RNA extracted from a worker larva and the ExactStart Eukaryotic mRNA 5′- and 3′-RACE Kit (Epicentre Biotechnologies, www.epicentre.com/main.asp). An adaptor oligoribonucleotide (5′ adaptor) was ligated to the 5′ end of the RNA, and cDNA was synthesized using an oligo(dT) primer that contained another adaptor sequence (3′ adaptor). The 5′ region of Hex110 was amplified by PCR using a 5′ adaptor primer, 5′-TCATACACATACGATTTAGGTGACACTATAGAGCGGCCGCCTGCAGGAAA-3′, and a gene-specific primer, 5′-GGACGGCCTGCGAATTCAAGTCCATTGAGACCGCG-3′. The PCR products were cloned into the pCR2.1-TOPO vector (Invitrogen, www.invitrogen.com) and sequenced using the BigDye Terminator v3.1 Cycle Sequencing Kit (Applied Biosystems, www.appliedbiosystems.com) and an ABI PRISM 3100 genetic analyzer (Applied Biosystems). The full-length cDNA was amplified with a 3′ adaptor primer, 5′-TAGACTTAGAAATTAATACGACTCACTATAGGCGCGCCACCG-3′, and a gene-specific primer, 5′-ATCGCATCCCATCATTGAATTTCGC-3′, which was designed to hybridize with the 5′ end of Hex110. The PCR product was cloned and sequenced as described above.

Northern blot analysis

RNA was electrophoresed on a 1.5% formaldehyde-containing agarose gel and transferred to a GeneScreen Plus Hybridization Transfer Membrane (PerkinElmer, www.perkinelmer.com).
To prepare a specific probe for Hex110, a 341-bp fragment corresponding to the 2nd exon was amplified by PCR from genomic DNA using the primers 5′-CTGACCAGGATCTCCTTAAC-3′ and 5′-CTTAAGAAATTGTCCTTCATTAAC-3′, and the fragment was cloned into the pCR2.1-TOPO vector. The Hex110 fragment was amplified once more from the cloned plasmid with the same primer set, and the resulting product was purified using the MinElute PCR Purification Kit (Qiagen, www.qiagen.com). Probe labeling with alkaline phosphatase, hybridization, and signal detection were performed using AlkPhos Direct (GE Healthcare, www.gehealthcare.com) and a VersaDoc Imaging System (Bio-Rad, www.bio-rad.com) according to the manufacturers' instructions.

Semiquantitative RT-PCR

First-strand cDNA was synthesized from 4.6 μg of total RNA by SuperScript III reverse transcriptase (Invitrogen) using an oligo(dT)12-18 primer. A 181-bp fragment of Hex110 was PCR-amplified for 25, 30, or 35 cycles using the primers 5′-GAACTTGATCAATTTATCC-3′ and 5′-AACTGAAGATTTGATGTG-3′.

Bisulfite sequencing

DNA extracted from individual A. mellifera was either single-digested with BamHI or double-digested with BamHI and AseI, and subjected to bisulfite conversion using the EpiTect Bisulfite Kit (Qiagen). The double-digested DNAs were used to analyze subregions 6 and 7 (Table 1). The other regions were analyzed using the single-digested DNAs. The target regions were amplified by nested PCR using the primer sets listed in Table 1. The resulting DNA fragments were cloned into the pCR2.1-TOPO vector (Invitrogen), and 8 or more clones were sequenced per amplicon.

Cloning and expression analyses of the Hex110 transcript

The full-length sequence of the Hex110 transcript was identified by 5′- and 3′-RACE. The 3190-nt sequence determined (accession no. AB549723) consisted of a 26-nt 5′ untranslated region (UTR), a 3024-nt coding sequence (cds), and a 140-nt 3′ UTR. The cds determined was 9 nt shorter than the cds predicted from the A. mellifera genome sequence in the NCBI database (NM_001101023). Except for this deletion, the determined cds was 99.9% identical to the predicted sequence.

The Northern blot analysis demonstrated that the expression of Hex110 was much higher during the larval stage than at the adult stage, regardless of the caste (Figure 1A). Although expression could not be detected in adult fat bodies by Northern blot analysis, semiquantitative RT-PCR results suggested that Hex110 was expressed in the adult fat bodies and that the expression level was higher in the queen than in the worker (Figure 1B), which is consistent with the view that Hex110 is involved in ovary development (Bitondi et al. 2006).

Methylation pattern in the A. mellifera Hex110 gene

The methylation profile was analyzed for a 4242-bp sequence of the Hex110 gene, including the 244-bp region upstream from the transcriptional start site (Table 2). Genomic DNA was extracted from whole larval tissues and the adult fat bodies of workers and queens, with three biological replicates for each group. The examined region contained 96 CpG sites (Figure 2A), of which cytosine methylation was detected at 66 sites in one or more samples. The methylation profiles were almost the same among the three replicates in each group, with a few minor differences (Figure 2B). Furthermore, the methylation patterns were similar between the groups. Overall, the methylation level was low throughout the Hex110 gene.
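Per-site methylation rates like those reported in this section are simple clone counts from the bisulfite sequencing described above. The sketch below shows one way to tabulate them, assuming each sequenced clone has already been reduced to a string of methylation calls per CpG site; this input format is an assumption for illustration, not the authors' pipeline.

```python
def methylation_rates(clones):
    """Percent methylation per CpG site from bisulfite-sequenced clones.

    clones: list of equal-length strings, one per clone, with 'M' for a
    methylated CpG (cytosine retained after bisulfite conversion) and 'U'
    for unmethylated (cytosine read as thymine).
    """
    n_sites = len(clones[0])
    rates = []
    for site in range(n_sites):
        calls = [c[site] for c in clones]
        rates.append(100.0 * calls.count("M") / len(calls))
    return rates

if __name__ == "__main__":
    # Hypothetical amplicon: 8 clones x 5 CpG sites
    clones = ["UUMUU", "UUMUU", "UUUUU", "UMMUU",
              "UUMUU", "UUUUU", "UUMUU", "UUMUU"]
    print(methylation_rates(clones))  # [0.0, 12.5, 75.0, 0.0, 0.0]
```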
There were several highly methylated CpG sites, which were limited to a few exon regions, such as exon 2, exon 3, and the N-terminal region of exon 8. The methylation level in the region upstream of the transcriptional start site was low. Methylation levels were higher in the adults than in the larvae, regardless of the caste: the average methylation rates were 3.3% in worker larvae, 5.5% in worker adults, 3.0% in queen larvae, and 5.3% in queen adults. There were 6 CpG sites (sites 20, 21, 36, 64, 68, and 69; Figure 1B) where the methylation levels were significantly higher in the adults than in the larvae (Mann-Whitney U-test, P < 0.05, Table 2), while no larval-biased methylation site was found. The average methylation rates were nearly identical between workers and queens in both the larval and adult stages. In the larval stage, however, 4 CpG sites (21, 22, 68, and 87) differed significantly between castes (Mann-Whitney U-test, P < 0.05). In addition, almost no methylation was observed in the 3′-half region exclusively in the queen larvae.

A computational analysis of the Hex110 sequence identified two CpG islands: one in the 5′ region, containing the transcription start site and the translation start codon, and the other in the 3′ region, containing the stop codon (Figure 2A). The methylation levels of the CpG island in the 5′ region were low, with no remarkable differences among the four groups (Figure 2B, Table 3). The CpG island in the 3′ region overlapped with the region where no methylation was detected in the queen larvae.

Discussion

This study analyzed cytosine methylation of the Hex110 gene at single-base resolution by bisulfite sequencing. DNA methylation in A. mellifera has been examined for several genes, but these analyses were limited to partial gene fragments (Wang et al. 2006; Kucharski et al. 2008). To our knowledge, this is the first report on the methylation pattern of an A. mellifera gene encompassing the 5′-upstream region and the full-length cds. Overall, the methylation level of the Hex110 gene was low, while several CpG sites within the exons were highly, but not completely, methylated. The methylation pattern observed in the Hex110 gene is consistent with the previously reported features of the methylation of A. mellifera genes, suggesting that partial and moderate methylation is a general characteristic of A. mellifera DNA. Such a methylation landscape appears to be quite different from that of other animals. The DNA of mammals is highly methylated throughout the genome, with the exception of short unmethylated CpG islands in the promoter regions (Ball et al. 2009; Lister et al. 2009). In invertebrates, DNA methylation has been analyzed in detail in the tunicate Ciona intestinalis, whose genome shows a clear mosaic pattern consisting of relatively long, almost completely methylated and non-methylated regions (Suzuki et al. 2007). The methylated regions are restricted to gene bodies, while the non-methylated regions are found in both gene bodies and intergenic regions in C. intestinalis. The Hex110 gene has two CpG islands, one of which is located in the 5′ region, which also contains the transcriptional start site. In mammals, methylation of a CpG island in a promoter region typically correlates with the transcriptional silencing of imprinted genes (Stöger et al. 1993) and of genes on the inactive X chromosome in females (Mohandas et al. 1981).
The CpG island found in the 5′ region of Hex110 was methylated only at low levels in the four groups examined, giving no indication of its function. Judging from its location, however, it is conceivable that this CpG island has some role in Hex110 transcriptional regulation. Because methylated cytosines are hypermutable due to spontaneous deamination, which causes a gradual depletion of CpG dinucleotides from methylated DNA regions on an evolutionary time scale, the frequency of CpG dinucleotides is a robust measure of the level of DNA methylation (Bird 1980; Elango et al. 2009). A computational analysis of CpG content in the A. mellifera genome indicated that approximately 35% of the genes are expected to be methylated, and microarray profiling of several tissues suggested that most of the genes predicted to be methylated are associated with housekeeping roles (Foret et al. 2009). In this context, the Hex110 gene may be a rare gene that is methylated yet exhibits temporal- and tissue-selective expression. Hex110 expression was high in larvae, but the methylation levels were lower in the larvae than in the adult fat bodies. Although there is insufficient information to discuss the relationship between transcriptional activity and gene body methylation in A. mellifera, it is possible that Hex110 expression is epigenetically regulated.

It was also demonstrated that the Hex110 gene was methylated in a caste-selective manner at the larval stage: several CpG sites were highly methylated either in workers or in queens, and the 3′ region containing a predicted CpG island was unmethylated exclusively in queens, which was repeatedly observed in the three biological replicates derived from three colonies. This difference might be due to varying tissue compositions between the queen and worker larvae. However, we think it is more likely that the differences reflect caste-specific epigenetic regulation, because the larval body is relatively simple in structure and the tissue composition is more or less similar between the castes.

Studies have suggested that gene body methylation is associated with a variety of epigenetic phenomena (reviewed in Suzuki and Bird 2008). In the flowering plant Arabidopsis thaliana, heavily methylated regions include transcriptionally inactive heterochromatin and transposons, in line with the classical view that DNA methylation contributes to gene silencing (Zhang et al. 2006; Zilberman et al. 2007). However, CpG methylation in A. thaliana also covers the transcribed regions of many genes, especially those of housekeeping genes. The preferential methylation of housekeeping genes is also observed in C. intestinalis (Suzuki et al. 2007). A recent high-throughput analysis of the human genome revealed that high methylation levels in the gene body tend to correlate with higher expression of a protein-coding gene (Lister et al. 2009). The accumulating evidence will require comparative biological analysis in the future to reassess the functions and biological significance of DNA methylation. A. mellifera is a new model that provides the opportunity to study the epigenetic regulation of phenotypic plasticity in social insects. Hex110 is a luxury gene that exhibits developmental stage-, tissue-, and caste-selective expression. We expect that further detailed analyses of the methylation of this gene will provide insight into the functions of DNA methylation in A. mellifera.
Household level spatio-temporal analysis of Plasmodium falciparum and Plasmodium vivax malaria in Ethiopia The global decline of malaria burden and goals for elimination have led to an increased interest in the fine-scale epidemiology of malaria. Micro-geographic heterogeneity of malaria infection could have implications for designing targeted small-area interventions. Two-year longitudinal cohort study data were used to explore the spatial and spatio-temporal distribution of malaria episodes in 2040 children aged < 10 years in 16 villages near the Gilgel-Gibe hydropower dam in Southwest Ethiopia. All selected households (HHs) were geo-referenced, and children were followed up through weekly house-to-house visits for two consecutive years to identify febrile episodes of P. falciparum and P. vivax infections. After confirming the spatial dependence of malaria episodes with Ripley's K function, SaTScan was used to identify purely spatial and space-time clusters (hotspots) of annual malaria incidence over the 2-year follow-up: year 1 (July 2008-June 2009) and year 2 (July 2009-June 2010). In total, 685 P. falciparum episodes (in 492 HHs) and 385 P. vivax episodes (in 290 HHs) were identified, representing respectively incidence rates of 14.6 (95% CI: 13.4–15.6) and 8.2 (95% CI: 7.3–9.1) per 1000 child-months at risk. In year 1, the most likely (128 HHs with 63 episodes, RR = 2.1) and secondary (15 HHs with 12 episodes, RR = 5.31) clusters of P. vivax incidence were found respectively in southern and north-western villages; while in year 2, the most likely cluster was located only in north-western villages (85 HHs with 16 episodes, RR = 4.4). Instead, most likely spatial clusters of P. falciparum incidence were consistently located in villages south of the dam in both years: year 1 (167 HHs with 81 episodes, RR = 1.8) and year 2 (133 HHs with 67 episodes, RR = 2.2). Space-time clusters in southern villages for P. vivax were found in August-November 2008 in year 1 and between November 2009 and February 2010 in year 2; while for P. falciparum, they were found in September-November 2008 in year 1 and October-November 2009 in year 2. Hotspots of P. falciparum incidence in children were more stable at the geographical level and over time compared to those of P. vivax incidence during the study period.

Background

Despite a decline in the global malaria burden over the past 15 years, about 3.5 billion people were at risk worldwide in 2015, and millions of them are still not accessing the services they need to prevent and treat malaria. Of 438,000 registered malaria deaths in 2015, approximately 80% were concentrated in just 15 countries, mainly in Africa [1]. According to the Ethiopian Ministry of Health [2], 2,174,707 malaria clinical cases and 662 deaths due to malaria were registered between September 2014 and August 2015 (Ethiopian fiscal year, EFY 2014/2015). Laboratory confirmation of malaria by either light microscopy (LM) or rapid diagnostic tests (RDTs) was performed in 1,867,059 (85.9%) clinical cases, showing a predominance of Plasmodium falciparum (63.7%) over P. vivax cases (36.3%). Oromia is the regional state of Ethiopia with the second highest malaria incidence, accounting for about 20% (430,969 cases) of total reported clinical cases in the country, but the first in terms of malaria mortality, representing about one third (214 deaths) of total malaria-related deaths in Ethiopia [2].
Malaria transmission is mainly seasonal and unstable throughout the country and varies due to differences in altitude, season, and population movement [3,4]. A good understanding of the local epidemiology and transmission dynamics of malaria infections is key for better targeting the control measures [5][6][7][8]. Many factors have been reported to significantly influence malaria transmission in Ethiopia, with likely different levels of interaction across space and time [9,10]. Ecological factors facilitating breeding sites of Anopheles arabiensis (i.e. dams, irrigation canals, floods on shorelines, agricultural field puddles, wetlands, man-made pools, and rain pools) [11,12] or resting places for adult mosquitoes (i.e. surrounding vegetation, housing characteristics) are thought to be the main factors for malaria transmission [13]. Conditions that increase exposure to infectious mosquito bites (e.g. agriculture and livestock economic activities) [14], and human behavioural factors that limit the coverage and effectiveness of malaria control interventions (e.g. outdoor sleeping habits, low utilization of long-lasting insecticidal nets, poor treatment-seeking behaviours, and low treatment adherence) may also influence the malaria risk [15]. A previous analysis of malaria surveillance data based on passive case detection (PCD) in villages located around the Gilgel-Gibe hydroelectric power dam in Southwest Ethiopia suggests different spatial and temporal variations of malaria episodes for both P. falciparum and P. vivax [10]. Until now, the use of spatial-temporal tools to detect malaria hotspots (i.e. single villages or groups of households within villages with increased risk of malaria transmission) has not been applied to analyse malaria transmission in this area. Capitalising on the availability of two-year longitudinal malaria cohort data, this study explored the spatial and spatio-temporal distribution of P. falciparum and P. vivax malaria episodes in 2040 children aged < 10 years living in 16 villages around the Gilgel-Gibe hydropower dam. The study was conducted as part of several other studies intended to assess the impact of the Gilgel-Gibe hydroelectric dam on health and other sectors (environment, agriculture and economy) following the start of its operation in 2004 [16].

Study area

The study was conducted in the Gilgel-Gibe dam area, in Jimma zone (Fig. 1), which is located 260 km south-west of Addis Ababa, in the Oromia region of Ethiopia. The study area lies between latitudes 7°42′50″N and 07°53′50″N and between longitudes 37°11′22″E and 37°20′36″E, at an altitude of 1734-1864 m above sea level. Sixteen villages within a 10 km radius (265-9046 m) from the dam reservoir shore were randomly selected based on similar eco-topography, access to health facilities, and homogeneity with respect to socio-cultural and economic activities [17,18]. The main socio-economic activities of the households are mixed farming, involving the cultivation of staple crops (maize, teff and sorghum) and the raising of cattle and small stock. All the households residing in the study villages belong to the Oromo ethnic group, which is one of the largest ethnic groups in Ethiopia [11].

Study design and population

A longitudinal 2-year malaria cohort study was conducted in children under 10 years old living in the 16 selected villages around the Gilgel-Gibe hydroelectric power dam. A total of 2040 children aged less than 10 years were enrolled in July 2008 and completed weekly follow-ups until June 2010 [17].
Each child was identified with a unique code, and selected villages and households were geo-referenced using a handheld global positioning system (GPS) device (Garmin's GPSMAP 60CSx, Garmin International Inc., Olathe, Kansas, USA).

Follow-up, identification and management of malaria episodes

Active case detection (ACD) through weekly household visits was conducted to identify and register all febrile malaria episodes in the study population during the two-year follow-up period. During the household visits, axillary body temperature was taken, and the caregiver was asked about fever history. If a child had a fever (temperature ≥ 37.5°C) or reported a history of fever in the past 24 h, a finger-prick blood sample was taken for immediate diagnosis by LM on the same site or at the Omo-Nada District Health Center Laboratory. Trained laboratory technicians conducted the LM diagnosis. Thick smears were used to confirm the presence or absence of parasites, whereas thin smears were used to identify the Plasmodium species. All children with malaria confirmed by LM were treated according to the national treatment guidelines [19]. Treatment was administered by the parents and/or caregivers of the children, and consisted of 25 mg/kg of chloroquine (CQ) over three consecutive days for P. vivax, and artemether-lumefantrine (AL) for P. falciparum twice daily according to body weight as follows: 5-14 kg, one tablet per dose; 15-24 kg, two tablets per dose; 25-34 kg, three tablets per dose; and adults, four tablets per dose. Treatment adherence was monitored during household visits by asking for the medication packages and the remaining pills. Absent children were followed up in the next visits, and their caregivers were asked about the occurrence of symptomatic episodes and the confirmation of malaria at the health facility. In addition, all the health facilities near the study communities were visited monthly to verify their clinical records, checking whether any enrolled children had presented a confirmed malaria episode in the past month that had not been detected during the weekly visits.

Data analysis

Global spatial clustering

Data were double entered, validated and cleaned in Excel (Microsoft Corp, USA). A univariate Ripley's K-function was used to assess whether children with malaria episodes tended to be near other children with episodes (determination of the expected number of children with episodes within distance h of a given child with episodes), while a bivariate K-function explored the spatial independence between children classified into two groups according to specific conditions (determination of the expected number of children with condition 1 within distance h of children with condition 2) [20]. For each malaria species and year of follow-up (first or second year), a file was created with the coordinates of the children's households. The file was read into R software as a table, and a grid was created using the maximum and minimum values of the latitude and longitude coordinates. After creating an object of class "ppp" for the point pattern distribution of children in a polygonal window, the Kest and Lest functions from the spatstat package were applied to plot both observed and expected K-values over a range of distances [21]. Expected K-values and corresponding 95% confidence envelopes (CEs) were calculated using 999 Monte Carlo simulations to test the null hypothesis (H0).
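For readers who want to retrace the steps just described, the following R sketch shows the corresponding spatstat calls. The data frame and its columns (x, y, episode) are hypothetical stand-ins for the study's household table, the toy coordinates exist only so the code runs as-is, and a rectangular window is used as a simplification of the study's polygonal one.

    library(spatstat)

    # Toy stand-in for the geo-referenced household table.
    set.seed(1)
    hh <- data.frame(x = runif(200), y = runif(200),
                     episode = runif(200) < 0.2)

    # Point pattern of children with episodes in a window spanning
    # the coordinate extremes, as described above.
    win   <- owin(xrange = range(hh$x), yrange = range(hh$y))
    cases <- ppp(hh$x[hh$episode], hh$y[hh$episode], window = win)

    # K-function with envelopes from 999 Monte Carlo CSR simulations;
    # an observed curve above the upper envelope indicates clustering.
    env <- envelope(cases, Kest, nsim = 999)
    plot(env)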
The univariate K-function tested the H0 that the children with malaria episodes were randomly spatially distributed (complete spatial randomness, CSR), while the bivariate K-function tested two separate H0: a) the children with and without malaria episodes were independently spatially distributed, and b) the children with malaria episodes, younger than 3 years old and older than 3 years old, were independently spatially distributed. In the univariate analysis, observed K-values between the low and high CE at a specific distance h indicated random distribution of the children with malaria episodes for that distance h, while values larger than the high CE or smaller than the low CE at a specific distance h indicated, respectively, significant spatial clustering or spatial dispersion of children with malaria episodes for that distance h. In the bivariate analysis, an observed difference of K-values between group 1 and group 2 children falling between the low and high CE at a specific distance h indicated spatial independence between the children groups for that distance h. Instead, a difference of K-values larger than the high CE at a specific distance h indicated that group 1 children tended to be more clustered than group 2 children for that distance h, while a difference smaller than the low CE indicated that group 1 children tended to be more dispersed than group 2 children. In all analyses, distances h ranged from 0 to the maximum distance between the two most distant children's households (about 5 km).

Local spatial clustering

The QGIS software v.2.16 (QGIS developer team, Open Source Geospatial Foundation) [22] was used to map all households with children aged less than 10 years old in the study area, and the SaTScan software v.9.3 (M Kulldorff and Information Management Services Inc, Boston, USA) [23,24] was employed to detect spatial and space-time clusters of P. falciparum and P. vivax malaria episodes using the Bernoulli probability model [25]. The Bernoulli model in SaTScan requires input data as cases and controls. For each week of follow-up, cases were children with species-specific malaria episodes, while controls were children without episodes. Of note, malaria episodes were only counted for the week in which they were initially identified, and a child with a malaria episode (treated according to national guidelines) was censored for 21 days to prevent double counting of episodes in successive weeks. The spatial analysis tested the null hypothesis of no clustering of children with malaria episodes. Windows of varying size, from zero up to a maximum radius including less than 15% of the total children, were allowed to move across the study area. This maximum radius was selected to avoid large non-populated areas within the malaria clusters identified by SaTScan. More details about the selection of the maximum window size can be found in Additional files 1 and 2. Each circle was a candidate cluster for which the log likelihood ratio (LLR) and the relative risk (RR) were obtained. The circular window with the highest LLR was defined as the most likely cluster (hotspot) if the P-value < 0.05 [24]. Once the hotspot was identified, a re-analysis of the children within that hotspot was conducted to identify whether that hotspot hid a smaller and more homogeneous area with the highest malaria incidence (i.e. a hotspot within a hotspot) [5].
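One detail of the Bernoulli input that is easy to miss is the 21-day censoring after each treated episode. A small R sketch of that bookkeeping follows; the long-format table and its column names are assumptions for illustration, not the study's files.

    # Assumed table: one row per child per follow-up week, with columns
    # child_id, week (integer) and episode (TRUE for a confirmed episode).
    build_bernoulli_input <- function(obs) {
      obs$status <- ifelse(obs$episode, "case", "control")
      # Drop the three weeks after each episode for the same child,
      # mirroring the 21-day rule that prevents double counting.
      drop <- unlist(lapply(which(obs$episode), function(i) {
        which(obs$child_id == obs$child_id[i] &
              obs$week > obs$week[i] & obs$week <= obs$week[i] + 3)
      }))
      if (length(drop) > 0) obs <- obs[-unique(drop), ]
      obs
    }

    obs <- data.frame(child_id = c(1, 1, 1, 1, 1, 2, 2),
                      week     = c(1, 2, 3, 4, 5, 1, 2),
                      episode  = c(FALSE, TRUE, FALSE, FALSE, FALSE, FALSE, FALSE))
    build_bernoulli_input(obs)  # child 1 loses weeks 3-5 to censoring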
The space-time analysis was performed under the null hypothesis that the risk of having malaria episodes was the same in all households and over time, with cylindrical windows having a circular geographic base and a height corresponding to the time scale in weeks. The radius of each circular base was allowed to vary in size, to include as many as 15% of the total children. Similarly, the height of the cylinder varied in size up to a maximum of 50% of the study period, with a time precision of one week. An unlimited number of overlapping cylinders with different dimensions were obtained, each cylinder corresponding to a possible space-time cluster. For each space-time cluster, the LLR was calculated and the most likely cluster was defined as the cylinder with the maximum LLR. The statistical significance of the clusters was tested through 999 Monte Carlo simulations (the default value of the software) to achieve strong power, and the null hypothesis was rejected when the resulting p-value was below 0.05.

Results

Of the total 2040 followed-up children, 981 (48.1%) were female and 1059 (51.9%) male. The mean age at enrollment was 4.9 ± 2.0 years, not varying significantly across villages (P > 0.05) (Table 1). Of the total reported 1070 malaria episodes, 685 (363 episodes in year 1 and 322 episodes in year 2) were due to P. falciparum in 492 HHs, and 385 (296 episodes in year 1 and 89 episodes in year 2) were due to P. vivax in 290 HHs. P. falciparum incidence rates were respectively 15.5 and 13.7 episodes/1000 child-months in the first and second year of follow-up (14.6 episodes/1000 child-months throughout the study period), while P. vivax incidence rates were respectively 12.6 and 3.8 episodes/1000 child-months in the first and second year of follow-up (8.2 episodes/1000 child-months throughout the study period). Additional files 3 and 4 show videos with the spatio-temporal distribution of species-specific malaria incidence by HH. Visual inspection of the first video suggests a seasonal spatial distribution of P. falciparum incidence, with increased occurrence of P. falciparum episodes in households located south of the dam, mainly in the last months of the long rainy season (August and September) as well as in the first months of the dry season (October and November). Instead, the second video shows a decreasing trend in the occurrence of P. vivax episodes over the study period, with a less clear spatial and seasonal pattern.

Global spatial clustering

Univariate K-function values for both species indicated that children with episodes were significantly clustered at all distances up to 5.0 km in both years of follow-up (Additional file 5). According to the bivariate K-function plots, children with P. falciparum episodes in the first year were significantly more clustered than children without episodes only at distances greater than 4.0 km (Fig. 2a), while in the second year this comparatively increased clustering pattern of children with episodes occurred at all distances (Fig. 2b). Regarding P. vivax, children with episodes in the first year were significantly more clustered than those without episodes at distances lower than 1.5 km and larger than 3.5 km (Fig. 2c). In the second year, children with P. vivax episodes were significantly more clustered only at distances larger than 4.2 km (Fig. 2d).
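As a quick plausibility check on the incidence rates reported at the start of this section, episodes per 1000 child-months can be approximated directly from the cohort size; the denominator below ignores censored and missed follow-up time, which is why it comes out slightly under the published 14.6.

    episodes_pf      <- 685        # P. falciparum episodes over two years
    child_months_max <- 2040 * 24  # upper-bound denominator: no time removed
    1000 * episodes_pf / child_months_max  # ~14.0 per 1000 child-months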
K-function analyses also indicated that children younger than 3 years and those older than 3 years, both with malaria episodes, were independently spatially distributed (Fig. 3).

Local spatial clustering

Purely spatial analysis by SaTScan confirmed that malaria episodes due to both species were not randomly distributed. The most likely spatial cluster of P. falciparum incidence in year 1 was a 5.5 km radius area south of the dam, composed of 167 HHs presenting a total of 81 episodes (Fig. 4a; Table 2). Households within this cluster belonged mainly to Kara and Yasso villages and were 1.8 times more at risk of acquiring P. falciparum infections than households outside the cluster (RR = 1.8, P = 0.02). In year 2, the most likely cluster of P. falciparum episodes was also located south of the dam, with a radius of 3.1 km, including 133 HHs of mainly Kara and Yasso villages and accounting for 67 episodes (RR = 2.2, P < 0.001) (Fig. 4b; Table 2). Interestingly, the re-analysis of the children within the most likely clusters for P. falciparum did not identify a further hotspot in either year. In addition, two secondary clusters were identified south and west of the dam only in the second year (Fig. 4b; Additional file 6). Both the purely spatial (Fig. 4c) and the spatio-temporal analysis (Table 3) over the two-year period consistently confirmed the location of the most likely cluster of P. falciparum incidence south of the dam, with the latter analysis identifying 11 weeks with the highest incidence in year 1 (September 14–November 29, 2008) and 5 weeks in year 2 (October 11–November 14, 2009). The most likely spatial clusters of P. vivax incidence were a 4.1 km radius area located south of the dam in year 1 (Fig. 4d; Table 2) and a 1.3 km radius area west of the dam in year 2 (Fig. 4e; Table 2). The first cluster included 128 HHs of mainly Yasso village and presented 63 episodes (RR = 2.1, P = 0.004), while the second included 88 HHs in Buddo village presenting 16 episodes (RR = 4.4, P = 0.008). The re-analysis of the children within the most likely clusters for P. vivax did not identify a further hotspot in either year. In addition, two secondary clusters were identified.

Discussion

Comparatively, hotspots of P. falciparum incidence in children were more stable at a geographical level and over time than those of P. vivax incidence, with consistent locations south of the dam in the two successive study years. The level of statistical significance is an important factor in determining whether a certain geographical area forms a plausible hotspot of malaria transmission. In this study, the global clustering K-function test suggested the existence of clustering of children with malaria episodes without pinpointing specific locations, while its variant, the bivariate K-function, was able to demonstrate that children with malaria episodes tended to be more aggregated than children without episodes. Although a previous study using recurrent-event models to analyse incidence data in the same study children showed contrasting associations between the age of children and species-specific malaria incidence (i.e. P. vivax episodes were mostly observed in younger age groups, while P. falciparum episodes were mainly seen in older children) [18], age did not appear to influence the spatial distribution of children presenting species-specific malaria episodes according to the bivariate K-function analysis.
In contrast to global clustering tests, local clustering tests (i.e. the Kulldorff spatial scan statistic) were able to identify the most likely location of hotspots of malaria incidence in the two consecutive years of study. Indeed, hotspots of P. falciparum incidence suggested a higher exposure to infectious mosquitoes in southern villages, especially after the long rainy season (peak of cases between September and November according to the space-time analysis). After rains, intermittent streams could create pockets or pools of water which can serve as potential breeding sites for the mosquito Anopheles arabiensis, contributing to an increase in mosquito density and vector-human contacts, and consequently to a greater number of malaria episodes during the dry season [18,27]. The characteristics of the southern land (i.e. wet, flat and silted) [28], as well as the landslides that often occur there [29,30], would additionally increase the accumulation of water in shallow pits that act as excellent mosquito breeding habitats. Of note, as previously found in a recent article [18], the Gilgel-Gibe dam reservoir would not have a significant impact on malaria transmission in the study area, since the design and automatic operation of the dam would be able to prevent the appearance of shoreline puddles and consequently the formation of mosquito breeding sites near the reservoir. In contrast to P. falciparum, the hotspots of P. vivax incidence were less stable in place and time during the study period, suggesting that the occurrence of P. vivax clinical episodes is less sensitive to seasonal and environmental changes than that of P. falciparum, and that other factors should be considered to understand the spatial-temporal heterogeneity of infections due to this species [18]. The biological features of P. vivax infections may also influence the spatial distribution of the disease, particularly the ability of parasites to relapse weeks or months after a primary parasitaemia [31]. However, the characterization and prediction of spatial patterns remain challenging because of the difficulty of distinguishing between a hypnozoite-triggered relapse, a resurgence of erythrocytic parasites (i.e. recrudescence) due to a failure of therapy, or reinfection of an individual with a new parasite strain following a primary infection [32,33]. This challenge is even greater considering that children with confirmed P. vivax episodes in the study received chloroquine (CQ) but not primaquine (PQ), following the national guidelines for areas where the prevalence of glucose-6-phosphate dehydrogenase (G6PD) deficiency is not known and where tests to detect that condition are not available [19]. Immunity to malaria infection is another factor that should be considered when interpreting the spatial and space-time patterns of clinical malaria episodes in the study children. As previously hypothesised in a recent article [18], differences in clinical malaria incidence between species with respect to age may be related to different species-specific acquisition rates of immunity [34], with immunity acquired more rapidly against P. vivax than against P. falciparum. Taking this into account, the fast development of clinical immunity in areas with the highest P. vivax exposure and incidence (i.e. southern villages) identified during the first study year may also explain why the hotspots did not remain in the same location in the following year. Similarly, immunity would be a factor to be considered in the analysis of the reduction of P. vivax clinical incidence rates during the study period [18].
The main limitations of our study may be related to the malaria metrics and the geospatial analysis used for the identification of clusters of malaria transmission in the area. The spatial analysis of clinical malaria incidence obtained through rigorous weekly active case detection of symptomatic episodes in enrolled children may be the best method for detecting malaria hotspots if most malaria infections occurring in those children were symptomatic and microscopically detectable [6]. However, this cannot be confirmed in this study, because the methodology did not include screening for asymptomatic and sub-microscopic malaria infections. Malaria surveys in other endemic regions of Ethiopia have reported that asymptomatic and sub-microscopic infections can represent an important proportion of total malaria infections [35,36]; however, the cross-sectional design of the latter studies did not take into account the incubation period of some of those infections, hence the potential development of symptoms and the increase of parasite density levels at a later stage [37], which would facilitate their detection by surveillance methods with strict follow-up like the one included in our study. Further research is needed to better understand the impact of asymptomatic and sub-microscopic infections in endemic areas of Ethiopia, and to assess whether their spatial distribution differs from that of symptomatic infections. On the other hand, despite the recognition of SaTScan as a powerful tool to analyse spatial patterns of vector-borne diseases such as malaria [38,39], a number of studies have pointed out that setting critical parameters in the analysis, such as the maximum window size, is not straightforward, and have suggested that this task should consider the application goals of the cluster detection and the geographic scale of the processes leading to the clusters [40]. Following these suggestions, our analysis set the maximum window size at 15%, instead of the default value of 50%, with the purpose of detecting the zones with the highest malaria incidence within the entire study area. This chosen parameter value not only avoided large non-populated areas within circular clusters but also reduced (without eliminating) the influence of the uneven inter- and intra-village distribution of children/households in the study area. Moreover, the fact that no smaller and more homogeneous clusters with even higher transmission were identified within the most likely malaria clusters (i.e. no hotspots within hotspots) further supports our selection of the maximum window size, as well as the validity of the detected most likely clusters of P. falciparum and P. vivax incidence (despite their being composed of most households of a village or a group of contiguous villages). Other limitations of the study may be related to the absence of data on other potential risk factors for malaria infection (e.g. household size, parents' education level, malaria prevention practices at the household level, treatment-seeking behaviour, vegetation coverage, etc.), preventing the evaluation of their influence on the spatial distribution of clinical malaria incidence. Gender and distance to the Gilgel-Gibe dam were not considered as covariates for the spatial analysis, since those variables were not significantly associated with either P. vivax or P. falciparum malaria incidence in a longitudinal modelling approach [18]; the children's age (despite being associated with malaria incidence in the same model) was evenly distributed across villages and, according to the K-function test, did not appear to influence the spatial distribution of children with malaria episodes in the study area, hence not meeting the criteria to be treated as a covariate [24].
Management Control in Global Mass Market Retailers For large retail firms, management control is a valid tool with which to face the competition of global markets and to manage corporate complexity. The management control systems of global retailers have specific characteristics that stem from the geographical dispersion of their organisational units and, frequently, from the existence of cooperative alliances with other companies.

In recent decades, globalisation has also involved the marketing companies that operate on mass retail markets. Global retailers are generally characterised by product ranges that are able to satisfy increasingly broad and varied consumer demand and by the territorial expansion of their sales network. On global markets, where traditional space-time barriers are unstable and competition space is constantly being redefined, strongly competition-oriented logics are establishing themselves, in which time and the ability to anticipate one's rivals are the main critical success factors (market-space and time-based competition)¹. Retailers who intend to maintain or improve their performance on these markets often find themselves having to deal with problems related to their increasing complexity, in terms of both the composition of the product/market combination (strategic complexity) and their organisational structure (organisational complexity). In fact, the levels of complexity identified must also be assessed from a broader viewpoint, i.e. one that does not only take the individual company into account but also considers the relations established with other companies (commercial and/or industrial). From a global perspective, processes to produce commercial services may involve companies or groups of companies in which the individual structures maintain their importance, although shared transverse processes also acquire significance. In order to deal with situations of growing complexity, companies express the need to acquire tools which, by helping to clearly guide a company towards its goals, emerge as a significant support to corporate governance, while they respect value creation and minimise risk². Management control is a collection of structures and processes designed to facilitate the implementation of top management decisions in the organisation, through the performance of activities and the related verification of the results achieved. The creation of an effective management control system becomes all the more necessary for global companies that have geographically dispersed organisational units (often separated by significant physical and cultural distances) but centralised top corporate governance organs, which are responsible for outlining the strategic guidelines that are valid for the entire organisation³. On one hand, the control mechanisms convey the strategic guidelines to the units responsible for their organisation and, on the other, they guarantee feedback to the management organs regarding the results achieved.
On the basis of these considerations, this article intends to analyse the specific nature of the components of the management control system, with reference to global retailers that operate on mass markets. Our analysis will focus in particular on the problem of defining the degree of corporate complexity, an indispensable step before any control mechanism can be implemented correctly. On the basis of the degree of complexity identified, we will specify the optimal characteristics for both the structural aspects of management control (organisational structure and information system) and the more specifically procedural aspects (process).

Mass Market Retailers and Corporate Complexity

Global markets, where traditional competition boundaries are unstable, force retail companies to constantly verify their choices in terms of: strategic complexity, i.e. the global composition of supply (in terms of products/services and markets served) and its consistency with purchasing expectations; and organisational complexity, regarding relations between ownership and management, the existing organisational structure (in terms of the delegation of responsibility and the contributions demanded of the company's various operating units) and binding agreements with other companies. Factors of complexity tend to influence each other in response to the company's need to adapt to or anticipate a contextual situation: expanding the products offered on global markets necessarily demands organisational changes. Similarly, the decision to share part of the commercial processes with other companies can have strategic implications with regard to the markets served. By combining the variables that generate global corporate complexity, it is possible to identify three basic levels of complexity: low, medium and high. In retail companies with a low level of complexity, there is a substantial overlapping of roles between owners, administrators and organisation, and one area of business, which is limited to the local context. These companies do not generally show any need for advanced control systems to support management. Global mass market retailers, on the other hand, tend to have a medium or high level of complexity. Medium-complexity companies reflect a clear differentiation of the decision-making organs inside the corporate structure and more significant processes to delegate responsibility, primarily induced by the expansion of the markets served. The strategy pursued in terms of the market served may be a qualifying element in decisions regarding the maintenance of a position of independence rather than the creation of forms of legal, operational or contractual integration⁴, and these situations demand the creation of macro-organisations with a higher level of strategic-organisational complexity.
The highest level of complexity is linked to structures founded on cooperative alliances (networks). Networks are extremely ramified organisations because of the relationships that develop between participants, but at the same time they are flexible and dynamic because they are strongly market-oriented. In networks of companies operating on global markets, traditional models of competition between retail companies are backed up by competition systems between channels, which can also involve industrial companies. In these macro-organisations, new demands for governance and control take hold, to guarantee correct and responsible behaviour by all company operators, with the goal of meeting the expectations of increasingly broad categories of stakeholders, according to criteria of equity and transparency⁵. It is therefore essential to analyse a company's degree of complexity in order to organise the structural and procedural aspects of the management control system correctly.

Global Retailers and Structural Components of Management Control

The structural components of management control stem from the organisational structure and control system information. The organisational control structure defines the responsibilities of the company's various organisational units, by defining the type and entity of the resources that each unit may employ in the performance of its duties, in relation to the targets assigned to it. This breaks the existing organisation down into responsibility centres, which are attributed specific governance roles and expected to contribute functionally to achieving the company's strategic objectives. The organisational control system of global retailers must first consider whether there are any existing integrated relationships or cooperative alliances with other companies. The more these relationships are based on shared corporate risk, the more stable and durable they are, and they can significantly influence the attribution of strategic and managerial responsibilities. The time-based competition typical of global markets suggests that it is worth taking advantage of organisational structures that effectively delegate power but are also flexible, in other words, able to adapt rapidly and inexpensively to changes in the relevant context. This results in an emphasis on decentralised decision-making by units dispersed around the territory⁶ and a reassessment of matrix-based organisational structures⁷. The latter seem to adapt most successfully to continuing changes in competitive boundaries, taking into consideration the fact that, alongside permanent responsibility centres (for example, the purchasing department, sales department, etc.), temporary centres may also develop (for example, a logistics manager or retail manager), with powers cutting across the organisation and responsibility for managing critical economic levers when specific situations arise.
□ 'METRO GROUP aims to improve its process efficiency to be able to tap existing and new markets even better. This is why Shape 2012 employs the maxim: as decentrally as possible, as centrally as necessary. Shape 2012 will markedly reduce the Group's complexity. The new organization is characterized by progressive structures with full operational responsibility at the level of the sales divisions. This facilitates greater customer orientation, improved cost management and gains in efficiency. The sales divisions are given the entrepreneurial freedom they need to meet the centrally defined strategic goals and return targets' (www.metrogroup.de, last update 19 October 2010).

Control system information has to support all decision-making processes by collecting and processing the information used by corporate governance organs and the organisation. We can rightly claim that retail companies have based their development (and their relations with customers/consumers and with industrial companies) on their ability to govern information flows, creating a virtual channel of information in parallel to the physical channel of goods. This concept is strengthened on rapidly evolving markets, where a company's success depends on its ability to govern this dynamism, at least in part. Information technology, which is now widespread, cuts data collection costs enormously and has made a huge amount of information available, but it poses problems related to its selection. For global companies, decision-making times and subsequent actions must be extremely short: the bottom-up information flow usually has to be managed in real time, by channelling the data arriving from the many company units in a single organisation or network into a single information platform.

□ 'Information technology is a success factor for Autogrill and a great opportunity for development in all the Company's operating activities. […] Over ten thousand check-outs across four continents, around 1.5 million receipts issued every day: this is just the basis of a system that enables us to analyze and anticipate customers' needs from day to day. […] In the meantime, projects are continually underway to develop common applications platforms to upgrade management of key Group processes. In particular, the European associates have launched a plan to extend the common IT system for managing business functions in branches and points of sale (administration, performance management reporting, f&b and supply chain). […]' (www.autogrill.com, last update 21 July 2009).

Any opportunities for competitive advantage are therefore linked to the ability to select relevant data and process them, on each occasion, according to prevailing information priorities. In the past, most problems regarded the way data were collected; today, the systematic selection of significant data and the activation of a top-down feedback flow to support daily management processes (changes to the prices of products being promoted, replacement of a shelf brand, design of promotional leaflets, etc.) appear more critical. The effectiveness of the control system information can therefore encourage the implementation of time-based strategies, based on the ability to act before competitors by constantly adapting supply.
The result is that the correct planning of the information control system should simultaneously consider corporate complexity on one hand and the reference timeframe on the other. The organisation of objectives and the measurement/collection of results, which must reflect the breakdown of the organisational structure into responsibility centres and the division of the company's global activities into significant business areas, both derive from this complexity. The timely structure of the information is linked primarily to significant moments in the control process (forecast values and actual values) and to the frequency with which information is prepared. In low-complexity retail companies, decision-making processes are sufficiently supported by the traditional accounting system, which is designed to record trade with other economic entities. On the other hand, companies with a high degree of complexity, like global retailers, which repeatedly reveal a need for information, also need a better-structured management accounting information system. On a recurrent and systematic level (covering not only final results but also forecasts), this can provide both global economic measurements for the entire company and partial economic measurements referring to items of observation that are deemed significant. The implementation of formal control systems should therefore guarantee a scheduling and a level of detail of the information that is consistent with governance requirements and with the need to link the organisational units that make up the company or the network.

Global Retailers and Procedural Components of Management Control

The correct configuration of the organisational structure and control system information emerges as a condition for an effective control process. The latter defines the manner and frequency of activities to define objectives (long-term and short-term planning) and to observe results, on the basis of the company's degree of complexity as well as the dynamism of the outside world. It is crucial to correctly translate corporate strategies into managerial targets, particularly for global companies, which usually have a single strategic management body and fragmented operating units. In this context, the concept of 'glocalisation' seems to be acquiring particular importance. It was coined to define supply systems that respond to global strategies but also change to adapt the product to the needs of specific local contexts. In other words, the definition of global strategic guidelines must be suitably translated into managerial objectives that respect the decision-making and operational autonomy of the units disseminated through the territory.
□ 'If you come to China with preconceived ideas after having been successful in Europe or in the United States, you make mistake after mistake.' Jean-Luc Chéreau, President of Carrefour China (interviewed by McKinsey, 2006).

The geographical dispersion of the organisational units draws attention to the problem of enhancing the unitary character of the company while also guaranteeing the necessary flexibility to adapt governance and control processes to the characteristics of the local socio-geographic and competitive context⁸. The current trend can be attributed both to the centralisation of methodological processes in the parent company and to the strategic importance of control processes (medium- and long-term planning, the design of the information platform, the definition of general policies to advance human resources, risk mapping, the outlining of global internal auditing plans, etc.), with the operating units responsible for the implementation stage and for monitoring actions (performance indices and the monitoring of results, the introduction of user-friendly IT tools, the definition of personnel assessment systems, monitoring of exposure to the risks identified, specific checks to control the activities performed, etc.). As a result, in global companies, the effectiveness of the management control system is clearly reflected in the cultural and communicational aspects consolidated in the context of the organisation, and this results in the adoption of common languages, mechanisms and tools that can foster shared knowledge and managerial alignment between geographically distant individuals and structures. Moreover, the control process must not overlook the changes taking place in traditional competition levers. On one hand, sudden changes in the key success factors impose a reduction in observation times, the intensification of control communications, and the development of plans that reflect different economic scenarios, for example the most rational management development hypothesis and a disaster-case hypothesis. On the other hand, corporate intangible assets play an increasingly important role, i.e. those variables of success which, for their very intangibility, escape easy identification and direct quantification, although they play a leading role in the achievement of company performance. Obviously, monitoring intangible assets does not replace traditional control variables, but supplements them with elements correlated to the growth of the intangible value of the company. Modern management control systems should therefore elaborate both new processes to translate intangible key success factors into performance indicators that guide the activities of the responsibility centres, and new report models that highlight their contribution to competitive, economic and social corporate performance⁹. And finally, the management control processes of global companies must be designed and implemented by carefully studying possible moments of contact or potential overlapping with tools typical of other controls undertaken inside the company (for example, internal auditing, risk management, etc.). In other words, it is necessary to adopt a systemic, integrated view with regard to all the controls that are
Insulin therapy in patients with cystic fibrosis in the pre-diabetes stage: a systematic review

Abstract

Objective: To elucidate whether insulin is effective or not in patients with cystic fibrosis before the diabetes mellitus phase. Data source: The study was performed according to the PRISMA method between August and September 2014, using the PubMed, Embase, Lilacs and SciELO databases. Prospective studies published in English, Portuguese and Spanish from 2002 to 2014, evaluating the effect of insulin on weight parameters, body mass index and pulmonary function in patients with cystic fibrosis, with a mean age of 17.37 years, before the diabetes mellitus phase were included. Data synthesis: Eight articles were identified that included 180 patients undergoing insulin use. Sample size ranged from 4 to 54 patients, with a mean age ranging from 12.4 to 28 years. The type of follow-up, time of insulin use, the dose and implementation schedule were very heterogeneous between studies. Conclusions: There are theoretical reasons to believe that insulin has a beneficial effect in the studied population. The different methods and populations assessed in the studies do not allow us to state whether early insulin therapy should or should not be carried out in patients with cystic fibrosis prior to the diagnosis of diabetes. Therefore, studies with larger samples and insulin use standardization are required.

Introduction

Cystic fibrosis-related diabetes (CFRD) is the most common comorbidity in patients with cystic fibrosis (CF) and affects 20% of adolescents and 40-50% of adults with CF. 1 Glucose disorders in CF patients typically begin with an intermittent postprandial hyperglycemia, followed by oral glucose intolerance without fasting hyperglycemia, and finally diabetes with fasting hyperglycemia. 2,3 Insulin deficiency and the resulting hyperglycemia affect lung disease. 3-5 Insulin is a hormone with anabolic effects, and its deficiency may have a negative clinical impact on patients considered "prediabetic". 6 Increased serum glucose levels (≥144 mg/dL) may have an adverse effect on lung function. Furthermore, increased glucose in the bronchial tree favors the growth of respiratory pathogens. 5 There is also a loss of lean body mass due to the catabolic state caused by insulin deficiency, which leads to a consumption of fat and proteins and also affects pulmonary function. 7 Therefore, insulin deficiency promotes a clinical deterioration in this population, and not only an abnormal glucose metabolism, which may be attenuated by early intervention with insulin. 6 Both diabetes and glucose intolerance reduce the life expectancy of CF patients; insulin is the only treatment that improves clinical outcomes. 8 Early treatment with insulin may reduce the morbidity and mortality of the underlying disease. 9,10 Moreover, since CF patients' classification by the oral glucose tolerance test (OGTT) into intolerant and diabetic patients is based on criteria derived from epidemiological studies in non-CF subjects, doubts arise as to whether these conventional diagnostic limits are appropriate or relevant for CF patients. 11 Thus, the use of conventional glucose evaluation tests in the CF population could underestimate the number of patients with abnormal glucose metabolism; consequently, this group could benefit from early intervention with insulin at glucose levels below those considered abnormal in populations without cystic fibrosis. 12
To our knowledge, there is no systematic review of early initiation of insulin therapy in CF patients. Therefore, the aim of this study was to identify the effects of this intervention and contribute to clinical practice and future studies.

Method

The search process was developed according to the PRISMA method (Preferred Reporting Items for Systematic Reviews and Meta-Analyses). 13 The search was conducted between August and September 2014 in the following electronic databases: PubMed, Lilacs, SciELO, and the Excerpta Medica Database (Embase). The following terms and descriptors (Medical Subject Headings, MeSH) were used for the search: 'cystic fibrosis', 'early insulin', 'insulin', 'body mass index', 'impaired glucose tolerance', and 'therapy', in the combinations: 'cystic fibrosis and early insulin', 'cystic fibrosis and insulin and body mass index', and 'impaired glucose tolerance and cystic fibrosis and insulin and therapy'. Studies published between 2002 and 2014 were identified through electronic search by two independent reviewers who evaluated the titles and abstracts of the articles. References of selected articles were also reviewed in order to identify studies not found in the surveyed databases. Discrepancies between reviewers were discussed and resolved by consensus. The date of the first search was August 28, 2014, and of the last, September 22, 2014. Inclusion criteria were: (I) original articles; (II) prospective studies; (III) articles in English, Spanish or Portuguese; (IV) cystic fibrosis diagnosis; (V) glucose disorders; (VI) insulin use (regardless of type, dose, or implementation schedule); (VII) evaluation of the results in clinical parameters (weight, height or body mass index, and pulmonary function). A glucose disorder was considered to be an OGTT not characterized as diabetes by the American Diabetes Association (ADA) criteria together with OGTT glucose values above 140 mg/dL at any time except at baseline and 120 min; or a random postprandial glucose above 200 mg/dL; or a diagnosis of impaired glucose tolerance (IGT) by ADA criteria. 14 Exclusion criteria were: (I) non-original articles, such as letters, conference proceedings and editorials; (II) studies that evaluated only CFRD, without other types of glucose disorders. The extracted data were: study design; sample size; population characteristics; follow-up time; type of insulin therapy (including dose and regimen used); and effects on weight, body mass index, and lung function.
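The glucose-disorder definition above combines three routes into inclusion; a rough R sketch of that logic follows (thresholds in mg/dL). It is an illustration of the review's wording, not a clinical tool, and the random-postprandial route is omitted because it is not an OGTT value.

    # glucose: vector of OGTT values; minutes: matching sampling times.
    classify_ogtt <- function(glucose, minutes) {
      two_h <- glucose[minutes == 120]
      mid   <- glucose[!(minutes %in% c(0, 120))]
      if (two_h >= 200)        "diabetes by ADA criteria (outside the definition)"
      else if (two_h >= 140)   "IGT by ADA criteria (included)"
      else if (any(mid > 140)) "glucose disorder (included)"
      else                     "normal glucose tolerance"
    }
    classify_ogtt(c(88, 156, 131), c(0, 60, 120))  # "glucose disorder (included)"

Note that this mirrors the inclusion wording only loosely; the full ADA definitions also involve fasting values.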
Results

The initial search identified 508 articles, of which 111 were selected based on titles and abstracts. References of the selected papers were also reviewed, and an additional study was included. Of these, 80 were identified as duplicates and removed; thus, 32 articles were read in full, of which 24 were excluded by the exclusion criteria. The final selection consisted of eight articles (Fig. 1). Characteristics of the included studies are summarized in Table 1. Sample size of the included studies ranged from 4 to 54 patients, with mean age from 12.4 to 28 years. Investigators and subjects were not blind to the treatment assignment in any of the studies. The type of follow-up, time of insulin use, dose, and implementation schedule were very heterogeneous, as can be seen in Table 1. Three studies used control groups to compare the effects of insulin. Moran et al. 15 selected corresponding controls who underwent other types of intervention (repaglinide or placebo), while Minicucci et al. 16 used controls with IGT and Koloušková et al. 17 used controls with a normal OGTT by ADA criteria (NGT). In these last two studies, controls did not undergo pharmacological interventions. Inclusion criteria for the studies were very heterogeneous. Mozzillo et al. 18 used the following inclusion criteria: no use of systemic corticosteroids and no exacerbation of lung disease. Minicucci et al. 16 included patients with at least one of the following conditions: (I) body mass index (BMI) < 10th percentile (p10); (II) loss of one BMI percentile for age and sex in the previous year; (III) forced expiratory volume in one second (FEV1) ≤ 80% of predicted; and (IV) a decrease in FEV1 ≥ 10% in the previous year. Lung function deterioration and weight loss were also criteria for inclusion of subjects in the study by Dobson et al. 6 In contrast, Moran et al. 15 chose to intervene in a more clinically stable group of patients and used the following inclusion criteria: (I) end of linear growth; (II) weight stability in the last three months; (III) absence of acute infection in the last two months. Exclusion criteria for this study were: (I) use of oral or intravenous corticosteroids in the last six months; (II) fasting hyperglycemia in the previous year; (III) liver dysfunction; (IV) pregnancy. Early insulin deficiency, diagnosed by intravenous glucose tolerance test (IVGTT) and/or high levels of glucose in OGTT, was used as an inclusion criterion in the studies by Koloušková et al. 17 and Hameed et al. 19 Five studies evaluated the effects of insulin on the BMI of CF patients. 15-18,20 Bizzarri et al., 20 Moran et al., 15 and Koloušková et al. 17 demonstrated a significant increase in BMI after insulin intervention. Mozzillo et al. 18 found a significant increase in BMI only in patients with an initial BMI Z-score < -1. Although Moran et al. 15 also assessed weight, they did not notice a significant increase in this parameter. Dobson et al., 6 Drummond et al., 12 and Hameed et al. 19 chose to assess body weight. Hameed et al. 19 and Drummond et al. 12 found significant weight gain after insulin intervention, while Dobson et al. 6 only suggested this trend, as their data were not statistically evaluated. FEV1 was the only clinical parameter assessed by all studies. Bizzarri et al., 20 Mozzillo et al., 18 and Hameed et al. 19 found a significant increase in FEV1 after the use of insulin. Koloušková et al. 17 found that, at the end of follow-up, the intervention group had higher FEV1 compared to the control group. Dobson et al. 6 showed an apparent increase in this parameter with the use of insulin, although theirs was only a case report. In the studies by Moran et al., 15 Drummond et al., 12 and Minicucci et al., 16 FEV1 remained unchanged after the intervention. However, Drummond et al. 12 separately evaluated only patients diagnosed with IGT and found a significant reduction in the rate of FEV1 decline in patients using insulin. Hameed et al. 19 separately evaluated only the early insulin-deficient patients (excluding patients with CFRD) and also found in this group a significant increase in FEV1. Moran et al. 15 reported an apparently lower decline in FEV1 in patients using insulin compared to placebo, but this stability was not statistically significant. Mozzillo et al. 18 found a reduced number of pulmonary exacerbations (as compared to the previous year), while Bizzarri et al. 20 found no changes in the number of hospitalizations for exacerbations. The four patients evaluated by Dobson et al. 6 showed an increase in forced vital capacity (FVC) with the use of insulin. Hameed et al. 19 found a significant improvement in FVC after the intervention.
Hameed et al. 19 found a significant improvement in FVC after the intervention. In the results reported by Bizzarri et al., 20 there was no significant change in glycosylated hemoglobin (HbA1c) levels after insulin, whereas the group of patients evaluated by Minicucci et al. 16 showed a significant reduction in HbA1c with the use of insulin. Frequent episodes of hypoglycemia were reported only by Drummond et al. 12 In the other studies cited in this review, the adverse effects of insulin therapy were infrequent and well tolerated.

Discussion

There are few published papers on the use of insulin in patients with cystic fibrosis prior to overt diabetes. Most are limited to a single center and mainly to adults. To our knowledge, this is the first systematic review to examine the benefits and risks of insulin use in CF patients before the diagnosis of diabetes.

Glucose intolerance indicates the presence of insulin deficiency, which leads to protein catabolism and a negative clinical/nutritional impact. Therefore, early insulin treatment can have a positive effect in CF patients in the prediabetic phase. 20 The study results are consistent with the pathophysiology of the progression to CFRD. Initially, there is an insulin deficiency that generates protein catabolism and glycemic excursions, with consequent difficulty in gaining and maintaining weight and worsening of lung function. Therefore, the introduction of insulin at this stage would likely prevent the catabolic effects of insulin deficiency.

Current data are clear about insulin treatment for patients with CFRD with or without fasting hyperglycemia, 14 but there are no consistent results to determine whether this treatment should also be used for those with other glucose disorders, as it is not well defined what constitutes a glucose disorder in this specific population. Moreover, there is doubt whether the cut-off values for the diagnosis of CFRD and IGT are valid for CF, because they are based on populations without the disease. A negative impact of the prediabetic phase on the nutritional status and pulmonary function of CF patients has been described, 6,21 suggesting that insulin should be started before the diagnosis of CFRD by the currently available methods, as insulin has anabolic effects and side effects (mainly hypoglycemia) are infrequent in these patients. The only study reporting frequent episodes of hypoglycemia was that by Drummond et al., 12 but it included several insulin therapy regimens in patients with CFRD, IGT, and NGT, and there was no description of the insulin type or dose instruction for each group, which may be related to the difference in the frequency of hypoglycemia seen between studies.

Although most studies had a small sample size and used more than one type of insulin, only Moran et al. 15 and Minicucci et al. 16 reported no positive effects with early insulin therapy; these studies have some peculiarities, described below, suggesting that early initiation of insulin therapy in CF patients could still be beneficial. Minicucci et al. 16 reported no clinical improvement with the use of glargine in CF patients with IGT (ADA criteria). The authors assumed that participation in the study made patients more aware of their altered glucose metabolism, leading to better nutritional behavior. Most other studies 6,17---20 showed positive results, but the insulin doses used were higher.
Mozzillo et al. 18 found that, after 12 months of insulin therapy, the BMI curve (Centers for Disease Control --- CDC) improved in patients with baseline Z-score below −1, in line with the study by Koloušková et al., 17 which reported improvement in BMI regardless of the baseline Z-score. Koloušková et al. 17 demonstrated that insulin administration has positive effects on lean body mass due to reversal of protein catabolism. However, the control group also showed a tendency towards that improvement, probably due to better nutritional orientation, as in both groups there was a recommendation to increase caloric intake up to 120---150% of daily needs. According to the authors, the results support the concept that insulin deficiency, assessed in this study using IVGTT and OGTT, leads to clinical deterioration in CF patients, and that insulin therapy could be recommended earlier than is currently accepted (CFRD).

Dobson et al., 6 Bizzarri et al., 20 and Hameed et al. 19 found weight improvement in patients using insulin. Moran et al. 15 reported that insulin reverses weight loss in patients with CFRD without fasting hyperglycemia, but not in patients with severe IGT. However, that study population consisted of adults, and the response seems to be better when individuals are in childhood or adolescence. Moreover, the group in question had severe IGT (OGTT ≥ 200 mg/dL at any time and 120-min values between 180 and 199 mg/dL), whereas in other studies patients were selected before the severe IGT phase, which may explain the differences found in the results.

Bizzarri et al. 20 found improved FEV1 with no reduction in pulmonary exacerbations. Mozzillo et al. 18 reported increased FEV1 and reduced pulmonary exacerbations with the use of insulin. Hameed et al. 19 demonstrated a fall in FEV1 before starting treatment and improvement after the introduction of insulin. Koloušková et al. 17 and Drummond et al. 12 assessed FEV1 compared to untreated subjects and identified a decline in lung function in the control group, which was not seen in the insulin-treated group. Moran et al. 15 and Minicucci et al. 16 found no improvement in FEV1 with early use of insulin. However, in the study by Minicucci et al. 16 there was a 10% decrease in FEV1 in the year prior to the intervention, which may be a bias, because even CFRD patients do not show such a decline in this parameter. Koloušková et al. 17 and Drummond et al., 12 who had larger case series, found that FEV1 was lower in the untreated group compared to those treated with insulin, which supports the results described by Dobson et al., 6 Bizzarri et al., 20 and Hameed et al. 19

Dobson et al. 6 suggest an improvement in pulmonary function (FEV1 and FVC) with the use of insulin. However, their sample size was small (n = 4) and selected by convenience, there was no control group, the reevaluation occurred after a short period of time (3 months) without insulin standardization (more than one type of insulin was used), and there was no statistical analysis, probably due to the sample size. Hameed et al. 19 assessed height separately and found no differences after insulin therapy initiation. This is probably explained by the findings of another study by Bizzarri et al., 22 who suggest that by the time CFRD develops there is already substantial and irreversible impairment of height, because most patients with CF develop diabetes at puberty, the same time that the growth spurt occurs. 22
The evidence, favorable or unfavorable, regarding the use of insulin before overt diabetes in CF patients remains inconclusive, with little knowledge about long-term results. There are few prospective studies on the use of insulin before overt diabetes in patients with CF, and they include populations with different types of glucose disorders, without delimitation of age group, follow-up time, or insulin type, dosage, and implementation schedule. Only two studies 15,16 were multicenter and only three were controlled. 15---17 Moreover, it was not possible to assess the effect of important variables, such as nutritional routine, or to apply a standard definition of glycemic disorders. When analyzing the results of the studies regarding the anabolic effects of insulin, there are theoretical reasons to believe that insulin has a beneficial effect in the population studied. However, adding a diabetes treatment to an already complex multiple-drug regimen is complicated, which makes this decision even more controversial. Multicenter randomized trials in pediatric patients, with adequate nutritional support and a standard type and dose of insulin (sufficient to promote anabolism), are needed to determine whether treatment is justified. It should be kept in mind that placebo-controlled trials are difficult to perform because insulin is an injectable medicine.

Conclusion

The different methods and case series used in the studies do not allow affirming that early insulin therapy should be applied in patients with CF and glucose disorders. To this end, studies with larger samples, diet standardization, defined age groups, and uniformity of insulin use are needed.

Funding

This study did not receive funding. One of the coauthors receives a doctoral fellowship from Fapesp (Fundação de Amparo à Pesquisa do Estado de São Paulo, process no. 2014/00611-2).
FixNCut: single-cell genomics through reversible tissue fixation and dissociation

The use of single-cell technologies for clinical applications requires disconnecting sampling from downstream processing steps. Early sample preservation can further increase robustness and reproducibility by avoiding artifacts introduced during specimen handling. We present FixNCut, a methodology for the reversible fixation of tissue followed by dissociation that overcomes current limitations. We applied FixNCut to human and mouse tissues to demonstrate the preservation of RNA integrity, sequencing library complexity, and cellular composition, while diminishing stress-related artifacts. Besides single-cell RNA sequencing, FixNCut is compatible with multiple single-cell and spatial technologies, making it a versatile tool for robust and flexible study designs.

Supplementary Information: The online version contains supplementary material available at 10.1186/s13059-024-03219-5.

Introduction

Single-cell sequencing has revolutionized our understanding of the complexity of life, allowing researchers to study tissues, organs, and organisms with unprecedented resolution [1]. However, most single-cell techniques are designed for freshly prepared specimens, which can present logistical challenges for decentralized study designs that require disconnecting the time and site of sampling from downstream processing steps. In this regard, preservation methods have been developed that enable sample collection and storage, expanding the applications of single-cell sequencing to personalized medicine and collaborative research. In addition to facilitating flexible study designs, early sample preservation can improve robustness and reproducibility by reducing artifacts introduced during sample handling, such as differences in lab personnel skills, library preparation workflows, and sequencing technologies. Furthermore, preservation methods can mitigate cellular stress caused by external factors, such as sample collection, transport, storage, and downstream processing steps involving mechanical or enzymatic dissociation, which can alter transcriptomic profiles [2][3][4]. Such cellular stress can impact sample quality and confound downstream analyses by inducing early stress-response genes and altering the natural state of the cell. Therefore, early sample preservation can enhance the quality and reliability of single-cell sequencing studies, while enabling flexible and decentralized study designs.
Dissociation-induced artifacts can be mitigated by the use of cold-active proteases at low temperatures (6 °C) to decrease transcriptional activity and the expression of heat shock and stress-response genes. However, digestion at low temperatures can result in changes in cell type abundance due to incomplete tissue dissociation [2]. Alternatively, inhibitors of transcription and translation have been shown to reduce gene expression artifacts by minimizing the impact of dissociation-induced stress [5]. To overcome the challenges associated with sample logistics in single-cell studies, cryopreservation has been established as a storage method that preserves transcriptional profiles of cells in suspension and solid tissues [4]. However, cryopreservation can reduce cell viability and induce considerable changes in sample composition, such as the depletion of epithelial cells, myeloid suppressor cells, and neutrophils [3,[6][7][8]. Alternatively, cells can be fixed using alcohols, such as ethanol or methanol, but this can cause structural damage due to dehydration, protein denaturation, and precipitation, potentially affecting transcriptomic profiles. Nevertheless, alcohol-based fixation has been shown to maintain cell composition better than cryopreservation in specific contexts [3,9]. More recently, ACME (ACetic-MEthanol) fixation has been developed as a solution to simultaneously dissociate and fix tissues, resulting in high cellular and RNA integrity [10]. Although ACME has been demonstrated to be effective when combined with cryopreservation for cnidarian samples, its value for sample preparation across other species, including mouse and human, remains to be shown. Cross-linking fixatives, such as formaldehyde and paraformaldehyde (PFA), are used with specialized single-cell assays, but are incompatible with commonly applied high-throughput single-cell RNA sequencing (scRNA-seq) protocols that measure the transcriptome by standard polyA-based expression capture (3′ end sequencing). Moreover, formaldehyde-based fixation generally impedes the application of single-cell multiome analysis, cellular indexing of transcriptomes and epitopes sequencing (CITE-seq) [11,12], or immune repertoire profiling, which relies on 5′ end sequencing. Finally, post hoc computational tools such as machine learning algorithms have been developed to reduce or remove dissociation-induced artifacts [13]. For a more comprehensive description of methods to mitigate dissociation-induced artifacts, we recommend the review by Machado et al. [14]. However, due to the often larger biological compared to technical variability and the fact that not all cell types within a sample suffer the same stress, it is difficult to generalize bias correction across cells and samples [15].
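As a concrete illustration of such a post hoc approach, dissociation stress can be quantified per cell by scoring a published stress gene signature and flagging high-scoring cells for downstream handling. The sketch below uses Scanpy's score_genes; the input file, gene list, and cutoff are illustrative placeholders, not values taken from this paper.

```python
# Minimal sketch: flag dissociation-stressed cells by signature scoring.
# The gene list and cutoff are illustrative placeholders only.
import scanpy as sc

adata = sc.read_h5ad("sample.h5ad")  # hypothetical input file

# A few mouse immediate-early/stress genes often used in such signatures;
# substitute a published signature of choice (e.g., from refs [2,3,22]).
stress_genes = ["Fos", "Fosb", "Jun", "Junb", "Hspa1a", "Hspa1b"]

sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.tl.score_genes(adata, gene_list=stress_genes, score_name="stress_score")

# Flag cells in the top decile of the stress score (arbitrary cutoff).
cutoff = adata.obs["stress_score"].quantile(0.9)
adata.obs["stressed"] = adata.obs["stress_score"] > cutoff
print(adata.obs["stressed"].value_counts())
```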
To overcome the challenges discussed above, it was crucial to develop a standardized protocol for sample collection, cell stabilization, and tissue processing that allows for fixation prior to sample processing. We present a workflow, called FixNCut, which uses Lomant's Reagent (dithiobis(succinimidyl propionate); DSP), a reversible crosslinking fixative, to enable tissue fixation prior to sample digestion steps. This order of events prevents changes in gene expression during digestion and processing, while disconnecting sampling time and location from subsequent processing sites. DSP is a homo-bifunctional N-hydroxysuccinimide ester (NHS ester) crosslinker that contains an amine-reactive NHS ester at each end of an 8-carbon spacer arm. NHS esters react with primary amines at pH 7-9 to form stable amide bonds and release N-hydroxysuccinimide. Proteins typically have multiple primary amines in the side chains of lysine residues and at the N-terminus of each polypeptide that can serve as targets for NHS ester crosslinking reagents. DSP is lipophilic and membrane-permeable, making it applicable for intracellular and intramembrane conjugation. However, it is insoluble in water and must be dissolved in an organic solvent before being added to the reaction mixture. The presence of a disulfide bond in the center of DSP makes its crosslinking reversible via reducing agents such as DTT, which is present in most reverse transcription buffers of single-cell sequencing applications (e.g., 10x Genomics assays). To date, DSP has only been successfully used to preserve cells in suspension (cell culture or PBMCs) for single-cell sequencing in applications such as CLint-Seq, nanofluidic systems, or phosphoprotein-based cell selection [16][17][18], but it has not been employed in tissues prior to dissociation.

Here, we demonstrate the versatility of the FixNCut protocol to overcome key limitations in generating single-cell data across multiple tissues. We provide evidence that FixNCut preserves RNA integrity, library complexity, and cellular composition, while allowing for cell labeling or sample hashing prior to single-cell analysis. To illustrate its potential, we applied FixNCut to fix and digest mouse lung and colon tissue, as well as human colon biopsies from inflammatory bowel disease (IBD) patients, demonstrating its clinical utility. Additionally, we show that DSP fixation can be used in the context of spatial-omics, specifically multiplexed tissue imaging for spatial proteomics (i.e., Phenocycler).

Reversible fixation of human cells in suspension

Extending previous studies using DSP to preserve cell lines for RNA sequencing [16], we initially confirmed its applicability for single-cell analysis of cells in suspension (human peripheral blood mononuclear cells; PBMCs) and microfluidic systems (10x Genomics Chromium Controller), before combining fixation and dissociation of complex solid tissues. To this end, we compared cell morphology, RNA integrity, and reverse transcription efficiency of fresh and DSP-fixed PBMCs. Fixed cells showed highly similar morphology to fresh PBMCs in bright-field microscopy, with no evident changes in cell phenotypes or sample clumping after fixation (Fig. S1a).
Next, we captured and barcoded single cells from both fresh and DSP-fixed samples using the Next GEM Single Cell 3′ Reagent Kits v3.1 from 10x Genomics. Bioanalyzer profiles of the amplified cDNA from both samples were virtually identical, demonstrating that DSP fixation does not affect RNA integrity or reverse transcription performance (Fig. S1b,c). After sequencing, we confidently mapped over 80% of the reads from both sequencing libraries to the human reference genome, with over 50% exonic reads usable for quantifying gene expression levels (Fig. 1a). We observed a comparable correlation between the number of detected genes or unique molecular identifiers (UMIs) and sequencing depth for fresh and fixed samples (95% CI, 3.65e−05 ± 1.11e−05 and 0.44 ± 0.06 vs. 3.56e−05 ± 1.14e−05 and 0.35 ± 0.06, respectively), indicating that DSP fixation conserves library complexity (Fig. 1b). Briefly, we captured a total of 22,481 genes in both conditions, together with 1667 and 1482 genes specific to the fresh and fixed samples, respectively. We confirmed this observation at the single-cell level, where we found a similar relationship between sequencing depth and the number of detected UMIs or genes per cell (Fig. S1d). In line with this, we observed similar gene counts in single blood cells in fresh and fixed samples (Fig. 1c). After filtering out low-quality cells, we found a similar distribution of the main quality control (QC) metrics between both protocols (Fig. S1e), except for a few specific cell subpopulations (Fig. S1f). These results suggest that DSP fixation conserves the ability to detect gene transcripts in single cells compared to fresh samples in scRNA-seq experiments.

To further assess potential technical variation between protocols, we identified highly variable genes (HVGs) independently in fixed and fresh PBMCs. We found that 70% of HVGs were shared between the two protocols, indicating a conserved representation of the transcriptome and suitability for joint downstream processing (Fig. S1g). Additionally, when we examined the variation captured by the main principal components (PCs) and displayed single-cell transcriptomes in two dimensions (uniform manifold approximation and projection; UMAP), we did not observe any notable outliers due to the sampling protocol (Fig. 1d). Cells clustered together based on biological differences rather than preparation protocol, suggesting that fixed and fresh cells have similar capacity for cellular phenotyping. The pseudo-bulk gene expression profiles of fixed and fresh samples were highly correlated (R² = 0.99, p < 2.2e−16) (Fig. 1e), indicating that DSP fixation does not alter the expression of specific genes. This was further confirmed at the cell population level (Fig. S1h). Moreover, biological processes such as apoptosis, hypoxia, reactive oxygen species (ROS), cell cycle (G2/M checkpoint), unfolded protein response (UPR), and inflammation hallmarks remained unchanged across libraries (Fig. 1f).
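Comparisons like the HVG overlap and pseudo-bulk correlation reported here can be reproduced with a few lines of Scanpy/NumPy. The sketch below assumes two pre-filtered AnnData objects; the file paths, variable names, and the 2000-gene HVG setting are our assumptions for illustration, not parameters stated in the paper.

```python
# Sketch: HVG overlap and pseudo-bulk correlation between two protocols.
import numpy as np
import scanpy as sc
from scipy.stats import pearsonr

fresh = sc.read_h5ad("fresh.h5ad")   # hypothetical, QC-filtered inputs
fixed = sc.read_h5ad("fixed.h5ad")

for ad in (fresh, fixed):
    sc.pp.normalize_total(ad, target_sum=1e4)
    sc.pp.log1p(ad)
    sc.pp.highly_variable_genes(ad, n_top_genes=2000)

hvg_fresh = set(fresh.var_names[fresh.var["highly_variable"]])
hvg_fixed = set(fixed.var_names[fixed.var["highly_variable"]])
print(f"HVG overlap: {len(hvg_fresh & hvg_fixed) / 2000:.0%}")

# Pseudo-bulk: mean log-normalized expression of shared genes.
shared = fresh.var_names.intersection(fixed.var_names)
mu_fresh = np.asarray(fresh[:, shared].X.mean(axis=0)).ravel()
mu_fixed = np.asarray(fixed[:, shared].X.mean(axis=0)).ravel()
r, p = pearsonr(mu_fresh, mu_fixed)
print(f"R^2 = {r**2:.2f}, p = {p:.1e}")
```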
Next, we performed a joint analysis of 17,483 fresh and fixed cells, which clustered into 19 distinct cell populations (Fig. 1g; Fig. S1i). All cell types and states were found across both protocols at similar proportions, except for classical monocytes and NK cells, which showed small but significant differences, being slightly increased in fresh and fixed samples, respectively (Fig. 1h). Fixation did not affect differential expression analysis (DEA), with only four upregulated genes, representing hemoglobin subunits (HBA1, HBA2, and HBB) and a mitochondrial gene (MT-ND4L) (Fig. 1i; Additional file 2: Table 1). These genes were consistently found across all cell populations (Additional file 3: Table 2), a phenomenon also observed when performing digestion protocols at low temperatures [3]. The FixNCut protocol may prevent erythrocyte lysis, leading to their co-encapsulation with nucleated blood cells and the detection of erythrocyte-specific transcripts.

Importantly, we observed a reduction in technical artifacts introduced during sample processing prior to single-cell experiments upon fixation. Specifically, gene expression alterations previously defined to correlate with ex vivo PBMC handling [19] showed a significant reduction upon fixation (p < 2.2e−16) (Fig. 1j). Moreover, a sampling-time gene signature obtained from single-cell benchmarking studies [4] also showed a significant reduction in the fixed PBMCs (p < 2.2e−16) (Fig. 1k). Interestingly, T lymphocytes appeared to be particularly affected (p < 0.0001; except for gdT cells, p < 0.05), showing the strongest protection from sampling artifacts in fixed cells (Fig. S1j). DSP also protected against the general reduction of gene expression activity previously reported during PBMC sample processing [4]. Notably, more than 30% of genes from the sampling-time signature were also detected as enriched in the fresh PBMCs (Fig. S1k). These results suggest that fixed PBMCs have comparable cellular composition and gene expression profiles to freshly prepared samples, with reduced gene expression artifacts introduced during sample preparation.

FixNCut protocol applied on mouse solid tissues

Beyond the benefits of cell fixation in standardizing sample processing and preserving gene expression profiles of cells in suspension, the FixNCut protocol was specifically designed for solid tissues. Specifically, it allows for fixation and subsequent digestion, which is particularly advantageous for complex and logistically challenging study designs, such as clinical trials. Here, sampling artifacts, including biases in gene expression and cell type composition, are frequently observed in fragile solid tissue types. For example, differentiated colonic epithelial cells (e.g., secretory or absorptive cells), tightly connected adult neurons, or processing-sensitive adipocytes are more susceptible to cell damage and death as a result of common tissue dissociation protocols [20,21]. Fixation prior to digestion using the FixNCut protocol can reduce these artifacts. Thus, we next evaluated the effectiveness of the FixNCut protocol with subsequent scRNA-seq readout in different solid mouse tissues before extending its application to challenging human patient samples, such as tissue biopsies.

Fresh mouse lung samples were minced, mixed, and split into two aliquots, one processed fresh and the other fixed using the FixNCut protocol, with subsequent 30-min tissue digestion using Liberase TL. The fixed sample showed a slight decrease in cell size and an increase in DAPI+ cells, but overall, cell morphology was similar to the fresh sample (Fig. S2a). Single-cell encapsulation and scRNA-seq (10x Genomics, 3′ RNA v3.1) showed comparable proportions of reads mapped to the mouse reference genome and exonic genomic regions for both fresh and fixed samples (Fig. 2a).
We further observed a similar correlation between the number of detected genes or UMIs and sequencing depth in fresh and fixed samples (95% CI, 2.31e−05 ± 8.03e−06 and 0.40 ± 0.06 vs. 2.64e−05 ± 7.86e−06 and 0.40 ± 0.06, respectively) (Fig. 2b). At the cell level, we confirmed the similar complexity of fixed libraries, as reflected by the number of detected UMIs and genes (Fig. S2b). Genes identified in both libraries (n = 20,684) were mostly protein-coding genes (76%). Conversely, genes exclusively captured in either fixed (n = 1383) or fresh (n = 1157) samples were largely non-coding RNA genes, specifically lncRNAs (52% vs 47%) (Fig. S2c). We further observed that more genes were captured in the fixed samples after accumulating information from a small number of individual lung cells (Fig. 2c). Importantly, after filtering out low-quality cells, the main QC metrics in fixed samples showed consistent distributions across all characterized cell types (Fig. S2d,e), suggesting that the FixNCut protocol preserves the capacity for scRNA-seq profiling after fixation and digestion.

An overlap of almost 80% of sample-specific HVGs was found when comparing the fresh and FixNCut protocols (Fig. S2f). The absence of batch effects linked to protocols was demonstrated by the PCA and UMAP representations (Fig. 2d), indicating bias-free transcriptome profiles after cell fixation and digestion. Highly comparable profiles of mean gene expression values were observed between fresh and fixed mouse lung samples (R² > 0.99, p < 2.2e−16) (Fig. 2e), a finding also confirmed at the population level (Fig. S2g). Moreover, the high correlation across gene programs supported the absence of alterations in major biological processes (Fig. 2f).

We then performed a joint analysis of all 19,606 mouse lung cells, which were segregated into 20 distinct cell populations, encompassing both lung and tissue-resident immune cells (Fig. 2g; Fig. S2h). All characterized cell types were detected in both fresh and fixed samples, with slight variability in cell type proportions between the protocols. The fixed protocol showed an improved representation of tightly connected epithelial and endothelial cell types, while immune cells (B and T cells, monocytes, monocyte-derived DCs, and neutrophils) were proportionally increased in the fresh sample (Fig. 2h). To validate preserved gene expression profiles in fixed tissues, we performed DEA between the two protocols. We observed upregulation of genes related to pneumocytes (Sftpc), the myeloid enhancer-binding protein (Cebpb), and endothelial cells promoting cell migration (Cxcl2) for the fixed protocol, which could be largely explained by the enrichment of these populations upon fixation. In contrast, fresh samples were enriched in genes related to inflammatory and immune processes (Ms4a4b and Trbc2), in accordance with the increased proportion of recovered immune cells (Fig. 2i; Additional file 4: Table 3).
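Protocol-versus-protocol DEA of this kind can be sketched in Scanpy as shown below. The FDR < 0.05 and |log2FC| > 1 thresholds follow the conventions stated for Fig. 6i later in the paper, while the object and column names are our illustrative assumptions.

```python
# Sketch: differential expression between protocols (fixed vs fresh).
import scanpy as sc

adata = sc.read_h5ad("lung_merged.h5ad")  # hypothetical merged object
# assumes adata.obs["protocol"] in {"fresh", "fixed"} and log-normalized X

sc.tl.rank_genes_groups(adata, groupby="protocol", groups=["fixed"],
                        reference="fresh", method="wilcoxon")

df = sc.get.rank_genes_groups_df(adata, group="fixed")
hits = df[(df["pvals_adj"] < 0.05) & (df["logfoldchanges"].abs() > 1)]
print(hits.sort_values("logfoldchanges", ascending=False).head(10))
```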
At the cellular level, we found a uniform enrichment of stress-related genes in non-immune populations from the fresh sample, while the fixed sample showed this enrichment in immune populations (Additional file 5: Table 4). Gene set enrichment analysis (GSEA) stratified by cell type revealed that freshly prepared endothelial cells were enriched in ROS, apoptosis, and cellular response to external stimuli, whereas the opposite patterns were observed upon fixation (Additional file 6: Table 5). Overall, these results suggest the global conservation of library complexity and quality, along with the inclusion of tightly connected, challenging-to-isolate cell types in fixed mouse lung samples.

We further evaluated the performance of the FixNCut protocol in a different challenging solid tissue context. To do so, we minced and mixed mouse colon samples that were split and subjected to scRNA-seq after digestion of either fresh or fixed tissues. Our results indicate that FixNCut provides several benefits, including improved transcriptome capture accuracy, as evidenced by a higher number of total reads mapped to the reference and a higher exonic fraction (Fig. 3a). Additionally, the fixed sample exhibited a higher, though non-significant, library complexity based on the total number of detected genes (95% CI, 8.81e−05 ± 2.83e−05 vs. 9.03e−05 ± 3.16e−05), coupled with an increased number of total UMIs at deeper sequencing (95% CI, 0.71 ± 0.05 vs. 0.73 ± 0.07) (Fig. 3b), with fixed cells showing increased numbers of detected UMIs and genes (Fig. S3a). Genes identified in both libraries (n = 18,314) were mostly protein-coding genes (81%). Conversely, genes exclusively captured in fixed samples (n = 2225) compared to those from fresh (n = 1011) showed a larger percentage of protein-coding genes (45% vs 35%) coupled with a smaller fraction of lncRNAs (44% vs 53%) (Fig. S3b), indicating that FixNCut enhances gene capture efficiency, potentially allowing more fine-grained cell phenotyping after sample fixation. Notably, the cumulative gene count was greater for the fixed colon, particularly when considering a larger number of sampled cells (Fig. 3c), and we observed improved QC metrics for the FixNCut sample after filtering out low-quality cells, which held true across all cell populations (Fig. S3c,d). The overlap of HVGs between the fresh and fixed colon samples was slightly lower than that observed in lung tissues (> 60%) (Fig. S3e). Further, we identified noticeable differences in the transcriptomic profile, as demonstrated in both PCA and UMAP representations (Fig. 3d), which were attributed to the aforementioned improvements in library complexity after DSP fixation. Given that the overall cellular transcriptomic profile remained intact, confirmed by a high correlation in mean gene expression values between the two protocols (R² = 0.96, p < 2.2e−16) (Fig. 3e, f), we applied sample integration to collectively annotate cells and to address technical differences at the cell type level (Fig. 3d). Notably, cell populations that exhibited a diminished correlation between the fresh and fixed samples coincided with cell types that were specifically enriched in the fixed sample (Fig. S3f).
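The cumulative gene count comparison above (Fig. 3c) is essentially a rarefaction curve: how many distinct genes are detected as cells are randomly accumulated. One possible implementation is sketched below; the input files, cell count, and function are our illustrative assumptions rather than the paper's actual code.

```python
# Sketch: cumulative number of detected genes over randomly sampled cells.
import numpy as np
import scanpy as sc

def cumulative_genes(adata, n_cells=2000, seed=0):
    """Genes detected (count > 0) within the first k randomly ordered cells."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(adata.n_obs)[:n_cells]
    X = adata.X[order]
    X = X.toarray() if hasattr(X, "toarray") else np.asarray(X)
    seen_by_cell_k = np.cumsum(X > 0, axis=0) > 0  # gene seen by cell k?
    return seen_by_cell_k.sum(axis=1)              # curve of length n_cells

fresh = sc.read_h5ad("colon_fresh.h5ad")           # hypothetical inputs
fixed = sc.read_h5ad("colon_fixed.h5ad")
curve_fresh = cumulative_genes(fresh)
curve_fixed = cumulative_genes(fixed)
print(curve_fresh[-1], curve_fixed[-1])            # total genes detected
```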
We captured a total of 14,387 cells that were clustered into 16 cell populations, representing both immune and colon-epithelium cells (Fig. 3g; Fig. S3g). All cell types were detected in both conditions, but we observed a clear shift in cell type composition, with an enrichment of sensitive epithelial and stromal cells in the fixed sample (Fig. 3h). Differential expression analysis revealed a higher representation of ribosomal protein and mitochondrial genes in the fixed sample, mostly explained by the larger capture of the actively cycling epithelial cell population known as transit-amplifying (TA) cells (Fig. 3i; Additional file 4: Table 3). In line with this, both epithelial and stromal populations were also enriched in mitochondrial and ribosomal protein genes (Additional file 7: Table 6). Additionally, GSEA by cell population showed enrichment of ribosome-dependent and metabolic process pathways in fixed cells, especially in the sensitive populations (Additional file 8: Table 7). Together, our results demonstrate that the FixNCut protocol enhances library complexity and quality metrics, while also capturing fragile epithelial and stromal cell populations from delicate tissues, such as the colon. Thus, DSP-based fixation preserves the integrity of tightly connected cell types that are otherwise difficult to isolate for single-cell experiments, resulting in an improved representation of cell types and states in these solid tissues.

Long-term storage of fixed tissues

Conducting multi-center clinical studies can be challenging due to centralized data production and the need for storage and shipment. To address this challenge, we evaluated the combination of the FixNCut protocol with cryopreservation (cryo; 90% FBS and 10% DMSO; see the "Methods" section) to allow for the separation of sampling time and location from downstream data generation, while preserving sample composition and gene expression profiles. To test this, fresh mouse lung samples were minced, mixed, and split into three pools for fixation-only, cryo-only, and fixation+cryo sample processing. After single-cell capture and sequencing (10x Genomics, 3′ RNA v3.1), all three libraries showed comparable statistics of mapped and exonic reads across conditions, indicating successful preservation of the transcriptome (Fig. 4a). We also observed a similar relationship between the number of detected genes and the sequencing depth for all three protocols (95% CI, 2.88e−05 ± 8.64e−06 cryo vs. 2.37e−05 ± 8.07e−06 fixed/cryo), although the number of detected UMIs was statistically significantly reduced in the fixed/cryo sample compared to cryo-only (95% CI, 0.46 ± 0.06 cryo vs. 0.24 ± 0.06 fixed/cryo) (Fig. 4b). This was consistent considering the detected UMIs and genes across individual cells (Fig. S4a), but hardly noticeable when accumulating gene counts across multiple cells (Fig. 4c). Genes identified across all libraries (n = 19,509) were mostly protein-coding genes (78%). Conversely, genes exclusively captured in fixed/cryo samples compared to those from cryo-only showed a similar percentage of protein-coding genes (34% vs 36%) and comparable fractions of lncRNAs (52% vs 50%) (Fig. S4b).

After removing low-quality cells, we found highly comparable distributions for the main QC metrics across all samples. However, we noticed a small increase in the percentage of mitochondrial gene expression detected in the fixed/cryo sample (Fig. S4c). Similarly, the different cell populations showed consistent QC across conditions (Fig. S4d).
We confirmed the absence of DSP-fixation biases after cryopreserving fixed samples, as indicated by a high overlap (> 70%) of HVGs across all three protocols (Fig. S4e). In addition, both PCA and UMAP dimensionality reduction plots showed no discernible biases between preservation protocols (Fig. 4d). We also observed highly comparable expression profiles and gene programs when correlating the mean gene expression values for all protocol comparisons (R² > 0.99, p < 2.2e−16) (Fig. 4e). Moreover, there was no appreciable alteration in biological processes at the gene program or population level when comparing across protocols (Fig. 4f; Fig. S4f).

We next analyzed 24,291 mouse lung cells processed with the three different protocols and annotated 20 lung and tissue-resident immune cell populations (Fig. 4g; Fig. S4g); all cell types and states were found across the three conditions at similar proportions. However, we observed slight changes in composition, with fixed/cryo samples showing a decrease of B cells coupled with an increase of gCap cells compared to the cryo sample, and an increase of monocytes compared to fixed-only. Comparing fixed-only with cryo samples, we found an increase in arterial and pneumocyte type I cells (Fig. 4h). Additionally, we observed downregulation of genes associated with immune function (e.g., Igkc, Ccl4, Scgb1a1) in the fixed/cryo sample, explained by the aforementioned shift in cell type composition. Importantly, the cryo-only sample showed upregulated genes related to the stress response, such as Fosb (Fig. 4i; Additional file 4: Table 3). A closer inspection of the different cell populations validated the expression of stress-related genes across all cells in the cryo-only compared to fixed/cryo samples, specifically in non-immune cells (Additional file 9: Table 8). Accordingly, GSEA detected an enrichment of regulatory or response pathways for almost all cryopreserved cell types compared to fixed/cryo samples (Additional file 10: Table 9). These results support the feasibility of cryopreservation after fixation to combine the robustness and logistical advantages of the respective methods for scRNA-seq experiments.

Minimization of technical artifacts in FixNCut tissue samples

Fixing tissues after sample collection preserves the natural state of a cell and avoids technical biases previously shown to affect bulk and single-cell transcriptomics analysis [2][3][4][22]. In addition to the abovementioned differences in stress-response genes, we further aimed to demonstrate the ability of FixNCut to preserve gene expression profiles by examining previously identified artifact signatures. Specifically, we investigated condition-specific gene signatures from published studies using our mouse lung and colon data (see the "Methods" section).
Fig. 5 Minimization of technical artifacts using the FixNCut protocol on mouse tissues. This figure shows the impact of various dissociation-induced gene signature scores, including dissociation of mouse muscle stem cells [22], warm dissociation of mouse kidney samples [3], and warm collagenase dissociation of mouse primary tumors and patient-derived mouse xenografts [2], across the mouse tissues and processing protocols used. All statistical analyses between protocols were performed using the Wilcoxon signed-rank test; significance results are indicated with the adjusted p-value, either as the real value or an approximate result (ns, p > 0.05; *p ≤ 0.05; **p ≤ 0.01; ***p ≤ 0.001; ****p ≤ 0.0001). a Violin plots of dissociation-induced gene signature scores for fresh and fixed mouse lung. b Score of the warm collagenase gene signature for fresh and fixed mouse lung samples across cell populations. c Overlap of differentially expressed genes in the fresh and fixed mouse lung samples with genes from the three dissociation-induced signatures. d Violin plots of dissociation-induced gene signature scores for fresh and fixed mouse colon. e Score of the warm collagenase gene signature for fresh and fixed mouse colon samples across cell populations. f Overlap of differentially expressed genes in the fresh and fixed mouse colon samples with genes from the three dissociation-induced signatures. g Violin plots of dissociation-induced gene signature scores for cryo and fixed/cryo mouse lung. h Score of the warm collagenase gene signature for cryo and fixed/cryo mouse lung samples across cell populations. i Overlap of differentially expressed genes in the cryopreserved and fixed/cryo mouse lung samples with genes from the three dissociation-induced signatures.

After analyzing mouse lung samples, we found that fixed samples had comparable dissociation/temperature-signature scores, except for the warm collagenase signature, which was significantly lower (p < 4.3e−07) than in fresh samples (Fig. 5a). We observed that external tissue stressors had a greater impact on fresh lung resident cells compared to the infiltrating immune cell fraction (Fig. 5b). Interestingly, the signature scores for these populations displayed a bimodal-like distribution, indicating an uneven effect within cell populations (Fig. 5b). Additionally, stress-signature genes were found to be differentially expressed not only in the fresh lung but also in the fixed samples, regardless of their level of expression (Fig. 5c).

Similarly, fixed colon samples showed a significantly larger decrease in dissociation/temperature-stress signature scores compared to fresh samples (Fig. 5d). Here, we also observed a lineage-dependent impact of cell stress; colonocytes were greatly affected, with differences between subtypes, whereas immune cells largely escaped stress biases (Fig. 5e). Endothelial and stromal cells suffered the largest dissociation-related stress in the fresh samples, which was drastically reduced upon fixation (Fig. 5e). Moreover, stress-signature genes were also differentially expressed in the fresh colon sample, while largely absent in the fixed sample (Fig. 5f).
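The per-protocol comparisons in Fig. 5 boil down to scoring each cell against a published gene list and testing the score distributions between protocols. A minimal sketch follows; the paper reports Wilcoxon signed-rank statistics, whereas this illustration uses SciPy's two-sample Mann-Whitney U test as a plainly named stand-in, and all object and column names are assumptions.

```python
# Sketch: compare a dissociation-stress signature score between protocols.
import scanpy as sc
from scipy.stats import mannwhitneyu

adata = sc.read_h5ad("lung_merged.h5ad")   # hypothetical merged object
stress_genes = ["Fos", "Jun", "Hspa1a"]    # placeholder signature [2,3,22]

sc.tl.score_genes(adata, gene_list=stress_genes, score_name="stress")

fresh = adata.obs.loc[adata.obs["protocol"] == "fresh", "stress"]
fixed = adata.obs.loc[adata.obs["protocol"] == "fixed", "stress"]
stat, pval = mannwhitneyu(fresh, fixed, alternative="two-sided")
print(f"U = {stat:.0f}, p = {pval:.2e}")

# The same test can be repeated per cell population (cf. Fig. 5b,e,h):
for ct, sub in adata.obs.groupby("cell_type"):
    groups = [g["stress"] for _, g in sub.groupby("protocol")]
    if len(groups) == 2 and all(len(g) > 1 for g in groups):
        print(ct, mannwhitneyu(*groups, alternative="two-sided").pvalue)
```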
Furthermore, we demonstrated the effectiveness of FixNCut for long-term sample storage by examining the dissociation/temperature-stress signature scores in cryo-only and fixed/cryo mouse lung samples. Our results showed that cryopreserved samples had significantly higher stress-related signature scores compared to fixed/cryo (p < 2.2e−16) (Fig. 5g). Interestingly, the stress signature score for endothelial and stromal cells exhibited a bimodal distribution exclusively in the cryo-only sample, with a subset of cells within the same population showing larger dissociation-related effects (Fig. 5h), consistent with previous observations. Over 70% of signature-specific genes were significantly differentially expressed in the cryo-only sample, an even higher proportion compared to fresh lung and colon samples, whereas the fixed/cryo samples had almost no overlapping DEGs (Fig. 5i).

Moving towards the use of FixNCut on clinical samples

As a proof-of-concept for a multi-center clinical research study focused on autoimmune diseases, we evaluated the performance of FixNCut on human patient biopsies. To this end, we obtained fresh colon biopsies from two IBD patients in remission. The biopsies were mixed and split into four aliquots, which were processed as follows: fresh, fixed-only, cryo-only, or fixed/cryo. The fixed human colon samples exhibited a similar proportion of reads mapped to the reference genome and exonic regions as mouse colon tissues (Fig. 6a) and displayed comparable library complexity for short-term (fixed vs. fresh) and long-term (fixed/cryo vs. cryo) conditions, considering the total number of detected genes (95% CI, 6.29e−05 ± 2.03e−05 fresh vs. 5.53e−05 ± 2.13e−05 fixed; 6.65e−05 ± 2.23e−06 cryo vs. 6.10e−05 ± 2.11e−05 fixed/cryo) and captured UMIs (95% CI, 0.48 ± 0.07 fresh vs. 0.37 ± 0.08 fixed; 0.47 ± 0.07 cryo vs. 0.39 ± 0.08 fixed/cryo) (Fig. 6b). A similar pattern was observed comparing the number of captured genes and UMIs at the cell level (Fig. S5a). The cumulative detected gene count was highest for the fixed biopsy, with fresh being the worst condition (Fig. 6c). Genes identified across all libraries (n = 21,637) were mostly protein-coding genes (70%), followed by non-annotated genes (21%). Genes exclusively captured in one condition (fresh, fixed, cryo-only, or fixed/cryo) showed a highly similar distribution of gene features (Fig. S5b). After removing low-quality cells, the main quality control metrics had similar distributions, with slightly improved median UMI and gene counts and a reduced mitochondrial gene percentage for the fixed sample (Fig. S5c), which were consistently observed across almost all populations (Fig. S5d). We observed an overlap (> 50%) of HVGs across all conditions (Fig. S5e), and although subtle protocol-associated effects were found, we ensured consistent cell annotation across samples after successful sample integration (Fig. 6d). Gene expression profiles and gene programs significantly correlated across all samples (R² > 0.96, p < 2.2e−16) (Fig. 6e, f), with a slightly reduced correlation observed in M0 macrophages and stromal cells (Fig. S5f).

By jointly analyzing 17,825 IBD colon cells across protocols, we identified 21 major cell types, including both colon and tissue-resident as well as infiltrated immune cells (Fig. 6g; Fig. S5g). All cell types and states were found across all protocols at similar proportions, although the number of high-quality cells was reduced in the cryo-only sample (Fig. 6h). Notably, the FixNCut protocol captured larger proportions of CD4+ T and B cells compared to fresh or cryo-only samples, among other minor changes (Fig. 6h).
We also found that fixed samples had downregulation of heat-shock proteins (HSP), such as HSPA1A, HSPA1B, and DNAJB1, and upregulation of B cell-specific genes, including MS4A1, HLA-DRA, HLA-B, and CXCR4, when compared to fresh and cryo-only human colon samples (Fig. 6i; Additional file 11: Table 10). In line with the findings from the mouse experiment, we observed that fixed human colon samples exhibited increased expression of ribosomal genes (related to TA cells) at the cell population level, whereas fresh samples showed higher mitochondrial expression. Apart from increased HSP genes in the cryo-only sample, no other significant differentially expressed genes were found between the conditions (Additional file 12: Table 11; Additional file 13: Table 12). GSEA revealed the cryo-only sample to be enriched in stress pathways (response to external stimuli such as stress, temperature, oxidative stress, and protein folding), a pattern also observed in the fixed/cryo sample compared to fixed-only, but restricted to immune cell types (Additional file 14: Table 13; Additional file 15: Table 14).

Fig. 6 FixNCut protocol tested in human colon biopsies. a Mapping analysis of sequencing reads within a genomic region. b Comparative analysis of the number of detected genes (top) and UMIs (bottom) across various sequencing depths. c Cumulative gene counts analyzed using randomly sampled cells. d Principal component analysis (PCA) and uniform manifold approximation and projection (UMAP) representation of gene expression profile variances prior to data integration, and Harmony-integrated UMAP representation of gene expression profile variances of fresh, fixed, cryopreserved, and fixed/cryopreserved samples. e Linear regression model comparing average gene expression levels of expressed genes across the protocols used; the coefficient of determination (R²), computed with Pearson correlation, is indicated. f Hierarchical clustering of the coefficient of determination (R²) obtained for all pairwise comparisons across protocols for biological hallmarks, including apoptosis, hypoxia, reactive oxygen species (ROS), cell cycle G2/M checkpoint, unfolded protein response (UPR), and inflammatory response genes. g UMAP visualization of 17,825 fresh, fixed, cryopreserved, and fixed/cryopreserved human colon cells, colored by 21 cell populations. h Comparison of cell population proportions between fresh (n = 5759), fixed (n = 4250), cryo (n = 3489), and fixed/cryo (n = 4327) cells; the bottom panel shows the results of compositional cell analysis using the Bayesian model scCODA, with credible changes and log2FC indicated. i Differential gene expression analysis across conditions: fixed vs fresh (top-left), fixed vs cryo (top-right), fixed/cryo vs cryo (bottom-left), and fixed/cryo vs fixed (bottom-right); significant adjusted p-values (FDR < 0.05) and upregulated (red) or downregulated (blue) genes with log2FC > |1| are indicated, with the top DE genes included in the plot. j Violin plots of stress-related gene signature scores [2,3,22] for human colon biopsies across protocols; statistical analysis between protocols was performed using the Wilcoxon signed-rank test.
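The compositional shifts in Fig. 6h are assessed in the paper with the Bayesian model scCODA. As a deliberately simpler, plainly named stand-in, the sketch below compares per-cell-type counts between two protocols with a chi-square contingency test; it ignores the compositional dependence that scCODA models, and the file and column names are assumptions.

```python
# Sketch: naive per-cell-type composition comparison between two protocols.
# The paper uses scCODA; a chi-square test is shown here only to
# illustrate the question being asked of the data.
import pandas as pd
from scipy.stats import chi2_contingency

obs = pd.read_csv("cell_metadata.csv")  # hypothetical: protocol, cell_type
counts = pd.crosstab(obs["cell_type"], obs["protocol"])  # types x protocols

chi2, pval, dof, _ = chi2_contingency(counts[["fresh", "fixed"]])
print(f"overall composition shift: chi2 = {chi2:.1f}, p = {pval:.2e}")

# Per-type proportions for visual inspection
props = counts.div(counts.sum(axis=0), axis=1)
print(props.round(3))
```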
By comparing published stress signatures (see the "Methods" section) across fresh, fixed-only, cryo-only, and fixed/cryo samples, we found no significant differences between the fixed and fresh samples, whereas the cryo-only sample had significantly higher scores than the fresh sample, with the fixed/cryo sample presenting a lower stress score than the cryo-only sample, indicating a reduction of gene expression artifacts by fixation for long-term storage of patient biopsies (Fig. 6j). We observed no significant cell population-specific effect; however, stromal cells had the highest score compared to other populations (Fig. S5h). These findings provide proof-of-concept evidence for the applicability and value of FixNCut in improving the robustness of data generation in the clinical setting.

Expanding the application of FixNCut to various single-cell techniques

Next, we assessed the compatibility of the FixNCut protocol with single-cell application variants that involve cell labeling with antibodies or lipids targeting the cell membrane (e.g., FACS, CITE-seq, cell hashing). To this end, we stained fresh, cryopreserved, and fixed PBMCs and colon tissue samples with fluorescent antibodies or lipid-modified oligos (LMOs), which were analyzed by flow cytometry. First, we analyzed a cohort of cryopreserved and cryo+fixed human PBMCs (n = 3) stained with anti-CD3, CD4, CD8, and CD19 monoclonal antibodies (mAbs). The analysis of cell morphology and viability showed that DSP fixation induced slight changes in cell size, internal complexity, and membrane permeability of PBMCs. Specifically, lymphocytes showed a decrease in forward scatter (cell size), while monocytes showed a decrease in side scatter (internal complexity) (Fig. S6a,b). Next, cells were gated based on the expression of CD3, CD4, and CD8 to characterize all T cell subtypes, and CD19 for B cells. We confirmed that DSP fixation did not alter the percentage of any of the immune cell types analyzed (Fig. 7a). However, using the same amount of antibodies, the mean fluorescence intensity (MFI) was higher in the cryopreserved compared to the cryo+fixed PBMCs (Fig. 7b). To ensure that cryopreservation did not introduce any bias, we analyzed another cohort of fresh and fixed human PBMCs (n = 4) stained with anti-CD45, CD3, CD4, and CD8 mAbs, with antibodies against ubiquitously expressed human surface proteins (β2M and CD298), and with LMOs for cell hashing and multiplexing. DSP fixation led to an increased binding of DAPI, Annexin V (apoptotic cell marker), and propidium iodide (necrotic cell marker) in PBMCs, indicating reduced membrane integrity, particularly after storing cells at 4 °C for 2 days (Fig. S6c). These changes in morphology and membrane integrity should be taken into account when working with mixed study designs including both fixed and fresh samples. Despite observing a minor decrease in the MFI for most of the tested antibodies in fixed samples, we did not detect significant differences in cell type composition comparing fresh to fixed cells (Fig. 7c; Fig. S6d). Similarly, whereas PBMCs labeled with β2M and CD298 antibodies showed no differences in MFI between protocols, cells stained with LMOs revealed a minor but noticeable reduction in signal strength in fixed cells (Fig. 7d). Likewise, staining dissociated human colon biopsies with anti-CD45, CD3, CD11b, and EpCAM antibodies showed similar MFI in cryopreserved and fixed cells (Fig. 7f; Fig. S6e).
Our results showed that DSP fixation is compatible with antibody-based cell labeling. Nevertheless, we recommend optimizing the labeling conditions and flow cytometry protocol for fixed samples, depending on antibody sensitivity, antigen abundance, and downstream applications, to further improve results.

Finally, we explored the potential applicability of DSP-fixed tissues in spatial technologies. We applied DSP-fixed tissue to spatial proteomics using multiplexed immunofluorescence tissue imaging, formerly known as co-detection by indexing (CODEX) [23]. We analyzed a DSP-fixed paraffin-embedded prostate cancer sample and its formalin-fixed counterpart using the commercial Phenocycler instrument (Akoya Biosciences). DSP-fixed sections exhibited a pattern of expression and signal intensity comparable to formalin-fixed sections, the latter considered the "gold standard" (Fig. 7g). This study showcased the potential of DSP fixation in spatial-omics, serving as a proof-of-concept for its application to a broader array of spatial techniques. Additional efforts are underway to extend the use of DSP-fixed tissues to other spatial-omics technologies, such as Visium/Xenium (10x Genomics), GeoMx/CosMx (Nanostring), STereo-seq (BGI), and MERSCOPE (Vizgen).

Discussion

In this study, we introduced the FixNCut protocol, a novel approach that combines sample fixation with subsequent tissue dissociation to overcome several limitations in generating single-cell data. While DSP fixation has previously been used to fix K562 cells and keratinocytes prior to sequencing [16,17], or in combination with single-cell technologies to explore the adaptive immune repertoire [18], our study is the first to utilize the reversible properties of the DSP fixative with standard enzymatic dissociation of solid biopsies in the context of single-cell transcriptomic studies. The FixNCut protocol offers significant advantages, including the ability to fix tissue prior to digestion, providing a snapshot of the cell transcriptome at sampling time and minimizing technical artifacts during tissue processing. As the reversible fixative targets proteins rather than nucleic acids, our approach ensures RNA integrity, library complexity, cellular composition, and gene expression comparable to those of gold-standard fresh RNA sequencing assays. With FixNCut, time and location constraints for sample collection and processing are removed, making it an ideal strategy for research studies involving multiple groups, institutions, and hospitals worldwide.
Standardizing protocols for clinical biopsy collection and downstream processing poses a significant challenge. While single-cell profiling technologies have proven to be highly useful and straightforward for PBMCs and other cells loosely retained in secondary lymphoid tissues, the effective isolation of single cells from solid tissues, such as tumors, remains a technical hurdle. In such cases, single cells may be tightly bound to extracellular scaffolds and neighboring cells, making dissociation and isolation difficult [24]. Additionally, preserving the single-cell transcriptome before scRNA-seq is crucial, particularly when processing multiple samples from biological replicates simultaneously, as it can reduce the need for immediate, time-consuming single-cell isolation protocols, such as dissociation, antibody staining, or FACS-based isolation [24]. Recent studies have highlighted the impact of collection and dissociation protocols on cell type proportions and transcriptome profiles in multiple tissue contexts [2][3][4]. Furthermore, dissociation-related effects have also been observed in cryopreserved human gut mucosa samples [25] and renal biopsies [26]. To overcome these issues, a one-step collagenase protocol was used for intestinal biopsies when no cell type enrichment was required [25]. Meanwhile, the use of cold-active proteases on kidney samples resulted in fewer artifacts, but inefficient tissue dissociation [26]. In addition, studies have shown that neural cell population (NCP) suspensions without methanol preservation also experience alterations in cellular composition and gene expression [27]. For fragile tissue biopsies, such as the pancreas or skin, which contain delicate cell populations, cryopreservation and cellular dissociation steps may introduce biases in cellular composition. We previously conducted testing of VivoFix [28] with variable results, but acknowledge the potential utility of that protocol for fixation and dissociation.

To address these challenges, we developed FixNCut, an approach that involves reversible fixation of the tissue at the time of collection to prevent further transcriptional changes during downstream processing. This is followed by standard dissociation and storage procedures. At the core of our approach is the use of Lomant's reagent/DSP, a reversible fixative that can easily penetrate cell membranes and preserve tissue characteristics. In this study, we compared fresh and fixed lung and colon samples from different species and experimental scenarios and found comparable results. Additionally, we demonstrated the versatility of FixNCut for long-term storage by cryopreservation following sample fixation, making it a suitable protocol for use in more complex and challenging research scenarios. While the lung represents a more resilient tissue for sample processing, without the introduction of major changes in gene expression or cellular composition, colon tissue is very sensitive. Here, we demonstrated decreased RNA quality and a shift in cellular composition in fresh compared to fixed samples, even under standard experimental conditions. We predict that under more stressful conditions, such as therapeutic intervention models, these biases will be even more pronounced.
In single-cell analysis, antibodies are commonly used to select cells of interest by standard FACS enrichment or to quantify cell surface proteins using sequencing (e.g., CITE-seq). Unlike other fixative agents, such as methanol, which induces protein unfolding and precipitation, or formalin, which non-selectively cross-links proteins, DNA, and RNA (reducing immunoreactivity with target-specific antibodies), the FixNCut protocol has the advantage that cells can be readily stained with antibodies and LMOs. This allows cell labeling or hashing before single-cell analysis. Hence, we assume that this protocol is adaptable and compatible with multiple single-cell modalities, including but not limited to CITE-seq, as well as other droplet or microwell platforms. This versatility makes it a powerful tool for designing flexible and robust studies, being applicable to different tissues, species, or disease conditions.

The FixNCut protocol offers a straightforward way to preserve biopsies for various research contexts, including animal models at research institutes and patient biopsies collected at hospitals. We have demonstrated that FixNCut can be applied in clinical settings, where samples are collected at a separate location and time from their downstream processing steps. The FixNCut protocol shows potential compatibility with multiple single-cell and spatial applications for both single-cell and single-nuclei sequencing, making it a promising, versatile tool in various basic, translational, and clinical areas (e.g., oncology and autoimmunity). However, further validation efforts are needed to reinforce its utility in these diverse applications.

Conclusions

We demonstrate that the FixNCut protocol preserves the transcriptional profile of single cells and the cellular composition of complex tissues. The protocol enables the disconnection of sampling time and location from subsequent processing sites, which is particularly important in clinical settings. The protocol further prevents sample processing artifacts by stabilizing cellular transcriptomes, enabling robust and flexible study designs for collaborative research and personalized medicine.

Methods

Human PBMC isolation

Peripheral venous blood samples were collected from voluntary blood donors using ACD tubes and stored at 4 °C. PBMC isolation was performed using Ficoll density gradient centrifugation. Briefly, 10 mL of blood were diluted with an equal volume of 1× PBS and carefully layered onto 15 mL of Lymphoprep (PN. 15257179, STEMCELL Technologies), followed by centrifugation for 20 min at 800×g and room temperature (RT) (with acceleration and brake off). After centrifugation, PBMCs were collected with a sterile Pasteur pipette, transferred to a 15-mL tube, and washed twice with 10 mL of 1× PBS by centrifugation for 5 min at 500×g at RT. PBMCs were resuspended in 1× PBS + 0.05% BSA, and cell number and viability were measured with the LUNA-FL™ Dual Fluorescence Cell Counter (LogosBiosystem).

Mouse lung collection

C57BL/6 mice were purchased from Janvier Laboratories at 6 weeks of age and sacrificed between weeks 7 and 9 by CO2 asphyxiation. Lung samples were perfused prior to collection. To perfuse the lungs, a 26-G syringe was used to inject 3 mL of cold Hank's Balanced Salt Solution (HBSS) into the right ventricle of the heart, which resulted in the lungs turning white after injection. Mice were then carefully dissected for further processing.
Mouse colon collection
Mice were sacrificed as described above, and the colon was collected and washed with HBSS using a syringe to remove feces. The collected colon samples were transported from the facility to the lab in complete DMEM medium on ice. Upon arrival, the samples were extensively washed with ice-cold PBS and then cut into 3 × 3 mm pieces on a Petri dish using a sterile razor blade. The tissue pieces were then fixed as previously described.

Human colon biopsies
Colonic biopsies were collected from an ulcerative colitis patient in remission and placed in HBSS (Gibco, MA, USA) until processing, which was completed within an hour. The biopsies were split into four different conditions: fresh, fixed, cryopreserved, and fixed/cryopreserved. For fixation, the biopsies were treated as previously described for mouse lung tissue.

Human prostate tissue
Human prostate tissue was collected with informed consent from a 75-year-old patient who underwent a radical prostatectomy procedure for prostate cancer.

Preparation of DSP fixation buffer
A 50× stock solution of DSP (50 mg/mL) was prepared in 100% anhydrous DMSO and stored at −80 °C. Prior to use, 10 μL of the 50× DSP was added dropwise to 490 μL of RT PBS in a 1.5-mL tube while vortexing to prevent DSP precipitation. This working solution (1× DSP fixation buffer) was then filtered through a 40-μm Flowmi Cell Strainer (PN.BAH136800040-50EA, Sigma-Aldrich). Table 1 provides detailed instructions for using the FixNCut protocol with both the DSP stock solution and the working dilution.

PBMCs fixation
One million cells were split into two separate 1.5-mL tubes, with one tube used fresh (as a non-fixed control sample) while the other was subjected to cell fixation. For fixation, cells were centrifuged at 500×g for 5 min at 4 °C, and the resulting pellet was resuspended in 500 μL of 1× DSP fixation buffer and incubated at RT. After 15 min, the cells were mixed by pipetting and incubated for an additional 15 min. Fixation was stopped by adding 10 μL of 1 M Tris-HCl pH 7.4, and the sample was briefly vortexed and incubated at RT for 5 min. Both samples, fresh and fixed, were centrifuged for 5 min at 500×g at 4 °C, and contaminating erythrocytes were eliminated by resuspending the pellets in 500 μL of PBS and incubating at RT for 5 min upon addition of 10 volumes of 1× Red Blood Cell lysis solution (PN.130-094-183, Miltenyi Biotec). Cells were then resuspended in an appropriate volume of 1× PBS + 0.05% BSA in order to reach the optimal concentration for cell encapsulation (700-1000 cells/μL) and filtered using a pluriStrainer Mini 40 μm (PN.43-10040-70 Cell Strainer). Cell concentration was verified with a LUNA-FL™ Dual Fluorescence Cell Counter (LogosBiosystem).

Cryopreservation
Cryopreservation of fresh or fixed biopsies was done by transferring them into 1 mL of freezing media (90% FBS + 10% DMSO, Thermo Scientific) and storing them at −80 °C in a Mr. Frosty™ Freezing Container to ensure gradual freezing.

Table 1 Recommendations on working with DSP stock and working dilution, as described by Attar et al. [16]

Preparation of 50× DSP stock
• Equilibrate the DSP vial at RT for 30 min and then prepare a 50× stock solution of DSP (50 mg/mL) in anhydrous dimethyl sulfoxide (Sigma, Cat. N. 276855-100ML).
• Dispense the stock into single-use aliquots (e.g., 100 μL aliquots; the volume depends on your use) and store them in a sealed bag in a dry environment (with silica/desiccant if possible) at −80 °C.
• Avoid opening and closing the tubes at −80 °C.

Preparation of DSP working dilution
• Thaw the 50× DSP stock reagent from −80 °C and equilibrate at RT for no longer than 10 min before fixation.
• Immediately before use, prepare 500 μL of 1× DSP working solution in molecular-biology-grade 1× PBS as follows: aliquot 10 μL of the 50× DSP stock reagent into a 1.5-mL tube and, while vortexing (VERY IMPORTANT), add 490 μL of PBS dropwise using a P200 pipette. The 1× DSP must be used within 5 min of preparation.
• Note: Do not prepare larger volumes (e.g., if you need to fix two samples, prepare each 500-μL aliquot separately; DO NOT prepare 1 mL and then split it into two tubes). You should notice some thin white rings on the walls of the tube once the 50× stock is diluted. This is expected and will be cleared during filtration. Stronger precipitation indicates insufficient dissolution of DSP, and preparation of a new dilution is strongly recommended.
• Do not re-freeze leftovers of the 50× DSP. Always use a freshly thawed aliquot to prepare the 1× working solution.

Single-cell RNA-seq experimental design (scRNA-seq)

Human PBMC 3′ scRNA-seq
Cells from both fresh and fixed PBMCs were processed for single-cell RNA sequencing using the Chromium Controller system (10X Genomics), with a target recovery of 8000 total cells. The Next GEM Single Cell 3′ Reagent Kits v3.1 (PN-1000268, 10X Genomics) were used to prepare cDNA sequencing libraries, following the manufacturer's instructions. Briefly, after GEM-RT clean-up, cDNA was amplified for 11 cycles and then subjected to quality control and quantification using an Agilent Bioanalyzer High Sensitivity chip (Agilent Technologies). The Dual Index Kit TT Set A (PN-1000215, 10X Genomics) was used for indexing cDNA libraries by PCR. The size distribution and concentration of the 3′ cDNA libraries were verified again on an Agilent Bioanalyzer High Sensitivity chip (Agilent Technologies). The cDNA libraries were sequenced on an Illumina NovaSeq 6000 using the following sequencing conditions: 28 bp (Read 1) + 8 bp (i7 index) + 0 bp (i5 index) + 89 bp (Read 2), to generate approximately 40,000 read pairs per cell.

Mouse lung cryopreservation, fixation, and cryopreservation upon fixation
Mouse lungs were harvested and transferred into ice-cold complete DMEM medium. Samples were extensively washed with ice-cold PBS, transferred into a Petri dish, and cut into ~3 × 3 mm pieces using a razor blade. Tissue pieces were divided into four tubes, one for each condition. Tissue pieces in tube 1 were cryopreserved in freezing media (50% DMEM + 40% FBS + 10% DMSO), placed into a Mr. Frosty™ Freezing Container, and transferred to a −80 °C freezer to ensure a gradual freezing process. The tissue pieces in tubes 2 and 3 were triturated with a razor blade (~1 × 1 mm) on ice and fixed by submerging them in 500 μL of 1× DSP fixation buffer (freshly prepared, within 5 min of use) and incubating at RT for 30 min. After incubation, 10 μL of 1 M Tris-HCl pH 7.5 was added to stop the fixation, and the samples were vortexed for 2-3 s and incubated at RT for 5 min. After a brief centrifugation, the supernatant was removed, and the tissue pieces were washed once with 1 mL of PBS. Tissue pieces in tube 2 were stored at 4 °C in PBS supplemented with 2 U/μL of RNase inhibitor (Cat. N.
3335402001, Sigma-Aldrich) until the following day, while tissue pieces in tube 3 were cryopreserved by adding 10% DMSO to the PBS and transferring the cell suspension into a cryotube, which was placed into a Mr. Frosty™ Freezing Container and stored at −80 °C. The tube containing fresh tissue was stored in complete DMEM on ice and washed again with ice-cold PBS before tissue dissociation.

Mouse lung dissociation and scRNA-seq
The day after the collection and storage of samples, the cryopreserved and fixed/cryopreserved samples were quickly thawed in a 37 °C water bath and washed with PBS. Similarly, the fixed sample was washed with PBS before tissue dissociation. The samples were then transferred to a Petri dish on ice and triturated using a razor blade. Next, the small tissue pieces were incubated in 1 mL of digestion media (200 μg/mL Liberase TL, 100 μg/mL DNase I in HBSS with Ca2+/Mg2+) at 37 °C with shaking at 800 rpm. After 15 min of incubation, the samples were mixed by pipetting, followed by another 15 min of incubation. The cells were then filtered using a pluriStrainer Mini 70 μm (PN.43-10070-40 Cell Strainer), and the strainer was washed with 10 mL of cold 1× HBSS. The samples were centrifuged at 500×g for 5 min at 4 °C, and the cell pellets were resuspended in 100 μL of PBS + 0.05% BSA. Contaminating erythrocytes were lysed using the previously described method. The cells were washed once with 10 mL of PBS + 0.05% BSA, resuspended in an appropriate volume of the same buffer, and filtered using 40-μm strainers. The total number of cells was determined using the LUNA-FL™ Dual Fluorescence Cell Counter (LogosBiosystem). The cell concentration of each sample was adjusted to 700-1000 cells/μL, and 7000-10,000 cells were loaded into a 10X Chromium Controller. Single-cell RNA sequencing was performed as described above.

Mouse colon dissociation
Fresh and fixed colon samples were incubated in 1 mL of digestion media (200 U/mL Collagenase IV, 100 μg/mL DNase I in HBSS with Ca2+/Mg2+) at 37 °C, shaking at 800 rpm for 30 min. Samples were mixed by pipetting every 10 min during the incubation. After the incubation, samples were filtered through a pluriStrainer Mini 70 μm (PN.43-10070-40 Cell Strainer), and the strainer was washed with 10 mL of cold 1× HBSS. The samples were then centrifuged at 500×g for 5 min at 4 °C, and the cell pellets were washed twice with cold PBS + 0.05% BSA. Finally, the cell pellets were resuspended in an appropriate volume of the same buffer and filtered using 40-μm strainers. The total cell number was determined with the LUNA-FL™ Dual Fluorescence Cell Counter (LogosBiosystem). The cell concentration of each sample was adjusted to 700-1000 cells/μL, and 7000 cells were loaded onto a 10X Chromium Controller. Single-cell RNA-seq was performed as described above.
Flow cytometry analysis of human PBMCs
Anti-human CD3, CD4, CD8, and CD19 antibodies were tested as follows: cryopreserved PBMCs obtained from three healthy donors were rapidly thawed in a 37 °C water bath. Thawed samples were washed in pre-warmed RPMI media supplemented with 10% FBS (Thermo Fisher Scientific) and centrifuged at 500×g for 5 min at RT. The supernatant was discarded, and the pellets were washed in 10 mL of 1× PBS + 0.05% BSA, centrifuged at 500×g for 5 min at 4 °C, and resuspended in 1 mL of PBS + 0.05% BSA. The cell suspension was then filtered through a 40-μm cell strainer. Cell viability and concentration were verified using a LUNA-FL™ Dual Fluorescence Cell Counter (LogosBiosystem). Each sample was split into two separate tubes, and half of the cells were fixed with 1× DSP fixation buffer as previously described. After fixation, cells were washed, resuspended in 100 μL of Cell Staining Buffer (PN-420201, Biolegend), and stained with 5 μL of each of the four following primary antibodies: anti-human CD3, CD4, CD8, and CD19, for 15 min at RT in the dark. Detailed information on the antibodies and reagents used in this study is provided in Table 2. Samples were washed twice with Cell Staining Buffer and resuspended in 0.5-1 mL of PBS + 0.05% BSA. 10 μg/mL DAPI (PN-564907, BD Bioscience) was added to determine cell viability before flow cytometric analysis using the BD FACS Melody Automated Cell Sorter (BD Bioscience) and the BD FACSChorus™ software. Post-acquisition analysis was performed using FlowJo version 10 (FlowJo LLC).

Anti-human CD45, CD3, CD4, and CD8 antibodies and anti-CD298 and β2-microglobulin antibodies were tested as follows: human PBMCs were isolated from normal-donor human buffy coats provided by the Australian Red Cross Blood Service by Ficoll-Paque® density gradient centrifugation. Fresh and fixed PBMCs were incubated with Human BD Fc Block for 10 min at 4 °C and then stained for cell surface markers for 30 min at 4 °C according to the manufacturer's recommendations. Annexin V-FITC and PI staining were used to determine viability. Detailed information on the antibodies and reagents used in this study is provided in Table 2. Acquisition was performed on an LSR Fortessa X-20 (BD) and analyzed with FlowJo software (FlowJo LLC).

LMO sample preparation
The labeling of PBMCs was performed following the protocol previously described by McGinnis et al. [30]. Briefly, 5 × 10⁵ fresh and fixed PBMCs were washed twice with PBS and labeled with a 1:1 molar ratio of anchor LMO and barcode oligonucleotide for 5 min on ice. Subsequently, both samples were incubated with a co-anchor and Alexa 647 fluorescent oligo feature barcodes at concentrations of 200 nM and 400 nM, respectively, for another 5 min on ice. The cells were then washed twice with ice-cold 1% BSA in PBS. Acquisition was performed on the LSR Fortessa X-20 (BD), and analysis was carried out using FlowJo software (FlowJo LLC). Detailed information on the LMOs used in this study is provided in Table 3.

Flow cytometry analysis of human colon biopsy
The single-cell suspensions obtained after biopsy digestion were labeled with the following antibodies: anti-CD45, anti-CD3, anti-CD11b, and anti-EPCAM, according to the manufacturer's instructions. Detailed information on the antibodies used in this study can be found in Table 2.
Cell viability was assessed using the Zombie Aqua Fixable Viability Kit (BioLegend). The cells were then fixed with the Stabilizing Fixative (BD) before being analyzed using the FACSCanto II flow cytometer (BD).

Tissue preparation
The 2 cm × 10 mm tissue sample was divided into two equal halves lengthwise. One half was fixed in 2 mL of 10% neutral buffered formalin (NBF), while the other half was placed in 500 μL of 1× DSP fixation buffer. The NBF-fixed sample was incubated at RT for 4 h, stored overnight at 4 °C, washed 3 times with 1 mL of milliQ water, and then stored in 1 mL of 70% ethanol at 4 °C. The DSP-fixed sample was treated with freshly made DSP fixation buffer, which was replaced every 60 min for 4 h. The fixed sample was then neutralized with 10 μL of 1 M Tris-HCl pH 7.4 for 15 min at RT, washed 3 times with 1 mL of milliQ water, and placed in 1 mL of 70% ethanol at 4 °C. Both the NBF- and DSP-fixed samples were embedded in paraffin overnight. Five-micrometer-thick sections were cut from both the formalin-fixed, paraffin-embedded (FFPE) and the DSP-fixed, paraffin-embedded (DSP-PE) tissues and mounted onto a single poly-L-lysine-coated coverslip (22 × 22 mm, #1.5, Akoya Bioscience #7000005).

Antibody staining
The coverslip-mounted section was baked at 60 °C on a heat block for 1 h to remove paraffin, then deparaffinized in 1× Histo-Choice clearing agent (ProSciTech #EMS64110-01) and rehydrated in ethanol before washing in milliQ water. Antigen retrieval was performed in a pressure cooker on the highest setting for 20 min in 1× citrate buffer, pH 6 (Sigma, #C9999-1000ML). The tissue was blocked using buffers from the commercially available Phenocycler staining kit (Akoya Bioscience, #7000008) and stained with a 7-antibody panel at RT for 3 h. Detailed information on the antibodies used in this study can be found in Table 4. The antibodies were used at a dilution of 0.9:200 for commercially available Akoya antibody-oligo conjugates and 3.7:200 for antibodies custom-conjugated by Akoya Bioscience (Spatial Tissue Exploration Program (STEP)). After staining, the coverslip was subjected to a post-fixation step. DAPI staining was used to visualize cell nuclei and locate regions of interest on each tissue sample with the Zeiss Axio Observer 7 fluorescent inverted microscope.

Phenocycler image acquisition
The Phenocycler microfluidic microscope stage was programmed to acquire two 3 × 3 tiled regions on each tissue using a 20× objective lens, with each tile consisting of a 7-image Z-stack illuminated by LED light to specifically excite either DAPI (for 10 ms in all cycles) or one of three fluorescently labeled reporter oligos (Cy3, Cy5, and Cy7). The software was set to acquire images over 5 cycles, with each cycle consisting of the addition of a set of reporter oligos complementary to the antibody-oligo conjugates detailed in Table 4. During the first and last cycles, no reporter oligos were added, to allow for background fluorescence subtraction. The exposure times for each antibody are also provided in Table 4. After imaging was completed, the sample was manually stained with hematoxylin and eosin (H&E) following the UofA histology protocol, and bright-field images were captured using the Zeiss Axio Observer 5 fluorescent inverted microscope.
Image processing
The acquired images were processed using the Phenocycler CODEX Processor software (Akoya version 1.8.3.14) to deconvolve and stitch them together, resulting in a set of multi-channel QPTIFF files for each region. The levels for each channel were adjusted using QuPath 0.4.0, and the final images were saved as RGB TIFF files. However, some antibodies did not produce sufficient signal or acceptable images after processing and were therefore excluded from further analysis.

Data processing
To profile the cellular transcriptome, we processed the sequencing reads using the CellRanger software package (version 6.1.1) from 10X Genomics Inc. We mapped the reads against either the mouse mm10 or the human GRCh38 reference genome (GENCODE v32/Ensembl 98), depending on the samples. To avoid artifacts in the downstream analysis due to differences in sequencing depth among samples, we normalized libraries for effective sequencing depth using "cellranger aggr". This subsampling approach ensures that all libraries have the same average number of filtered reads mapped confidently to the transcriptome per cell.

Data analysis
All analyses presented in this manuscript were conducted using R version 4.0.5, along with specific analysis and data visualization packages. For scRNA-seq analysis, we used the Seurat R package (version 4.0.0) [31], the SeuratObject package (version 4.0.1), and other packages specified in the subsequent sections.

scRNA-seq quality control
To compare the library complexity (total captured genes and unique molecular identifiers, or UMIs) across libraries, we investigated the relationship between the cumulative number of detected genes and UMIs and the library sequencing depth. To achieve this, we loaded the "molecule_info.h5" information using the function "read10xMolInfo" from the DropletUtils package (version 1.10.3). Then, we downsampled the library sequenced reads assigned to a barcode (excluding background and noisy reads) using the function "downsample_run", implemented in Rcpp, which ensures read sampling without replacement and simultaneously updates the sampling frequency. We utilized various depths for downsampling (steps of 5 M or 10 M reads, depending on the library), which emulates differences in sequencing depth per cell. To assess differences in library complexity along sequencing depth between the protocols under study, we fitted a linear regression model (Y ~ X) to each curve. Then, we compared the confidence intervals (95% CI) of the independent variable (X, "sequenced reads") across libraries, considering the differences statistically significant when the confidence intervals between conditions did not overlap. Moreover, we assessed the distribution of cell complexity (total captured genes and UMIs) per cell sequencing depth across libraries and computed a linear model to compare the slope of the regression line for each library. Ultimately, we computed the cumulative number of detected genes over multiple cells by averaging the total genes after 100 samplings of an increasing number of randomly sampled cells (from 1 to 100, in steps of 2), after running the "cellranger aggr" step described above.
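To make the complexity comparison concrete, the following is a minimal Python sketch of the same idea: downsample reads without replacement at several depths, fit a line to detected genes versus depth, and compare 95% confidence intervals on the slopes. This is only an illustration on toy data; the study itself used R's DropletUtils and a custom Rcpp downsampler, and the depths, gene counts, and read distributions below are assumptions.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)

def detected_genes_at_depths(read_gene_ids, depths):
    """read_gene_ids: one gene index per sequenced read (toy stand-in for
    molecule info). Returns the number of detected genes at each depth,
    sampling reads without replacement."""
    out = []
    for d in depths:
        sample = rng.choice(read_gene_ids, size=d, replace=False)
        out.append(np.unique(sample).size)
    return np.array(out)

def slope_ci(x, y, z=1.96):
    """95% CI on the slope of a simple linear regression (Y ~ X)."""
    fit = linregress(x, y)
    return fit.slope - z * fit.stderr, fit.slope + z * fit.stderr

# Two toy libraries: 1e6 reads drawn over ~20,000 genes with different skews.
depths = np.arange(1, 11) * 100_000
lib_a = rng.zipf(1.3, 1_000_000) % 20_000
lib_b = rng.zipf(1.5, 1_000_000) % 20_000
ci_a = slope_ci(depths, detected_genes_at_depths(lib_a, depths))
ci_b = slope_ci(depths, detected_genes_at_depths(lib_b, depths))
# Non-overlapping CIs -> statistically distinguishable library complexity.
overlap = not (ci_a[1] < ci_b[0] or ci_b[1] < ci_a[0])
print(ci_a, ci_b, "CIs overlap:", overlap)
```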
To enhance our comprehension of the impact of DSP fixation on gene recovery, we conducted an analysis comparing the categories of all genes captured in each library, either in all conditions or uniquely in a specific condition of each comparison. For the mouse data, we employed the complete list of gene annotations sourced from the Mouse Genome Informatics website (https://www.informatics.jax.org), while for the human data, we utilized the comprehensive list of gene annotations obtained from the HUGO Gene Nomenclature Committee (HGNC) (https://www.genenames.org/). After ensuring that there were no remarkable differences in the main quality control (QC) metrics (library size, library complexity, percentage of mitochondrial and ribosomal expression) among the different samples, we performed independent QC, normalization, and analysis for the libraries from different species and tissues, following the guidelines provided by Luecken et al. [32] and Heumos et al. [33]. We removed low-quality cells by filtering out barcodes with a very low number of UMIs and genes, or with a high percentage of mitochondrial expression, as this is indicative of lysed cells. Additionally, we considered removing barcodes with a large library size and complexity. We eliminated genes that were detected in very few cells. Notably, due to the inherent characteristics of colon biopsies (a higher number of epithelial cells, which are less resistant to sample processing), we followed a slightly different QC approach for mouse and human colon samples. In brief, we performed a first, permissive QC, filtering out cells with very high MT% (> 75% for mouse and > 85% for human) before proceeding to downstream analysis. We annotated cells to distinguish between the epithelial and non-epithelial fractions. Then, we repeated the QC step, using different thresholds for the epithelial fraction (> 60% MT) and for the non-epithelial cells (> 50% for mouse and > 25% for human). Finally, the data were normalized and log-transformed.

Doublets were predicted with Scrublet [34] (version 0.2.3), and the computed doublet scores were retained, with putative doublet cell barcodes flagged. However, no threshold was applied to filter them out at this stage, adopting a permissive approach. Consequently, during the clustering and annotation step, clusters showing altered QC metrics, together with co-expression of different lineage/population-specific gene markers and high doublet scores, were assessed to determine whether a specific cluster could be classified as a group of doublets and subsequently excluded.
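For reference, the permissive doublet-flagging step can be reproduced with Scrublet's standard Python API. The sketch below uses a toy count matrix and illustrative parameters (not the study's exact settings); only the scores and flags are kept, mirroring the approach above.

```python
import numpy as np
import scrublet as scr

# Toy cells-by-genes raw count matrix standing in for a real library.
counts = np.random.default_rng(0).poisson(0.1, size=(2000, 1000))

scrub = scr.Scrublet(counts, expected_doublet_rate=0.06)
doublet_scores, predicted_doublets = scrub.scrub_doublets(
    min_counts=2, min_cells=3, n_prin_comps=30)

# Scores are retained and inspected per cluster later; no hard threshold
# is applied at this stage (permissive approach).
print(doublet_scores[:5], predicted_doublets[:5])
```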
Cell clustering and annotation
To achieve successful cell-type annotation combining data from the same tissue and species (mouse and human colon samples), we removed the batch effect with the Harmony integration method [35], using the library as a confounder variable. After integration, we created a k-nearest neighbors (KNN) graph with the "FindNeighbors" function using the first 20 principal components (PCs), followed by cell clustering with the Louvain clustering algorithm using the "FindClusters" function at different resolutions. To visualize our data in a two-dimensional embedding, we ran the Uniform Manifold Approximation and Projection (UMAP) algorithm. Then, we performed a differential expression analysis (DEA) for all clusters to determine their marker genes using the normalized RNA counts. To annotate the clusters into specific cell types, we examined the expression of canonical gene markers, compared the results of the DE analysis, and referred to gene markers from published annotated datasets. We used the following datasets as references: human PBMCs based on Stuart et al. [36]; mouse lung based on Angelidis et al. [37], Zhang et al. [38], and Bain and MacDonald [39]; mouse colon based on Tsang et al. [40]; and human colon based on Garrido-Trigo et al. [41]. Furthermore, we performed specific cell-type sub-clustering when a finer-grained resolution was required to capture a specific cell state of interest. Doublets and low-quality cells were automatically removed at this point. A compact Python analogue of this clustering workflow is sketched after the figure legends below.

Fig. 1 FixNCut protocol in human peripheral blood mononuclear cells (PBMCs). a Mapping analysis of sequencing reads within a genomic region. b Comparative analysis of the number of detected genes (top) and UMIs (bottom) across various sequencing depths. c Cumulative gene counts analyzed using randomly sampled cells. d Principal component analysis (PCA) and uniform manifold approximation and projection (UMAP) representation of gene expression profile variances of fresh and fixed samples. e, f Linear regression model comparing average gene expression levels of expressed genes (e) and main biological hallmarks, including apoptosis, hypoxia, reactive oxygen species (ROS), cell cycle G2/M checkpoint, unfolded protein response (UPR), and inflammatory response genes (f). The coefficient of determination (R²) computed with Pearson correlation and the corresponding p-value are indicated. g UMAP visualization of 17,483 fresh and fixed PBMCs, colored by 19 cell populations. h Comparison of cell population proportions between fresh (n = 9754) and fixed (n = 7729) PBMCs with the Bayesian model scCODA. Asterisks (*) indicate credible changes. i Differential gene expression analysis between fresh and fixed samples. The top differentially expressed genes (DEGs) with significant adjusted p-values (FDR) < 0.05, upregulated (red) and downregulated (blue) with Log2FC > |1|, are indicated. j Violin plot of the ex-vivo blood handling gene signature score [19] for fresh and fixed human PBMCs. Statistical analysis between fixed and fresh cells was performed using the Wilcoxon signed-rank test. k Dotplot showing the average expression of sampling-time DEGs for fresh (y-axis) for all 19 cell types (x-axis), split by protocol. The dot size reflects the percentage of cells in a cluster expressing each gene, and the color represents the average expression level.
Fig. 2 FixNCut protocol tested in mouse lung samples. a Mapping analysis of sequencing reads within a genomic region. b Comparative analysis of the number of detected genes (top) and UMIs (bottom) across various sequencing depths. c Cumulative gene counts analyzed using randomly sampled cells. d Principal component analysis (PCA) and uniform manifold approximation and projection (UMAP) representation of gene expression profile variances of fresh and fixed samples. e Linear regression model comparing average gene expression levels of expressed genes between protocols. The coefficient of determination (R²) computed with Pearson correlation is indicated. f Hierarchical clustering of the coefficients of determination (R²) obtained for all pairwise comparisons between protocols for biological hallmarks, including apoptosis, hypoxia, reactive oxygen species (ROS), cell cycle G2/M checkpoint, unfolded protein response (UPR), and inflammatory response genes. g UMAP visualization of 19,606 fresh and fixed mouse lung cells, colored by 20 cell populations. h Comparison of cell population proportions between fresh (n = 10,289) and fixed cells (n = 9317). The top figure shows the difference in cell population proportions between fresh and fixed samples, and the bottom figure shows the results of the compositional cell analysis using the Bayesian model scCODA. Asterisks (*) indicate credible changes, upregulated for the fresh sample. i Differential gene expression analysis between fresh and fixed samples. The top differentially expressed genes (DEGs) with significant adjusted p-values (FDR) < 0.05, upregulated (red) and downregulated (blue) with Log2FC > |1|, are indicated.

Fig. 3 FixNCut protocol tested in mouse colon samples. a Mapping analysis of sequencing reads within a genomic region. b Comparative analysis of the number of detected genes (top) and UMIs (bottom) across various sequencing depths. c Cumulative gene counts analyzed using randomly sampled cells. d Principal component analysis (PCA), uniform manifold approximation and projection (UMAP) prior to data integration, and Harmony-integrated UMAP representation of gene expression profile variances of fresh and fixed samples. e, f Linear regression model comparing average gene expression levels of expressed genes (e) and biological hallmarks, including apoptosis, hypoxia, reactive oxygen species (ROS), cell cycle G2/M checkpoint, unfolded protein response (UPR), and inflammatory response genes (f). The coefficient of determination (R²) computed with Pearson correlation and the corresponding p-values are indicated. g UMAP visualization of 14,387 fresh and fixed mouse colon cells, colored by 16 cell populations. h Comparison of cell population proportions between fresh (n = 6009) and fixed (n = 8378) mouse colon samples with the Bayesian model scCODA. Asterisks (*) indicate credible changes, upregulated for the fixed sample. i Differential gene expression analysis between fresh and fixed samples. The top differentially expressed genes (DEGs) with significant adjusted p-values (FDR) < 0.05, upregulated (red) and downregulated (blue) with Log2FC > |1|, are indicated.
Fig. 4 Long-term storage of fixed mouse lung samples. a Mapping analysis of sequencing reads within a genomic region. b Comparative analysis of the number of detected genes (top) and UMIs (bottom) across various sequencing depths. c Cumulative gene counts analyzed using randomly sampled cells. d Principal component analysis (PCA) and uniform manifold approximation and projection (UMAP) representation of gene expression profile variances of fixed, cryopreserved, and fixed/cryopreserved samples. e Linear regression model comparing average gene expression levels of expressed genes across the protocols used. The coefficient of determination (R²) computed with Pearson correlation is indicated. f Hierarchical clustering of the coefficients of determination (R²) obtained for all pairwise comparisons across protocols for biological hallmarks, including apoptosis, hypoxia, reactive oxygen species (ROS), cell cycle G2/M checkpoint, unfolded protein response (UPR), and inflammatory response genes. g UMAP visualization of 24,291 fixed, cryo, and fixed/cryo mouse lung cells, colored by 20 cell populations. h Comparison of cell population proportions between fixed (n = 10,256), cryopreserved (n = 8609), and fixed/cryopreserved cells (n = 5426). The top figure shows the difference in cell population proportions between fixed, cryo, and fixed/cryo samples, and the bottom figure shows the results of the compositional cell analysis using the Bayesian model scCODA. Credible changes and Log2FC are indicated. i Differential gene expression analysis across conditions: fixed vs cryo (top left), fixed/cryo vs cryo (top right), and fixed/cryo vs fixed (bottom). The top differentially expressed genes (DEGs) with significant adjusted p-values (FDR) < 0.05, upregulated (red) and downregulated (blue) with Log2FC > |1|, are indicated.
Fig. 7 Fluorescent antibody labeling of membrane proteins in fixed cells and tissues in mouse and human. a Representative gating strategy of one experiment analyzed by flow cytometry with cryopreserved and cryopreserved+fixed PBMCs from healthy donors (n = 3). PBMCs were stained with anti-human CD45, CD3, CD19, CD4, and CD8 monoclonal antibodies (mAbs). T cells were selected by the positive expression of CD3, whereas B cells were selected from the CD3-negative fraction (non-T cells) by the positive expression of CD19. CD4-positive and CD8-positive T cells were selected from CD3-positive T cells. Box plots show the percentage of positive cells in each subpopulation for cryo (blue) and cryo+fixed (orange) PBMCs analyzed by flow cytometry. b Representative histograms of the mean fluorescence intensity (MFI) of cryopreserved and cryopreserved+fixed PBMCs from healthy donors (n = 3) stained with anti-human CD45, CD3, CD19, CD4, and CD8 mAbs. Bar plots show the MFI for three fresh and fixed PBMC samples analyzed by flow cytometry. c, d Representative histograms of the MFI of fresh and fixed PBMCs from healthy donors (n = 4) stained with anti-human CD45, CD3, CD4, and CD8 mAbs, anti-β2M, anti-CD298, and LMOs, analyzed by flow cytometry. f Representative histograms of the MFI from cryopreserved (blue) and cryopreserved+fixed (orange) human colon samples processed and stained with anti-human CD45, CD3, CD11b, and EpCAM mAbs and analyzed by flow cytometry. g Multiplex fluorescence tissue imaging of a human prostate cancer section, DSP-fixed (top) or formalin-fixed (bottom), paraffin-embedded, captured using Phenocycler. Images show hematoxylin and eosin staining, a five-color overlay, and individual SMA, Pan-CK, E-cadherin, and p63 antibody staining.

Table 2 Flow cytometry antibodies and reagents

Table 4 Antibody and reporter cycle layout, including LED intensities and exposure times for each marker
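As referenced in the "Cell clustering and annotation" section above, the following is a compact Python/Scanpy analogue of the R/Seurat + Harmony workflow described there. It is a sketch under assumptions: an AnnData object `adata` with a 'library' batch column is presumed, and the resolution value is illustrative rather than the study's setting.

```python
import scanpy as sc

# adata: an AnnData object with raw counts and a 'library' batch column (assumed).
sc.pp.normalize_total(adata, target_sum=1e4)        # normalization
sc.pp.log1p(adata)                                  # log transformation
sc.pp.pca(adata, n_comps=20)                        # first 20 PCs
sc.external.pp.harmony_integrate(adata, 'library')  # Harmony batch removal
sc.pp.neighbors(adata, use_rep='X_pca_harmony')     # KNN graph
sc.tl.louvain(adata, resolution=1.0)                # Louvain clustering
sc.tl.umap(adata)                                   # 2D UMAP embedding
sc.tl.rank_genes_groups(adata, 'louvain')           # per-cluster marker genes (DEA)
```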
Partial Discharge Pattern Recognition Based on an Ensembled Simple Convolutional Neural Network and a Quadratic Support Vector Machine

Partial discharge (PD) is a crucial and intricate electrical occurrence observed in various types of electrical equipment. Identifying and characterizing PDs is essential for upholding the integrity and reliability of electrical assets. This paper proposes an ensemble methodology aiming to strike a balance between model complexity and predictive performance in PD pattern recognition. A simple convolutional neural network (SCNN) was constructed to efficiently decrease the number of model parameters. A quadratic support vector machine (QSVM) was established and ensembled with the SCNN model to effectively improve the PD recognition accuracy. The input for the QSVM consisted of the circular local binary pattern (CLBP) features extracted from the enhanced image. A testing prototype with three types of PD was constructed, and 3D phase-resolved pulse sequence (PRPS) spectrograms were measured and recorded by ultra-high frequency (UHF) sensors. The proposed methodology was compared with three existing lightweight CNNs. The experimental results from the collected dataset emphasize the benefits of the proposed method, showcasing its advantages of high recognition accuracy and relatively few model parameters, thereby rendering it more suitable for PD pattern recognition on resource-constrained devices.

Introduction
Partial discharge (PD), characterized by localized and transient discharges that typically occur at defects within insulation systems, is a critical and intricate electrical phenomenon in various types of electrical equipment. PD does not completely bridge the insulation between conductors [1]; instead, it represents a localized flashover within an insulation system, arising where a large localized electric field exceeds the dielectric withstand capability while the overall insulation system remains capable of withstanding the applied electrical field. PD is diverse in both form and location. It can transpire in various electrical equipment, including transformers, generators, insulators, cables, and switchgear. The occurrence of PD in these systems can be ascribed to uneven electric field distributions, material imperfections, or operational stresses, leading to the generation of various signals, including light, heat, odor, sound, electromagnetic waves, and high-frequency electric currents.
Detecting and characterizing PD is paramount to maintaining the integrity and reliability of electrical assets. PD measurements are used to evaluate the safety condition of insulation systems, enabling the identification of potential defects and facilitating proactive maintenance. There are several techniques for detecting PD in electrical systems. Ultrasonic detection involves capturing the ultrasonic noise emitted by PD using sensitive sensors, providing insights into the discharge localization and severity. Electromagnetic interference (EMI) detection monitors electromagnetic signals to locate areas of partial discharge activity. Acoustic emission detection focuses on capturing and analyzing the acoustic signals produced by PD, offering valuable information about discharge characteristics. High-frequency current transient measurements are effective in assessing insulation conditions and identifying potential failure points. Dissolved gas analysis (DGA) involves monitoring and analyzing the composition of gases dissolved in insulating oil, providing indications of PD and potential insulation degradation. Electric field measurements detect anomalies and areas of increased field intensity, serving as an indicator of partial discharge activity. In engineering applications, the original measured data are processed to extract statistical feature parameters and generate phase-resolved partial discharge (PRPD) patterns [2]. Subsequently, PD pattern recognition is carried out based on these processed data. PD pattern recognition involves the identification and analysis of characteristic electromagnetic, acoustic, and ultrasonic signals to distinguish the type of PD activity based on its unique pattern features. By utilizing advanced signal processing techniques and machine learning algorithms, PD pattern recognition enables the classification of partial discharge sources within high-voltage equipment. Consequently, PD pattern recognition plays a crucial role in condition monitoring, allowing for the early detection of insulation defects in an electrical system.

Traditional PD pattern features typically include waveform characteristics, spectral features, pulse counts, phase characteristics, and amplitude features. Traditional machine learning methods, such as artificial neural networks (ANNs) and support vector machines (SVMs), are conventionally utilized to learn from these features for pattern recognition. Tang et al. proposed a minimum-redundancy maximum-relevance (mRMR) algorithm-based feature optimization selection method to select the statistical features under a PRPD model [3]. The results indicated that PD severity assessment with the optimal feature set had a higher stability of precision than that with the traditional feature set. Zhou et al. utilized both time-domain and frequency-domain features and introduced an optimized SVM algorithm for the pattern recognition of PD using ultrasonic signals [4]. The results showed that the proposed SVM algorithm had a higher recognition accuracy and a faster convergence speed. Carvalho et al. compared three clustering algorithms (K-means, Gaussian mixture model, and mean-shift) and the SVM method for PD classification; the supervised SVM demonstrated a notably high average accuracy [5]. Furthermore, global optimization algorithms have been used to optimize the hyperparameters of SVM models in some studies. Sun et al.
proposed an improved whale optimization algorithm (IWOA) to optimize the hyperparameters of SVMs to identify different types of PD [6]. The resulting accuracy verified that the IWOA had a good effect on the parameter optimization of SVMs. Sun et al. also proposed an improved northern goshawk optimization (SCNGO) to optimize the penalty factor and the kernel parameter of the SVM [7]. Fujioka et al. utilized the maximum intensity observed in the PRPD pattern as the input data of an ANN [8]. The classification accuracy was improved by shifting the phase of the maximum sensor output to 0°, as proposed. Haiba et al. utilized ANNs for classifying defects in ceramic insulators [9]. The results from the ANN indicated that the overall recognition rate depended on the number of collected signals; a greater number of captured signals led to a higher recognition rate. The findings of the ANN technique were also verified by SVM and KNN models in [9]. Nevertheless, the major drawback of using traditional machine learning methods for PD pattern recognition is the necessity to extract features in advance.

In recent years, studies on the recognition of PRPDs, phase-resolved pulse sequences (PRPSs) [10], and other spectrograms for PD pattern recognition have demonstrated outstanding performance, attributable to advancements in image recognition technology. Aldosari et al. combined long short-term memory (LSTM) networks and convolutional neural networks (CNNs) to identify the form of PD patterns, demonstrating that the integrated CNN-LSTM network outperformed an individual CNN or LSTM network [11]. Additionally, they found that image data augmentation had a beneficial effect on both grayscale and RGB images. Fu et al. employed the DenseNet model in conjunction with transfer learning to extract features from the time-domain signal map of a gas-insulated switchgear PD [12]. The proposed method enabled direct pattern recognition research on the unstructured time-domain waveform spectrogram of PD. Yin et al. constructed a model for identifying the statistical parameters of PRPD patterns based on the Hausdorff-like distance and an improved CNN for PRPD pattern recognition [13]. They utilized Dempster-Shafer (D-S) evidence theory to combine the results of the two pattern recognition methods, thus enhancing the accuracy of PD pattern recognition. Song et al. utilized the histogram of oriented gradients (HOG) features of 3D PRPSs and designed an attribute selective Naïve Bayes (ASNB) classifier to recognize the 3D PRPS graphs [10]. The contrasting results, compared to those using statistical feature parameters, indicated that the use of HOG features resulted in a higher recognition accuracy and a stronger robustness in PD recognition under different voltages. Wang et al. enhanced the PRPS graph using the contrast-limited adaptive histogram equalization (CLAHE) algorithm and employed uniform local binary patterns (LBPs) as the feature vector of the PRPS graph [14]. They then used the Adaboost cascade classifier for the integrated learning of different classification models. The experimental results indicated that using ULBP as the feature vector could enhance the generalization ability of traditional algorithms, and that CLAHE enhancement improved the upper limit of the recognition rate. Nevertheless, due to their limited number of layers, these models may not comprehensively extract the PD features.
Lightweight CNNs are increasingly being employed in the recognition of PD due to their hardware-friendly nature [15][16][17][18]. A lightweight CNN, in the context of deep learning, refers to a neural network architecture designed with a relatively small number of parameters and computations, enabling efficient inference on resource-constrained devices such as mobile phones or edge devices. These networks are tailored to strike a balance between model complexity and predictive performance, making them ideal for deployment in PD pattern recognition. Currently, the most widely used mobile networks include ShuffleNet [19], MobileNet [20][21][22], and EfficientNet [23]. It should be pointed out that, even though significant progress has been made in lightweight CNN-based methods for recognizing PD patterns, the large model size still poses challenges in satisfying real-time recognition requirements, especially when deployed to embedded devices. Acknowledging the limitations of existing lightweight CNN-based methods, this paper proposes an ensemble learning method that combines an SVM and a CNN, with improved recognition accuracy, high solution efficiency, and reduced parameter quantity. A simplification of MobileNet V2 (SCNN) was undertaken to address the demand for more efficient models while preserving the problem-solving accuracy. Furthermore, the integration of a quadratic SVM (QSVM) with the SCNN model effectively enhances the accuracy of PD recognition. These innovations collectively demonstrate the efforts in streamlining complex network architectures while maintaining accuracy, and in integrating traditional machine learning methods with modern CNN models to improve the recognition accuracy. This approach not only advances the field by achieving enhanced recognition results, but also showcases practical relevance by being more suitable for deployment on terminal devices, aligning with the demands of real-world applications. The research makes a significant scientific contribution by addressing the challenges of real-time recognition requirements and deployment on embedded devices in the context of identifying PD patterns in electrical systems.

The Proposed PD Pattern Recognition Methodology
To be self-contained, the 3D graph of PRPS and MobileNet V2 will first be briefed, and the proposed PD pattern recognition methodology will then be detailed.

Three-Dimensional Graph of PRPS
According to the generating mechanism, PD can be classified into suspended electrode discharge, surface discharge, and metal tip discharge. Suspended electrode discharge comes from the presence of free or floating conductive particles within an insulation material. When subjected to an electric field, these particles can lead to localized discharges due to the concentration of electric fields in their vicinity. Surface discharge transpires when the electric field at the surface of the insulator exceeds the dielectric strength limit of the material. This can occur due to surface irregularities, impurities, or imperfections, leading to the formation of localized discharge along the insulation surface. Metal tip discharge comes from high electric field concentrations at the tips of protruding conductive elements within the insulation system. This concentration of the electric field at the tips leads to the initiation of localized discharge.
The 3D-PRPS graphs in PD analysis are visualization tools that represent the distribution of partial discharge events in three-dimensional space and provide a comprehensive view of the period, phase, and discharge amplitude of PD [10]. Typical 3D-PRPS graphs of suspended electrode discharge, surface discharge, and metal tip discharge are shown in Figure 1. For suspended electrode discharge, there are obvious discharge pulses in both the positive and negative halves of the phase. Comparatively, the phase width of a surface discharge is broader, while the pulse pattern of a metal tip discharge appears sporadic and dispersed. In conclusion, PRPS graphs manifest diverse visual patterns across various discharges, thus forming the fundamental basis for PD pattern recognition.
MobileNet V2
MobileNet V2 is a neural network architecture designed to facilitate efficient and high-performance deep learning on resource-constrained devices such as mobile phones and embedded systems [21]. The MobileNet V2 network uses inverted residual blocks with linear bottlenecks and shortcut connections based on the depthwise separable convolution of MobileNet V1, as shown in Figure 2, where W, H, and C are the width, the height, and the channel of the input image, respectively; N is the size of the kernel of the depthwise convolution; and M is the number of kernels in the pointwise convolution. In Figure 2a, the depthwise separable convolution splits standard convolutions into depthwise convolutions and pointwise convolutions. Inverted residual blocks, as shown in Figure 2b, are building blocks designed to capture nonlinearities more effectively than traditional residual blocks. The input is first expanded to a higher-dimensional space using a 1 × 1 pointwise convolution, then processed with depthwise convolutions, and finally projected back to a lower-dimensional space. Within linear bottlenecks, a linear activation function is utilized to alleviate the information collapse that arises when information undergoes nonlinear mapping from a high-dimensional space to a low-dimensional space. Additionally, shortcut connections are employed to facilitate information flow and aid gradient propagation during training. It is reported that MobileNet V2 achieves an accuracy of 72% on ImageNet classifications [24].
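As an illustration of the block just described, the following is a minimal PyTorch sketch of an inverted residual block with a linear bottleneck. It is illustrative rather than a reference implementation; the expansion factor of 6 and the channel sizes are conventional defaults, not values taken from this paper.

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    def __init__(self, c_in, c_out, stride=1, expand=6):
        super().__init__()
        hidden = c_in * expand
        # Shortcut only when spatial size and channel count are preserved.
        self.use_shortcut = stride == 1 and c_in == c_out
        self.block = nn.Sequential(
            nn.Conv2d(c_in, hidden, 1, bias=False),           # 1x1 expansion
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride, 1,
                      groups=hidden, bias=False),             # depthwise 3x3
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, c_out, 1, bias=False),          # linear projection
            nn.BatchNorm2d(c_out),                            # no activation here
        )

    def forward(self, x):
        y = self.block(x)
        return x + y if self.use_shortcut else y

x = torch.randn(1, 32, 56, 56)
print(InvertedResidual(32, 32)(x).shape)  # torch.Size([1, 32, 56, 56])
```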
CLAHE and Circular LBP Features
Contrast-limited adaptive histogram equalization (CLAHE) is an image processing technique used to improve the local contrast of an image by adjusting the intensity distribution in small regions [25]. Unlike traditional histogram equalization, CLAHE limits the contrast enhancement to prevent the over-amplification of noise. By adaptively modifying the contrast in different areas of the image, CLAHE effectively enhances the visual appearance of images, particularly in regions with varying contrast levels.

The circular local binary pattern (CLBP) is a texture descriptor used in computer vision and image analysis [26]. It works by comparing each pixel with its neighboring pixels on a circle to encode the local texture information into a binary pattern. The LBP feature vector is created by calculating the frequency of the occurrences of these patterns within a local neighborhood. This method is robust to monotonic grayscale changes and provides a compact representation of the texture information.
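A short Python sketch of these two operations follows, using OpenCV's CLAHE and scikit-image's circular LBP. The input image and the CLAHE parameters are illustrative assumptions; the non-rotation-invariant 'uniform' mapping with 8 neighbors yields 59 pattern labels, consistent with the 59 CLBP features used later as SVM inputs.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

# Toy grayscale stand-in for the 2D top view of a PRPS graph (hypothetical;
# in practice this would be the grayscaled, enhanced spectrogram image).
gray = (np.random.default_rng(0).random((256, 256)) * 255).astype(np.uint8)

# CLAHE enhancement; clipLimit and tileGridSize are illustrative settings.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(gray)

# Circular LBP: 8 neighbors on a radius-1 circle; the 'nri_uniform' mapping
# produces 59 distinct pattern labels, which are histogrammed into features.
lbp = local_binary_pattern(enhanced, P=8, R=1, method='nri_uniform')
hist, _ = np.histogram(lbp, bins=59, range=(0, 59), density=True)
print(hist.shape)  # (59,)
```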
The Proposed PD Pattern Recognition Methodology
2.4.1. Three-Dimensional PRPSs Acquisition
This study first developed a PD defect test prototype using an ultra-high-frequency (UHF) sensor to obtain PD signals. The voltage came from a non-partial-discharge booster transformer. The PD spectrogram and the amplitude of the partial discharge UHF signals under simulated defects were measured by the UHF sensor. The prototype device is shown in Figure 3. In Figure 3, the resistance-capacitance voltage-dividing device is composed of the coupled capacitance and the measuring impedance. The UHF sensor was 3 m away from the PD generator. The schematic diagram of the prototype is shown in Figure 3.

Three types of discharges, suspended electrode discharge, surface discharge, and metal tip discharge, could be generated in the PD generator. The discharging data for a total of 50 power frequency cycles at every 5° angle were recorded by the UHF sensor. The finally collected data sizes for suspended electrode discharge, surface discharge, and metal tip discharge were 262, 64, and 319, respectively.
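To make the data layout concrete, the following is a hedged Python sketch of assembling a single PRPS matrix from recorded pulses: 50 power cycles by 72 phase bins (360°/5°), with each cell holding a pulse amplitude. The pulse records and the keep-the-strongest-pulse rule are hypothetical illustrations; the paper does not specify its exact binning convention.

```python
import numpy as np

n_cycles, phase_step = 50, 5                      # 50 cycles, 5° resolution
prps = np.zeros((n_cycles, 360 // phase_step))    # 50 x 72 PRPS matrix

# Toy pulse records: (cycle index, phase in degrees, normalized amplitude).
pulses = [(0, 95.0, 0.8), (0, 275.0, 0.6), (1, 90.0, 0.7)]
for cycle, phase_deg, amp in pulses:
    col = int(phase_deg // phase_step) % prps.shape[1]
    prps[cycle, col] = max(prps[cycle, col], amp)  # keep the strongest pulse

# `prps` can now be rendered as the 3D PRPS graph (cycle, phase, amplitude).
print(prps.shape, prps.max())
```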
Furthermore, to determine the most suitable batch size when using batch training for the SCNN, various minibatch values were investigated. After five repeated runs of each setting, the averaged training time and accuracy are shown in Table 3. Compromising between the computational time and the recognition accuracy, the minibatch size was set to eight in this study when training the SCNN.
For the obtained 3D PRPS graph, more than half of the image space lacked feature information. Consequently, 2D processing was performed from the top view. The processed 2D color image then underwent grayscale conversion using the floating-point method, followed by image enhancement through CLAHE. CLBP feature extraction was performed to generate the CLBP feature space, and the method described in [14] was adopted to select features within it. Ultimately, 59 CLBP features were obtained and used as the input data of the SVM. The whole processing procedure and results are shown in Figure 4.
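The 2D pre-processing can be sketched as follows, under two assumptions: the top view of the 3D PRPS is available as an RGB rendering, and the "floating-point method" refers to the common luma weighting Gray = 0.30R + 0.59G + 0.11B.

```python
# Sketch of the grayscale step preceding CLAHE and CLBP extraction.
import numpy as np

def top_view_gray(rgb: np.ndarray) -> np.ndarray:
    """Convert a top-view RGB rendering of the PRPS to uint8 grayscale using
    the floating-point luma weights Gray = 0.30 R + 0.59 G + 0.11 B."""
    rgb = rgb.astype(np.float64)
    gray = 0.30 * rgb[..., 0] + 0.59 * rgb[..., 1] + 0.11 * rgb[..., 2]
    return np.clip(gray, 0, 255).astype(np.uint8)  # ready for CLAHE + CLBP
```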
To determine the most suitable SVM model, experiments were conducted for six types of SVM: linear SVM, quadratic SVM, cubic SVM, coarse SVM, medium SVM, and fine SVM. The training data comprised all the data for the three types of PD. The training of the different SVMs was conducted using the classification learner in MATLAB R2022b. The receiver operating characteristic (ROC) curve and the area under the curve (AUC) [28] were used to assess the performance of the different SVMs. The ROC curve is a graphical tool that plots the true positive rate against the false positive rate, providing a visual representation of a classifier's ability to discriminate between classes across different threshold values. A steeper ROC curve indicates better performance, and the AUC quantifies the overall performance of the classifier. AUC values range from 0 to 1, where a value closer to 1 indicates better discrimination, while a value near 0.5 suggests performance similar to random guessing. The ROC curves and AUC values for the different types of PD are shown in Figure 5.
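For readers outside MATLAB, a hedged scikit-learn sketch of the one-vs-rest ROC/AUC comparison is shown below; the polynomial-kernel SVC of degree 2 stands in for MATLAB's quadratic SVM and is an approximation, not the paper's implementation.

```python
# One-vs-rest AUC of a quadratic-kernel SVM for one PD class against the rest.
import numpy as np
from sklearn.metrics import roc_curve, auc
from sklearn.svm import SVC

def ovr_auc(X_train, y_train, X_test, y_test, positive_class):
    """Return the AUC for one PD class treated as the positive label."""
    svm = SVC(kernel="poly", degree=2, probability=True)  # quadratic SVM analogue
    svm.fit(X_train, y_train == positive_class)
    scores = svm.predict_proba(X_test)[:, 1]
    fpr, tpr, _ = roc_curve(y_test == positive_class, scores)
    return auc(fpr, tpr)  # closer to 1 = better discrimination
```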
Observing Figure 5, it is apparent that among the six types of SVM models, the quadratic SVM demonstrated higher AUC values across the three fault types. Therefore, the quadratic SVM model (QSVM) was selected to construct the PD pattern recognition model for the CLBP features.
Procedures of the Proposed ENS-SCNN-QSVM
Based on the aforementioned studies, our PD pattern recognition methodology was proposed; its overall procedure is explained in Figure 6 to facilitate its implementation by fellow researchers. After collecting data from the UHF sensors, the obtained images were first preprocessed, involving image resizing, image rotation, image graying, and image enhancement. For the SCNN model, the image needed to be processed to match the input size of the network: 224 × 224. For the QSVM model, the CLBP features were extracted, as shown in Figure 4. After image preprocessing, the SCNN and QSVM models were established separately. The output scores of the SCNN and QSVM, with as many categories as there are types of PD, were concatenated into one input vector, serving as the input for the ensemble learning model, and the ensemble learning model was trained using the bagging and discriminant method.
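A possible score-level reading of this ensemble is sketched below, with scikit-learn's bagged linear discriminant standing in for MATLAB's bagging-plus-discriminant ensemble; array names and the number of estimators are illustrative.

```python
# Score-level ensemble: per-class scores from SCNN and QSVM are concatenated
# into one vector per sample, then used to train a bagged discriminant model.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import BaggingClassifier

def train_ensemble(scnn_scores, qsvm_scores, labels):
    """scnn_scores, qsvm_scores: (n_samples, n_pd_types) score matrices."""
    X = np.hstack([scnn_scores, qsvm_scores])   # 2 * n_pd_types features
    ens = BaggingClassifier(LinearDiscriminantAnalysis(), n_estimators=50)
    return ens.fit(X, labels)
```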
Experimental Study
To demonstrate the performance of the proposed PD pattern recognition methodology, comprehensive experiments were conducted. In the experimental study, all recorded data were split into two parts, 70% for training and 30% for testing; the training dataset sizes for suspended electrode discharge, surface discharge, and metal tip discharge were 183, 45, and 223, respectively. A comparison was performed among SCNN, QSVM, random forest (RF) [29], extreme gradient boosting (XGBoost) [30], the ensemble learning of SCNN and QSVM (ENS-SCNN-QSVM), and some existing lightweight networks: MobileNet V2 [21], EfficientNetB0 [23], and ShuffleNet [19]. The comparison focused on the recognition accuracy, the parameter quantity, and the training time. For the identification of the three types of PD, each classifier was run independently 10 times to obtain an averaged recognition efficiency and an averaged training time. For SCNN, ENS-SCNN-QSVM, MobileNet V2, EfficientNetB0, and ShuffleNet, batch training was used, with a minibatch size of 8, a maximum of 20 training epochs, and a learning rate of 0.001. For RF, the number of trees was 100, the minimum number of samples for each leaf node was 5, each tree was trained using a random selection of 10 features, and the maximum depth of each tree was 100. For XGBoost, the number of weak classifiers was 100, the maximum depth was 10, and the learning rate was 0.1. The CLBP features were used in both the RF model and the XGBoost model. The experiments were conducted in MATLAB R2022b, using a single GPU on an AMD Ryzen 7 4800H with Radeon Graphics at 2.90 GHz and an NVIDIA GeForce GTX 1650 Ti. Notably, MobileNet V2, EfficientNetB0, and ShuffleNet were trained using transfer learning, where the networks pre-trained on ImageNet [24] were loaded. The initial weights of the main backbone were frozen; retraining was then conducted using the training data presented in this paper. The PD recognition results are shown in Table 4. The confusion matrices for the eight methods run once on the testing data are shown in Figure 7. The precision, recall, and accuracy for each method with the testing data are presented in Table 5; SCNN and ENS-SCNN-QSVM both achieved a recall of 100%. Among the eight methods for PD pattern recognition, suspended electrode discharge and metal tip discharge were easily misidentified. For the overall recognition rate on the testing dataset, the proposed ENS-SCNN-QSVM is the highest, at 70.6%.
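Approximate Python equivalents of the RF and XGBoost settings listed above are sketched below for reference; the original models were trained in MATLAB on the 59 CLBP features, so behaviour may differ in detail.

```python
# Approximate scikit-learn / xgboost counterparts of the settings above.
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

rf = RandomForestClassifier(
    n_estimators=100,      # 100 trees
    min_samples_leaf=5,    # minimum samples per leaf node
    max_features=10,       # 10 randomly selected features per split
    max_depth=100,         # maximum tree depth
)
xgb = XGBClassifier(n_estimators=100, max_depth=10, learning_rate=0.1)
```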
Conclusions
The precise identification of PD is pivotal for ensuring the reliability of the power supply within a power system. As CNNs are progressively employed in PD pattern recognition, the challenge of large model sizes persists, especially when striving to meet real-time demands on embedded devices. This paper introduces an ensemble learning method that combines an SCNN and a QSVM for identifying PD patterns. The SCNN was constructed based on the inverted residual blocks used in MobileNet V2. The QSVM model was established using the CLBP vectors extracted from the enhanced 2D gray image. The SCNN and QSVM scores were ensembled using the bagging and discriminant methods. Comparative results with existing lightweight CNNs demonstrate the proposed method's advantages in recognition accuracy, response efficiency, and parameter quantity, making it more suitable for deployment on terminal devices for PD pattern recognition.
In conclusion, the presented method shows advances in the field of PD pattern recognition, offering potential applications in the real-time identification of online PD in electrical equipment such as switchgear. By situating the UHF PD sensor outside the electrical equipment designated for testing, and subsequently connecting it to an oscilloscope or computer host through the PD host, one can display the PRPS spectrum, facilitating the application of the proposed method. Further research and development in this direction can contribute to exploring multi-source mixed PD pattern recognition, focusing on separating mixed PD signals and extracting their respective characteristics, and to investigating different ways of combining the SCNN and QSVM.
Figure 3. The prototype testing device for PD.
Figure 5. The ROC curves and AUC values for (a) suspended electrode discharge, (b) surface discharge, and (c) metal tip discharge.
Figure 6. The procedure of the proposed PD pattern recognition methodology.
Table 1. Accuracy under different numbers of bottleneck residual blocks.
Table 2. The proposed SCNN body architecture.
Table 3. Accuracy and runtime under different training minibatch sizes.
Table 4. PD recognition results using 8 different methods.
Table 5. The precision, recall, and accuracy for 8 methods on the testing data.
v3-fos-license
2017-04-05T14:10:07.411Z
2013-10-18T00:00:00.000
4060947
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0078004&type=printable", "pdf_hash": "ce4fc383b1171a3a5ade56b5b8aba7ef6bce2ec1", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44617", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "sha1": "92115bf02046690f8d36e0a42079aac0fa1edcc3", "year": 2013 }
pes2o/s2orc
A Multidisciplinary Approach Providing New Insight into Fruit Flesh Browning Physiology in Apple (Malus x domestica Borkh.)
In terms of the quality of minimally processed fruit, flesh browning is fundamentally important in the development of an aesthetically unpleasant appearance, with consequent off-flavours. The development of browning depends on the enzymatic action of the polyphenol oxidase (PPO). In the 'Golden Delicious' apple genome ten PPO genes were initially identified and located on three main chromosomes (2, 5 and 10). Of these genes, one element in particular, here called Md-PPO, located on chromosome 10, was further investigated and genetically mapped in two apple progenies ('Fuji x Pink Lady' and 'Golden Delicious x Braeburn'). Both linkage maps, made up of 481 and 608 markers respectively, were then employed to find QTL regions associated with fruit flesh browning, allowing the detection of 25 QTLs related to several browning parameters. These were distributed over six linkage groups, with LOD values spanning from 3.08 to 4.99, and explained a rate of phenotypic variance from 26.1 to 38.6%. Anchoring of these intervals to the apple genome led to the identification of several genes involved in polyphenol synthesis and cell wall metabolism. Finally, the expression profile of two specific candidate genes, up and downstream of the polyphenolic pathway, namely phenylalanine ammonia lyase (PAL) and polyphenol oxidase (PPO), provided insight into flesh browning physiology. Md-PPO was further analyzed and two haplotypes were characterised and associated with fruit flesh browning in apple.
Introduction
Fruit quality features, represented by the biochemical and physical properties making a fruit edible and appreciated by consumers, are nowadays considered a major priority in several breeding programmes worldwide. For this reason, in the last decade the scientific community has initiated a series of extensive studies addressed at examining the genetic and molecular mechanisms responsible for controlling the fundamental physiological processes ultimately leading to fruit quality [1-5]. In the modern system of fruit distribution and marketing, fruit quality also needs to be guaranteed during storage, allowing high quality standards to be maintained. In addition to this, pre-prepared fresh-cut fruit is rapidly gaining interest, given the increased consumer demand for fresh and convenient food [6,7]. However, the fruit processing procedure can encounter serious problems, which need to be prevented in order to ensure high quality and at the same time avoid substantial loss of fruit. One of the most important problems occurring during fruit processing is flesh browning, which is undesirable due to its aesthetically unpleasant appearance and the consequent off-flavour [8,9]. The apple's susceptibility to flesh browning is thought to be the result of a complex interplay between the polyphenol oxidase (PPO) enzyme and the polyphenol content [10]. PPO is a bi-copper metalloenzyme showing two conserved copper-binding domains, CuA and CuB, which interact with molecular oxygen and phenolic substrates [11]. The PPO enzyme catalyses the formation of quinones from phenols through the hydroxylation of monophenols to o-diphenols (such as chlorogenic acid and (-)-epicatechin) and their subsequent dehydrogenation to o-quinones [12]. O-quinones are highly reactive, as they can undergo self-polymerisation or react with amines and thiol groups to form brown pigments (Figure 1).
In the cell, PPO is encoded in the nucleus and translated in the cytoplasm, and the proPPO formed is stored in the plastid. The enzyme is thus physically separated from its substrate, which is located in the vacuole [12-14]. The loss of subcellular structural compartmentation, following damage or cutting, brings the PPO enzyme into contact with phenolic compounds. The quinones produced by oxidation rapidly condense to generate relatively insoluble brown polymers (melanins), ultimately detected as the browning reaction. For the conversion of o-dihydroxyphenols into o-benzoquinones, PPO can also use O2 as a second substrate (catecholase activity; [13]). Initial browning can also occur during storage in a controlled atmosphere, causing serious fruit loss. The importance of browning prevention depends on the fact that the symptoms in whole fruit are not usually externally visible, and thus the problem can only be detected after purchase, undermining consumer confidence. Of the several strategies adopted to prevent fruit flesh browning, the application of low O2 together with a high concentration of CO2 is the most common, although an excessive level of CO2 may also affect fruit quality negatively [15]. For quality control in minimally processed fruit, another system for reducing flesh browning is control of PPO activity. This enzyme can indeed be inactivated by ascorbic acid or its derivatives, which reduce phenoxyl radicals and quinone forms of phenolics back to precursor phenols, through a coupled oxidation/reduction reaction [16,17]. However, the effect of ascorbic acid is temporary, as it is further irreversibly oxidized into dehydroascorbic acid, enabling browning to occur again [18]. Given this mechanism, the development of functional markers for the selection of new apple accessions characterised by a minimum browning phenotype would be a valuable approach, also taking into account the fact that several chemical (such as sulphate) and physical prevention strategies are no longer accepted by the FDA [15,19]. To date, genetic control of flesh browning has only been partially assessed using the QTL mapping approach, revealing the presence of two intervals located on linkage groups 3 and 17 [20,21]. However, these two QTLs have not yet been shown to be associated with any candidate genes involved in flesh browning metabolism. In the apple, the only PPO gene positioned to date on a genetic map is the one reported by [22], but its genomic location on chromosome 5 has not yet been associated with any browning phenotype. In this survey, two independent apple populations were employed to examine and characterise fruit flesh browning in apple. A new gene, here called Md-PPO, was identified and associated with differences in browning rate. To better understand this physiological mechanism, both transcript and metabolite assessments were carried out on a set of apple samples developing browning. Finally, a new hypothesis regarding the physiological regulation of flesh browning is discussed here.
Plant materials
To investigate fruit flesh browning in apple and detect QTL regions involved in the control of this phenotype, two independent full-sib populations were employed. The first progeny came from the controlled cross between 'Fuji' and 'Pink Lady' (POP_1), while the second (POP_2) was created by crossing the apple cv. 'Golden Delicious' with 'Braeburn'.
Both populations were planted in the same experimental orchard in 2003 and both belong to the breeding programme currently underway at FEM (Foundation Edmund Mach), located in northern Italy. All the plant material was grafted onto M9 rootstock and maintained using standard technical management for pruning and disease control.
Fruit Flesh Browning Phenotyping
Fruit flesh browning in apple was assessed in the two populations at two specific stages in the fruit ripening process. Apples from POP_1 were assessed at harvest, while for POP_2 flesh browning was also measured after two months' cold storage, to estimate the impact of low temperature on the control of this disorder. For convenience in terms of the genotyping procedure, both progenies were represented by the two parental cultivars plus 94 seedlings each (to fill a 96-well plate). The browning phenotype was assessed by measuring the flesh colour on the two halves of a cut apple with a Konica Minolta CM2600D digital colorimeter. Five whole apples per genotype were considered, collecting ten measurements per genotype in total. The use of this instrument allowed digital characterisation of the flesh colour, based on the tristimulus coordinates L*, a* and b*. This colour space was chosen over the more popular RGB (red, green and blue) system because it provides a colour difference more uniform with respect to human perception; for this reason the L*a*b* colour space is generally employed in photoelectric measurements. The L* value spans from 0 to 100, corresponding to black (0) and white (100), while the a* and b* indices do not have specific numerical limits. Negative values of a* indicate a green shade while positive values point to magenta; for b*, negative and positive values correspond to blue and yellow respectively. Analysis of flesh browning was performed by measuring the flesh colour at three specific moments on the same sample: after cutting (T0), after 30 minutes (T30) and after 60 minutes (T60). Furthermore, the absolute values of L*, a* and b* taken at these three stages (T0, T30 and T60) and the delta values (∆) between T30 and T0, T60 and T0, and T60 and T30 were considered. The percentage variation, represented by ((Ti - T0)/T0) x 100, where Ti is the L*, a* or b* value after 30 or 60 minutes and T0 is the corresponding index measured after cutting, was also calculated. Finally, apple flesh browning was examined using twenty-four new colour parameters.
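The delta and percentage-variation indices defined above reduce to a small helper function; the numeric values in the example below are illustrative only.

```python
# Browning indices for one colorimetric parameter (L*, a* or b*):
# delta = Ti - T0, percentage variation = ((Ti - T0) / T0) * 100.
def browning_indices(t0: float, ti: float) -> tuple[float, float]:
    """Return (delta, percentage variation) for one colour index."""
    delta = ti - t0
    pct = (ti - t0) / t0 * 100.0
    return delta, pct

# Example: a* measured after cutting (T0) and after 60 min (T60)
d_a, pct_a = browning_indices(t0=2.0, ti=8.5)  # illustrative values
```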
PPO identification
To target the PPO gene set located over the apple genome assembly, BLAST searching was performed, using as the query the only available and functionally well-characterised PPO gene for apple, coded in GenBank (www.ncbi.nlm.nih.gov) as L29450 [12]. BLAST was performed using the internal resources of FEM (http://genomics.research.iasma.it) and the predicted gene set produced during the apple genome sequencing project [23]. The sequences retrieved were confirmed through gene annotation, performed using the Uniprot database (http://uniprot.org), and further compared with the set released by GDR (http://rosaceae.org). Gene distribution over the chromosomes was illustrated using MapChart [24].
Marker development
Two types of functional molecular markers related to the PPO elements identified here were newly designed: SSRs and SNPs. Initially, a set of SSR motifs was discovered in silico in the genomic contigs of each PPO, using Sputnik software (http://espressosoftware.com/sputnik/index.html). Primer pairs (Table S1) flanking the microsatellite motifs were designed with Primer3 software (http://frodo.wi.mit.edu/primer3/) and further used for specific amplification. The SSR markers were initially tested on the four parental cultivars to search for polymorphism, in a PCR mix with a final volume of 20 μL. The PCR conditions were as follows: 5 ng of DNA, 10X buffer, 0.25 mM dNTPs, 0.075 µM of forward (labelled) and reverse primers and 1000 U of 5Prime® Taq polymerase. Temperature conditions were: initial denaturation at 94 °C for 150 s, followed by 32 cycles at 94 °C for 30 s, 58 °C for 45 s and 72 °C for 60 s, and a final extension at 72 °C for 5 min. In addition to the SSR markers, a set of SNPs related to the PPO gene corresponding to L29450 was also created. Three pairs of primers were designed in order to cover the full length of the entire gene: PPO_1 for: CTTCTTGGTCTTGGAGGTCT and PPO_1 rev: ATCGGAGCTTGTCGTAGAGA; PPO_2 for: CCACAACTCATGGCTCTTCT and PPO_2 rev: CTAACTCTGCTGTCTCGTTG; PPO_3 for: GTTCTTTGGGAACCCGTACA and PPO_3 rev: CATCAAACTTCACAGCCACG. Amplification of the three fragments was carried out in a final volume of 10 µL containing 5 ng of DNA, 10X buffer, 0.25 mM dNTPs, 1 µM forward and reverse primers and 1000 U of 5Prime® Taq polymerase. PCR was carried out using an ABI 2720 Thermal Cycler (Applied Biosystems by Life Technologies, Carlsbad, CA, USA) with the following thermal conditions: denaturation at 94 °C for 2 minutes, followed by 35 cycles of denaturation at 94 °C for 45 s, annealing at 65 °C for 45 s and extension at 72 °C for 2 min, finishing with a final extension at 72 °C for 10 min. Amplicons were then purified by adding 4 µL of water and 1 µL of ExoSAP to 1 µL of PCR product. The mix was then incubated for 45 minutes at 37 °C, followed by 15 minutes at 75 °C. In a second step, 2 µL of 5X buffer, 1 µL of the forward primer (at 3.2 µM), 1 µL of BigDye terminator (Applied Biosystems by Life Technologies, Carlsbad, CA, USA) and 0.5 µL of water were added to the purification mix to reach a final volume of 10 µL. The sequencing reaction was performed for 2 minutes at 96 °C, followed by 39 cycles of denaturation (96 °C for 10 s), annealing (50 °C for 5 s) and extension (60 °C for 4 minutes). The sequencing runs were carried out using an ABI PRISM® 3730 capillary sequencer (Applied Biosystems by Life Technologies, Carlsbad, CA, USA) and finally the SNPs were genotyped by re-sequencing the amplicons across the two populations. SNP scoring was carried out by analysing the sequences obtained using Pregap4 software (Staden package; http://staden.sourceforge.net).
QTL analysis
The new markers designed in this study were used to improve the current version of the two molecular maps. The POP_1 map was originally created for a comprehensive examination of fruit texture in apple [25], while POP_2 was employed to anchor the contigs produced during the apple genome sequencing project [23]. Marker integration in the context of these two maps was done using JoinMap 4.0 software [26], employing the Kosambi mapping function, while visual display of the linkage groups was carried out using MapChart software [24]. Finally, the genetic and phenotypic data of 47 seedlings of POP_1 were used for QTL analysis, in order to target the genomic regions associated with apple flesh browning. QTL intervals were detected using MapQTL® 6 [27] and the Interval Mapping (IM) algorithm.
To reduce residual variance and the effect of possible false positives, the markers with the highest LOD values were selected as cofactors in the subsequent MQM computation. The threshold for calling positive QTLs was established at a LOD value of 3, after running 1000 permutations. To examine the genetic regulation of this phenotype, each QTL interval was anchored on the assembled apple genome and the sequences of the predicted genes were retrieved using the Computational Web Resources of FEM (www.genomics.research.iasma.it). From the gene set underlying the QTL regions, the predicted amino acid sequences were employed to perform gene annotation by interrogating the UniProt (Viridiplantae) database. Finally, the effect of the Md-PPO haplotype was validated on two sets of seedlings belonging to POP_2, 64 measured at harvest and 58 after two months of cold storage.
RNA isolation and qPCR analysis
For transcript profiling, flesh samples from four replicates of 'Golden Delicious' collected at T0 (immediately after cutting), T30 and T60 were used. Cortex tissues were initially frozen in liquid nitrogen and further ground to a fine powder. RNA extraction was performed using the Spectrum Plant Total RNA kit (Sigma). RNA was quantified using a NanoDrop ND-8000 spectrophotometer (Thermo Scientific, USA), while its purity and integrity were assessed with an Agilent 2100 Bioanalyzer. The RNA isolated from the apple samples (T0, T30 and T60) was converted into cDNA with the SuperScript VILO cDNA Synthesis Kit (Invitrogen). Prior to this, 1 µg of total RNA from each sample was pretreated with 2 units of rDNase I (DNA-free kit, Ambion) and used as the starting template. To clarify the physiological regulation of flesh browning, the expression of two genes, Md-PAL and Md-PPO, was assessed by using the following specific primer pairs: Md-PAL for: AGACCCTCAATGCCTCAGAA; Md-PAL rev: CAAGCCAGAACCAACAGCAG; Md-PPO for: Real-time PCR was performed using ACTIN as the housekeeping gene (Md-ACT for: TGACCGAATGAGCAAGGAAATTACT; Md-ACT rev: TACTCAGCTTTGGCAATCCACATC). Transcript quantification, carried out using the ViiA7 instrument (Applied Biosystems), was performed with the KAPA SYBR FAST Universal qPCR kit (Kapa Biosystems). PCR thermal conditions were: incubation at 95 °C for 20 s, then 40 cycles of 95 °C for 1 s and 60 °C for 20 s. Finally, a cycle at 95 °C for 15 s, 60 °C for 1 min and 95 °C for 15 s was applied to determine the melting curve. The Ct results were obtained by averaging three independent normalised expression values for each sample, computed using Q-gene software [28]. Relative gene expression was plotted as the mean of the normalised expression values of the triplicates.
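As a simplified sketch of the relative-expression computation, the mean normalised expression of a target gene against the ACTIN reference can be approximated with the 2^-ΔCt form, assuming 100% amplification efficiency (Q-gene additionally models the measured efficiencies); the Ct values below are illustrative.

```python
# Mean normalised expression under the simple 2^-(Ct_target - Ct_reference)
# approximation, averaged over replicate wells.
import numpy as np

def mean_normalised_expression(ct_target: list[float], ct_actin: list[float]) -> float:
    """Average 2^-(Ct_target - Ct_reference) over replicates (100% efficiency)."""
    dct = np.asarray(ct_target) - np.asarray(ct_actin)
    return float(np.mean(2.0 ** (-dct)))

# Example with three technical replicates (illustrative Ct values)
mne_t30 = mean_normalised_expression([24.1, 24.3, 24.0], [18.2, 18.1, 18.3])
```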
Phenolic profiling
Phenols were extracted from the ground cortex tissues of 'Golden Delicious' collected at T0, T30 and T60, following the procedure reported in [29], and analysed as described in [30]. Briefly, 2 g of tissue powder collected from four replicates at each experimental time, previously prepared for RNA isolation, were treated in sealed glass vials using 4 mL of a water/methanol/chloroform solution (20:40:40). After vortexing for 1 min, the samples were mixed using an orbital shaker for 15 min at room temperature and further centrifuged at 1000 g (4 °C) for 10 min, after which the upper phases, made up of the aqueous methanol extract, were collected. Extraction was repeated by adding another 2.4 mL of water/methanol (1:2) to the pellet and chloroform fractions. After the final centrifugation, the upper phases from the two extractions were combined, brought to a volume of 10 mL and filtered with a 0.2 μm PTFE filter prior to liquid chromatography-mass spectrometry analysis. Ultra-performance liquid chromatography was performed employing a Waters Acquity UPLC system (Milford, MA, USA) coupled to a Waters Xevo TQMS (Milford, MA, USA) working in ESI ionisation mode. Separation of the phenolic compounds was achieved on a Waters Acquity HSS T3 column (1.8 μm, 100 mm × 2.1 mm; Milford, MA, USA), kept at 40 °C, with two solvents: A (water containing 0.1% formic acid) and B (acetonitrile containing 0.1% formic acid). The samples were eluted according to the linear gradient method described in detail by Vrhovsek et al. [30]. 2 μL of the final extract were injected by an autosampler set at a temperature of 6 °C. Data were processed using Waters MassLynx 4.1 and TargetLynx software. The compounds analysed belong to four chemical classes: hydroxycinnamic acids, dihydrochalcones, flavan-3-ols and flavonols.
Distribution of apple flesh browning traits
Flesh browning in apple is understood as the colour change in the fruit cortex after cutting. Analysis was performed immediately after cutting and after 30 and 60 minutes. Within 1 hour a significant variation in colour intensity was observed, as shown by the four parental cultivars (Figure S1 i-iv). Flesh browning was assessed in the two populations (POP_1 and POP_2), and the distribution of the several parameters considered to examine the phenotype is illustrated in Figure 2. This shows the data distribution of the L*, a* and b* absolute values, measured after 60 minutes of exposure to the air, in order to evaluate the maximum trait variability existing within each progeny. In both cases, all the browning sub-phenotypes showed a quantitative distribution, which is the main prerequisite for a QTL mapping survey. However, some exceptions with skewed segregation were detected, such as L* at T60. It is also worth noting that analysis of the data distribution revealed a transgressive type of segregation (particularly in POP_1). A set of seedlings exceeding the phenotypic values of the two parents was indeed observed, making the effort to identify molecular markers suitable for the anticipated selection of this trait worthwhile. This type of segregation is also reflected in the differences in the L*, a* and b* values measured over the three stages for the four parental cultivars (Figure S1 v).
PPO Organisation in the Apple Genome
To identify the several PPO genes positioned within the apple genome assembly, the L29450 sequence was used as the query, as to date it is the only PPO gene whose regulation has been experimentally validated for apple [12]. The predicted apple gene set was investigated using iterative nucleotide BLAST searching, and only genes showing an e-value of ≤ 2e-04 were further considered as possible polyphenol oxidase candidates. The resulting gene list was further annotated using BLASTp (adopting the amino acid sequences resulting from the predicted gene set) through interrogation of the Uniprot database. Of the initial set, ten genes were finally annotated as PPOs and located on three distinct chromosomes (Figure 3).
In particular, one element was located at the bottom of chromosome 2 (MDP0000500159), five were clustered at the top of chromosome 5 (MDP0000609966, MDP0000222503, MDP0000207799, MDP0000173059 and MDP0000221498), while the last group, made up of four elements, was detected at the bottom of chromosome 10 (MDP0000744636, MDP0000234782, MDP0000709073 and MDP0000699845; available at GDR: http://www.rosaceae.org/gb/gbrowse/malus_x_domestica/ and FEM: http://genomics.research.iasma.it/gb2/gbrowse/apple). The number of PPOs identified for apple is also consistent with what has been reported so far for other species. In tomato, for instance, seven genes belonging to this family were positioned on chromosome 8 [31]. In this species, as observed for apple, the PPO elements are organised in clusters. Compared with tomato, apple shows three chromosomes characterised by the presence of PPO elements, although chromosome 2 contains only one element, and chromosomes 5 and 10 are homoeologous, due to the recent genome duplication that occurred in this species [23]. Moreover, other authors have already presented the duplication of these two chromosomes in apple in previous publications [32-35]. Recently, a comprehensive survey performed on 25 sequenced genomes revealed that other species share a similar number of PPO genes [36], such as Sorghum bicolor (8), Glycine max (11) and Populus trichocarpa (11). Besides this, it was also discovered that the PPO family can vary greatly in size across species. Indeed, Solanum tuberosum shows only 5 PPO elements [37], Zea mays 6 and Vitis vinifera 4. This difference is thought to be a consequence of duplication events, responsible, for instance, for the complete loss of this gene family in Arabidopsis thaliana [36,38]. The organisation of the cluster of PPO genes in apple is also supported by the phylogenetic analysis carried out by Tran and colleagues, which suggests the tandem arrangement of PPO genes on chromosomes as a consequence of these recent genome duplication events. Of the total number of PPO genes found in the apple genome, the one encoded as MDP0000699845 was shown to be the most similar to L29450, initially used as the query, and for this reason it was further studied in more detail, to explore its association with the fruit flesh browning rate.
Functional marker design and QTL mapping
The first set of functional markers designed was related to microsatellite motifs discovered on the contigs where the ten PPO genes were targeted. Initially, these SSR markers were tested for polymorphism in the four parental cultivars ('Fuji', 'Pink Lady', 'Golden Delicious' and 'Braeburn'). Of these markers, only two primer pairs provided interesting results: Md-PPO_SSR_ch5e and Md-PPO_SSR_ch10d (Figures S2 and S3). The position of these two markers, assigned in silico to chromosomes 5 and 10 (as reported by the Genome Browser), was further confirmed by genetic mapping in the two populations considered in this investigation (Figures S4 and S5). Md-PPO_SSR_ch05e was mapped at the top of LG (linkage group) 5 in both progenies (at 17.4 cM in POP_1 and 4.8 cM in POP_2). In the same way, Md-PPO_SSR_ch10d was mapped at 64.4 cM and 80 cM from the top of linkage group 10 for POP_1 and POP_2 respectively. The different genetic positions observed could be attributed to the different genome coverage of these two linkage maps, which is higher in POP_2.
Because SNPs found within gene sequences are considered to be one of the most frequent and important causal events controlling phenotype variation [39-41], a set of SNP markers was also exploited. It is indeed known that substitution of a single base can modify the amino acid sequence, leading to functional variation [42]. Furthermore, given that they are the most abundant type of molecular marker within the genome, SNP markers are currently recognised as the most valuable in finding associations with phenotypic traits [43]. As proof of this, several references have already reported the use of SNPs in trait association analysis [41,44-51]. SNP discovery was performed by selecting the gene MDP0000699845 as the main candidate, on which three primer pairs were designed to characterise the full-length sequence of the gene. On aligning the sequences read in the four parental cultivars, two SNPs were exploitable in POP_2 only, being homozygous in the parents of POP_1. The two SNPs, located respectively at 170 bp and 500 bp, were genotyped by sequencing all the seedlings in the 'Golden Delicious' x 'Braeburn' population, showing 1:1 segregation (χ² = 0.4). Both SNPs were heterozygous in the apple cv. 'Braeburn' (SNP170: CG and SNP500: AG), while in 'Golden Delicious' the allelotype configuration was always homozygous (GG) for both SNPs. The close proximity between these two SNPs generated identical segregation (a high LD level), which enabled the construction of a haplotype, further positioned on the POP_2 genetic map (Figure S4). As expected, the Md-PPO gene was mapped by means of this haplotype on LG 10, at 1.7 cM from Md-PPO_SSR_ch10d. This distance may however be overestimated, due to the fact that the SNP set used to saturate the maps came from the sequencing of the 'Golden Delicious' genome [52]. Because of this, the SNPs were fully informative only for the 'Golden Delicious' parent ("ab x aa" and "ab x ab"), while in 'Braeburn' the segregation type "aa x ab" was completely absent, leading to a reduced representation of recombination events for this paternal cultivar in the POP_2 map. Of the two SNPs identified by comparing 'Golden Delicious' and 'Braeburn', SNP170 causes an amino acid substitution in the protein sequence (from glycine in 'Golden Delicious' to glutamic acid in 'Braeburn'; Figure S6) that may theoretically contribute to modifying the occurrence of flesh browning observed in this progeny. Integration of these markers led to an improved version of the two maps, which were subsequently used for a QTL mapping survey carried out on POP_1 (composed of 481 markers, for a total length of 1430.8 cM and an average distance between markers of 2.9 cM) and a further haplotype validation step performed on POP_2 (composed of 608 markers, for a total length of 1204 cM and an average distance between markers of 1.9 cM). Within the POP_1 genome, six linkage groups out of seventeen (the apple chromosome number) showed significant intervals, for a total of twenty-five QTLs associated with fruit flesh browning. Five of these were identified when the a* and b* absolute values measured at T0, T30 and T60 were considered. In this case it is interesting to note that at the three measuring times not a single QTL was assigned to the L* absolute value parameter, which is related to colour brightness (black/white gradient).
The distribution of the a* value led instead to the identification of a QTL located on LG 16 (LOD: 3.85) when measured after cutting, and on LG 11 (LOD: 4.12) and LG 14 (LOD: 3.92) at the two stages (T30 and T60) of exposure to air (Figure 4 and Table 1). The change in LGs during the time-course was also detected for the b* colorimetric parameter. Indeed, after cutting the QTL was identified on LG 9 (LOD: 4.0), while after 30 and 60 minutes another set of QTLs was instead targeted on LG 11 (LOD: 3.21), LG 13 (LOD: 3.93) and LG 14 (LOD: 3.39). LOD profile comparison of the three times for each colorimetric value revealed a shift in the set of chromosomes associated with the QTLs from the first assessment (immediately after cutting) as compared to fruit exposed to air. It is worth noting that the QTLs identified after cutting (T0) are mainly located on chromosomes 9 and 16, not being detected at any other location over the genome. These two QTLs should thus be more associated with flesh colour properties than with flesh browning, a hypothesis supported by the recent discovery of the gene set controlling red flesh in the apple, located respectively on these two chromosomes, namely MdMYB10 [53,54] and MdLAR [55,56]. As browning is thought to be the intrinsic capacity of a fruit to change flesh colour after wounding, a delta (∆) value, calculated between two moments, would possibly be more effective in finding genomic regions involved in the control of browning development. When the ∆ values (as well as the variation expressed as the percentage variation between T30-T0 and T60-T0) for the three colorimetric indices (L*, a* and b*) were considered in the computation, two linkage groups, LG 10 and LG 11, were mainly shown to be involved, spanning from a minimum LOD value of 3.08 (26.1% of expressed variance) to a maximum of 4.68 (36.8%; Table 1). The last group of targeted QTLs was instead located on chromosome 14, and was associated with the ∆ value for a* and b* between T60 and T30, the absolute value for b* at T30 and T60, and for a* at T60, with LOD values from 3.39 (28.3%) to 4.99 (38.6%). Of the chromosomes identified in this survey as relevant in the control of flesh browning, chromosomes 10, 11 and 14 are the most significant, due to the presence of several QTLs in clusters associated with different colour parameters, whereas the other three (9, 13 and 16) are represented by only one QTL each. The QTL-LOD profile defined for LG 11 also suggested the presence of two intervals associated with fruit flesh browning, one located at 25 cM and one below, at approximately 50 cM from the top of the linkage group. It is moreover worth noting that interesting QTLs associated with the concentration of ascorbic acid (AsA) and its oxidised form dehydroascorbate (DHA), which play an important role in the control of fruit flesh browning, were recently mapped [57] on chromosomes 10 (containing the Md-PPO gene) and 11. Indeed, in apple the concentration of DHA was correlated with susceptibility to flesh browning [20], while in pears the tendency to develop internal browning was linked to a decreased concentration of AsA [58]. This finding, together with the results discussed here, suggests that chromosome 10 is the best candidate for the regulation of the flesh browning phenotype, due to the simultaneous presence of genes responsible for its occurrence as well as its prevention.
Finally, to better understand the genetic control of flesh browning, the genomic intervals identified in this work were anchored on the genome assembly of 'Golden Delicious', in order to allow in silico annotation. Of the several genes identified over the six targeted chromosomes (Table S2), it is worth highlighting 52 elements (Table 2) mainly involved in cell wall metabolism and secondary metabolites (polyphenols [60-62]). In the first category, three gene families known to participate in cell wall disassembly were retrieved, namely pectinesterase, pectin lyase and polygalacturonase. Furthermore, these fifteen genes were mainly located on chromosomes 10 and 16, which are known to be hot spot regions for the control of fruit texture physiology [25]. In particular, the presence on chromosome 10 of a polygalacturonase gene responsible for the texture variability observed in apple [48,51] has recently been validated. The co-location of genes encoding cell wall degrading enzymes and QTLs discovered for fruit flesh browning has already been described in peach [59], where two browning QTLs co-mapped with candidates annotated as pectate lyase and expansin genes. The simultaneous presence of these two categories in the QTL intervals discovered for flesh browning can be explained by the fact that disassembly of the internal cellular compartmentation is needed for flesh browning to occur. In this scenario, we can surmise that this class of genes promotes the process, facilitating the interplay between the polyphenol oxidase (stored in the plastid) and its phenolic substrate (stored in the vacuole). Finally, another PPO element was newly discovered on chromosome 13, not previously targeted during the in silico genome analysis (because of its poor sequence similarity with L29450), increasing the final number of PPO genes present in the apple genome to eleven.
Md-PPO haplotype validation
The effect of the Md-PPO haplotype (defined by the two SNPs targeted within the MDP0000699845 full-length sequence) was further validated in POP_2 ('Golden Delicious' x 'Braeburn'), assessed at two specific ripening stages, at harvest and after two months' cold storage, in order to evaluate the effect of low storage temperature on flesh browning development. Marker validation was performed by grouping the seedlings into two classes according to their haplotypes (given as "np" and "nn" in Figure 5), which showed significant differences for ∆a* at T30-T0 and T60-T0 based on the LSD-ANOVA test (P-value ≤ 0.05). For the two ripening stages (harvest and after two months' cold storage) both parameters were statistically significant, with the heterozygous haplotype associated with a reduced variation in flesh colour, thus less prone to developing fruit flesh browning after cutting. These results also suggest that the colour change observed during flesh browning can be mostly attributed to variation in the a* parameter. This is also in agreement with the observation of significant changes in this colour index between T0 and T30 for the four apple cultivars, suggesting that fruit browning develops rapidly in the first 30 minutes (Figure S1).
Figure 4. QTL LOD profiles; LG9: b*_T0 (solid red); LG16: a*_T0 (solid black). The QTL analysis was performed using the phenotypic data of 47 seedlings of POP_1. For each seedling a total of five apples were considered (biological replicates), and for each fruit browning was measured on the two cut halves (technical replicates).
It is also interesting to note the differences observed during postharvest storage (two months' cold storage). The association observed between the rate of flesh browning and the Md-PPO haplotype was indeed confirmed after storage, but with a reduction of about 27% and 10% for the "np" and "nn" classes of seedlings respectively. A reduced polyphenol content during cold storage [63] may reduce the substrate availability for PPOs, resulting in a lower flesh browning rate. The difference observed between harvest and two months after storage was also confirmed by the P-values. At harvest, in fact, the value computed between the two seedling classes was more significant (P-value: 0.04) than the one calculated after storage (P-value: 0.05).
Table 2. Characterisation of 52 elements belonging to 16 categories mainly involved in cell wall and polyphenol metabolism. The relative number and chromosome locations (LG 9, 10, 11, 13, 14 and 16) are given for each class. This list is a specific selection of the complete QTL annotation given in Table S2.
Expression profiling and polyphenolic characterisation during the development of fruit flesh browning in apple
To explain the physiological regulation leading to the occurrence of fruit flesh browning in more detail, two main genes were considered, Md-PAL (phenylalanine ammonia lyase) and Md-PPO. The first gene, Md-PAL, is responsible for the biotransformation of L-phenylalanine to ammonia and trans-cinnamic acid, which is the first step in the biosynthetic pathway of polyphenol compounds [64,65]. The second, Md-PPO, is the same element investigated here and genetically associated with the flesh browning observed in the two populations. The expression profiles of these two genes over the time-course showed distinct temporal activation, in agreement with their respective positions along the polyphenol cascade (Figure 1). Md-PAL, which is located upstream in this pathway, was progressively expressed throughout the time-course, already showing increased mRNA accumulation after 30 minutes' (T30) exposure to air (Figure 6a), consistent with the differential accumulation of the main polyphenol classes. At T30 a general accumulation was indeed observed for the hydroxycinnamic acid, dihydrochalcone and flavan-3-ol classes (Figure 6c, d, e), while an almost unchanged situation was observed for flavonols (Figure 6f). It is worth noting that individual compounds showed the same trend as their respective phenolic groups, as indicated in Table S3. Increased production of these compounds upon wounding has already been observed in lettuce [66], but never clearly examined in apple. This enhanced production, coordinated by the activation of Md-PAL expression, can be considered a response of the defence mechanism (antioxidant protection) as well as signalling [67,68]. In this scenario we can hypothesize that the PPO enzyme is eventually synthesized to oxidize the polyphenols produced after wounding, a theory supported by the late expression observed for Md-PPO. This gene, located downstream in the pathway, indeed showed a basal and consistent expression between T0 and T30, which was crucial in the development of flesh browning, while at T60 its transcript accumulation increased about 5-fold (Figure 6b). These functional dynamics were also validated by the metabolite screening performed for polyphenol characterisation.
At T60 the four main polyphenol classes, described in Figure 6 (c, d, e, f), showed an average concentration decrease of about 1.5-2 fold. The browning colouration occurring in the first 30 minutes may possibly be caused by PPO enzymes already available and stored in the plastid. After this activation, the wounded organs stimulate a higher production of polyphenols as a defence signal, which is then maintained by a feedback control mechanism, through the production of additional PPO enzymes devoted to their oxidation. In this system the fruit can regulate the signalling triggered by the wounding events, while the flesh browning appearing immediately after cutting seems to be more related to the initial, genetically determined amount of PPO enzyme, together with its substrates.
Figure 5. On the x-axis, 30-0 and 60-0 are the percentage variations of ∆a* calculated between T30-T0 and T60-T0 respectively; (2) indicates the ∆ value calculated after two months' cold storage. As in the analysis carried out for POP_1, five apples (biological replicates) were assessed for each genotype, and for each fruit flesh browning was measured on the two halves (technical replicates). Asterisks show statistically significant comparisons based on the LSD-ANOVA test (P-value ≤ 0.05).
Figure 6. Expression profiles of Md-PAL (panel a) and Md-PPO (panel b). In both graphs the y-axis shows the mean normalised expression, graphing the three samples selected to monitor flesh browning (T0, T30 and T60). Over the same time-course, the accumulation of the four main polyphenolic compound classes, namely hydroxycinnamic acids (c), dihydrochalcones (d), flavan-3-ols (e) and flavonols (f), is shown below in the figure. The amount of these compounds is plotted on the y-axis and expressed as μg/g of fresh weight (FW). Polyphenolic profiling was performed analysing four replicates per experimental time. For each bar the standard error is also reported. Samples significantly different are shown using different letters following an LSD-ANOVA test (P-value ≤ 0.05).
Conclusion
This work offers new insight, shedding light on the genetic regulation of apple flesh browning. This flesh browning, which seriously decreases the quality of the final product in minimally processed fruit, seems to be related to a genetically determined accumulation of the PPO enzyme, which can differ greatly among apple cultivars. This theory is supported by the fact that the Md-PPO gene is highly activated only in the last phase of the time-course designed here, while browning occurs much earlier, corresponding to the increased transcription of another gene, Md-PAL, known to stimulate the entire polyphenolic cascade. Md-PPO was thus thought to be activated more to regulate the polyphenolic signalling system. However, this gene showed two distinct haplotypes, which may be responsible for the initial enzymatic amount stored in the cell (before wounding), determining the extent of final browning. The availability of a marker associated with this phenomenon may represent a valid alternative to destructive methods for the selection of low-browning accessions, suitable for improving the quality of minimally processed apples.
Figure S1. Fruit flesh browning evolution in the four parental apple cultivars, 'Fuji' (i), 'Pink Lady' (ii), 'Golden Delicious' (iii) and 'Braeburn' (iv). For each variety the three panels are a_T0 (after cutting), b_T30 (after 30 minutes) and c_T60 (after 60 minutes).
The histograms below each panel give the digital colour measurements obtained by the colorimeter and expressed as L*, a* and b*. For the four parental cultivars as well, five apples were assessed for each experimental time (T0, T30 and T60), representing the biological replicates, on which two colour measurements were performed on the two sides of a cut apple (technical replicates). The letters given show the statistical significance following the LSD-ANOVA test (P-value ≤ 0.05). In slide "v" the difference between the two parental cultivars for the L*, a* and b* values measured at T0, T30 and T60 is shown for each progeny. Statistically significant differences (P-value ≤ 0.05) are highlighted with asterisks. The four parents are indicated as follows: 'Fuji' (green), 'Pink Lady' (pink), 'Golden Delicious' (blue) and 'Braeburn' (red). For each bar the standard error is also visualized. Below each histogram the actual value is reported. (PPT)
Infection spread simulation technology in a mixed state of multi-variant viruses
ATLM (Apparent Time Lag Model) was extended to simulate the spread of infection in a mixed state of a variant virus and the original wild type. It was applied to the 4th wave of infection spread in Tokyo, with the following findings: (1) the 4th wave bottoms out near the end of the state of emergency, and the number of infected people then increases again; (2) the renewed increase is driven mainly by the δ strain (L452R) virus, while the increase due to the α strain (N501Y) virus is suppressed; (3) the infection is anticipated to spread during the Olympic Games; (4) when variant viruses compete, infection by the highly infectious virus rises sharply while infection by the weakly infectious one converges; (5) besides vaccination, finding infected people early and shortening the period from infection to quarantine by PCR or antigen testing is an effective infection control measure.
Introduction
Mutation of SARS-CoV-2 has occurred in the infected areas of the United Kingdom, Brazil, and India over the past year. These variant viruses have also been brought to Japan. Variant viruses include those with weakened infectivity and those with increased infectivity. In a situation where several variant viruses coexist, the more infectious virus tends to become the mainstream of the epidemic. To suppress the spread of infection, it is necessary to take measures appropriate to the virus, and constructing such measures requires predicting whether the infection will spread or shrink. Several calculation models have been proposed for this purpose. First, we review the prediction methods that have been used to date. SIR [1] and SEIR [2] models are often used in the early infection stage because they have a simple mathematical structure and require little computation time. As infection spreads, measures such as vaccination and lockdown are taken. Kuniya et al. used SEIR to evaluate the effect of the SOE (State Of Emergency) during the second wave in Tokyo, conducting a parameter survey with varying coefficients of the equations [3]. Britton et al. applied an improved SEIR model to infection spread in the case of a non-uniform population structure [4]. Muñoz-Fernández et al. applied a modified SIR model with nonconstant parameters to analyze the waves of COVID-19 [5]. Biala et al. improved the SEIR model to calculate the spread of the COVID-19 pandemic [6]. On the other hand, ABMs (Agent Based Models) have been developed [7][8][9][10][11]. This is a probabilistic technology: unlike deterministic methods such as SIR and SEIR, it assumes various behaviors of individuals and calculates infection probabilities, which takes considerable computation time. Many of the calculation methods mentioned above assume a single virus and do not consider a mixture of variant viruses. The objective of our research is to develop a technology for predicting the spread of infection in a mixed state of variant viruses. Next, we briefly describe the progress of our research so far. The above methods do not include a time delay from infection to quarantine. We considered that the time from infection to isolation plays an important role in the expansion of infection, and therefore developed ATLM (Apparent Time Lag Model), which includes a delay until isolation [12].
This model has since been extended with vaccine and lockdown effects [13]. We have now expanded it to handle variant viruses. The infectivity of the variant viruses has already been reported [14]; we use these data to simulate the fourth wave of infection spread in Tokyo and investigate the applicability of the method.
Analysis model
The ATLM we developed [12] uses the following equation, which takes into account the time delay from infection to quarantine and the time delay from infection to loss of infectivity. We denote the cumulative number of infected people by x, and the number of daily new infections by dx/dt, where T is the time delay from infection to quarantine, μ(t) the vaccination rate, α the infectivity, a fixed coefficient the ratio of asymptomatic persons, and S the time delay from infection to the extinction of infectivity. M indicates the sensitive (susceptible) population. ρ(t) is the rate of decrease in infectivity due to restrictions on the flow of people, such as a lockdown, and α0 is the original infectivity of the virus. Equation (3) represents the decrease in the sensitive population due to vaccination; the subscript 0 indicates an initial value. Details of these equations are given in our previous paper [12]. The number of quarantined persons y(t) and the number of infected people not yet isolated and still infectious in the community z(t) can be calculated by Eqs. (4) and (5), respectively.
To extend the above equations to handle variant viruses, the following assumptions are made:
(1) Infected people are infected with only one type of virus; there is no simultaneous infection.
(2) Patients who have been infected with one variant virus in the past are not infected with another variant virus.
(3) The infection rates of the viruses are independent of each other and do not interfere with each other.
(4) The effect of the vaccine is the same for each variant virus.
(5) Both the delay time until onset and the time until infectivity disappears are the same for each variant virus.
Under these assumptions, the cumulative number of people infected by variant virus i is denoted by subscript i, and the differential Eq. (1) is rewritten accordingly, where αi is the infectivity of variant virus i. Equation (7) is a constraint induced from assumptions (1) and (2): it expresses that the variant viruses share the common sensitive population M. The number of quarantined persons for each virus, yi(t), and the number of infected people not yet isolated in the community, zi(t), can likewise be calculated.
Time integration
The analytical solution of the differential Eq. (4) is unknown, so the 4th-order Runge-Kutta method was used for numerical integration. In Eq. (4), the values of xi(t-T) and xi(t-S) at time t have already been computed, so there is no accuracy problem. However, these values are not available at the start of the calculation, so precautions must be taken at the start time: the initial value is set sufficiently small compared with M0. Next, it should be noted that as X(t) increases it can come too close to M(t). Especially when the vaccination rate becomes high, X(t) > M(t) may occur, in which case the solution oscillates and becomes unstable. To avoid numerical instability, if X(t)/M(t) > 1 we set the right-hand side of Eq. (4) equal to zero.
The variant virus prevalent in Tokyo is regarded as being almost entirely the α strain (N501Y) [14]. Currently, the virus of interest is the δ strain (L452R) found in India.
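The delayed terms make this system a delay differential equation, so a standard ODE integrator cannot be used directly: the lagged states must be looked up from the already-computed history. The Python sketch below shows one way to implement the scheme described above — fixed-step RK4 with a history lookup and the X/M > 1 stability guard. The right-hand side is an illustrative stand-in of the general ATLM form (new infections driven by people infected but not yet quarantined, x(t) − x(t−T), damped by the remaining susceptible fraction); the exact equations and calibrated parameters are those of the cited papers [12,13], and every number below is an assumption.

```python
import numpy as np

# Illustrative parameters (assumptions, not the calibrated values of [12,13]).
ALPHA = 0.12        # infectivity per day
T_Q   = 14.0        # days from infection to quarantine
M0    = 1.4e7       # sensitive population (Tokyo-scale placeholder)
X0    = 100.0       # cumulative infections at the calculation start
DT    = 0.25        # RK4 step, days
DAYS  = 200

lag = int(round(T_Q / DT))                 # history steps needed for x(t - T)

def rhs(x_now, x_lag, M=M0):
    """Stand-in right-hand side with the structure described in the text:
    dx/dt = alpha * (1 - x/M) * (x(t) - x(t-T)), i.e. growth driven by
    people infected but not yet quarantined."""
    if x_now / M > 1.0:                    # stability guard from the paper
        return 0.0
    return ALPHA * (1.0 - x_now / M) * max(x_now - x_lag, 0.0)

# Pre-history: a small linear ramp up to X0, so x(t) - x(t-T) > 0 at t = 0
# (a constant pre-history would freeze the epidemic at its seed value).
x = list(np.linspace(0.0, X0, lag + 1))

for _ in range(int(DAYS / DT)):
    x_lag, x_now = x[-1 - lag], x[-1]
    # Classical RK4; the lagged state is held frozen over one step, a common
    # simplification for fixed-step integration of delay equations.
    k1 = rhs(x_now, x_lag)
    k2 = rhs(x_now + 0.5 * DT * k1, x_lag)
    k3 = rhs(x_now + 0.5 * DT * k2, x_lag)
    k4 = rhs(x_now + DT * k3, x_lag)
    x.append(x_now + DT * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0)

daily = np.diff(x) / DT                    # instantaneous daily new infections
print(f"peak daily new infections ~ {daily.max():.0f}")
```

Since the effective reproduction number scales with the product of infectivity and the quarantine delay, shrinking T_Q in this sketch directly weakens the growth term, which is the mechanism behind the test-and-isolate measure discussed later.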
The infectivity of the α strain prevalent in Tokyo is believed to be 1.32 times that of the original wild type [14], while the infectivity of the δ strain is estimated at 1.78 times that of the original [15]; the ratio between the two is thus about 1.35. We set the infectivity of each virus based on this ratio. It is confirmed that 9 people were infected with the δ strain as of 2021/5/31; since the sampling percentage of infected people is 10%, about 90 people were actually infected [15]. Table 1 shows the calculation conditions. The initial value for the δ strain was determined so as to satisfy the above conditions (see Appendix A). As shown in the table, the effect of the vaccine is incorporated. The SOE (State Of Emergency) was scheduled by the Japanese Government and the Tokyo Metropolitan Government.
Figure 1 shows the pattern of the 4th wave in Tokyo. The daily change in the number of infected people is illustrated in (a). The origin of the horizontal axis is set at 2021/3/1. The epidemic peak is located around fifty days from the calculation start (the end of April or beginning of May), with about 800 infected persons calculated. This number is roughly equivalent to the actual 7-day average for the 4th wave. Panel (b) shows the number of quarantined persons calculated by Eq. (6), including home and hotel medical treatment. According to the WHO, 80% of infected persons are mild cases; hence we estimated that the remaining 20% would be hospitalized. Therefore, at the peak more than 11,000 people were quarantined and about 2,200 people are considered hospitalized. According to the Tokyo data [15], about 2,400 patients were hospitalized at the fourth wave peak, so the results are consistent with the actual data. Panel (c) displays the number of infected people not yet isolated in the community, calculated by Eq. (9); the higher this number, the higher the probability of producing the next infected person. The average infectivity of the variant viruses is shown in (d): as the infection progresses, the average value approaches the infectivity of the δ strain, showing that the δ strain is becoming dominant. Figure 2 shows the ratio of patients infected with the α and δ strains. Patients infected with the δ strain increase from the end of the SOE and become dominant after day 128 (2021/7/6) from the calculation start. In addition, these figures show that infection by the δ strain rises sharply at the stage when infection by the α strain has converged and bottomed out. As described above, this analysis also indicates that the more infectious virus becomes dominant as the infection spreads.
Sensitivity analysis
We have already examined the sensitivity to the time delay T in previous work [12]; the reproduction number R is proportional to the product of the infectivity α and the time delay T, so a smaller T suppresses the spread of infection. The effect of vaccination was considered in a recent paper [13]. In this section, we therefore examine the effect of the infectivity difference under the coexistence of two variant viruses. The calculation conditions are shown in Table 2: the infectivity of the α strain is held constant and that of the δ strain is varied from −8.2% to +4.1%. The results are displayed in Figure 3. In the strong case, the maximum number of infected people after the SOE becomes about 1,800, whereas in weak case 1 and weak case 2 the maximum values are about 500 and 700, respectively. These results suggest that a ratio of infectivity between the δ and α strains above 1.3 accelerates the replacement of α by δ.
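A minimal way to reproduce this replacement behavior is to integrate two copies of the single-strain sketch above against one shared susceptible pool (the constraint of Eq. (7)) and sweep the infectivity ratio. The sketch below does this with simple Euler stepping; the ratios, seeds, and population size are illustrative assumptions, not the Table 2 values.

```python
import numpy as np

def run_two_strains(alpha_a, alpha_d, seed_a=5000.0, seed_d=90.0,
                    M=1.4e7, T=14.0, dt=0.25, days=250):
    """Two competing strains sharing one susceptible pool (the Eq. (7) idea):
    dx_i/dt = alpha_i * (1 - X/M) * (x_i - x_i(t-T)),  X = x_a + x_d.
    Simple Euler stepping; all numbers are illustrative placeholders."""
    lag = int(round(T / dt))
    xa = list(np.linspace(0.0, seed_a, lag + 1))   # ramped pre-history
    xd = list(np.linspace(0.0, seed_d, lag + 1))
    for _ in range(int(days / dt)):
        damp = max(1.0 - (xa[-1] + xd[-1]) / M, 0.0)   # shared susceptibles
        xa.append(xa[-1] + dt * alpha_a * damp * (xa[-1] - xa[-1 - lag]))
        xd.append(xd[-1] + dt * alpha_d * damp * (xd[-1] - xd[-1 - lag]))
    return np.array(xa), np.array(xd)

# Sweep the delta/alpha infectivity ratio around the 1.3 threshold noted above.
for ratio in (1.1, 1.3, 1.5):
    xa, xd = run_two_strains(alpha_a=0.10, alpha_d=0.10 * ratio)
    share_d = xd[-1] / (xa[-1] + xd[-1])
    print(f"infectivity ratio {ratio:.1f}: final delta share = {share_d:.2f}")
```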
Figure 3. Change of infected people due to differences in infectivity.
The problems of reinfection and breakthrough infection will become more important. To address them, the rate of decrease in antibody level, or the probability of reinfection and breakthrough infection, must be taken into account. The present calculation model has not yet adopted a methodology for these problems; improving the model is a future task.
Measures to suppress spread of infection
The above calculation results predict that the peak of infection will occur during the Olympic Games in Tokyo. We therefore considered three measures:
(1) Continuing the current measures (Case 1).
(2) Measure 1: extending the SOE until 6/30 (Case 2).
(3) Measure 2: shortening the period from infection to isolation by PCR or antigen testing (Case 3).
The infection status of each case is plotted together in Figure 4. The horizontal axis is the date from 2021/3/1. The broken line, dotted line, and solid line indicate Case 1, Case 2, and Case 3, respectively. The solid line with blue dots shows the infection status in Tokyo as a 7-day average [15]. Case 2 shows the transition of infected persons when the SOE is extended to 6/30. The peak number of infected persons decreases by about 100, but no major improvement in the infection situation is observed; the extension until 6/30 therefore has little effect on suppressing infection. The last measure is not effective unless as many people as possible participate. Individuals usually do not know that they are infected, so many people need to be checked regularly. For that purpose, a negative certificate with a time limit (up to one week) should be issued and confirmed at restaurants and event venues. This would allow many people to be tested and make it possible to shorten the period from infection to isolation by one or two days. In the present study this period is set to 14 days, for consistency with the data; for this measure we shortened it to 12 days. Allowing five days as a preparation period after the end of the SOE, the implementation date was set to 6/25. Case 3 shows that the spread of infection after the peak of the 4th wave is suppressed to about 600. Finding infected people early and shortening the infectious period in this way is the most effective infection control method other than vaccines.
Conclusions
We have extended the ATLM, which we developed previously, to simulate the status of infection with various variant viruses. The developed model was applied to the 4th wave in Tokyo, and the following results were obtained. (1) The fourth wave will bottom out near the end of the state of emergency, and the number of infected people will increase again. (2) The rate of increase will be driven mainly by the δ strain, while the increase in the α strain will be suppressed. (3) It is anticipated that the infection will spread during the Olympic Games. (4) When variant viruses compete, infection by the strongly infectious one rises sharply once infection by the weakly infectious one has converged. (5) The results of the sensitivity analysis suggest that a ratio of infectivity between the δ and α strains above 1.3 accelerates the speed at which the dominant virus is replaced during infection spread. (6) Besides vaccination, it is effective as an infection control measure to find infected people early and shorten the period from infection to quarantine by PCR or antigen testing.
Data availability
We used time-series data of COVID-19 for March 1 through June 10, 2021 in Tokyo [15].
Optical and Electrophysical Properties of Thin Zinc Oxide Films Doped with Manganese Oxide and Obtained by Laser Deposition
Nanostructured thin films on a silicon substrate were obtained from a ceramic of zinc oxide doped with manganese oxide by high-frequency periodic pulsed laser action with f ~ 10-15 kHz and wavelength λ = 1.064 μm at a power density q = 150 MW/cm2 in a vacuum chamber at p = 2.7 Pa. The surface morphology and elemental composition of the obtained films were studied using atomic force microscopy, scanning electron microscopy, and X-ray spectral microanalysis. Features of the transmission spectra in the visible, near-, and mid-IR regions were determined, and the electrophysical properties of the ZnO + 2% MnO2/Si heterostructure were analyzed.
The surface morphology of the samples was investigated with a Solver P47-Pro scanning probe microscope (NT-MDT, Russia) in the AFM regime. Contactless silicon cantilevers of the whisker type with a stiffness coefficient of 2.5-10 N/m, resonance frequency of 115-190 kHz, and needle tip radius of curvature of 1-3 nm were used. The AFM investigations were carried out in amplitude-frequency modulation mode by the constant force method [13]. The structure of the samples was investigated on a scanning electron microscope (SEM) with normal incidence of the beam on the surface of the sample; the signals of the reflected and secondary electrons were recorded simultaneously at an accelerating potential of 20 kV. X-ray microanalysis was used to identify the elements and determine the elemental composition. These investigations were carried out on an Aztec Energy Advanced X-Max 80 energy-dispersive nitrogen-free spectrometer (Oxford Instruments, Great Britain), which provides an extended range of detectable elements (from beryllium to plutonium), highly accurate determination of the concentrations of light elements in accordance with ISO 15632:2002, and high energy resolution (the MnKα resolution is not worse than 125 eV). To study the distribution of the elements over the surface of the sample, a given line was scanned with the electron beam. The transmission of optical radiation by the thin films in the near-infrared region was measured on a Cary 500 Scan spectrophotometer, and the transmission spectra in the mid-IR region were recorded on a NEXUS IR Fourier spectrometer (Thermo Nicolet) over 400-4000 cm-1. The sputtered ceramic targets were obtained by pressing at 500 MPa, and sintering was carried out in air in a laboratory chamber electric furnace at T = 1350 °C for 2 h; the relative density of the samples was 95% of the theoretical value. The CVC (current-voltage) measurements were made on a Keithley series 2450 source meter with a multispectral laser source covering 405-980 nm (wavelengths of 405, 450, 520, 660, 780, 808, 905, and 980 nm) based on semiconductor lasers of the LDI type with a calibrated radiation power of 2 mW. The FVC (capacitance-voltage) measurements were made on a laboratory bench based on an E7-20 immittance meter at room temperature without illumination, at signal frequencies of 100 kHz and 1 MHz.
Results and Discussion. A typical SEM image of the microstructure of the initial target is shown in Fig. 1. By AFM it was established that a nanocrystalline structure is formed on the silicon substrate (Fig. 2).
The main roughness parameters of the film surface were determined by scanning a 20 × 20 μm region at five different points on the sample: the mean height of the surface relief of the films is 72 nm, and the arithmetic mean roughness is 12.1 nm. Individual large particles with heights of 100-350 nm and lateral dimensions of 200-500 nm are observed on the surface of the film (Fig. 2a, b, d); their average density is not greater than 1 particle/10 μm2. The lateral dimension of the structural elements is 25-30 nm (Fig. 2c). Figure 3 shows the SEM structure of the films at various magnifications. The results of the structural investigation by SEM correlate with the results obtained by AFM: the film is characterized by a nanocrystalline structure, and individual large particles are observed on the surface. By X-ray spectral microanalysis it was found that the MnO2 dopant is distributed uniformly in the ZnO film: during scanning with the electron beam, manganese and oxygen were observed along the line both in the particles and in the regions between them (Fig. 3c). It is thus possible to obtain nanocrystalline ZnO + 2% MnO2 films of uniform composition by laser deposition. The transmission of the laser-deposited ZnO + 2% MnO2/Si film in the near-IR region of 2.2-2.6 μm amounts to ~2% (Fig. 4a), while in the mid-IR region of 488-661 cm-1 (20.5-15.1 μm) T ~ 25%, with a decrease in transmission to T = 18.6% at 611 cm-1 (Fig. 4b), a characteristic absorption band corresponding to vibration of the Mn-O bond [14]. The reflection spectrum of the ZnO + 2% MnO2/Si film on the silicon substrate in the visible and near-IR regions is shown in Fig. 4c. The reflection in the UV (200-400 nm) and visible regions is less than in the near-IR region. The region of transparency of the ZnO + 2% MnO2 film and the absorption of the incident radiation are characteristic of zinc oxide films [15]. Figure 5a shows the FVC of the ZnO + 2% MnO2/Si structure. Irrespective of the signal frequency, the FVC has the form characteristic of the high-frequency capacitance-voltage dependence of an MOS structure on a silicon substrate with p-type conductivity. The capacitance of the oxide is flat at negative voltages. As seen, the capacitance decreases with increasing frequency and, at low frequencies, does not reach saturation at negative voltages. In investigations of their electrical characteristics, ZnO/Si systems are usually regarded as heterostructures, since zinc oxide is a direct-gap n-type semiconductor. However, ZnO films have a band gap of 3.37 eV, and in structures with the narrower band gap of monocrystalline silicon the film can behave as a dielectric at high signal frequencies [16]. Hysteresis is not observed in the measured FVC characteristics, which indicates the absence of fixed charge in the dielectric, but the flat form at negative voltages in the capacitance modulation region at a signal frequency of 1 MHz indicates the presence of embedded surface states (traps) in the oxide film and at the ZnO + 2% MnO2/Si interface. The charge carriers captured on these traps fail to recharge as the frequency increases, and the total capacitance of the system therefore decreases [17].
The current-voltage characteristic of the ZnO + 2% MnO2/Si structure (Fig. 5b) is typical of a heterostructure: in the region of positive voltages two sections can be distinguished, each described by a power-law dependence of the current on the voltage, I ~ U^m. In the first section the voltage is < 1.6 V (m = 0.87), and in the second m ≈ 1, i.e., the conductivity is close to ohmic. As in the case of undoped zinc oxide, the conductivity of ZnO + 2% MnO2 is determined by space-charge-limited current [16]. Since doped zinc oxide exhibits photosensitivity over a wide spectral range [18], the current-voltage characteristics of the ZnO + 2% MnO2/Si structure were measured at positive voltages under laser radiation with λ = 405-980 nm from the multispectral laser source in order to determine its spectral sensitivity (Fig. 6a). The spectral dependence of the structure was constructed at a voltage shift of +2 V (the highest photosensitivity occurs in this voltage region). The highest photosensitivity, 30.41 mA/W, is observed at a voltage shift of +2 V at λ = 905 nm (Fig. 6b). The maximum photosensitivity being found in the IR region suggests that this effect is determined by electron capture levels, i.e., by traps at the ZnO + 2% MnO2-silicon interface.
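The spectral photosensitivity reported above is, in essence, a responsivity: photocurrent per unit incident optical power at a fixed bias. A minimal sketch of that reduction is shown below; the dark and illuminated current values are made-up placeholders, with only the 2 mW calibrated laser power and the +2 V bias taken from the text.

```python
# Responsivity R = (I_light - I_dark) / P_opt at a fixed bias (+2 V here).
# The currents below are illustrative placeholders, not the measured data.

P_OPT = 2e-3  # W, calibrated laser power, the same for every wavelength

# wavelength (nm) -> (I_dark, I_light) in amperes at +2 V bias (hypothetical)
cvc_points = {
    405: (1.00e-5, 1.80e-5),
    660: (1.00e-5, 2.95e-5),
    905: (1.00e-5, 7.08e-5),   # chosen so R comes out near 30.4 mA/W
    980: (1.00e-5, 5.20e-5),
}

for wl, (i_dark, i_light) in sorted(cvc_points.items()):
    r_ma_per_w = (i_light - i_dark) / P_OPT * 1e3   # photocurrent/power, mA/W
    print(f"{wl} nm: R = {r_ma_per_w:.2f} mA/W")
```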
Genetic diversity and relationships of Chinese donkeys using microsatellite markers
Abstract. Donkeys are an important livestock species in China because of their nutritional and medicinal value. To investigate the genetic diversity and phylogenetic relationships of Chinese donkey breeds, a panel of 25 fluorescently labeled microsatellite markers was applied to genotype 504 animals from 12 Chinese donkey breeds. A total of 226 alleles were detected, and the expected heterozygosity ranged from 0.6315 (Guanzhong) to 0.6999 (Jiami). The mean values of the polymorphism information content, observed number of alleles, and effective number of alleles for all the tested Chinese donkeys were 0.6600, 6.890, and 3.700, respectively, suggesting that Chinese indigenous donkeys have relatively abundant genetic diversity. Although abundant genetic variation was found, the genetic differentiation between the Chinese donkey breeds was relatively low, with only 5.99 % of the total genetic variance lying among breeds. The principal coordinates analysis clearly split the 12 donkey breeds into two major groups. The first group included the Xiji, Xinjiang, Liangzhou, Kulun, and Guanzhong donkey breeds; in the other group, the Gunsha, Dezhou, Biyang, Taihang, Jiami, Qingyang, and Qinghai donkeys clustered together. This grouping pattern was further supported by structure analysis and neighbor-joining tree analysis. Furthermore, the genetic relationships between the donkey breeds identified in this study corresponded to their geographic distribution and breeding history. Our results provide comprehensive and precise baseline information for further research on the preservation and utilization of Chinese domestic donkeys.
Introduction
Donkeys played an important role in the ancient transport systems of Asia and Africa, where they provided a reliable source of protein and facilitated the overland circulation of goods and people. China has a 4000-year history of raising donkeys (Zheng, 1985; Xie, 1987) and possesses more than 9 million donkeys, accounting for about 22 % of the world's donkey population (Hou and Hou, 2002). Twenty-four donkey breeds thrive throughout central, northeastern, and western China, primarily in the dry, arid, semi-arid, and warm climates of western China around the Yellow River valley, resulting in an abundant genetic resource (Xie, 1987). However, since the 1980s, the number of donkeys has been decreasing steadily along with agricultural mechanization. Moreover, some donkey breeds are currently threatened with extinction (Ma et al., 2003), such as the famous Guanzhong donkeys (Lei et al., 2007). Several studies have been conducted to investigate the genetic diversity and origins of Chinese donkeys. Uniparental markers are routinely used to trace the origins of Chinese donkey breeds by defining paternal and maternal lineages on the basis of variation sites, which has revealed an African origin of Chinese donkeys (Chen et al., 2006; Han et al., 2014, 2017). Autosomal microsatellite markers have been widely used to reveal genetic variability and identify genetic relationships among donkey populations (Matassino et al., 2014; Rosenbom et al., 2015). Bordonaro et al. (2012) described the genetic variability and differentiation in Pantesco and two other Sicilian autochthonous donkey breeds using microsatellite markers.
Recently, Jordana et al. (2016) analyzed the genetic diversity and structure of American donkeys, providing information on putative routes of the spread of donkeys across the American continent. These studies all provide important data for further breed-specific management and conservation programs. In order to investigate the genetic diversity and population structure of Chinese indigenous donkeys, 504 animals from 12 native breeds were assessed using 25 fluorescently labeled microsatellite markers. The results present accurate and comprehensive insights into the genetic variation, genetic structure, and dispersal route of Chinese donkey breeds, contributing a rational basis for working out breeding strategies and genetic conservation plans.
Sample collection and DNA extraction
A total of 504 individuals from 12 Chinese donkey breeds were collected, including two large donkey types (Dezhou and Guanzhong), three medium types (Qingyang, Biyang, and Jiami), and seven small types (Kulun, Gunsha, Qinghai, Liangzhou, Xinjiang, Taihang, and Xiji). These breeds are distributed along the Yellow River basin and the Guanzhong Plain (Fig. 1) and represent the major genetic resources of Chinese donkey breeds. Our aim was to collect at least 30 samples from a minimum of two separate flocks, although this was not possible for all breeds (more information about these breeds is shown in Table 1). Genomic DNA was isolated from peripheral blood using a standard phenol-chloroform protocol and stored at −20 °C (Sambrook et al., 1989).
Statistical analysis
A Fisher's exact test was performed to detect possible deviations from the Hardy-Weinberg equilibrium (HWE) using GENEPOP 1.2 (Raymond and Rousset, 1995). Exact p values were estimated with the Markov-chain algorithm using 10 000 dememorization steps, 500 batches, and 5000 iterations per batch. Population genetic indexes, such as the observed number of alleles (Na), effective number of alleles (Ne), observed heterozygosity (Ho), and expected heterozygosity (He) of each donkey breed, were obtained using POPGENE 1.31 software (Yeh et al., 1999). The F-statistic values (FIS, fixation index of a subpopulation; FIT, fixation index of the total population; FST, fixation index comparing subpopulations to the total population; Weir and Cockerham, 1984), together with the total number of alleles (At), were estimated with Arlequin version 3.1 (http://cmpg.unibe.ch/software/arlequin3, last access: 27 February 2019). The polymorphic information content (PIC) of each locus was calculated using PIC CALC (Nagy et al., 2012). The number of private alleles (NPA) was counted using the GDA program (https://download.csdn.net/download/vip8_8/9856774, last access: 27 February 2019). A principal coordinates analysis (PCoA) was performed to reveal major patterns of genetic variability and clustering of breeds based on the FST matrix using GENALEX 6.501 (Peakall and Smouse, 2006). The population structure of the Chinese donkey was investigated with STRUCTURE (http://web.stanford.edu/group/pritchardlab/structure.html, last access: 4 March 2019); each run included a burn-in period of 800 000 Markov chain Monte Carlo (MCMC) steps, followed by 1 000 000 additional iteration steps. Neighbor-joining (NJ) trees were constructed based on the weighted estimator of Reynolds' distance (DR; Reynolds et al., 1983) using POPULATIONS version 1.2.30 (Langella, 2002). The robustness of the dendrograms was evaluated using a bootstrap test with 5000 resamplings of loci, with replacement. The unrooted distance tree was then visualized with TREEVIEW version 1.6.6 (Page, 1996).
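As a hedged illustration of two of the summary statistics described above, the snippet below computes expected heterozygosity and PIC from allele frequencies using their standard definitions (He = 1 − Σp_i²; PIC = He − ΣΣ 2p_i²p_j² over i < j, per Botstein et al., 1980). The frequency vector is an invented example, not data from this study, and the snippet is a sketch rather than a re-implementation of PIC CALC or POPGENE.

```python
from itertools import combinations

def expected_heterozygosity(freqs):
    """He = 1 - sum(p_i^2) for one locus."""
    return 1.0 - sum(p * p for p in freqs)

def pic(freqs):
    """Polymorphism information content (Botstein et al., 1980)."""
    he = expected_heterozygosity(freqs)
    return he - sum(2 * (pi * pj) ** 2 for pi, pj in combinations(freqs, 2))

# Hypothetical allele frequencies at one microsatellite locus.
p = [0.40, 0.25, 0.20, 0.10, 0.05]
assert abs(sum(p) - 1.0) < 1e-9

he, pic_val = expected_heterozygosity(p), pic(p)
# Thresholds from the text: >0.5 high, 0.25-0.5 moderate, <0.25 low.
label = "high" if pic_val > 0.5 else "moderate" if pic_val > 0.25 else "low"
print(f"He = {he:.4f}, PIC = {pic_val:.4f} ({label} polymorphism)")
```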
Polymorphism of microsatellite loci
All of the microsatellite loci were amplified successfully and were polymorphic in the 12 donkey breeds. HWE was tested for all breed-locus combinations; significant (P < 0.05) deviations from HWE were observed for 158 (52.67 %) of the 300 breed-locus combinations (Table S3). On average, 13.16 loci per breed and 4.080 breeds per locus deviated significantly from HWE. The Gunsha and Qinghai donkeys showed the maximum number of loci in disequilibrium (19 loci), followed by the Qingyang donkey (17 loci). Across the 25 microsatellite loci analyzed, as many as 262 alleles were identified in the studied donkey populations (Table S2). The total number of alleles per locus (AT) ranged from 3 (HTG6 and COR022) to 20 (AHT4), with a mean of 10.48. PIC is an index of allelic richness whose level indicates the diversity of the genetic basis of a breed, reflecting genetic variation at microsatellite loci. When PIC > 0.5, 0.5 > PIC > 0.25, and PIC < 0.25, the locus has high, moderate, and low polymorphism, respectively (Botstein et al., 1980). The PIC across the 25 loci ranged between 0.1489 (COR022) and 0.8670 (HMS2). Additionally, 20 loci showed high polymorphism (PIC > 0.5) and three loci (SGCV28, HMS45, and ASB02) showed moderate polymorphism (PIC > 0.25) (Table S2).
Genetic diversity among native Chinese donkey breeds
A summary of the identified polymorphisms in the 12 donkey breeds is listed in Table 1. The variety of alleles in a population is attributable to long-term evolution. The mean Na for the 12 Chinese donkey breeds was 6.890, ranging from 5.720 (Gunsha) to 8.120 (Kulun). Ne was highest in the Jiami breed (4.320) and lowest in the Guanzhong breed (3.280), with a mean of 3.700. Heterozygosity (H), also known as genetic diversity, reflects the genetic variation across loci and is generally considered the optimal parameter for estimating genetic variation in a population. Ho for the whole population was 0.5708, with values ranging from 0.5397 (Qingyang) to 0.5993 (Kulun). The He values varied between 0.6315 in Guanzhong donkeys and 0.6999 in Jiami donkeys (mean value = 0.6628) and showed no significant differences among breeds (Table 1). A total of 32 private alleles were observed in our study (Table 1); the NPA of the Qinghai donkey was particularly high (NPA = 9), representing 28.12 % of the total NPA. However, half of the donkey breeds had only one private allele, at very low frequencies of below 4 %, and no private alleles were detected in the Guanzhong and Gunsha donkeys. The inbreeding coefficients (FIS) of all Chinese donkey breeds were positive, and the values of five breeds (Dezhou, Liangzhou, Jiami, Qinghai, and Qingyang; FIS > 0.0750) differed significantly from zero (P < 0.01). These results indicate the possibility of inbreeding within the populations, evoking the necessity to carefully select a proper strategy for further conservation of the resource.
Genetic distance and relationship among native Chinese donkey breeds
The PCoA method was applied to investigate possible genetic relationships between Chinese donkey breeds (Fig. 2). The first axis (accounting for 27.88 % of the variation) separated two groups. The first group encompassed the Xiji, Xinjiang, Liangzhou, Kulun, and Guanzhong donkeys; the second gathered the Gunsha, Dezhou, Biyang, Taihang, Jiami, Qingyang, and Qinghai donkeys.
The second axis (19.54 %) tended to separate the Xiji donkey breed from the other donkeys of the first group. The STRUCTURE analysis revealed two geographical lineages at K = 2 (Fig. 3). The existence of two major clusters was consistent with the PCoA: the first inferred cluster (cluster A) gathered the Kulun, Guanzhong, Liangzhou, and Xiji donkey breeds, and the second (cluster B) included the Biyang, Dezhou, and Gunsha donkeys, while the other donkey breeds (Qingyang, Qinghai, Jiami, Xinjiang, and Taihang) had contributions from both clusters. According to the results at K = 4 (Table S5), the Xiji population seems to have evolved independently due to inefficient transportation and has experienced genetic drift. Genetic distance is a measure of genetic variation between populations that objectively reflects the variation and differentiation between them. An NJ tree was constructed on the basis of Reynolds' distance. It showed that all 12 donkey breeds could be grouped into two clusters (Fig. 4), closely corresponding to the results of the PCoA and the structure analysis (K = 2).
Genetic diversity and differentiation of Chinese donkeys
In this study, the polymorphisms at 25 microsatellite loci in 504 Chinese donkeys from 12 breeds were investigated. The overall and average Na were very high, reflecting relatively high genetic variability in these donkey breeds. Among Chinese donkeys, He ranged from 0.6315 (Guanzhong) to 0.6999 (Jiami), a level comparable to values previously reported in Spanish (Aranguren-Méndez et al., 2001) and Croatian coast donkeys (Ivankovic et al., 2015) and more diversified than Poitou (Bellone et al., 2002), Italian (Colli et al., 2013; Matassino et al., 2014), and American donkeys (Jordana et al., 2016). There was a wide range of NPA values among Chinese donkey breeds: the Qinghai donkey had particularly high NPA values, while eight Chinese donkey breeds had fewer than two private alleles. Additionally, the HWE tests in the donkey populations showed that over half of the breed-locus combinations deviated from HWE (P < 0.05; Table S3). This might be due to a predominance of mating between close relatives or small effective population sizes in these donkey breeds. With the enhancement of agricultural mechanization during the last four decades, the Chinese donkey population suffered a severe reduction in size (Ma et al., 2003); as a result, the available breeding males were limited. Genetic differentiation among the breeds was characterized by estimating overall and pairwise FST values. The total FST of the Chinese donkey breeds is 0.0599, suggesting that 94.01 % of the total genetic variation resides within breeds (Table 1); this is higher than in Italian donkeys (Colli et al., 2013; Matassino et al., 2014) but lower than in donkeys from Africa (Rosenbom et al., 2015) and America (Jordana et al., 2016). Our results indicate a moderate degree of population differentiation in Chinese donkey breeds.
Relationship among 12 Chinese native donkey breeds
In this study, the STRUCTURE analysis revealed that Chinese donkeys were grouped into two lineages at K = 2 (Fig. 3): cluster A included the Kulun, Guanzhong, Liangzhou, and Xiji donkey breeds and cluster B gathered the Dezhou, Gunsha, Biyang, and Taihang breeds, while the other donkey breeds (Xinjiang, Qinghai, Qingyang, and Jiami) appeared to form a contact zone between the two clusters, as individuals had mixed lineages. These results support previous genetic research on the origin of the Chinese donkey, in which Chinese donkeys show two distinct mitochondrial maternal lineages, the Nubian wild ass (Equus africanus africanus) and the Somali wild ass (Equus africanus somaliensis) (Lei et al., 2007; Han et al., 2014). At K = 3 (Fig. S1), the Taihang donkeys were separated within cluster B and showed a genetic relationship with the Xinjiang donkeys, presumably the result of an ancient founder effect that took place in the early stages of colonization. In addition, the joint influence of isolation and selection pressure may also have contributed to particular phenotypes. According to the structure analysis at K = 4 (Fig. S1), the Xiji population seems to have evolved independently due to inefficient transportation and has experienced genetic drift. Indeed, the Xiji breed is a unique genetic resource with a nearly 100-year breeding history. These donkeys are still bred today in Xiji County of the Ningxia Hui Autonomous Region, an area of complex landforms and limited traffic conditions. Furthermore, Xiji donkeys are mainly bred in small, restricted populations by local people. The government introduced the Guanzhong donkey in 1964, but its influence was low; after that, Xiji donkeys were never crossed with any other donkey breeds (China National Commission of Animal Genetic Resources, 2011). All of these factors may contribute to Xiji donkeys differing from the other 11 Chinese donkey breeds. The NJ tree and the PCoA recapitulated these findings, with all 12 donkey breeds clustering into two groups (Figs. 4 and 2). Additionally, the two main groups suggest that the colonization process and expansion of donkeys across China followed at least two main pathways. According to textual research and ancient DNA studies (Han et al., 2014), the earliest domestic Chinese donkeys derived from the small donkeys of ancient Xinjiang and entered the mainland 2000 years ago (Western Han Dynasty). They arrived in the Hexi Corridor north of the Qilian Mountains along the Silk Road and then developed into the Liangzhou donkeys. After entering the area west of Liupan Mountain, they settled in Xiji County of the Ningxia Hui Autonomous Region and its environs, adapted to the semi-arid mountainous climate, and developed into the Xiji donkey (Yang, 1991). Based on the historical record, the Silk Road of the Song Dynasty (1000 years ago) entered the central plains not through the Hexi Corridor but through the Yan'an area (close to the Guanzhong Plain). Donkeys of the western regions could therefore adapt well to the alpine steppe ecological types of the specific ecological environment of the Mu Us Desert and developed into the Kulun donkeys, which might explain the close relationship between the Guanzhong and Kulun donkey breeds (Fig. 3). The results of the NJ tree showed that the Xinjiang, Liangzhou, Xiji, Guanzhong, Kulun, and Taihang donkeys cluster together, consistent with their geographical distribution and breeding history. During the Tang Dynasty, when the Silk Road reached its golden age, the number of Chinese domestic donkeys increased, primarily to meet the demand of expanding trade (Han et al., 2014).
After arriving in the Guanzhong Plain area (Chang'an, now Xi'an, was the center of politics, economy, and culture in ancient China), donkeys of the western regions were rapidly introduced to Qinghai, Shaanxi, Henan, Hebei, and Shandong provinces along the Yellow River basin and developed into the famous Qinghai, Jiami, Gunsha, Biyang, Qingyang, and Dezhou donkey breeds (Yang and Hong, 1989). These donkey breeds therefore clustered into another group (Fig. 4). Our results also support the previous hypothesis of three dispersal routes of Chinese donkeys: (1) historically, Chinese domestic donkeys spread from Xinjiang via Ningxia and Gansu to the Guanzhong Plain of Shaanxi Province; (2) at the same time, Chinese domestic donkeys dispersed in parallel from Xinjiang to Inner Mongolia and Yunnan Province; and (3) finally, Chinese domestic donkeys dispersed from the Guanzhong Plain to other regions of China (Lei et al., 2007).
Conclusions
In conclusion, these results provide insight into the genetic diversity and relationships of Chinese donkeys, demonstrating that the indigenous donkey populations of China retain relatively abundant genetic diversity and that the genetic relationships between the donkey breeds correspond to their geographic distribution and breeding history. The information presented here can be used to optimize reproductive management and provides tools for adopting adequate breeding strategies aimed at preserving their genetic variability.
Data availability
The data sets are available upon request from the corresponding author.
Can Masked Emotion-Laden Words Prime Emotion-Label Words? An ERP Test on the Mediated Account
The present event-related potential (ERP) study explored whether masked emotion-laden words could facilitate the processing of both emotion-label words and emotion-laden words in a valence judgment task. The results revealed that emotion-laden words as primes failed to influence target emotion-label word processing, whereas emotion-laden words facilitated target emotion-laden words in the congruent condition. Specifically, a smaller late positivity complex (LPC) was elicited by emotion-laden words primed by emotion-laden words of the same valence than by those primed by emotion-laden words of a different valence. Nevertheless, no difference was observed for emotion-label words as targets. These findings support the mediated account, which claims that emotion-laden words engender emotion via the mediation of emotion-label words and hypothesizes that emotion-laden words cannot prime emotion-label words in the masked priming paradigm. Moreover, this study provides additional evidence for the distinction between emotion-laden words and emotion-label words.
INTRODUCTION
Across various studies of different languages, it has been consistently found that emotion-label words and emotion-laden words differ in a variety of tasks, such as the lexical decision task (Kazanas and Altarriba, 2015, 2016a,b; Zhang et al., 2017, 2018a), the flanker task (Zhang et al., 2019a,b), and the affective Simon task (Altarriba and Basnight-Brown, 2011). Emotion-label words (e.g., shame and ecstasy) directly describe an emotional state, whereas emotion-laden words (e.g., butterfly and surgery) indirectly induce emotion via elaboration (Sutton and Altarriba, 2016; Zhou and Tse, 2020). Recent event-related potential (ERP) studies found that more substantial brain activation was evoked by emotion-label words than by emotion-laden words in a lexical decision task in both Chinese (Wang et al., 2019) and English (Zhang et al., 2018a). For example, Zhang et al. (2018a) found that emotion-label words in English provoked a larger N170 than emotion-laden words. There is also research demonstrating that the discrepancy between emotion-laden words and emotion-label words has an impact on the perception of emotion (Wu et al., 2020). Affective picture valence judgment was facilitated by preceding emotion-label words over emotion-laden words, with accentuated processing speed and weaker electrophysiological responses, and the facilitation effect was found in both Chinese (Wu et al., 2020) and English. Although the separation between emotion-laden words and emotion-label words has received much support, it is still unclear how the two kinds of emotion words relate to each other. Emotion-label words and emotion-laden words are by no means unrelated, because when recognizing emotion-laden words, individuals naturally activate related emotion-label words. For example, when individuals read the word "reward," they are reminded of experiences of receiving a reward, and the emotions embodied in those experiences are induced as well. However, the connection between emotion-label words and emotion-laden words is not one-to-one. Negative emotion-laden words, such as death, will activate various negative emotion concepts, such as fear, sadness, and so on. Therefore, one emotion-laden word can have multiple connections to many emotion-label words, but each of these connections depends on the situation.
The current study employed the masked priming paradigm to examine whether or not masked emotion-laden words could prime emotion-label words. Investigating the association between emotion-laden words and emotion-label words is closely relevant to the theory explaining the distinction between emotion-label words and emotion-laden words (Knickerbocker, 2014). Altarriba and Basnight-Brown (2011) proposed a mediated account to explain the differences between emotion-label words and emotion-laden words and argued that emotion-laden words can be regarded as a type of "mediated" emotion concept. Unlike emotion-label words, which label emotion concepts straightforwardly, emotion-laden words elicit emotion mediated by emotion-label words after the emotional experiences related to the emotion-laden words are elaborated. Therefore, emotion-label words generate greater emotion activation than emotion-laden words, a finding that has been widely reported (Knickerbocker and Altarriba, 2013; Kazanas and Altarriba, 2015, 2016a,b; Zhang et al., 2017, 2018a, 2019a; Wang et al., 2019; Wu et al., 2020). However, the mediated account proposed by Altarriba and Basnight-Brown (2011) does not specify how emotion-laden words are mediated by emotion-label words. As elucidated before, one emotion-laden word does not activate a single, fixed emotion labeled by one emotion-label word. Rather, each emotion-laden word has oblique relationships with emotion activation via unpredictable connections to emotion-label words. This individualized and contextualized mapping of emotion-laden words and emotion-label words is an important addition to the original mediated account (Altarriba and Basnight-Brown, 2011). Extant studies have already provided partial support for the mediated account. Kazanas and Altarriba (2015) examined how emotion-label words and emotion-laden words differ in provoking priming effects in masked and unmasked priming paradigms. In their study, emotion-label words primed emotion-label words, and emotion-laden words primed emotion-laden words. The results revealed a significant priming effect for both word types, with the priming effect being larger for emotion-label words, suggesting that emotion-label words and emotion-laden words are distinct categories. Although Kazanas and Altarriba (2015, 2016a,b) offered much insight into how emotion-label words and emotion-laden words are semantically represented, they did not provide a definite answer as to how the two are related to each other. In other words, in their study, when the target words were emotion-label words, the primes were also emotion-label words, and the same procedure was applied to emotion-laden words. This design left one unresolved issue: whether emotion-laden words can prime emotion-label words. According to the mediated account (Altarriba and Basnight-Brown, 2011), if emotion-laden words elicit emotion via emotion concepts labeled by emotion-label words in an individualized and contextualized manner, it can be predicted that emotion-laden words in the masked priming paradigm will not prime emotion-label words as targets. This study attempted to examine whether masked emotion-laden words could prime emotion-label words and to test the hypothesis derived from the mediated account (Knickerbocker et al., 2019).
As suggested by Knickerbocker et al. (2019), one extension of the study by Kazanas and Altarriba (2015) was to investigate how emotion-label words are primed by emotion-laden words. One recent event-related potential (ERP) study (Wu et al., 2021) explicitly examined how emotion-label words and emotion-laden words as primes influence target emotion-laden word processing in both masked and unmasked priming paradigms. The overall results confirmed the division between the two types of words and, more importantly, this distinction could still be observed in the masked condition. Specifically, masked emotion-label words inhibited target emotion-laden words, increasing the error rate and decreasing the processing speed relative to target words preceded by masked emotion-laden words. However, one unresolved problem was how emotion-laden words influence emotion-label words. This study aimed to answer that question, following the earlier study (Wu et al., 2021). In addition, this study measured electrophysiological responses using the ERP technique. One late ERP component (the late positivity complex, LPC), related to elaboration during emotion word processing, was explored. Emotion-laden words have been found to elicit an enhanced LPC relative to emotion-label words, suggesting that processing emotion-laden words is more effortful (Zhang et al., 2018a). If emotion-label words can be primed by emotion-laden words in the masked condition, it is predicted that emotion-label words will generate a larger LPC when preceded by unrelated emotion-laden words (of a different valence) than by related emotion-laden words (of the same valence). If emotion-laden words cannot facilitate emotion-label word processing, no modulation of the LPC is expected.
METHOD
Participants
Twenty-five Chinese-English bilingual speakers from the University of Macau were recruited for this study. Five were excluded due to excessive artifacts, and the remaining 20 Chinese speakers were kept for further processing (3 men, mean age: 27 years). All of the participants were right-handed. They reported no psychiatric disorders or brain damage, and all had normal or corrected-to-normal vision. The sample size was determined by calculating a priori power using G*Power (Faul et al., 2007): a repeated-measures ANOVA requires at least 20 participants when the power is 0.8 and the effect size is medium (partial η2 = 0.1), in line with previous studies (Zhang et al., 2019a).
Stimuli
Two sets of Chinese emotion words were selected as the stimuli (Wu et al., 2021). The first set, emotion-laden words used as primes, was obtained from a recent Chinese norming database (Yao et al., 2017). There were 160 emotion-laden words, including 80 positive words and 80 negative words. Both the negative and positive words were divided into two halves to prime 160 target emotion words. The target words formed the second set of Chinese words, including 80 emotion-label words (40 positive and 40 negative) and 80 emotion-laden words (40 positive and 40 negative). The primes did not differ among the conditions in terms of word frequency, strokes, and arousal, all ps > 0.05. The same restriction was applied to the target words, all ps > 0.05 (see Tables 1, 2 for more details). Primes and targets were randomly combined to control the semantic association between the primes and targets.
We calculated the word association strength between the emotion-label words and emotion-laden words using a recent Chinese word database (Lin et al., 2019) and found no association between the primes and targets (see more details in the Discussion).
Procedure
All procedures were approved by the Institutional Review Board at the University of Macau before the experiment, and all participants signed a consent form before starting. After the Geodesic Sensor Net (EGI, Eugene, OR, USA) was set up, the experimenter described the experimental procedure to the participants, who could simultaneously read the written instructions displayed on a monitor at a distance of 70 cm. The task was to determine the valence of the target Chinese emotion words. Each trial started with a 500 ms fixation. A forward mask lasting 500 ms was followed by a prime word (50 ms). The prime was followed by a backward mask (10 ms) created by overlapping several complex Chinese characters (Zhang et al., 2018b). Afterward, a target emotion word (Song font, 48 points) was presented and disappeared as soon as the participant responded. Each prime was presented twice, in two different blocks, and was randomly paired with a target word of the same (related condition) or different (unrelated condition) valence. Altogether, 320 trials were dispersed into 8 blocks of 40 trials each. Trials within each block, and the blocks themselves, were presented randomly, with short rests inserted between blocks.
ERP Recording and Analysis
Scalp voltages were recorded with a 129-channel Geodesic Sensor Net at a sampling rate of 1,000 Hz, with impedances kept below 50 kΩ during recording. To reduce eyeblink artifacts, a notice allowing eyeblinks was displayed for 1,000 ms at the end of each trial. The offline data were first filtered with a bandpass of 0.1-30 Hz. The continuous EEG data were then segmented into epochs beginning 100 ms prior to the onset of the target words. The segments were passed through an artifact scan (eyeblink, 70 µV; eye movement, 27.57 µV) and discarded if labeled as containing an eyeblink or eye movement. A channel was labeled as bad if its amplitude change exceeded 200 µV; bad channels were replaced using surrounding sites, and segments with more than 10% bad channels were deleted. The EEG data were referenced to the average of all electrodes, and a baseline correction from −100 to 0 ms relative to stimulus onset was performed. For the LPC, three electrodes (Cz, C2, and C4) were chosen within the time window of 500-800 ms. The LPC is mostly identified around central sites (Zhang et al., 2018a); we also confirmed the electrodes by visual inspection. Moreover, previous studies found the LPC to be more salient in the right hemisphere than in the left (Zhang et al., 2018a), so the right central sites were chosen for the LPC in this study.
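The LPC measure described above reduces, per condition, to the mean amplitude of baseline-corrected epochs over Cz/C2/C4 in the 500-800 ms window. A minimal sketch of that reduction is given below; the epoch arrays are synthetic stand-ins (not the recorded data), and the channel ordering and epoch span are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
FS = 1000                       # Hz, sampling rate from the recording
T0 = 0.1                        # s of pre-stimulus baseline in each epoch

# Synthetic stand-in: epochs[condition] -> (n_trials, n_channels, n_samples),
# channels ordered as (Cz, C2, C4), each epoch spanning -100..900 ms.
epochs = {
    "related":   rng.normal(0.58e-6, 2e-6, (80, 3, FS)),
    "unrelated": rng.normal(0.97e-6, 2e-6, (80, 3, FS)),
}

def lpc_mean(ep, fs=FS, t0=T0, win=(0.5, 0.8)):
    """Baseline-correct each epoch (-100..0 ms), then average over the
    500-800 ms LPC window, the three central channels, and trials."""
    base = ep[:, :, : int(t0 * fs)].mean(axis=2, keepdims=True)
    ep = ep - base
    i0, i1 = (int((t0 + t) * fs) for t in win)   # window indices post-onset
    return ep[:, :, i0:i1].mean() * 1e6          # volts -> microvolts

for cond, ep in epochs.items():
    print(f"{cond}: LPC mean amplitude = {lpc_mean(ep):.2f} µV")
```

The per-participant condition means produced this way are what feed the 2 × 2 × 2 repeated-measures ANOVA reported next.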
To further compare the priming effect between emotion-label words and emotion-laden words, additional ANOVAs containing the two within-subject factors (valence and relatedness) were conducted separately for emotion-label words and emotion-laden words. The results showed that emotion-laden words produced a priming effect only on emotion-laden words, [F(1, 19) = 4.585, p < 0.05, partial η² = 0.194], and not on emotion-label words, [F(1, 19) = 1.327, p > 0.1]. A larger LPC was elicited by target emotion-laden words preceded by emotion-laden words of a different valence (1.17 µV) than by those preceded by emotion-laden words of the same valence (0.64 µV). However, no priming effect was found for emotion-label words as targets (0.52 µV in the related condition and 0.77 µV in the unrelated condition). No other main effects or interactions were identified, ps > 0.05 (refer to Figure 1).

DISCUSSION
In this experiment, we investigated whether emotion-laden words as primes could influence target emotion-label and emotion-laden words in the masked priming paradigm. The behavioral results showed processing differences between the two types of emotion words, replicating many prior examinations (Kazanas and Altarriba, 2015, 2016a,b; Zhang et al., 2017, 2018a, 2019a; Wang et al., 2019; Wu and Zhang, 2019a,b; Wu et al., 2020). Electrophysiological evidence further supported the emotion word-type effect by showing that emotion-laden words, rather than emotion-label words, could be facilitated by emotion-laden words as masked primes, suggesting that emotion-label words and emotion-laden words form two distinct categories.

This study aimed to examine the mediated account that explains the relationship and the differences between emotion-label words and emotion-laden words (Altarriba and Basnight-Brown, 2011). Altarriba and Basnight-Brown (2011) argued that emotion-laden words are "mediated" by the emotion concepts denoted by emotion-label words. Therefore, the emotion activation induced by emotion-laden words is achieved only after related emotion concepts are activated. For example, a recent study showed that emotion-laden words generated a more substantial electrophysiological activation than emotion-label words, indicating that emotion-laden word processing is more effortful (Zhang et al., 2018a). However, at first glance, this study showed contradictory evidence: emotion-laden words were processed faster than emotion-label words and with higher accuracy, implying that emotion-label words were harder to recognize in a valence judgment task. This result pattern was, in fact, in line with one recent study that used the emotion flanker task and found that Chinese emotion-laden words were recognized faster than emotion-label words. One difference between the two studies is that there were only 6 emotion-label words in each category in that study, whereas there were 80 emotion-label words in this study. Therefore, this study, using a large number of emotion words, extends the previous study by showing that judging the valence of emotion-label words is hard in both flanker and affective priming tasks. The reason for the difficulty in evaluating the valence of emotion-label words is that most emotion concepts labeled by emotion-label words do not have a valence-based representation, especially for negative emotions (e.g., shame, sadness, fear, anger, and boredom).
Many researchers (Ekman, 1992; Izard, 2007; Ekman and Cordaro, 2011; Lench et al., 2011) have theorized that negative emotions are discrete and that negativity alone is not sufficient to explain the variance between negative emotions such as fear and sadness. The ambiguous valence conveyed by emotion-label words fills an important void in the mediated account by clarifying the association between emotion-label words and emotion-laden words. The mediated account claims that emotion-laden words are mediated by emotion-label words but does not specify how. The failure of emotion-laden words to prime emotion-label words suggests that emotion-laden words are mediated by emotion-label words through ambiguous associations between the two kinds of emotion words. For example, the word "wedding" can induce many related positive emotions (e.g., happiness and excitement). More importantly, emotions are discrete in both negative and positive categories (Lench et al., 2011; Shiota et al., 2017), increasing the difficulty of judging the valence of target emotion-label words.

It could be argued that word association might influence the priming effect (Hines et al., 1986). We controlled the word association between primes and targets by randomly pairing primes and targets. Therefore, it is assumed that the word association between primes and targets is nearly zero. Based on the Chinese Lexical Association Database (CLAD), a recent Chinese association norming database (Lin et al., 2019), we further analyzed the word association between primes and targets. We retrieved the Baroni-Urbani measure on clauses for each prime emotion word and found that almost all the primes were not associated with targets in either the related or unrelated conditions, except for a very few words [see the Appendix (Supplementary Material) for the word list and word associations; two prime words were not found in CLAD]. Further comparisons between primes and targets in the related and unrelated conditions found that the word association strength was equally weak for emotion-label words and emotion-laden words as targets: [F(1, 77) = 0.737, p > 0.39] for emotion word type, [F(1, 77) = 1.572, p > 0.21] for relatedness, and [F(1, 77) = 0.145, p > 0.70] for the interaction between emotion word type and relatedness. Given the restricted word association between primes and targets, it is clear that the priming effect for emotion-laden words was not semantic but affective in essence.

Several limitations of this study should be noted. The first limitation concerns the definition of emotion-label words and emotion-laden words. It has been argued that research on emotion-label words and emotion-laden words lacks an objective measurement for determining what counts as an emotion-label word or an emotion-laden word (Hinojosa et al., 2020). One recent normative database of 1,286 Spanish words provided ratings of emotional prototypicality, which refers to the degree of typicality of an emotion word (Pérez et al., 2021). The higher the prototypicality of an emotion word, the more reasonable it is to define it as an emotion-label word, such as fear. However, this study did not use this approach to define emotion-label words. Future studies could use the prototype approach to define emotion-label words objectively. The second limitation is that only adults were included in this study.
There has been a recent push to explore how emotional concepts are acquired by children (Hoemann et al., 2019). Therefore, studying how children process emotion-label words and emotion-laden words would shed light on the emotional development of children across cultures. Future work could follow this trend to differentiate emotion-label words and emotion-laden words and examine how the two are associated in the mental lexicon of children. The third limitation is that we did not control for concreteness between emotion-label words and emotion-laden words. We retrieved the concreteness of primes and targets from a recent Chinese normative database (Xu and Li, 2020) and found that emotion-label words (48 words were found) were more abstract than emotion-laden words (60 words were found), [F(1, 106) = 61.426, p < 0.001]. Therefore, the distinction between the two kinds of words could be attributed to the influence of concreteness. However, the primes (emotion-laden words) for target emotion-label words (55 prime words were found) and target emotion-laden words (45 prime words were found) did not differ in concreteness, [F(1, 98) < 1, p > 0.47]. Since the priming effect was found only for emotion-laden word pairs, this result is thus not attributable to concreteness. Further research on emotion-label words and emotion-laden words should nonetheless consider controlling concreteness (Wang et al., 2019). In addition, although we used the masked affective priming paradigm to preclude strategic processing of prime words, future studies could also exploit the unmasked priming paradigm and vary the duration of primes (extending to 1,000 ms) to explore the affective priming of the two types of words (Kazanas and Altarriba, 2016a).

To summarize, the results of this study supported the mediated account in that there was no priming effect of emotion-laden words on emotion-label words. However, a priming effect of emotion-laden words on emotion-laden target words was identified, in line with the mediated account and previous studies (Kazanas and Altarriba, 2015, 2016a,b). Moreover, the distinction between the two kinds of emotion words was also replicated, compatible with the emotion conflict study using the flanker task.

DATA AVAILABILITY STATEMENT
The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.

ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Institutional Review Board at the University of Macau. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS
CW and JZ developed the research idea and determined the research design. ZY commented critically on the research design. CW conducted the research, analyzed the data, and drafted the manuscript. JZ and ZY reviewed the manuscript and provided insightful revisions at all stages. All authors have read and agreed to the published version of the manuscript.
Living near the port area is associated with physical inactivity and sedentary behavior

ABSTRACT
CONTEXT AND OBJECTIVE: The impact of the port of Santos, Brazil, on the population's health is unknown. We aimed to evaluate the association between living near the port area and physical inactivity and sedentary behavior.
DESIGN AND SETTING: Cross-sectional study developed at a university laboratory and a diagnostic clinic.
METHODS: 553 healthy adults were selected and their level of physical activity in daily life was assessed using accelerometers. Multiple linear and logistic regressions were performed using physical inactivity and sedentary behavior as the outcomes and living near the port area as the main risk factor, with adjustments for the main confounders.
RESULTS: Among all the participants, 15% were resident near the port area. They took 699 fewer steps/day and presented, weekly, 2.4% more sedentary physical activity, 2.0% less time in the standing position and 0.9% more time lying down than residents of other regions. Additionally, living near the port area increased the risk of physical inactivity by 2.50 times and the risk of higher amounts of sedentary behavior (≥ 10 hours/day) by 1.32 times.
CONCLUSION: Living near the port of Santos is associated with physical inactivity and higher sedentary behavior among adults, regardless of confounders. The reasons for this association should be investigated in longitudinal studies.

INTRODUCTION
Historically, ports have been considered engines of economic development for the cities and regions where they are located. The port of Santos in Brazil is one of the most important ports in Latin America due to its size and export capacity.1 It is the main gateway for incoming and outgoing products in the country. Despite boosting the economy, it is known that ports have a negative impact on the health of residents of the surrounding areas.2 Living near the port area is associated with low socioeconomic status,3 and the pollution of the port increases the risk of developing respiratory4 and cardiovascular disease.5

According to the global recommendations on physical activity for health, "adults aged 18-64 should do at least 150 minutes of moderate-intensity aerobic physical activity throughout the week or do at least 75 minutes of vigorous-intensity aerobic physical activity throughout the week or an equivalent combination of moderate and vigorous-intensity activity."6 Thus, physical inactivity is characterized as failure to reach the recommendations mentioned above.7 Sedentary behavior, in turn, can be defined as "any wakeful behavior characterized by energy expenditure of 1.5 or fewer metabolic equivalent tasks (METs) while in a sitting or reclining posture".8,11-14 Examples of sedentary behavior include watching television, sitting, playing video games and using computers.15 Current studies have been investigating associations of physical activity and sedentary behaviors separately or combined. Our previous results showed that the proportion of physically inactive subjects in a sample from the city of Santos was between 14% and 20% and that there was an association between physical inactivity and restrictive lung patterns detected by spirometry.16,17
The level of physical activity in daily life is influenced by the physical environment in which subjects live, along with its social and individual correlates,18 but it may also be related to chronic exposure to air pollutants. The vicinity of the port area in Santos appears to be a violent area with few or no safe public spaces where people can perform physical activities. Moreover, it is a highly polluted area, where the annual average levels of particulate matter grossly exceed what is recommended by the World Health Organization.19 Information about the impact of the port of Santos on the population's health is scarce, especially in relation to the level of physical activity in daily life and sedentary behavior directly evaluated by means of triaxial accelerometers. Our hypothesis was that living in neighborhoods close to the port of Santos would be associated with a higher prevalence of physical inactivity and increased levels of sedentary behavior, regardless of the main confounders.

OBJECTIVE
We aimed to evaluate the association between living near the port of Santos and physical inactivity and sedentary behaviors among adults.

METHODS
Five hundred and fifty-three adults (≥ 20 years of age) were selected from the Epidemiology and Human Movement Study, i.e. the EPIMOV (Estudo Epidemiológico sobre o Movimento Humano) study. Briefly, the EPIMOV study is an ongoing cohort study with the primary objective of investigating the longitudinal association of sedentary behaviors and physical inactivity with occurrences of hypokinetic diseases, especially cardiorespiratory and musculoskeletal diseases. The present study is a cross-sectional study from the first year of the EPIMOV study. The volunteers were recruited through publicity on social networks, folders displayed in the universities of the region, and local magazines and newspapers.

We divided the participants into two groups: people residing near the port area and people residing in other surrounding neighborhoods within the metropolitan area of Santos. We used the map of the city to select residents of neighborhoods adjacent to the port area. We defined the participants' socioeconomic level according to the mean income of each neighborhood, based on official documents held by the city of Santos, which include a map of the city according to the average income of heads of households.

In the early clinical evaluation, personal and demographic data were collected. In addition, the participants answered the physical activity readiness questionnaire20 in order to evaluate possible risks relating to performing physical exercise such as cardiopulmonary exercise testing. They also answered questions about any history of respiratory illness, based on the American Thoracic Society questionnaire,21 to investigate exposure to pollutants, history of asthma and smoking status; and cardiovascular disease risk stratification was performed as specified by the American College of Sports Medicine.22 We excluded participants with a self-reported diagnosis of heart disease, lung disease or musculoskeletal disorders. We made objective measurements of physical activity in daily life through triaxial accelerometry and of lung function through spirometry, and conducted cardiopulmonary exercise testing using a ramp protocol on a treadmill. We also investigated the presence of self-reported major risk factors for cardiovascular disease, including age (≥ 45 years for males and ≥ 55 years for females), systemic arterial hypertension, diabetes/hyperglycemia, dyslipidemia/hypercholesterolemia, current cigarette smoking and a family history of premature coronary heart disease. A family history of premature coronary heart disease was defined as myocardial infarction or sudden death of the father or another male first-degree relative before 55 years of age, or of the mother or another female first-degree relative before 65 years of age. Education level was reported as illiterate or completed primary, secondary or tertiary education. Smoking was also investigated through self-reporting. Subjects were considered smokers if they reported current tobacco use and had smoked 100 or more cigarettes during their lifetime.23
The participants were informed about the possible risks and discomforts of this study and signed a consent form. The local Ethics Committee for Human Research approved this study (protocol: 186.796).

Anthropometric measurements
Body weight and height were measured, and the body mass index was calculated in accordance with standardized methods.24

Spirometry
Spirometry was performed using a handheld spirometer (Quark PFT/CPET, Cosmed, Pavona di Albano, Italy) in accordance with the criteria established by the American Thoracic Society.25 The forced expiratory volume in the first second (FEV1), forced vital capacity (FVC) and FEV1/FVC ratio were quantified. The predicted values were calculated using national reference equations.26

Cardiorespiratory fitness
The maximum/symptom-limited exercise capacity was assessed during cardiopulmonary exercise testing on a treadmill (ATL, Inbrasport, Curitiba, Brazil), following a ramp protocol. After 3 minutes at rest, the speed and inclination were automatically incremented according to the estimated maximal oxygen consumption (V'O2max), with the aim of completing the test in about 10 minutes.27,28 Cardiovascular, ventilatory and metabolic variables were analyzed breath by breath, using a gas analyzer (Quark PFT, Cosmed, Pavona di Albano, Italy). Oxygen uptake (V'O2), carbon dioxide production (V'CO2), minute ventilation (V'E) and heart rate were monitored throughout the test. The data were filtered every 15 seconds for further analysis. Peak V'O2 was defined as the arithmetic average of the last 15 seconds at the end of the incremental phase of the cardiopulmonary exercise testing.

Accelerometer-based sedentary behavior and physical activity in daily life29-31
The equipment consisted of a small, lightweight box (4.6 cm x 3.3 cm x 1.5 cm) that was attached to the waist above the dominant hip by means of a band (total weight = 19 g). It had the capacity to measure human movement along the vertical, sagittal and mediolateral axes. The participants were evaluated over seven consecutive days during their wakeful hours. To be considered valid, data collection days needed to have at least 10 hours of continuous monitoring, starting when the subject woke up, together with absence of excessive counts (> 20,000). We instructed the participants to remove the accelerometer at bedtime and during showers and aquatic activities.

Periods with fewer than 60 counts per minute (cpm) on the accelerometer were interpreted as periods when the accelerometer was not worn, with a tolerance of 2 minutes for periods with some movement, i.e. less than 50 cpm. The thresholds for the intensity of physical activity were as follows:32
1. very light (100-759 cpm);
2. light (760-1951 cpm); and
3. moderate-to-vigorous (> 1951 cpm).

The minimum quantity and intensity of physical activity to be counted as such was 150 minutes of moderate-to-vigorous physical activity per week.33,34 Individuals who did not reach this level of physical activity were considered physically inactive. The total amount of sedentary behavior was determined based on the number of minutes with counts below 100. Conversely, active time was taken as time spent on activities with ≥ 100 cpm. By means of the inclinometer located inside the accelerometer, the time spent in each body position (i.e. reclining during wakeful hours, sitting or standing) was measured.
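A minimal sketch of the minute-level classification rules just described, assuming counts-per-minute values are already available per day of valid wear (non-wear detection via 60-cpm runs is omitted for brevity; function and variable names are illustrative):

```python
def classify_minute(counts: int) -> str:
    """Map one minute of accelerometer counts to the categories used in this study."""
    if counts < 100:
        return "sedentary"            # < 100 cpm
    if counts < 760:
        return "very light"           # 100-759 cpm
    if counts <= 1951:
        return "light"                # 760-1951 cpm
    return "moderate-to-vigorous"     # > 1951 cpm

def weekly_mvpa_minutes(days: list[list[int]]) -> int:
    """Total moderate-to-vigorous minutes over the monitored days."""
    return sum(
        sum(1 for c in day if classify_minute(c) == "moderate-to-vigorous")
        for day in days
    )

def is_physically_inactive(days: list[list[int]]) -> bool:
    """A participant is flagged as physically inactive below 150 MVPA minutes/week."""
    return weekly_mvpa_minutes(days) < 150
```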
The measurements were calculated as minutes/week and as percentages of the total time. Sedentary behavior was also assessed as a categorical variable in accordance with a recently described threshold.13,14 Participants who performed ≥ 10 hours/day of sedentary activities were classified in the group with a high amount of sedentary behavior, whereas the group with a low amount was defined as < 10 hours/day of such activities. Only data from participants who used the accelerometer for at least four valid days were analyzed.

Statistical analysis
The sample size was calculated in accordance with the prevalence of physical inactivity of around 20% that was observed in previous findings from the EPIMOV study in the metropolitan area of the city of Santos.16 Taking a 99% confidence interval, it was found that at least 423 participants needed to be enrolled in the present study. We performed the sample size calculation using the free tools available on the website www.openepi.com.

Our first statistical analysis was a descriptive analysis of the data. We then evaluated whether being a resident of the port area was associated with physical inactivity in daily life and sedentary behavior, by means of multiple linear regression, regardless of socioeconomic and educational level. We developed two multiple logistic regression models in which physical inactivity and sedentary behavior were taken as the outcomes and living near the port area was the main exposure. Adjusted odds ratios and 95% confidence intervals were calculated.

RESULTS
Fifteen percent (n = 83) of our participants were residents of the port area. These were significantly younger and had higher socioeconomic status (Table 1). However, the univariate analysis showed that sex, race, anthropometry, lung function, exercise capacity, smoking status, physical inactivity and risk of cardiovascular disease were not statistically different between residents and non-residents of the vicinity of the port. The prevalences of diabetes mellitus, hypertension and dyslipidemia in this study were similar to those found in population-based studies in Brazil.

The results from the multiple linear regression analysis showed an association between living near the port area and increased sedentary behavior, as evaluated using triaxial accelerometers. Other variables such as socioeconomic status, education level and smoking were also significant determinants of higher amounts of sedentary behavior (Table 2). Living in the port area increased the risk of physical inactivity more than twofold, independently of any other confounder. Age and smoking also increased the risk of physical inactivity, after adjusting the logistic regression model for age, gender, education level, socioeconomic status, risk factors for cardiovascular disease, cardiorespiratory fitness, lung function and smoking. On the other hand, cardiorespiratory fitness reduced the risk of physical inactivity (Table 3).

Regarding sedentary behavior, 51.7% of our participants performed ≥ 10 h/day of sedentary activities. Living near the port increased the risk of high amounts of sedentary behavior by 32%. In this multiple logistic regression model, age, gender, socioeconomic status, education level and smoking were also selected as determinants of high amounts of sedentary behavior. There was a positive association between higher socioeconomic status and higher amounts of sedentary behavior (Table 4). Through multiple regression analysis, the residents of the port area showed higher amounts of sedentary behavior, i.e. less time standing and more time reclining, and also a lower number of steps/day, in comparison with people who did not live in the port area (Table 5).
DISCUSSION
This study investigated the association between living near the largest port in Latin America and physical inactivity and sedentary behavior among adults. The associations found indicated that living near the port of Santos increased the risk of physical inactivity and sedentary behavior among adults, regardless of socioeconomic status, education level, cardiovascular risk, lung function or cardiorespiratory fitness.

Contrary to what we expected, the residents of the port area were younger and had higher socioeconomic status than people who did not live in the port area. These results contrast with previously published data. Grobar3 observed that unemployment and poverty rates are significantly higher in port districts. This disparity is possibly due to a peculiarity of the city of Santos. The neighborhood of Ponta da Praia, one of the neighborhoods with the highest average income in the city, is located very close to one of the main terminals of the port. Nevertheless, living near the port region increased the risk of physical inactivity and sedentary behavior, regardless of the higher socioeconomic status of the residents of Ponta da Praia. This finding is interesting because studies have shown that low socioeconomic status groups perform an insufficient amount of physical activity to achieve health benefits.35 Our results suggest that living next to a major port could affect lifestyle, even among people with privileged socioeconomic status relative to Brazilian patterns. Therefore, whether living in the port area of Santos differs from living near other port areas elsewhere in the world remains to be clarified.

Although there was no association between socioeconomic status and physical inactivity, we observed a positive association between higher socioeconomic status and higher amounts of sedentary behavior. It has been suggested that the associations between socioeconomic status and sedentary behavior run in different directions in high-income countries compared with low and middle-income countries, and that this varies according to the domain of sedentary behavior. Overall, the association between socioeconomic level and sedentary behavior is inverse.36 However, Mielke et al.36 observed that this relationship varies according to the income level of the country. In high-income countries, socioeconomic status presented an inverse association with sedentary behavior (effect size: 0.67; 95% CI: 0.62-0.73), whereas a positive relationship was observed in low to middle-income countries (effect size: 1.18; 95% CI: 1.04-1.34). Unlike in high-income countries, in which all indicators of socioeconomic level were negatively associated with sedentary behavior, only resources showed a significant positive association in low to middle-income countries. Despite the significant relationship mentioned above, living in the port area remained a significant determinant of higher amounts of sedentary behavior.

Residents near port areas are exposed to increased levels of air pollution due to emissions of particulate matter derived from the exhaust fumes of trucks and ships, and as a result of the mechanical processes of milling operations and the ensuing street dust suspensions.37-39 In one of these studies, particulate matter and O3 levels were correlated with reductions in physical activity in daily life and in the number of steps/day among patients with chronic obstructive pulmonary disease (COPD).38
Although air pollution was not assessed in our study, we believe that this may partly explain the higher proportion of physically inactive people and the larger amount of sedentary behavior among residents of the port area. In fact, a recent large study conducted in Brazil showed that particulate matter monitoring in the city of Santos is poor and started only in 2011. Moreover, Santos has only two air-monitoring stations and is classified as having the sixth highest concentration of particulate matter in the state of São Paulo, Brazil. The average level of particulate matter in the metropolitan area of the city of Santos was 37.23 µg/m3 (annual mean) in 2011, which was significantly above the levels recommended by the World Health Organization. Despite the lack of assessment of particulate air pollution in the present study, it would be rational to suppose that environmental exposure to particulate matter may play a major role in the results from the port of Santos presented here.19

Our results also showed that smoking was independently associated with physical inactivity and with greater amounts of sedentary behavior. Previous results from the EPIMOV study40 reinforce the findings of the present study. We compared two groups of physically active individuals, one formed by smokers and the other by nonsmokers. Although they performed the same amount of moderate-to-vigorous physical activity, as assessed directly using triaxial accelerometers, and were matched for the major confounders, the smokers performed higher amounts of sedentary physical activity and spent more time sitting and lying down per week. As in the present study, other recent studies have reported an association between smoking and physical inactivity.41,42

As we expected, cardiorespiratory fitness was inversely associated with physical inactivity, and living near the port did not alter this association. Ecological models for physical activity and sedentary behavior have identified influences from several attributes, including individual components, the social environment, the physical environment and public policy. Some of the main barriers preventing physical activity are lack of motivation, awareness and time, and lack of infrastructure for physical activity.43 People may have the necessary knowledge, skills, attitudes and motivation to be physically active, but if they do not have access to the necessary opportunities, they may be restricted or prevented from being active. Building or enhancing facilities for physical activity can require a large amount of time and resources. Public health policies and intervention programs designed with a focus on increasing the level of physical activity and decreasing sedentary behavior are probably necessary for this region of Santos.

Regarding the determinants of physical inactivity and sedentary behavior, cohort studies are needed to investigate the causes of the associations of physical inactivity and greater amounts of sedentary behavior with living near the port area of Santos.

This study has limitations that need to be described. The cross-sectional design did not allow us to establish any relationship between cause and effect. However, our objective was to evaluate the association between living near the port area of Santos and physical inactivity and sedentary behavior, and we found that these associations were consistent. Our findings may guide new research questions towards identifying other determinants of physical inactivity and sedentary behavior relating to major ports.

CONCLUSIONS
Living near the largest port in Latin America, located in the city of Santos, Brazil, is associated with physical inactivity and sedentary behavior among adults, regardless of socioeconomic status, education level, cardiovascular risk, lung function or cardiorespiratory fitness. Whether this association is related to environmental exposure and/or to a lack of facilities for physical activity in this region should be investigated in cohort studies.
Table 1. General characteristics of the sample. Data presented as mean ± standard deviation or as count and percentage. *P < 0.05: residents of the port area versus residents of other neighborhoods; †assessed using triaxial accelerometers. FVC = forced vital capacity; FEV1 = forced expiratory volume in the first second; V'O2 = oxygen uptake.

Table 2. Results from the multiple linear regression analysis on the association between sedentary behavior evaluated using accelerometers and living in the port area. CI = confidence interval. Models adjusted for age, gender, education level, socioeconomic status, hypertension, diabetes mellitus, dyslipidemia, obesity, cardiorespiratory fitness, lung function and smoking.

Table 3. Results from the logistic regression analysis between physical inactivity assessed using accelerometers and its associated factors (exposures). Models adjusted for age, gender, education level, socioeconomic status, hypertension, diabetes mellitus, dyslipidemia, obesity, smoking, lung function and cardiorespiratory fitness. FEV1 = forced expiratory volume in the first second; V'O2 = oxygen uptake.

Table 4. Results from the logistic regression analysis between sedentary behavior assessed by accelerometers and its associated factors (exposures). Models adjusted for age, gender, education level, socioeconomic status, hypertension, diabetes mellitus, dyslipidemia, obesity, smoking, cardiorespiratory fitness and lung function. FEV1 = forced expiratory volume in the first second; V'O2 = oxygen uptake.
Table 5. Comparison between residents of the port area and people living in other areas regarding sedentary behaviors and the number of steps/day. *P < 0.05: residents of the port area versus residents of other neighborhoods.
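As an illustration of the adjusted logistic models behind Tables 3 and 4, here is a minimal sketch assuming a hypothetical analysis file with one row per participant; the variable names are illustrative, not the authors' actual coding:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("epimov_subset.csv")  # hypothetical analysis file

model = smf.logit(
    "physically_inactive ~ port_area + age + sex + education + socioeconomic"
    " + hypertension + diabetes + dyslipidemia + obesity + smoking"
    " + fev1_pred + peak_vo2",
    data=df,
).fit()

# Adjusted odds ratios with 95% confidence intervals, as reported in the paper.
ors = np.exp(model.params)
ci = np.exp(model.conf_int())
print(pd.DataFrame({"OR": ors, "2.5%": ci[0], "97.5%": ci[1]}))
```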
GLEN: Generative Retrieval via Lexical Index Learning

Generative retrieval has shed light on a new paradigm of document retrieval, aiming to directly generate the identifier of a relevant document for a query. While it has the advantage of bypassing the construction of auxiliary index structures, existing studies face two significant challenges: (i) the discrepancy between the knowledge of pre-trained language models and identifiers and (ii) the gap between training and inference that makes learning to rank difficult. To overcome these challenges, we propose a novel generative retrieval method, namely Generative retrieval via LExical iNdex learning (GLEN). For training, GLEN effectively exploits a dynamic lexical identifier using a two-phase index learning strategy, enabling it to learn meaningful lexical identifiers and relevance signals between queries and documents. For inference, GLEN utilizes collision-free inference, using identifier weights to rank documents without additional overhead. Experimental results show that GLEN achieves state-of-the-art or competitive performance against existing generative retrieval methods on various benchmark datasets, e.g., NQ320k, MS MARCO, and BEIR. The code is available at https://github.com/skleee/GLEN.

Introduction
Generative retrieval has emerged as an innovative approach to document retrieval (Metzler et al., 2021). Unlike conventional retrieval methods following the "index-retrieve-then-rank" pipeline, it unifies the entire search process. Specifically, it directly generates the identifier of a relevant document for a given query. By formulating the entire search process as a sequence-to-sequence problem, it bypasses an auxiliary index structure and can be optimized through end-to-end learning.

Despite these benefits, generative retrieval faces major challenges in how to define and train document identifiers. As depicted in Table 1, existing studies can be categorized along two axes: identifier types and identifier learning strategies.

Identifier types. Some canonical works employ numeric identifiers for document representation, e.g., hierarchical clustering using document representations (Tay et al., 2022; Wang et al., 2022) and product quantization (Zhou et al., 2022). However, numeric identifiers struggle to fully exploit the knowledge of pre-trained language models (PLMs) due to the semantic discrepancy between natural language and numeric identifiers. Other studies pre-define lexical identifiers using titles (Lee et al., 2023) or URLs (Zhou et al., 2022; Ren et al., 2023). Although lexical identifiers can narrow the semantic gap between PLM knowledge and identifiers, such information may be inadequate for representing documents and may not exist for some datasets.

Identifier learning strategies. Depending on the strategy for training identifiers, we refer to an identifier as static if it does not change during training. Meanwhile, when an identifier evolves during training, we refer to it as dynamic. Static identifiers may lead to a performance bottleneck in generalizing to documents unseen during training. To overcome this limitation, Sun et al. (2023) proposed a method to dynamically learn numeric identifiers. However, it is still non-trivial to learn appropriate identifiers due to the task discrepancy between training and inference; the models focus on generating the identifier during training, but they need to rank documents during inference.
To this end, we introduce a new generative retrieval method, namely Generative retrieval via LExical iNdex learning (GLEN), using the dynamic lexical identifier in the bottom-right cell of Table 1. The key novelty of GLEN is (i) to define lexical identifiers from documents using the knowledge of PLMs, (ii) to learn them from the relevance between queries and documents, and (iii) to effectively rank documents with the same identifier at inference.

Training of GLEN. It utilizes a two-phase index learning strategy to define lexical identifiers and to learn them dynamically. First, in the keyword-based ID assignment phase, GLEN defines identifiers from documents and learns them. To alleviate the discrepancy between the knowledge of PLMs and the semantics of identifiers, we express identifiers in the pre-trained vocabulary space, leveraging self-supervised signals obtained by extracting key terms from documents. The model can thus learn how to map its knowledge to the unique nature of identifiers. Then, the ranking-based ID refinement phase is used to effectively learn dynamic identifiers. We directly incorporate query-document relevance into learning through the elaborate design of two loss functions. Specifically, GLEN explicitly learns query-document relevance using a pairwise ranking loss to capture ranking relationships and a pointwise retrieval loss to learn the relationship between a query and a relevant document. Thus, GLEN can generate identifiers that better encapsulate the subtle semantics of the query-document relationship.

Inference of GLEN. It employs collision-free inference using identifier weights to deal with the document identifier collision problem, i.e., the same identifier can be assigned to multiple documents if they are semantically similar. A simple solution is to force a different identifier onto those documents. However, this can make the identifier too long or potentially interfere with the semantic learning of the identifier. Instead of enforcing uniqueness during training, we leverage the document identifier logits during inference to rank the collided documents. Notably, this simple-yet-effective solution avoids high computational costs by using the generation logit as the weight.

In summary, our key contributions are as follows. (i) We propose GLEN, which learns lexical identifiers in a dynamic fashion. To our knowledge, it is the first generative retrieval method using learning-based lexical identifiers. (ii) We devise a two-phase index learning strategy with keyword-based ID assignment and ranking-based ID refinement to generate identifiers reflecting query-document relevance. (iii) We present collision-free inference via ranking with identifier weights while effectively preserving identifier semantics. (iv) We evaluate the effectiveness of GLEN on three benchmark datasets: Natural Questions (Kwiatkowski et al., 2019), MS MARCO Passage Ranking (Nguyen et al., 2016), and BEIR (Thakur et al., 2021).
2 Related Work

Document Retrieval
Document retrieval aims to find documents relevant to the user query in a large document corpus. Most existing methods follow the "index-retrieve-then-rank" pipeline. Traditional sparse retrieval methods (Robertson and Walker, 1994; Formal et al., 2021; Choi et al., 2022) rely on an inverted index utilizing term-matching signals. On the other hand, dense retrieval methods (Karpukhin et al., 2020; Xiong et al., 2021; Khattab and Zaharia, 2020) calculate the vector similarity of dense representations via an approximate nearest neighbor index. Although dense retrieval has shown remarkable performance, the model cannot be optimized end-to-end and incurs the cost of an external index structure.

Generative Retrieval
Apart from traditional retrieval, generative retrieval uses only a unified model (Metzler et al., 2021) that directly generates an identifier of a relevant document for a given query. As shown in Table 1, we categorize the existing methods according to how they define and train identifiers. Numeric identifier. Several methods, e.g., DSI (Tay et al., 2022) and NCI (Wang et al., 2022), represent documents with numeric identifiers derived from document semantics. Lexical identifier. GENRE (Cao et al., 2021) defined a title as an identifier. Such methods can leverage the knowledge of PLMs to decode identifiers, enjoying the benefit of the pre-trained vocabulary space. However, external information such as URLs and titles may not exist depending on the dataset and may not adequately represent the document. To overcome these limitations, we define lexical identifiers by extracting keywords from documents and dynamically refine them by directly optimizing query-document relevance.

3 Proposed Method
In this section, we formulate the generative document retrieval task (Section 3.1) and present GLEN (Section 3.2), as depicted in Figure 1. To tackle the challenges of identifier design and training strategy, GLEN adopts two-phase lexical index learning (Section 3.3). For inference, we devise a collision-free inference using identifier logits (Section 3.4). While maintaining its simplicity, GLEN handles identifier collisions where semantically similar documents share the same lexical identifier.

3.1 Task Formulation
Generative retrieval aims to autoregressively generate the identifier of the relevant document for a given query. Specifically, it involves computing the probability P(z|q) of generating a document identifier z for the query q:

P(z|q) = ∏_{t=1}^{n} P(z_t | q, z_{<t}), (1)

where n is the number of tokens in the identifier. To address the key challenges of generative retrieval, (i) how to define identifiers and (ii) how to train query-document relevance, we propose a dynamic lexical identifier, defining it using keywords and refining it through relevance.

3.2 Model Architecture
We propose a novel generative retrieval method, Generative retrieval via LExical iNdex learning (GLEN). Specifically, it consists of two components: (i) an indexing model takes a document d as input and generates a document identifier z, and (ii) a retrieval model takes a query q as input and generates the identifier of a relevant document.

We describe the process of deriving the identifier z from a document d using the indexing model; the retrieval model proceeds in the same way. Both models are initialized with a pre-trained language model with the Transformer architecture (Vaswani et al., 2017) and share parameters. For the indexing model, the document representation at each step is defined as follows:

d_t = Dec(Enc(d), d_{<t}),
where d_t ∈ R^m is the final hidden representation of the decoder at time t. An embedding vector e_j ∈ R^m is the j-th vector of the word embedding matrix E ∈ R^{|V|×m}, where m is the dimension of the embedding vectors and |V| is the size of the vocabulary. Enc(•) and Dec(•) denote the Transformer encoder and decoder, respectively.

For GLEN, we define the probability of generating an identifier z from a document d as follows:

P(z_t = j | d, d_{<t}) = Softmax_j(d_t E^⊤),  z_t = argmax_j P(z_t = j | d, d_{<t}), (2)

where P(z_t = j | d, d_{<t}) denotes the probability that z_t is the j-th token in the vocabulary space, and Softmax_j(•) is the j-th element of the softmax output. In the original Transformer decoder, the output of each step, i.e., z_{<t}, is fed to the next step. Here, we instead feed the final hidden representations d_{<t} to the decoder. This ensures that the decoder input does not fluctuate even if the document identifier fluctuates during training, allowing for stable training. (Empirically, we observed about a 14.4% gain in Recall@1; see Section 5.2 for details.)

3.3 Two-phase Lexical Index Learning
To effectively train the lexical identifier, we introduce a two-phase training strategy: keyword-based ID assignment to learn the semantics of the corpus and the characteristics of identifiers, and ranking-based ID refinement to adjust identifiers so that they encapsulate relevance signals.

Keyword-based ID Assignment
A document identifier should be concise yet informative, unlike a typical natural language sentence. Due to this unique nature, it is challenging to learn from scratch to assign appropriate identifiers to documents. We bridge the gap between the pre-training task of the language model and the identifier generation task by training the model to generate representative keywords. Specifically, we choose the top-n tokens with the highest tf-idf scores using BM25 (Robertson and Walker, 1994) as the keyword identifier z^key for the document. This ensures that the model can construct the semantics of the document from self-supervised signals extracted from the corpus and naturally learns the nature of identifiers. The model learns it using a sequence-to-sequence cross-entropy loss:

L_key = −∑_{t=1}^{n} log P(z^key_t | d, d_{<t}),

where z^key_t is the token at the t-th step of z^key. In addition, we also use queries as inputs and train the model to infer the keywords of their relevant documents.

Ranking-based ID Refinement
It is crucial to incorporate query-document relevance into training and to learn how to generate the identifiers of relevant documents from queries. To this end, we design two losses: (i) a pairwise ranking loss for learning to rank and (ii) a pointwise retrieval loss for learning the query-identifier relationship. Consequently, GLEN can dynamically learn how to generate lexical identifiers from the relevance signal.

Pairwise ranking loss. First, we introduce a pairwise ranking loss, incorporating query-document relevance into identifier learning. It helps the model represent queries as close to relevant documents d+ and far from irrelevant documents d− ∈ N, where N is a set of negative documents obtained via prefix-aware dynamic negative sampling, described later. The pairwise ranking loss is a margin-based hinge loss over the relevance scores:

L_pair = ∑_{d− ∈ N} max(0, δ − rel(q, d+) + rel(q, d−)), (5)

where δ is a margin hyperparameter. For the pairwise ranking loss, we define the relevance score of a query q and a document d as:

rel(q, d) = ∑_{t=1}^{n} q_t • r_t^⊤, (6)

where q_t ∈ R^m is the final hidden representation of the decoder for the query at time t. Note that r_t, not d_t, is used as the document representation. In the inference phase, the query must generate the identifier. In this regard, we exploit the identifier representation r_t for representing documents, thus mitigating the gap between training and inference. In addition, since the argmax(•) used to calculate z_t in Eq. (2) is non-differentiable, we obtain r_t with Softmax(•) and a temperature τ, i.e., r_t = E^⊤ Softmax(d_t E^⊤ / τ). If τ is low enough, this yields an effect similar to argmax(•). However, when the model is not sufficiently trained, r_t may collapse regardless of the document. As such, we adopt an annealed temperature, i.e., τ = max(10^{−5}, exp(−t)), where t denotes the training epoch. (See Section 5.2 for the effectiveness of annealing.)
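A minimal PyTorch sketch of the annealed-temperature identifier representation and the margin-based pairwise ranking loss as reconstructed above; the shapes, the helper names, and the margin value are illustrative assumptions rather than the authors' exact settings:

```python
import math
import torch
import torch.nn.functional as F

def identifier_repr(dec_hidden, embed, tau):
    """Soft identifier tokens: r_t = E^T softmax(d_t E^T / tau).
    dec_hidden: (n, m) decoder states; embed: (|V|, m) embedding matrix."""
    logits = dec_hidden @ embed.T                 # (n, |V|)
    probs = F.softmax(logits / tau, dim=-1)       # near one-hot as tau -> 0
    return probs @ embed                          # (n, m)

def rel(q_hidden, r):
    """Relevance: sum over steps of q_t . r_t."""
    return (q_hidden * r).sum()

def pairwise_loss(q_hidden, r_pos, r_negs, margin=1.0):
    """Hinge loss pushing the positive document above each sampled negative."""
    pos = rel(q_hidden, r_pos)
    losses = [F.relu(margin - pos + rel(q_hidden, r_neg)) for r_neg in r_negs]
    return torch.stack(losses).sum()

def annealed_tau(epoch: int) -> float:
    """Temperature annealing, floored at 1e-5 as described above."""
    return max(1e-5, math.exp(-epoch))
```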
Pointwise retrieval loss. To ensure that the model can capture the relationship between the query and the relevant document identifier, we design a pointwise retrieval loss:

L_point = −∑_{t=1}^{n} log P(z^+_t | q, q_{<t}) + λ_dist • dist(w_q, w_{d+}), (7)

where z^+ indicates the identifier predicted from the positive document d+, and z^+_t is the t-th token of z^+. e_{z_t} ∈ R^m is the word embedding vector of z_t. The first loss term is a cross-entropy loss that maps a query q to the identifier z^+ of its relevant document. It alleviates the gap between training and inference, in that mapping from queries to identifiers is exactly what is performed at inference. The second loss term utilizes the identifier logits w_q, w_{d+} and allows the model to learn the relative importance of the identifier tokens, e.g., "Olympic" is more important than "list" in the example of Figure 1. Here, λ_dist is a hyperparameter adjusting the importance between the pointwise loss terms. For the distance function dist(•), we adopt the cosine distance.

The final loss is the sum of the pairwise ranking loss and the pointwise retrieval loss:

L = L_pair + λ_point • L_point, (8)

where λ_point is a hyperparameter controlling the importance of the pointwise retrieval loss. It enables end-to-end optimization of the retrieval task.

Prefix-aware dynamic negative sampling. To robustly improve top-ranking retrieval performance, we devise prefix-aware dynamic negative sampling for the pairwise ranking loss (Eq. (5)). As pointed out in Zhan et al. (2021), dynamic hard negatives, which are sampled during training based on the retrieval results of the model itself, can effectively improve ranking performance. To reflect the nature of the autoregressive model, we obtain the set of negative documents N based on the identifier prefix. Concretely, we determine the candidate negatives for each document in the following manner. Given an identifier length of n, we first take the documents that have the same identifier as the target document, i.e., we fetch documents with the same prefix for the first n tokens. If the resulting set of documents does not reach the desired count N_neg, we take documents with the same prefix for the first n − 1 tokens. We repeat this, reducing the length of the prefix until the set reaches N_neg documents. Our approach iteratively samples documents based on the prefix; however, the cost was negligible in our experiments, and we found it effective for ranking, as shown in Section 5.2.
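A minimal sketch of prefix-aware negative sampling as described above: for a target document, collect negatives that share the longest identifier prefix, backing off one token at a time until N_neg candidates are found. The precomputed prefix index is a hypothetical structure for illustration:

```python
from collections import defaultdict

def build_prefix_index(doc_ids: dict[str, tuple[str, ...]]):
    """doc_ids maps doc -> identifier token tuple; index maps prefix -> docs."""
    index = defaultdict(set)
    for doc, tokens in doc_ids.items():
        for k in range(1, len(tokens) + 1):
            index[tokens[:k]].add(doc)
    return index

def sample_negatives(target: str, doc_ids, prefix_index, n_neg: int):
    tokens = doc_ids[target]
    negatives: list[str] = []
    for k in range(len(tokens), 0, -1):          # longest prefix first
        for doc in prefix_index[tokens[:k]]:
            if doc != target and doc not in negatives:
                negatives.append(doc)
                if len(negatives) == n_neg:
                    return negatives
    return negatives                              # fewer than n_neg available
```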
3.4 Collision-free Inference
The inference process of GLEN is straightforward: (i) we run the model over the documents to assign identifiers offline, and (ii) infer the identifiers of relevant documents from the query online. We assign a dynamically learned identifier to each document as predicted by the model. We also employ constrained decoding to generate only valid identifiers, and a ranked list of documents is obtained by beam search.

If an identifier is assigned to documents by learning, documents with similar semantics may be mapped to the same identifier, i.e., an identifier collision. This means that documents with conflicting identifiers cannot be ranked against each other. Existing studies (Wang et al., 2022; Tay et al., 2022) have appended additional digits (e.g., X-X-0, X-X-1) to address this problem, but such manually defined identifiers may distort the subtle semantics of the identifier. In contrast, we do not force the identifier to be unique, for the sake of semantic learning of identifiers.

We introduce a novel solution, collision ranking using identifier logits, to resolve the collision issue at inference time. Specifically, we utilize the logit of each step in generating a lexical identifier z from a query q (or a document d). The relevance between a query and a document using identifier logits is defined as follows:

rel_ID(q, d) = cos(w_q, w_d). (9)

For each query, we first rank the document identifiers via P(z|q) = ∏_{t=1}^{n} (q_t • e_{z_t}^⊤) / (∑_i q_t • e_i^⊤). If multiple documents share a single identifier, they are ranked using rel_ID(q, d). In this way, the collision problem is avoided without unnecessary intervention in the semantic learning of identifiers. In particular, there is only a negligible additional cost for ranking, since the weights of the document identifiers w_q, w_d are already used to compute P(z|d) and P(z|q).

Datasets
Natural Questions (NQ320k) (Kwiatkowski et al., 2019) consists of 320k query-document relevance pairs, 100k documents, and 7,830 test queries; it has been actively used by existing generative retrieval methods (Tay et al., 2022; Wang et al., 2022). We also follow the setup in Sun et al. (2023), splitting the test set into two subsets: seen test and unseen test. The seen test consists of queries whose annotated target documents are included in the train set, while the unseen test consists of queries for which no labeled documents are included in the train set. MS MARCO passage ranking (MS MARCO) (Nguyen et al., 2016) is a large-scale benchmark dataset with 8.8M passages collected from Bing's results and 1M real-world queries. We use the official development set, consisting of 6,980 queries, with the full corpus, i.e., 8.8M passages, following Ren et al. (2023). BEIR (Thakur et al., 2021) is a benchmark for zero-shot evaluation on diverse text retrieval tasks. Following Sun et al. (2023), we assess on Arguana (Arg) (Wachsmuth et al., 2018) and NFCorpus (NFC) (Boteva et al., 2016). For training data, we follow the published train data constructed by Wang et al. (2022) for NQ320k for a fair comparison. For MS MARCO, we use the MS MARCO training set, which consists of 500k queries, and randomly split off 1,000 queries for validation. For the generated queries of MS MARCO, we used the published predicted queries.1

Metrics
We report Recall and MRR for NQ320k, following existing work (Sun et al., 2023). MRR@10 and nDCG@10 are the official metrics of MS MARCO Passage Ranking and BEIR, respectively.
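A minimal sketch of MRR@10, the official MS MARCO passage-ranking metric mentioned above: the reciprocal rank of the first relevant passage within the top 10, averaged over queries. The data structures are illustrative:

```python
def mrr_at_10(rankings: dict[str, list[str]], qrels: dict[str, set[str]]) -> float:
    """rankings: query id -> ranked doc ids; qrels: query id -> relevant doc ids."""
    total = 0.0
    for qid, ranked in rankings.items():
        for rank, doc in enumerate(ranked[:10], start=1):
            if doc in qrels.get(qid, set()):
                total += 1.0 / rank
                break
    return total / len(rankings)
```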
Baselines
We compare GLEN with three types of baseline models: two sparse retrieval models (BM25 (Robertson and Walker, 1994) and DocT5Query (Nogueira and Lin, 2020)), four dense retrieval models (DPR (Karpukhin et al., 2020), ANCE (Xiong et al., 2021), Sentence-T5 (Ni et al., 2022a), and GTR (Ni et al., 2022b)), and six generative retrieval models. We categorize the generative retrieval methods following Table 1. (i) Static numeric identifier: DSI (Tay et al., 2022) uses a sequence-to-sequence model to generate numeric identifiers built by hierarchical k-means clustering. DSI-QG (Zhuang et al., 2023) and NCI (Wang et al., 2022) are built upon DSI while adopting, respectively, augmented data via query generation and a prefix-aware weight-adaptive decoder. (ii) Static lexical identifier: GENRE (Cao et al., 2021) utilizes a title as an identifier. SEAL (Bevilacqua et al., 2022) generates arbitrary n-grams to retrieve relevant documents, utilizing the FM-Index structure. TOME (Ren et al., 2023) performs retrieval by generating document URLs via a two-stage generation architecture. (iii) Dynamic numeric identifier: GENRET (Sun et al., 2023) learns how to assign numeric identifiers based on a discrete auto-encoding scheme. (iv) Dynamic lexical identifier: to our knowledge, GLEN is the first work to employ a dynamic lexical identifier. For details of the sparse and dense retrieval models, see Section A.1.

Implementation Details
We initialized GLEN with T5-base (Raffel et al., 2020). The batch size is set to 128, and the model is optimized for up to 3M steps and 30K steps using the Adam optimizer with learning rates of 2e-4 and 5e-5 for keyword-based ID assignment and ranking-based ID refinement, respectively. We use beam search with constrained decoding and a beam size of 100 for inference. For two-phase lexical index learning, we set the length of the document id to n = 3 and n = 7 after tuning among {2, 3, 5, 7, 10} for NQ320k and MS MARCO, respectively. λ_point and λ_dist are both set to 0.5 after tuning in {0, 0.25, 0.5, 1, 2, 4}. For τ, we set it to 1e-5 with temperature annealing. We set the number of negative documents per query, N_neg, to 8 after tuning in {0, 1, 2, 4, 8} and adopted in-batch negatives, where all passages for other queries in the same batch are considered negatives. Further details of the model architecture and training hyperparameters can be found in Section A.1.
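A minimal sketch of the constrained decoding mentioned above: a prefix trie of all valid document identifiers restricts which tokens the beam search may emit at each step. This is a generic sketch (it does not reproduce GLEN's hidden-state decoder input), and the trie-building input is a hypothetical list of identifier token-id sequences:

```python
class IdentifierTrie:
    def __init__(self, identifiers: list[list[int]]):
        self.root: dict = {}
        for ident in identifiers:
            node = self.root
            for tok in ident:
                node = node.setdefault(tok, {})

    def allowed_tokens(self, prefix: list[int]) -> list[int]:
        """Tokens that can legally follow the generated prefix."""
        node = self.root
        for tok in prefix:
            node = node.get(tok)
            if node is None:
                return []
        return list(node.keys())

# With Hugging Face generate(), such a trie plugs in via prefix_allowed_tokens_fn:
#   model.generate(..., num_beams=100,
#                  prefix_allowed_tokens_fn=lambda batch_id, ids:
#                      trie.allowed_tokens(ids.tolist()))
```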
Main Results. Evaluation on NQ320k: Table 2 presents the retrieval performance on NQ320k. The key observations are as follows: (i) GLEN outperforms the sparse and dense baselines and is competitive with generative retrieval methods.

[Table 2 appears here: R@1, R@10, and MRR@100 on the full NQ320k test set (7,830 queries), the seen test (6,075), and the unseen test (1,755), for sparse & dense retrieval models (beginning with BM25 (1994)) and generative retrieval models.]

Evaluation on MS MARCO: Table 3 shows the retrieval performance on the MS MARCO passage ranking set. GLEN yields a clear improvement over the best competing generative retrieval method and over BM25 (Robertson and Walker, 1994), by 15.6% and 9.3% in MRR@10, respectively. Existing generative retrieval methods still struggle to memorize the knowledge of the corpus and thus often fail to work on large-scale corpora (Pradeep et al., 2023). In contrast, GLEN works well on large-scale corpora owing to learning identifiers by directly learning the relevance of queries and documents.

Table 5: Performance comparison of GLEN with different solutions for collision at inference. Random ranking denotes a randomly ranked result for colliding documents. We report an average of 10k runs.

Zero-shot evaluation on BEIR: We investigate the generalization capability of GLEN via zero-shot performance on the BEIR (Thakur et al., 2021) dataset after training on NQ320k, reported in Table 4. GLEN shows the best average accuracy among generative retrieval methods, surpassing the best competing generative model by 38.8% on average. We also observe that the dynamic identifiers (i.e., GLEN and GENRET) consistently outperform static identifiers, showing that they are more effective at capturing the complex semantics of documents and generalize better in a zero-shot setting.

Among dynamic identifiers, GLEN outperforms GENRET in the zero-shot evaluation. The difference in performance stems from two primary distinctions: (i) identifier types and (ii) solutions to identifier collisions. GLEN can assign a generalized identifier to a new document by leveraging knowledge from the PLM, while GENRET may have difficulty allocating numeric identifiers to new documents. Besides, the same identifier can be assigned to semantically similar documents, especially when the documents are out-of-domain, since models are not trained to differentiate between them. To rank documents sharing the same identifier, GLEN introduces collision-free inference to break the tie, while GENRET cannot distinguish between them and places them randomly.

Table 6: Ablation study of GLEN. Note that keyword means keyword-based ID assignment, and annealing means temperature annealing for τ in Eq. (6).

In-depth Analysis. Effect of collision-free inference: As shown in Table 5, we validate the effectiveness of collision ranking by comparing it with random ranking, which randomizes the ranking of colliding documents. For a thorough comparison, we further construct a subset (i.e., collision) by collecting queries for which at least one labeled document is a colliding document. On the NQ320k and MS MARCO datasets, collision ranking improves performance over random ranking by 30.2% and 35.5%, respectively, on the collision query subset. The MS MARCO dataset has a higher ratio of collision queries due to its larger corpus relative to NQ320k (8.8M vs. 109K documents). This verifies that colliding documents are effectively ranked at no additional cost using the identifier weights, while the semantics of identifiers are well preserved.
Effect of prefix-aware dynamic negatives: Figure 2 depicts the effect of prefix-aware dynamic negative sampling over the course of training. The prefix-aware dynamic negative exhibits the most effective performance for robust ranking, showing 1.0% and 1.2% gains in R@1 over random negative sampling and BM25 negative sampling, respectively. Furthermore, prefix-aware negatives deliver a 1.2% improvement in R@1 compared to not using hard negatives. This highlights that the autoregressive nature of the model is effectively reflected via the prefix, and that dynamically sampled negatives are conducive to learning.

Ablation study: Table 6 reports the ablation results. (i) Removing keyword-based ID assignment degrades accuracy, indicating the benefit of leveraging the knowledge of the PLM. (ii) Temperature annealing for the identifier representation (in Eq. 6) contributes to stable training, yielding a 2.2% gain in R@1. (iii) Replacing the decoder input $d_{<t}$ with $z_{<t}$ drops accuracy by 12.6%, suggesting that using $d_{<t}$ as the decoder input enhances training stability. Finally, the proposed pairwise ranking loss and pointwise retrieval loss together contribute up to 2.6% in R@1 compared to adopting a single loss.

Case study: Table 7 exhibits a case study focusing on identifiers to elucidate how generative retrieval is performed. We take one query from NQ320k and show the retrieval results of GLEN and NCI. Our observations are as follows: (i) GLEN can assign the same identifier to documents with similar semantics (e.g., "G0 phase" and "G2 phase"), but it effectively ranks them via collision-free inference. This indicates that GLEN successfully learns the subtle semantics of document-identifier relationships. (ii) GLEN refines identifiers through the refinement phase, changing the keyword ID "phase-cellsnutri" to the GLEN ID "phase-phase-cell". (iii) The static numeric identifiers in NCI fail to reflect the semantics of the documents: although some documents are semantically similar (e.g., "G0 phase" and "G2 phase"), they have completely different identifiers.

Conclusion. In this paper, we proposed a novel lexical index learning method, namely Generative retrieval via LExical iNdex learning (GLEN). To effectively tackle the critical challenges of generative retrieval, we adopt a dynamic lexical identifier learning framework that mitigates (i) the discrepancy between the knowledge of pre-trained language models and identifiers, and (ii) the discrepancy between training and inference. GLEN enjoys the benefits of a dynamic lexical document identifier via a carefully devised two-phase index learning scheme and collision-free inference. To our knowledge, it is the first work to introduce a dynamic lexical identifier for generative retrieval. Experimental results demonstrate that GLEN achieves state-of-the-art or competitive performance among generative retrieval methods.
Limitations. This work proposes a new generative retrieval approach, GLEN, that dynamically learns lexical identifiers. Although we verified the performance of GLEN on a large corpus, it still exhibits a performance gap with longstanding conventional retrieval methods (e.g., ColBERTv2 (Santhanam et al., 2022), LexMAE (Shen et al., 2023)), which still hold state-of-the-art performance. This implies that generative retrieval still faces limitations in learning large-scale corpora; it may require larger models or new training schemes, leaving many research problems to be explored. In addition, we experimentally verified the proposed model in a zero-shot setting. We showed that it outperforms generative retrieval methods but still performs worse than sparse retrieval, suggesting that generative retrieval still suffers from limited generalization compared to well-designed dense or sparse retrieval models.

Ethics Statement. This work complies with the ACL code of ethics. The scientific artifacts we used are available for research under permissive licenses, and our use of these artifacts adheres to their intended use.

A.1.1 Datasets. BEIR (Thakur et al., 2021) is an evaluation benchmark of 18 publicly available datasets from diverse text retrieval tasks and domains, widely used for evaluating the generalization capabilities of models. The tasks of Arguana and NFCorpus are argument retrieval and bio-medical information retrieval, respectively.

A.1.2 Metrics. To measure effectiveness, we use widely accepted metrics for information retrieval, including recall, mean reciprocal rank (MRR), normalized discounted cumulative gain (nDCG), and mean average precision (MAP), with retrieval size K. Recall is defined as $\text{Recall@K} = \frac{1}{k} \sum_{i=1}^{K} rel_i$, where $i$ is the position in the list, $k$ is the number of relevant documents, and $rel_i \in \{0, 1\}$ indicates whether the i-th document is relevant to the query. MRR is defined as $\text{MRR} = \frac{1}{|Q|} \sum_{i=1}^{|Q|} \frac{1}{rank_i}$, where $rank_i$ refers to the rank position of the first relevant document for the i-th query. nDCG considers the order of retrieved documents in the list. DCG@K is defined as $\text{DCG@K} = \sum_{i=1}^{K} \frac{rel_i}{\log_2(i+1)}$, where $rel_i$ is the graded relevance of the result at position $i$. nDCG is the ratio of DCG to the maximum possible DCG for the query, which occurs when the retrieved documents are presented in decreasing order of relevance.

A.1.3 Model architecture. GLEN is based on a transformer encoder-decoder architecture. The number of transformer layers is 12, the hidden size is 768, the feed-forward layer size is 3072, and the number of self-attention heads is 12 for both the encoder and decoder. We implemented GLEN in PyTorch based on the Tevatron library (Gao et al., 2023) and adopted the gradient cache (Gao et al., 2021) to accommodate large batch sizes with limited hardware memory.

A.1.4 Baselines. BM25 (Robertson and Walker, 1994) is the traditional sparse retrieval model using lexical matching. DocT5Query (Nogueira and Lin, 2020) extends the document terms by generating relevant queries from the documents using T5 (Raffel et al., 2020). DPR (Karpukhin et al., 2020) is a bi-encoder model trained with in-batch negatives, which retrieves documents via nearest neighbor search. ANCE (Xiong et al., 2021) is a bi-encoder model trained with asynchronously selected hard training negatives. Sentence-T5 (Ni et al., 2022a) is similar to DPR but utilizes T5 (Raffel et al., 2020) as a backbone.

Table 8: Ablation study of GLEN on the ranking-based ID refinement phase.

                   NQ320k                     MS MARCO
                   R@1    R@10   MRR@100      MRR@10
  w/o refinement   66.9   84.9   73.6         6.5
  w/ refinement    69.1   86.0   75.4         20.1
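Looping back to the metric definitions in Section A.1.2, a compact reference implementation is shown below; it is illustrative only, not the evaluation code used here.

```python
import numpy as np

def recall_at_k(rels, k, num_relevant):
    """rels: binary relevance (rel_i) of retrieved docs in rank order."""
    return sum(rels[:k]) / num_relevant

def mrr(first_relevant_ranks):
    """first_relevant_ranks: rank_i of the first relevant doc per query."""
    return sum(1.0 / r for r in first_relevant_ranks) / len(first_relevant_ranks)

def ndcg_at_k(rels, k):
    """rels: graded relevance in retrieved order; the ideal ordering is
    the same relevances sorted in decreasing order."""
    def dcg(rs):
        return sum(r / np.log2(i + 2) for i, r in enumerate(rs[:k]))
    ideal = dcg(sorted(rels, reverse=True))
    return dcg(rels) / ideal if ideal > 0 else 0.0
```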
GTR (Ni et al., 2022b) is a scaled-up bi-encoder model with a fixed-size bottleneck layer based on Sentence-T5, and is a state-of-the-art dense retrieval model. For BM25, we followed the official guide for reproduction. For NQ320k and BEIR, we refer to the results reported by Sun et al. (2023) and Ren et al. (2023). For MS MARCO, we refer to the results from Pradeep et al. (2023). The results of NCI are obtained from the checkpoint publicly released by Wang et al. (2022).

A.1.5 Reproducibility. The weight decay is 1e-4. We set the max sequence length for a query to 32, the max sequence length for a document to 156, and the dropout rate to 0.1. We conducted all experiments on a desktop with 4 NVIDIA RTX V100 GPUs, 512 GB of memory, and a single Intel Xeon Gold 6226.

A.2 Effect of Ranking-based ID Refinement. Table 8 reports the effect of the ranking-based ID refinement phase of GLEN on NQ320k and MS MARCO. We observed that the refinement phase led to a performance gain of 3.3% on NQ320k and 209.4% on MS MARCO. This underscores the significance of the refinement phase, which trains on both the pairwise ranking loss and the pointwise retrieval loss, as a key component of dynamic identifier learning. The ranking-based ID refinement phase is especially effective on the large corpus (i.e., MS MARCO), since learning the mapping between documents and predefined identifiers becomes more challenging as the number of documents increases.

Figure 1: Overview of training and inference for GLEN. For training, the keyword-based ID assignment phase is performed, which learns identifiers via self-supervised signals, followed by the ranking-based ID refinement phase to learn identifiers dynamically. For inference, GLEN generates identifiers for a query, and the documents are ranked with the logits when a collision occurs. The number below each identifier token indicates the logit for that token.

Recently, Sun et al. (2023) proposed an identifier learning framework to overcome the limitations of static identifiers. However, numeric identifiers inherently suffer from difficulty leveraging the knowledge of the PLM due to the gap between natural language and numeric values. Lexical identifier: Bevilacqua et al. (2022) proposed a method that treats n-grams in a document as identifiers using the FM-Index structure. Zhou et al. (2022) and Ren et al. (2023) utilized URLs as document identifiers, while Chen et al. (2022), Lee et al. (2023), and Cao et al. (2021) defined a title as an identifier. These methods can leverage the knowledge of PLMs to decode identifiers, enjoying the benefit of the pre-trained vocabulary space. However, external information such as URLs and titles may not exist in all datasets and may not adequately represent the document. To overcome these limitations, we define lexical identifiers by extracting keywords from documents and dynamically refine them by directly optimizing query-document relevance.

Figure 2: Performance comparison of GLEN depending on the negative sampling strategy by training step, for the ranking-based ID refinement phase on NQ320k.

Table 1: Category of existing generative retrieval models based on (i) identifier types and (ii) identifier learning strategies. GLEN introduces a dynamic lexical identifier.

Table 2: Performance comparison for the proposed method and baseline models on NQ320k.
The best generative retrieval model is marked in bold, and the second best is underlined. The number in parentheses indicates the number of queries. We refer to the results of baselines reported by Sun et al. (2023) and Ren et al. (2023). Results that are not available are denoted as '-'.

Table 3: Performance comparison for the proposed method and baseline models on the MS MARCO passage dev set. The best generative retrieval model is marked in bold, and the second best is underlined. We refer to the results of baselines reported by Pradeep et al. (2023).

Table 4: Zero-shot performance for the proposed method and baseline models on the BEIR datasets. The best generative retrieval model is marked in bold, and the second best is underlined. Average denotes the average accuracy over the two datasets. We refer to the results of baselines reported by Sun et al. (2023).

Table 7: A retrieval example of GLEN and NCI on NQ320k. The number in parentheses denotes the rank of a document for each model. Keyword ID is the extracted identifier used in the keyword-based ID assignment phase of GLEN. Note that the tokenization process for GLEN ID and Keyword ID is simplified.
Ecological diversity and co-occurrence patterns of bacterial community through soil profile in response to long-term switchgrass cultivation

Switchgrass (Panicum virgatum L.) is a cellulosic biofuel feedstock, and its effects on bacterial communities in deep soils remain poorly understood. To reveal the responses of bacterial communities to long-term switchgrass cultivation through the soil profile, we examined the shift of soil microbial communities across depth profiles of 0–60 cm in five-year switchgrass cultivation and fallow plots. Illumina sequencing of the 16S rRNA gene showed that switchgrass cultivation significantly increased microbial OTU richness but not Shannon diversity; however, there was no significant difference in the structure of microbial communities between switchgrass cultivation and fallow soils. Both switchgrass cultivation and fallow soils exhibited significant negative vertical spatial decay of microbial similarity, indicating that soils more distant in vertical depth had more dissimilar communities. In particular, switchgrass cultivation soils showed more beta-diversity variation across the soil depth profile. Through network analysis, more connections and closer relationships among microbial taxa were observed in soils under switchgrass cultivation, suggesting that microbial co-occurrence patterns were substantially influenced by switchgrass cultivation. Overall, our study suggests that five-year switchgrass cultivation generated more beta-diversity variation across soil depth and more complex inter-relationships among microbial taxa, although it did not significantly reshape the structure of the soil microbial community.

Switchgrass (Panicum virgatum L.) is a perennial C4 grass with high photosynthetic efficiency and biomass production potential 1. It has received considerable attention during the last several decades since it was recognized as a promising crop for biofuel production by the US Department of Energy (DOE) Herbaceous Energy Crops Program (HECP) [2][3][4]. The widespread adoption of this perennial biofuel crop could shift land use towards renewable, biomass-based energy systems and subsequently influence soil ecosystems 5. In particular, soil microbes can respond rapidly to environmental changes caused by plants 6,7. Plants can regulate soil microbial community structure via root exudates 8, and switchgrass can release up to 20% of its fixed carbon to the rhizosphere through exudation 9. Although agronomic knowledge of switchgrass has grown considerably 2,4, its influence on the soil microbial community remains poorly understood. Soil microbes play fundamental roles in soil biogeochemical processes of the carbon, nitrogen, and inorganic element cycles 10. The vast majority of research on soil microbial communities has focused on the top 15 cm of the soil column or less, so our understanding of soil microbes is largely limited to surface horizons 11,12. Microbial biomass often decreases exponentially with depth and is greatest in surface soil, yet there is still a large population of microbes in the subsoil (below 15 cm) because of the large soil volume at depth.

Assembly patterns of bacterial community. Microbial alpha-diversity was measured using observed OTU richness and the Shannon-Wiener index. Observed OTU richness was significantly higher in switchgrass cultivation soil samples than in fallow soils (Wilcoxon rank-sum test, P < 0.05; Supplementary Fig. S2).
Shannon index, however, did not differ significantly between the two groups. On the other hand, Shannon index significantly decreased with soil depth in switchgrass cultivation plots (P < 0.05), whereas this trend was not significant in fallow plots (Fig. 2). Observed OTU richness did not change significantly with soil depth in either switchgrass cultivation or fallow plots.

The CAP analysis based on Bray-Curtis distance (Fig. 3A) demonstrated that the bacterial community varied with depth, which was confirmed by ANOSIM (P < 0.05). Canonical discriminant analysis (CDA) of the predominant microbial taxa (relative abundance > 0.5%) at the genus level revealed taxonomic associations with soil depth (Fig. 3B). Different layers of the soil profile were distinguished by specific microbial taxa. In layer 1, Aquicella, Kaistobacter, Sphingomonas and Gemmata were the abundant genera; Steroidobacter and Candidatus Nitrososphaera were dominant in soils of layer 2; Lysobacter, Pirellula, Nitrospira and Planctomyces were dominant in layer 3; and Halomonas, Shewanella and Ruminococcus were abundant genera in soils of layer 4. There was no significant difference in the structure of microbial communities between switchgrass cultivation and fallow soils, either across the integrated soil profile (ANOSIM P = 0.113; PERMANOVA P = 0.203) or within any single layer. However, some significant taxonomic differences between the two groups were detected by Wilcoxon rank-sum test (P < 0.05) based on the top 1000 most abundant OTUs (Supplementary Figs S3 and S4). For example, Novosphingobium, Fluviicola, Flavobacterium, Alcanivorax, Shewanella and Sorangium were significantly more abundant in soils with switchgrass cultivation, whereas the abundance of the families Rhodospirillaceae and Gaiellaceae, and the genera Gemmata and Pilimelia, increased significantly in fallow soils.

Vertical spatial variations of bacterial community. To investigate the vertical spatial variation of the bacterial community, we estimated the relationships between soil depth and bacterial community similarity based on Bray-Curtis distance (Fig. 4). Significant negative vertical spatial decay of bacterial community similarity was found in linear regressions for both switchgrass cultivation and fallow soils, indicating that soils more distant in vertical depth had more dissimilar communities. In particular, the switchgrass cultivation soils had a steeper slope, indicating more beta-diversity variation with increasing vertical depth under switchgrass cultivation. To further explore vertical spatial variation of the dominant bacterial taxa, we estimated correlations between the relative abundance of these taxa and soil depth via the Pearson coefficient (Supplementary Table S1 and Table 2). In fallow soils, the phylum Crenarchaeota and the classes Gemmatimonadetes, Thaumarchaeota, Saprospirae and Cytophagia were significantly and negatively correlated with soil depth, while the class Gammaproteobacteria was positively correlated with soil depth. In switchgrass cultivation soils, as soil depth increased, the relative abundances of the phyla Acidobacteria, Verrucomicrobia and Armatimonadetes significantly decreased, while the abundances of Firmicutes and Cyanobacteria significantly increased.
At the class level, Gammaproteobacteria, Clostridia and Bacteroidia were significantly and positively correlated with soil depth, while Acidobacteria-6, Betaproteobacteria, Chloracidobacteria, Pedosphaerae, Cytophagia and Saprospirae were negatively correlated with soil depth. Additionally, there were more significantly correlated taxa in switchgrass cultivation soils than in fallow soils, confirming greater beta-diversity variation under switchgrass cultivation.

Co-occurrence network analysis. Soil microbial networks were generated for switchgrass cultivation and fallow soils, respectively (Fig. 5). Topological properties were calculated to describe the complex pattern of inter-relationships among nodes and to distinguish differences in taxa correlations between the two groups of soils (Table 3). The structural properties of the switchgrass network were greater than those of the fallow network, indicating more connections and closer relationships among microbial taxa under switchgrass cultivation. Based on betweenness centrality scores, the top five genera identified as keystone taxa were Arenimonas, Clostridium, Thiobacillus, Lysobacter and Nitrospira in the fallow network, and Shewanella, Acinetobacter, Rhodoplanes, Aeromonas and Bacteroides in the switchgrass network. The keystone taxa thus differed greatly between the two networks. Furthermore, betweenness centrality in the switchgrass network was much higher than in the fallow network (P < 0.05, Wilcoxon rank-sum test; Supplementary Fig. S5), confirming that the switchgrass network has more complex inter-relationships among microbial taxa.

Table 2. Vertical spatial variations of the dominant microbial taxa at the class level in fallow and switchgrass soils; correlations between the relative abundance of these taxa and soil depth were estimated via the Pearson coefficient.

Figure 5. Networks of co-occurring microbial genera based on correlation analysis for fallow (A) and switchgrass cultivation (B) soils. A connection stands for a strong (Spearman's ρ > 0.6) and significant (P < 0.01) correlation. The size of each node is proportional to the relative abundance; the thickness of each connection between two nodes (edge) is proportional to the value of the Spearman correlation coefficient. The nodes are colored by phylum.

Discussion. As a perennial C4 grass with high photosynthetic efficiency and biofuel production potential, widespread planting of switchgrass could provide great economic value. However, whether switchgrass cultivation influences soil ecosystems, particularly in deep soil profiles, remains poorly understood. The present study aimed to reveal the responses of microbial communities to long-term switchgrass cultivation within soil profiles of 0-60 cm. Our results showed that switchgrass cultivation did not significantly change the structure of the soil microbial community, but generated more beta-diversity variation across soil depth and more complex inter-relationships among microbial taxa.

Plants can regulate soil microbial community structure through root architecture, exudates, and mucilage 8. Rhizodeposits from plant roots appear to be a major driving force in the regulation of microbial diversity and activity [29][30][31]. A previous study revealed that switchgrass could enrich, in the rhizosphere, specific microbial species that are able to utilize root exudates 5. However, we did not observe a significant difference in bacterial communities between switchgrass cultivation and fallow soils.
One explanation is that the zone affected by roots is small, so plants may not be able to influence the whole soil ecosystem. In our study, the soils were obtained from a five-year switchgrass cultivation area. Thus, our results suggest that long-term switchgrass cultivation does not significantly change the structure of soil microbial communities. In other respects, switchgrass cultivation caused some specific taxonomic differences relative to the fallow soils. The taxa enriched in soils under switchgrass cultivation were mainly affiliated with Proteobacteria, Bacteroidetes and Acidobacteria (Supplementary Fig. S3). A previous study reported that Proteobacteria and Acidobacteria were the dominant members in switchgrass rhizosphere soils 5. In particular, Proteobacteria are active utilizers of fresh photosynthate, whereas Acidobacteria prefer complex organic matter rather than simple root-derived dissolved organic carbon 5. Soil Bacteroidetes are typically copiotrophic and are most abundant in nutrient-rich soils, including rhizosphere soils 32. Additionally, we found that the genera Novosphingobium, Fluviicola and Flavobacterium were enriched under switchgrass cultivation. Novosphingobium and Flavobacterium have been reported as dominant root-exudate utilizers in the switchgrass rhizosphere 32, and Fluviicola was isolated as an endophytic bacterium through the addition of plant extract to nutrient media 33. Root exudates are a key factor in shaping the microbiome, and the ability to utilize root exudates is an important trait that allows microorganisms to be competitive in the rhizosphere 5. Moreover, soil microbiome-plant feedback mechanisms are closely associated with ecosystem function and primary productivity in terrestrial habitats 34,35. Switchgrass has been reported to require much less fertilizer input and to generate high yields compared to many other crops 36,37. The microbial taxa enriched in switchgrass cultivation soils were selectively assembled and might benefit plant growth and health. These beneficial microbes, usually referred to as plant growth-promoting rhizobacteria (PGPR), might supply nutrients for the high annual biomass production of switchgrass. In our study, bacteria enriched under switchgrass cultivation belonging to Flavobacterium, Xanthomonadaceae and Pseudomonadaceae have been reported as PGPR 38,39.

Previous work demonstrated that the diversity of microorganisms typically decreases with soil depth 12,14. In the present study, we found that microbial Shannon diversity significantly decreased with soil depth only in switchgrass cultivation plots, not in fallow plots. Switchgrass cultivation might supply nutrients via root exudates, which may vary across the soil profile with root length. This is supported by other work from our lab, conducted in the same experimental area (manuscript submitted), which found that soil organic carbon was significantly higher in switchgrass cultivation soils than in fallow soils across soil layers (Supplementary Fig. S6). This could also explain why microbial richness was significantly higher under switchgrass cultivation. Soil depth had a highly significant effect on the structure of microbial communities, especially in the switchgrass cultivation plots.
Both switchgrass cultivation and fallow soils exhibited significant negative vertical spatial decay of microbial community similarity, and the switchgrass cultivation soils had a steeper slope (Fig. 4). This indicates that soils more distant in vertical depth had more dissimilar communities, and that switchgrass cultivation generated more beta-diversity variation across soil depth. Switchgrass cultivation could provide different kinds of nutrients via root exudates, resulting in complex environmental heterogeneity throughout the soil depth. A higher amplitude of variation in environmental conditions could explain the high variation in beta-diversity 40. Previous research showed that subsoil microbial communities are distinct from topsoil communities 11,12,17. In the present study, the relative abundance of Firmicutes, Cyanobacteria, Gammaproteobacteria and Bacteroidia increased with soil depth; some of these changes were similar to other studies 11,20,41. Firmicutes can survive in extreme environments, and Cyanobacteria generally occur in harsh desert environments 42. Gammaproteobacteria are likely to promote plant and root growth by fixing nitrogen and producing growth hormones 43. On the other hand, the relative abundance of Acidobacteria, Verrucomicrobia, Crenarchaeota, Betaproteobacteria and Gemmatimonadetes decreased as soil depth increased. Previous work reported that Acidobacteria are negatively correlated with pH, which increases with soil depth 14,44,45. Crenarchaeota, dominated by the class Thaumarchaeota in our study, are widely speculated to drive autotrophic nitrification 46. Soil Verrucomicrobia are oligotrophic and able to grow under conditions of low C availability 47, although the ecological niches inhabited by Crenarchaeota and Verrucomicrobia remain largely undetermined 12.

Table 3. Topological properties of co-occurring networks obtained from switchgrass cultivation and fallow soils.

Although the entire soil microbial community was not significantly changed, microbial inter-relationships were substantially influenced by switchgrass cultivation. Through co-occurrence network analysis, we found that the structural properties of the switchgrass network were greater than those of the fallow network, indicating more connections and closer relationships among microbial taxa under switchgrass cultivation. Comparing network-level topological features can provide insight into variation in co-occurrence patterns between different communities 48. Additionally, betweenness centrality in the switchgrass network was much higher than in the fallow network, confirming the more complex inter-relationships of microbial taxa under switchgrass cultivation. By discerning the modules maintaining connectivity in a network, betweenness centrality represents the potential of an individual node to influence the interactions of other nodes, and it has been used to define keystone species in ecosystems [49][50][51][52]. A high betweenness centrality value indicates a core, central location of a node in the network, whereas a low value indicates a more peripheral location 48. Switchgrass can secrete root exudates into the soil ecosystem, including sugars, amino acids and other organic acids 53, which can be readily utilized by complex microbial communities. This is supported by the higher values of soil organic carbon under switchgrass cultivation (Supplementary Fig. S6).
For microorganisms, wide niches can support the coexistence of species within communities 54. In this case, plants supply carbon (C) to soil, generating intense microbial activity and interactions 55. In a previous study, rhizosphere networks for wild oat were more complex than those in surrounding soils, indicating that the rhizosphere has a greater potential for interactions and niche-sharing 56. Roots might promote the development of niches populated by dominant taxa, which would concurrently yield greater interactions and greater co-variation due to shared niches, and overall result in more complex co-occurrence patterns over time. Conversely, complex microbial interactions, including cooperative or syntrophic interactions among PGPRs, might also benefit plant growth and health. Microorganisms can communicate with each other through various signal molecules 57. In particular, rhizosphere microorganisms are more competent at producing signal molecules 58, which might enhance microbial feedback with plants.

Conclusion. Overall, our results showed that soil depth had a highly significant effect on bacterial communities. Both switchgrass cultivation and fallow soils exhibited significant negative vertical spatial decay of bacterial community similarity, and some dominant taxa changed regularly across the soil profile. However, five-year switchgrass cultivation did not significantly change the structure of the soil bacterial community, although it generated more beta-diversity variation across soil depth. Furthermore, bacterial co-occurrence patterns were substantially influenced by switchgrass cultivation, with more connections and closer relationships among bacterial taxa observed in soils under switchgrass cultivation. In future work, more complete microbial taxonomic and functional data should be integrated to better understand the microbial ecology of the soil profile and its response to long-term switchgrass cultivation.

Materials. Study area and soil sampling: The switchgrass experiment was carried out over the period 2011-2015 in an experimental area of Northwest A&F University, located in the Guanzhong plain of Shaanxi Province (Fig. 1). The soil series was a clay loam. Switchgrass (cultivars Cave-in-Rock and Sunburst) plots were established in September 2011, where winter wheat had previously been cultivated. Switchgrass was sown into the plots at a seeding rate of 11.2 kg pure live seed ha−1 and fertilized with 56 kg N ha−1. The fallow plots were adjacent to the switchgrass plots. Both plot types were rain-fed, with no irrigation. After planting, no weed control and no additional fertilizers were applied. The research plots for switchgrass and fallow were 5 × 6 m and replicated three times. Soil samples were randomly collected from the field in each switchgrass and fallow plot on October 15, 2015. Soil cores were collected with a core sampler at four depths (0-10, 10-20, 20-30 and 30-60 cm). In total, twenty-four soil samples (two plot types × four depths × three replicates) were collected, transported to the laboratory in sterile plastic bags on dry ice, and then stored at −80 °C for microbial analyses.

DNA extraction and purification: Community DNA was extracted from 0.5 g of soil using the MP FastDNA® SPIN Kit for soil (MP Biochemicals, Solon, OH, USA) according to the manufacturer's protocol.
The V4 hypervariable region of the 16S rRNA gene was amplified using primers 515F (5′-GTG CCA GCM GCC GCG GTA A-3′) and 806R (5′-GGA CTA CHV GGG TWT CTA AT-3′), with the forward primer modified to contain a unique 6 nt barcode at the 5′ end. All PCR reactions were performed in a 30 μl system with 15 μL of Phusion® High-Fidelity PCR Master Mix (New England Biolabs), 0.2 μM of forward and reverse primers, and about 10 ng of template DNA. The thermal cycling conditions were as follows: initial denaturation at 98 °C for 1 min, followed by 30 cycles of denaturation at 98 °C for 10 s, annealing at 50 °C for 30 s, and extension at 72 °C for 60 s, with a final extension at 72 °C for 5 min after cycling was complete. All samples were amplified in triplicate, and no-template controls were included in all steps of the process. Triplicate PCR amplicons were pooled and mixed with the same volume of 1× loading buffer (containing SYBR Green), then examined by electrophoresis in a 2% (w/v) agarose gel. PCR products with bright bands were mixed in equal density ratios and purified with the GeneJET Gel Extraction Kit (Thermo Scientific, MA, USA). The purified PCR amplicons were sequenced on the Illumina HiSeq 2500 platform at Novogene Bioinformatics Technology Co., Ltd. (Beijing, China).

Sequence analysis of the 16S rRNA amplicons: Paired-end reads were merged using FLASH (V1.2.7, http://ccb.jhu.edu/software/FLASH/) and filtered according to the literature 59. Chimeras were detected and removed from the acquired sequences using USEARCH based on the UCHIME algorithm 60. Sequences were assigned to each sample using the unique barcodes. Sequence analysis was performed with the UPARSE software package using the UPARSE-OTU and UPARSE-OTUref algorithms. Operational taxonomic units (OTUs) were clustered at the 97% similarity level 61, and singletons were removed from downstream analyses. The representative sequence for each OTU was assigned to its taxonomic group using the RDP classifier at an 80% confidence threshold 59.

Data analyses: Alpha and beta diversity were calculated based on 29,126 reads per sample (the minimum number of sequences required to normalize differences in sequencing depth) using QIIME (http://qiime.org/index.html), with multiple indices (observed species and the Shannon-Wiener index) and the Bray-Curtis distance between samples. Constrained analysis of principal coordinates (CAP) based on Bray-Curtis distance was performed to investigate the relationship between microbial community composition and soil depth under switchgrass and fallow plots. Canonical discriminant analysis (CDA) was used to identify the taxa associated with different soil layers, based on genera with relative abundance levels >0.5%. ANOSIM 62 and permutational multivariate analysis of variance (PERMANOVA) 63 were performed to determine whether the groups of samples differed significantly in their species diversity. The vertical spatial decay of microbial similarity was calculated as the linear least-squares regression of microbial similarity (1 − Bray-Curtis dissimilarity) on soil depth distance. Networks were used to explore the co-occurrence patterns of microbial taxa within switchgrass and fallow soils. Genera with relative abundances above 0.05% were selected.
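For illustration, the Shannon index and the vertical spatial decay regression described above can be computed as in the sketch below (in Python rather than the QIIME/R tooling actually used; names are illustrative):

```python
import numpy as np
from scipy.spatial.distance import braycurtis
from scipy.stats import linregress

def shannon(counts):
    """Shannon-Wiener index for one sample's OTU counts."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -(p * np.log(p)).sum()

def vertical_decay(otu_table, depths):
    """otu_table: samples x OTUs count matrix; depths: midpoint depth (cm)
    of each sample. Regress community similarity (1 - Bray-Curtis
    dissimilarity) on the vertical distance between all sample pairs."""
    sims, dists = [], []
    n = len(depths)
    for i in range(n):
        for j in range(i + 1, n):
            sims.append(1 - braycurtis(otu_table[i], otu_table[j]))
            dists.append(abs(depths[i] - depths[j]))
    return linregress(dists, sims)  # a negative slope indicates decay
```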
In the network analysis, a Spearman correlation between two genera was considered statistically robust if the correlation coefficient (ρ) was >0.6 and the P-value was <0.01 22. All robust correlations identified from pairwise comparisons of genus abundances form a correlation network in which each node represents one genus and each edge stands for a strong and significant correlation between nodes. To describe the topology of the resulting networks, a set of measures (number of nodes and edges, average path length, network diameter, average degree, clustering coefficient and modularity) was calculated using the igraph package 64 in the R environment, and networks were visualized using the interactive platform Gephi [65][66][67]. The betweenness centrality value of each node was estimated; this topological feature indicates the relevance of a node in holding together communicating nodes and was used to define keystone species 49,52. All statistical analyses were performed in the R environment (http://www.r-project.org) unless otherwise indicated.
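A sketch of the network construction and keystone-taxon identification just described (here in Python with networkx for brevity, rather than the igraph/Gephi workflow actually used) might look like:

```python
import networkx as nx
from scipy.stats import spearmanr

def cooccurrence_network(abund, genus_names, rho_cut=0.6, p_cut=0.01):
    """abund: samples x genera abundance matrix (genera pre-filtered to
    >0.05% relative abundance, more than two genera assumed). Edges follow
    the criterion in the text: Spearman's rho > 0.6 and P < 0.01."""
    rho, p = spearmanr(abund)  # pairwise correlations across columns
    g = nx.Graph()
    g.add_nodes_from(genus_names)
    n = len(genus_names)
    for i in range(n):
        for j in range(i + 1, n):
            if rho[i, j] > rho_cut and p[i, j] < p_cut:
                g.add_edge(genus_names[i], genus_names[j], weight=rho[i, j])
    # Betweenness centrality flags candidate keystone taxa (top five).
    keystones = sorted(nx.betweenness_centrality(g).items(),
                       key=lambda kv: kv[1], reverse=True)[:5]
    return g, keystones
```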
Subjective versus Objective Incentives and Teacher Productivity

A central challenge facing firms is how to incentivize employees. While objective, output-based incentives may be theoretically ideal, in practice they may lead employees to reduce effort on non-incentivized outcomes and may fail in settings where effort is weakly tied to output. We study the effect of subjective incentives (manager performance evaluation) and objective incentives (test score-based) relative to no incentives for teachers using an RCT in 230 Pakistani schools. First, we show that subjective and objective incentives both increase test scores, with effects of similar magnitude. However, objective incentives decrease non-test-score student outcomes relative to subjective incentives. Second, we show that teachers' effort responses differ sharply under the two schemes, with attendance increasing under subjective incentives and teaching quality decreasing under objective incentives. Finally, we rationalize these effects through the lens of a moral hazard model with multi-tasking. We use within-treatment variation to isolate the causal effect of contract noise and distortion and show that these channels explain most of our reduced-form effects.

RISE Working Paper 22/092
March 2022

Introduction

How should schools incentivize teachers when effort is non-verifiable or non-contractible? Contract theory provides an answer: the second best is to incentivize on outcomes of the employee's production function. However, this introduces two new problems, distortion (over-incentivizing measurable outcomes while ignoring others) and noise (outcomes are a noisy function of employee effort).

How do most firms outside of the school sector actually incentivize workers? They use manager-discretionary (subjective) incentives rather than outcome-based (objective) ones. Raises, promotions, and terminations are subject to manager discretion for most employees. In the US, 85% of full-time employees have at least one aspect of their compensation determined by their manager, and 90% of teacher performance evaluations have a subjective component (Engellandt and Riphahn, 2011; National Center for Education Statistics, 2011). Despite the prevalence of subjective incentives, there is limited causal evidence on their effects and on whether they could work in the teaching setting.

In this paper, we ask two questions: What is the effect of subjective versus objective incentives on teacher productivity? Are subjective incentives able to help alleviate the problems of noise and distortion that often plague objective incentives? We answer these questions by conducting an 18-month randomized controlled trial with 234 private schools in Pakistan. We randomize schools to provide core teachers with one of three contracts: (i) control: flat raise, in which all teachers receive a raise of 5% irrespective of performance; (ii) treatment 1: subjective performance raise, in which teachers receive a raise of 0-10% based on their manager's rating of their performance;1 or (iii) treatment 2: objective performance raise, in which teachers receive a raise of 0-10% based on their students' midyear and end-of-year test performance (Barlevy and Neal, 2012). Both treatments are within-school tournaments and have the same distribution of raise thresholds. These similarities allow us to isolate the effort response to changing only the performance metric (manager rating versus test score) while holding other features of the incentive structure constant.
We use detailed administrative, survey, test, and classroom observation data to understand each contract's effect on teacher effort and student outcomes. Student outcomes are measured along two dimensions: test scores and socio-emotional development. Test score data come from an endline test conducted by the research team one month after the end of the contract. Students are tested in core subjects (English, Urdu, math, science, and economics) in grades 4-13. A variety of question types and sources allows us to test whether effects are driven by memorization-type questions. Socio-emotional development is measured along four dimensions: love of learning, ethical behavior, inquisitiveness, and global competency. These dimensions are measured using self-report survey items drawn from several psychological indices used for measuring socio-emotional development in children.2

1 Managers are generally principals or vice-principals and spend about a third of their time on employee management tasks, such as observations, feedback, and professional development.
2 Items are drawn from the National Student Survey, Learning and Study Strategies Inventory, Big Five (children's scale), Eisenberg's Child-Report Sympathy Scale, Bryant's Index of Empathy Measurement, Afrobarometer, World Values Survey, and Epistemic Curiosity Questionnaire.

In our first main result, we show that both subjective and objective contracts are equally effective at increasing test scores. Both contracts increase test scores by 0.09 sd, which is very similar to the average effects found in meta-analyses of performance pay for teachers (Pham et al., 2020). These results are consistent across subjects and grades and are not driven by rote-memorization-type questions. However, in contrast to the test score results, we find that objective and subjective incentives have different effects on other outcomes. Objective incentives negatively affect student socio-emotional development, including a significant decrease in love of learning and an increased likelihood that students say they want to change schools. Subjective incentives result in a small positive effect on overall socio-emotional skills. These combined effects suggest that teachers under objective contracts focused exclusively on improving students' academic performance, at the cost of more well-rounded development, whereas teachers under the subjective contract were able to prioritize both areas.

To understand teachers' behavioral responses to these incentive contracts, we compile rich data on teacher behavior inside and outside the classroom. We record 6,800 hours of classroom footage and review it using a standard classroom observation rubric (Pianta et al., 2012). The rubric captures teacher behavior along dozens of dimensions, from the use of punitive discipline to the proportion of student versus teacher talk time. The rubric also measures the amount of time spent on test-taking or test-preparation activities. To measure effort outside the classroom, we have teachers complete a time-use questionnaire. Combined, these two data sources allow us to understand teacher behavior change under subjective versus objective incentives.

In our second main result, we find that both subjective and objective incentives lead to changes in classroom practices. As one might expect, subjective incentives spur actions that managers value, and objective incentives spur actions that most quickly and easily translate into test score gains. Subjective incentives lead to increased targeting of individual student needs within the classroom and increased use of technology in the classroom. Both teaching practices are ones principals identified as markers of high-quality teaching.
Objective incentive schools see a five-fold increase in class time spent on test preparation activities. These teachers also exhibit more negative discipline techniques, such as yelling at students.

Our reduced-form effects suggest that subjective performance incentives increase teacher effort without producing distortionary effects. How are managers able to accomplish this? We find that, on average, managers place significant value on teachers' value-added and pedagogy. We also do not find any evidence of favoritism or gender bias. However, there is heterogeneity in managers' application of the contract: we cannot reject that there is no effect of subjective performance pay for the worst quintile of managers.

We then draw on the model of moral hazard with multi-tasking to explain our main reduced-form results: (i) similar, positive effects of subjective and objective incentives on test scores; (ii) negative effects of objective incentives on socio-emotional development; and (iii) significant differences in teacher classroom behavior across the two treatments. Moral hazard models with multi-tasking (Baker, 2002) isolate two main components of the incentive structure that affect employee response: noise (the correlation between employee actions and incentive pay) and distortion (the correlation between the piece rate for different actions and the marginal return of those actions to firm outcomes). Our paper seeks to understand whether noise and distortion are important mechanisms behind the reduced-form effects we see.

Our empirical approach for this mechanism analysis proceeds in three steps. First, we show differences in employees' perceptions of the noise and distortion of subjective versus objective incentives. Second, we exploit partially exogenous heterogeneity within a given treatment to isolate the causal effect of noise and of distortion individually on student outcomes. Finally, we bring those two estimates together and show that, given the difference in levels of noise and distortion across the contracts and the effect of noise and distortion on student outcomes, we can explain a large portion of the reduced-form effects through these channels. We explain each step in detail below.

The first step is showing that teachers believe there are differences in the extent of noise and distortion across the two treatments. We do this by asking teachers at endline the extent to which working harder will increase their incentive pay. If they believe their effort maps closely into their pay, then the incentive system is less noisy. We then ask what types of actions (lesson planning, improving pedagogy, helping other teachers, etc.) are rewarded under each system. This allows us to measure teachers' perception of whether the incentive is distorted toward certain student outcomes at the cost of others. We find that teachers believe subjective performance incentives are less noisy than objective incentives and, therefore, view subjective incentives as more effective at motivating behavior. They view test-score-based incentives as much less within their control because many factors beyond their effort affect student scores. We also find that teachers in the objective treatment are more likely to prioritize the types of actions that lead to test score gains, at the cost of other areas of student development.
Teachers under the subjective contract prioritize actions that lead to academic gains and also prioritize administrative tasks, which are likely to be preferred by their manager. We also show that there are no other differences beyond noise and distortion across the two treatment arms: implementation timelines, understanding of the contract treatments, and beliefs about the fairness of each treatment arm are all similar.

The second step of our mechanism analysis is to demonstrate that noise and distortion themselves affect student outcomes. To do this, we zoom in on the subjective treatment schools and look at settings with high and low noise, and then high and low distortion. By controlling for other differences across settings, we are able to isolate the effect of these two mechanisms on outcomes. To determine the effect of noise on student outcomes, we compare subjective treatment schools whose managers teachers rate as accurate in assessing teacher effort with those whose managers are rated as inaccurate. We use this rating of managers' accuracy, interacted with treatment status, as an instrument for the perceived noisiness of the contract. We show that this rating of managers affects teachers' rating of noisiness only in the subjective arm. This instrument for noise is robust to controlling for many other features of the contract and school environments. Using this instrument, we find that a 1 SD increase in the perceived noisiness of the contract decreases hours worked by 13 and decreases student test scores by 0.2 SD. These results are robust to a variety of controls. This suggests that employees are very sensitive to the noisiness of the contract, and that noise affects the success of performance pay in inducing an effort response from employees.

To understand the effect of distortion on student outcomes, we again exploit variation within the subjective performance pay schools, using data on managers' preferences prior to the start of the experiment. Before the treatments are announced, managers sit down with teachers and delineate goals for the following year. Example goals include increasing students' English proficiency, reaching certain grade targets, or improving lesson plans. We code these goals using text analysis and categorize them into four types of teacher actions: administrative tasks, professional development and collaboration tasks, improvements in teacher pedagogy, and test-score-based goals. A month after these goals are set between managers and teachers, we announce the treatment assignment. Of course, schools in which managers focus on administrative goals are likely to differ in many ways from those in which managers focus on pedagogy goals. Therefore, our approach is to interact these goals with the subjective treatment, isolating the effect of the goals in settings where teachers would be more likely to focus on them (schools assigned the subjective treatment) relative to places where the goals have no financial stake (objective and flat treatment schools). We use the interaction of subjective treatment and goals, controlling for level differences, to isolate the effect of these goal differences on student outcomes. We find that a larger focus on test scores and professional development increases students' endline test scores. However, more focus on test scores has negative effects on student socio-emotional development.
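A stylized version of the noise instrument just described, using 2SLS, is sketched below. The column names are hypothetical, and the paper's exact specification and control set are not reproduced here.

```python
import pandas as pd
from linearmodels.iv import IV2SLS

# df: one row per teacher, with hypothetical columns: test_score (student
# outcome), perceived_noise (teacher's endline rating of contract noise),
# mgr_accuracy (teachers' rating of manager accuracy), subjective
# (treatment dummy), baseline_score, school_id.
df = pd.read_csv("teacher_level_data.csv")
df["const"] = 1.0
df["acc_x_subjective"] = df["mgr_accuracy"] * df["subjective"]

# Perceived noise is instrumented with manager accuracy interacted with
# subjective-treatment status, controlling for the level terms.
res = IV2SLS(
    dependent=df["test_score"],
    exog=df[["const", "subjective", "mgr_accuracy", "baseline_score"]],
    endog=df["perceived_noise"],
    instruments=df["acc_x_subjective"],
).fit(cov_type="clustered", clusters=df["school_id"])
print(res.summary)
```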
These distortion results are robust to controlling for other features of the contract environment. Combined, they help us understand why it is possible to achieve the same effect on test scores without incentivizing test scores directly: subjective incentives are less noisy, producing a larger overall response, and less distorted, allowing teachers to prioritize multiple areas of student development. We find that the noise and distortion channels are able to explain a substantial portion of the reduced-form effects we see.

Our paper makes three key contributions. First, it is the first study, to our knowledge, to isolate the causal effect of subjective versus objective incentives, and of subjective versus flat incentives, for employees in any sector (Lazear and Oyer, 2012; Oyer and Schaefer, 2011). Existing studies have tested bundled incentives (combined subjective and objective incentives versus no incentives) on employee behavior (Khan et al., 2019; Fryer, 2013). Previous work has also exploited heterogeneity across plants to measure the effect of more or less steep subjective incentives on employee overtime (Engellandt and Riphahn, 2011). There is also evidence that managers, especially in educational settings, may have imperfect information about worker effort or may be biased toward certain groups (Jacob and Lefgren, 2008; Gibbs et al., 2004).

Second, we add to a robust literature on the effect of performance pay for teachers by providing two new findings (Lavy, 2007; Muralidharan and Sundararaman, 2011; Fryer, 2013; Goodman and Turner, 2013). We show the first evidence of objective performance pay having detrimental effects on non-academic student outcomes, consistent with multi-tasking models. We also show direct evidence that objective incentives lead teachers to distort their effort toward teaching practices that raise test performance at the cost of other areas of student development, including the use of class time for test preparation and the use of punitive discipline. Both of these effects have long been suspected, but we provide the first documentation of them (Baker, 2002; Leigh, 2013).

Third, we provide what we believe is the first evidence on measuring the extent of noise and distortion within an employee's contract and isolating the effects of those mechanisms on firm outcomes. There is a rich theoretical literature on the importance of these mechanisms (Baker, 2002), and empirical work has also investigated the role of noise in employee responses (Prendergast, 1999; Prendergast and Topel, 1993; Prendergast, 2007).

The remaining sections are organized as follows. Section 2 details the treatment and control conditions, the data collected, and standard implementation checks. Section 3 provides the main results of subjective and objective performance incentives on teacher effort and student outcomes. Section 4 gives an overview of the standard moral hazard model with multi-tasking and highlights the two key mechanisms that underpin the reduced-form effects we find. Section 5 unpacks the mechanisms underlying the main effects in light of the moral hazard model, and Section 6 concludes.

Performance Incentive Treatments

We partnered with a large private school system in Pakistan to implement the research design. Schools are randomized to receive one of three contracts, which determine the size of teachers' raises at the end of the calendar year.3
The three contracts were:

• Control: Flat Raise. Teachers receive a flat raise of 5% of their base salary.

• Subjective Treatment Arm: Teachers are evaluated based on their manager's subjective performance rating, constructed from performance criteria the manager sets for each teacher (an example set of criteria is provided in Appendix Table A1).

• Objective Treatment Arm: Teachers are evaluated based on their average percentile value-added (Barlevy and Neal, 2012) for the spring and fall terms. Percentile value-added is constructed by calculating students' baseline percentile within the entire school system and then ranking their endline score relative to all other students who were in the same baseline percentile; we then average across all students the teacher taught during the two terms. (Percentile value-added has several advantageous theoretical properties, Barlevy and Neal, 2012, and is also more straightforward to explain to teachers than more complicated calculations of value-added.)

The contract applied to all core teachers (those teaching Math, Science, English, and Urdu) in grades 4-13. Elective teachers and those teaching younger grades received the status quo contract. All three contracts have equivalent budgetary implications for the school. We over-sampled subjective treatment arm schools at our partner's request, so the ratio of schools is 4:1:1 for subjective treatment, objective treatment, and control, respectively.

Both the subjective and objective treatment arms have several features in common, allowing us to isolate the effect of varying the performance metric and nothing else about the incentive structure. Both treatments are within-school tournaments, which holds the level of competition fixed between the two treatments. In addition, the variance in the distribution of the incentive pay is equivalent across the two treatments. As we show in section 4, holding the variance constant allows us to interpret differences in noise levels between the two systems as equivalent to differences in incentive steepness. The performance evaluation timeline also played out the same for all groups. Before the start of the year, managers set performance goals for their teachers irrespective of treatment. Teachers were evaluated based on their performance in January through December, with testing conducted in June and January to capture student learning in each term of the year.

To ensure teachers and managers had a full understanding of how each contract would work, we conducted an intensive information campaign with schools. First, the research team had an in-person meeting with each manager, explaining the contract assigned to their school and, in the case of the subjective treatment, explaining what would be expected of them and when. Second, the school system's HR department conducted in-person presentations once a term at each school to explain the contract. Third, teachers received frequent email contact from school system staff reminding them about the contract, and halfway through the year teachers were provided midterm information about their rank based on the first six months (an example midterm information note is provided in Appendix Figure A2). Control teachers were also provided information about their performance on one of the two metrics, in order to hold the provision of performance feedback constant across all teachers.

Timeline and Data

Our study was conducted from October 2017 through June 2019. It covered one performance review cycle, conducted from January through December 2018, during which the contracts were in place. Figure 1 presents the main treatment implementation (detailed in section 2.1) and data collection activities (detailed below).
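To make the percentile value-added measure used in the objective arm concrete, here is a minimal sketch of the construction described above. The dataframe and column names are hypothetical illustrations, not the school system's actual pipeline.

```python
import pandas as pd

def percentile_value_added(df: pd.DataFrame) -> pd.Series:
    """Toy percentile value-added in the spirit of Barlevy and Neal (2012).

    Expects hypothetical columns: teacher_id, baseline_score, endline_score.
    """
    df = df.copy()
    # 1. Each student's baseline percentile within the entire school system.
    df["baseline_pctile"] = df["baseline_score"].rank(pct=True).mul(100).round()
    # 2. Rank each student's endline score relative to all other students
    #    who were in the same baseline percentile.
    df["endline_rank"] = df.groupby("baseline_pctile")["endline_score"].rank(pct=True)
    # 3. Average across all students the teacher taught (both terms pooled here).
    return df.groupby("teacher_id")["endline_rank"].mean()
```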
Our data allow us to understand how teachers changed their effort under each incentive scheme, why the incentives affected effort in the way they did, and the resulting effect this had on student outcomes. We draw on data from (i) the school system's administrative records; (ii) baseline and endline surveys conducted with teachers and managers; (iii) endline student testing and surveys; and (iv) detailed classroom observation data.

Administrative Data: The administrative data detail position, salary, performance review score, attendance, and demographics for all employees. We also have biometric clock-in/out data for all schools. The data were provided by the school system for the period of July 2016 to June 2019. They include classes and subjects taught for all teachers, and end-of-term standardized exam scores for all students (linked to teachers). From September through December 2018, we also have data on classroom observations conducted by managers, who use a rubric similar to the one used by the research team (detailed below). The school system's central office designed and administered the June test to all students in a given grade; however, tests are graded locally by the school, often by the students' teacher. Due to concerns of grade manipulation, grading was audited by the research team: 10% of each teacher's exams were regraded; if the teacher's grades and the auditor's grades were off by more than 5%, another 10% of their tests were audited; and if the average was still off by more than 5%, all of the teacher's exams were regraded. Overall, grade manipulation was small and was generally driven by cases where teachers bumped up students' grades from just failing to just passing. There was no heterogeneity in grading accuracy by treatment. The January test was conducted exclusively by the research team (described in section 2.2 below). These tests are not used as an outcome measure in this paper.

Baseline Survey: The baseline survey measured teachers' preferences over different contracts and beliefs about their performance under each contract. 40% of schools were randomly selected to participate in an in-person baseline survey conducted in October 2017; 2,500 teachers and 119 managers were surveyed. These outcomes are primarily used for a companion paper on teacher selection in response to performance pay (Brown and Andrabi, 2020).

Endline Student Survey: The endline student survey measured socio-emotional development in four areas. The choice of these four areas came from the school system's priorities: they are the four areas of socio-emotional development it expects its teachers to focus on. These areas are posted on the walls in schools, and teachers receive professional development on them. Some managers also specifically make these areas part of teachers' evaluation criteria. In addition to these four areas, the survey asked whether students liked their school or wanted to change to a different school.

Classroom Observation Data: To measure teacher behavior in the classroom, we recorded 6,800 hours of classroom footage and reviewed it using the Classroom Assessment Scoring System, CLASS (Pianta et al., 2012), which measures teacher pedagogy across a dozen dimensions. We also recorded whether teachers conducted any sort of test preparation activity, as well as the language fluency of teachers and students.
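The grade-audit rule described under Administrative Data is a simple escalation procedure. The following sketch captures its logic under stated assumptions (grades on a 0-1 scale, a hypothetical `regrade` callable); it is an illustration, not the research team's code.

```python
import random

def audit_grades(teacher_grades, regrade, tol=0.05, batch_frac=0.10):
    """Escalating audit: regrade 10%; if off by more than 5%, another 10%;
    if the average is still off by more than 5%, regrade everything.

    teacher_grades: list of grades (assumed 0-1 scale) assigned by the teacher.
    regrade: callable mapping an exam index to the auditor's grade (hypothetical).
    Returns the indices of exams that were regraded.
    """
    n = len(teacher_grades)
    remaining = list(range(n))
    audited = []
    for _ in range(2):  # up to two 10% batches before a full regrade
        sample = random.sample(remaining, max(1, int(batch_frac * n)))
        audited += sample
        remaining = [i for i in remaining if i not in sample]
        gaps = [abs(teacher_grades[i] - regrade(i)) for i in audited]
        if sum(gaps) / len(gaps) <= tol:
            return audited  # within tolerance: stop escalating
    return list(range(n))   # still discrepant: regrade all exams
```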
Performance Evaluation Data: The school system had an existing performance evaluation system in which managers rated their teachers in December on performance criteria set the previous December. We layered the new contracts on top of that existing system. In December 2017, before the announcement of treatments, managers set a number of performance criteria for each teacher, as they do each year. In a randomly chosen 3/4 of the subjective schools, those goals then became the evaluation criteria used to determine teachers' raises for the following year. In the rest of the schools (objective, control, and the remaining subjective schools), those goals were used to provide feedback to teachers but had no financial consequence. In the remaining 1/4 of subjective schools, managers were required to create a new set of goals once they knew there would be financial stakes attached to those goals. They were encouraged to set goals focused on employee effort, rather than on employee characteristics like training or credentials. Since the performance evaluation system exists for all employees, we can use data on which goals were set, and the scores on those goals, to understand manager priorities and ratings with and without financial stakes tied to the performance rating.

Two notes on the classroom observation data. First, there are tradeoffs between conducting in-person observations and recording the classroom for later review. Videotaping was chosen based on pilot data showing that it was less intrusive than human observation (and hence preferred by teachers); it was also significantly less expensive and allowed for ongoing measurement of inter-rater reliability (IRR). Second, we did not hire Teachstone staff to conduct official CLASS observations, as it was cost-prohibitive and we required video reviewers to have Urdu fluency. Instead, we used the CLASS training manual and videos to conduct an intensive training with a set of local post-graduate enumerators. The training was conducted over three weeks by Christina Brown and a member of the CERP staff. Before enumerators could begin reviewing data, they were required to achieve an IRR of 0.7 on practice data, and 10% of videos were double-reviewed to ensure a high level of ICC throughout the review process. We have a high degree of confidence in the internal reliability of the classroom observation data, but because the reviews were not conducted by Teachstone staff, we caution against comparing these CLASS scores to CLASS data from other studies.

Sample and Characteristics of the Employee-Manager Relationship

Teachers

The study was conducted with a large private school system in Pakistan. The student body is from an upper-middle-class and upper-class background; school fees are $2,300-$4,300 USD (PPP) per year. Teachers are generally younger and less experienced than their counterparts in the US, though they have similar levels of education. Table 1 presents summary statistics of our sample compared to a representative sample of teachers in the US (National Center for Education Statistics, 2011). Our sample is mostly female (80%), young (35 years old on average), and inexperienced (5 years on average, though a quarter of teachers are in their first year teaching). All teachers have a BA, and 68% have some post-BA credential or degree. Salaries are on average $17,000 USD (PPP).

Managers

In order to understand the effects of subjective performance pay, we need to understand who the managers are and what role they play in overseeing teachers.
Managers are either principals (in small schools) or vice principals (in larger schools). They are tasked with overseeing the overall operations of the school and managing employees, including teachers and other support staff. Table 2 presents information about managerial duties compared to a US sample of principals. As in the US, our managers are generally older (45 years old), less likely to be female (61%), and more experienced (9.6 years) than teachers. Most were previously teachers and transitioned into an administrative role. Managers spend about a third of their working hours overseeing their staff: observing classes, providing feedback, meeting with teachers, and reviewing lesson plans. The rest of their time is spent on other tasks related to the school's functioning. The distribution of time use is fairly similar to that of principals in the US. However, managers in our sample spend much more time directly observing teachers: they do about twice the number of classroom observations each year (4.7 versus 2.5 in the US). They also rate themselves higher in most areas of the management survey questions (4.3 versus 2.8 out of 5), including formal evaluation, monitoring, and feedback systems for teachers. This is an important difference, as these management practices could positively affect the success of the subjective treatment arm, and it helps delimit the external validity of these results.

Intervention Fidelity

In this section, we provide evidence to help assuage any concerns about the implementation of the experiment. First, we show balance in baseline covariates. Then, we present information on attrition rates. Finally, we show that teachers and managers have a strong understanding of the incentive schemes. Combined, this evidence suggests the design "worked".

Schools in the two treatment arms and the control arm appear balanced along baseline covariates. Appendix Table A1 compares schools along numerous student and teacher baseline characteristics. Of 27 tests, one is statistically significant at the 10% level and one at the 5% level, no more than we would expect by random chance. Results presented include specifications which control for these few unbalanced variables.

Administrative data are available for all teachers and students who stayed employed or enrolled during the year of the intervention. During this period, 23% of teachers left the school system, which is very similar to the historical rate of turnover. 88% of teachers completed the endline survey; while teachers were frequently reminded and encouraged to complete the survey, some chose not to. We do not see differences in these rates by treatment. Finally, for the endline test, parents were allowed to opt out of having their children tested. Student attrition on the endline test was 13%, with 3 pp coming from students absent from school on the day of the test and the remaining 10 pp coming from parents opting their children out of the exam. On both the endline testing and the endline survey, we do not find differences in attrition rates by treatment. We also do not find that lower-performing students were more likely to opt out.

Teachers have a decent understanding of their treatment assignment. Six months after the end of the intervention, we asked teachers to explain the key features of their treatment assignment; 60% of teachers could identify the key features of their raise treatment.
Finally, most teachers stated that they came to fully understand what was expected of them in their given treatment within four months of the beginning of the information campaign.

Results

We now present the main reduced form results of the paper. First, we test the effects of each incentive on student test performance and socio-emotional development. Then, we show the effects of the incentives on teacher effort, which helps us understand the student effects.

Specification

Our main specification is:

$$ Y_{i1} = \beta_1\, SubjectiveTreatment_s + \beta_2\, ObjectiveTreatment_s + \gamma\, Y_{i0} + \chi_j + \varepsilon_{is} \tag{1} $$

The main dependent variable of interest is the student outcome, $Y_{i1}$, for child $i$ at endline, $t=1$. Student outcomes include test scores in Math, Science, English and Urdu, and socio-emotional development. $SubjectiveTreatment_s$ and $ObjectiveTreatment_s$ are dummies for whether the student's school, $s$, was assigned to subjective or objective performance raises. The left-out group is the control group (flat raise). The coefficients of interest are $\beta_1$ and $\beta_2$, and their test of equality. For test scores, we control for the student's baseline score, $Y_{i0}$, to improve efficiency, as there is high auto-correlation in test scores. We also control for strata fixed effects, subject, and grade, $\chi_j$. Standard errors are clustered at the school level (the unit of randomization), and both standard and randomization inference p-values are provided in each table.

Results

Test Scores

We find that both subjective and objective performance incentives have similar effects on test scores, of about 0.09 sd. Figure 2 and Table 3 present the results of each performance incentive on endline test scores. Column (1) shows results for all tests and question items. Effects are similar between the subjective and objective incentives, at 0.086 sd and 0.092 sd, respectively. In the row titled "F-test p-value (subj=obj)", we present a test of the equality $\beta_1 = \beta_2$; we cannot reject equality of effects between the two treatments on test scores. All results appear unchanged whether we consider standard p-values (in parentheses) or randomization inference p-values (in brackets).

Columns (2) and (3) test the effect of the treatments by question item type, to understand whether these effects are due to memorization of class content or actual learning. Column (2) only includes questions from the prior grade's content, and column (3) only includes questions added by the researchers from external standardized test sources, including PISA, TIMSS, PERL and LEAPS. Both sets of questions provide a useful test because it would not be possible for students to have memorized the answers. Remedial content (from previous grade levels) and external content are never tested on the school system's standardized exam, so teachers would not have prepared specifically for this material. Given that we find similar, if not larger, effects on these types of questions, it appears that treatment effects reflect actual learning rather than memorized curriculum. Again, we do not see a significant difference between the subjective and objective treatments.

Columns (4) and (5) present the results by subject, splitting math and science exams from the two reading exams (English and Urdu). Magnitudes are similar, around 0.09 sd, for both groups, though we are less powered to detect overall effects with the smaller sample when we split by subject. Again, we cannot reject equality between the two treatments, and the magnitudes of the effects are highly similar.
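For concreteness, specification (1) can be estimated with school-clustered standard errors along the following lines. The data below are fabricated stand-ins purely for illustration; all variable names are our assumptions, not the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated stand-in data, one row per student-subject exam (illustration only).
rng = np.random.default_rng(0)
n = 3000
df = pd.DataFrame({
    "school_id": rng.integers(0, 150, n),
    "strata": rng.integers(0, 10, n),
    "grade": rng.integers(4, 14, n),
    "subject": rng.choice(["math", "sci", "eng", "urdu"], n),
    "z_baseline": rng.standard_normal(n),
})
# School-level 4:1:1 assignment to subjective / objective / flat, as in the design.
arm = {s: rng.choice(["subj", "obj", "flat"], p=[4/6, 1/6, 1/6])
       for s in df["school_id"].unique()}
df["subjective"] = (df["school_id"].map(arm) == "subj").astype(int)
df["objective"] = (df["school_id"].map(arm) == "obj").astype(int)
df["z_endline"] = (0.09 * df["subjective"] + 0.09 * df["objective"]
                   + 0.5 * df["z_baseline"] + rng.standard_normal(n))

# OLS with strata/grade/subject fixed effects and school-clustered errors.
fit = smf.ols("z_endline ~ subjective + objective + z_baseline"
              " + C(strata) + C(grade) + C(subject)", data=df
              ).fit(cov_type="cluster", cov_kwds={"groups": df["school_id"]})
print(fit.params[["subjective", "objective"]])
print(fit.f_test("subjective = objective"))  # the "subj = obj" equality test
```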
Socio-Emotional Development

While the effects on test scores were similar between the two treatments, the effects on socio-emotional development paint a different picture. Figure 3 and Table 4 present the results of each performance incentive on socio-emotional outcomes. The negative effect of the objective treatment relative to the subjective treatment comes from a differential effect on "love of learning" and on whether students like their school or would like to change schools. We can reject equality of the two treatments on these sub-areas at the 10% and 1% levels, respectively. This suggests that while objective incentives led to an increase in test scores, it came at the cost of enjoying school, whereas subjective incentives were able to accomplish the same learning gains without these negative consequences. On the three other areas (ethical behavior, being a global citizen, and inquisitiveness), we cannot reject equality of the two treatments.

Specification

To understand why we see similar results on test scores but different effects on students' socio-emotional development, we need to understand teachers' behavioral response. To do this, we look at the effect of each treatment on classroom observation ratings and time use. We have a similar main specification, this time at the teacher level:

$$ Y_i = \beta_1\, SubjectiveTreatment_s + \beta_2\, ObjectiveTreatment_s + \chi_j + \varepsilon_i \tag{2} $$

The main dependent variable of interest is the outcome, $Y_i$, for teacher $i$. Teacher outcomes include classroom observation scores and time use. We again control for grade and strata fixed effects, $\chi_j$, and standard errors are clustered at the school level (the unit of randomization).

Results

Classroom Observations

The effect of each incentive on classroom behavior sheds light on the student effects we see. Overall, we find teachers under objective incentives using teaching strategies which provide the largest marginal return on test scores but may hamper other areas of students' human capital development. Teachers in the subjective treatment, however, do not exhibit any of these distortionary teaching strategies.

Figure 4 and Table 5 present the effects of each incentive on teachers' overall classroom observation score, using the CLASS rubric. On average, objective-arm teachers exhibit worse teaching pedagogy: they score 0.07 pts lower on the 7-pt CLASS rubric scale. Subjective-arm teachers show no noticeable change in pedagogy quality, and we can reject equality of the two treatments at the 10% level. We then break down the 12 CLASS dimensions of pedagogy into three main areas: "class climate", "differentiation", and how "student-centered" the lesson is. "Class climate" captures whether the atmosphere of the classroom is positive, supportive and joyful or negative, punitive and dull. "Differentiation" captures whether the lesson is structured to meet students who are at different proficiency levels and/or have different learning styles. Finally, "student-centered" measures how much of the lesson is teacher-directed versus student-involved. Teachers under the objective incentive contract have a more negative class climate and less student-centered lessons; both see a decrease of around 0.1 pts, and we can reject equality of treatments at the 10% level. There is also an increase in the level of differentiation in both the subjective and objective treatment schools.

We also measure the amount of class time devoted to test preparation activity. This includes practice tests, testing strategies (such as how to approach a multiple-choice test), and lecturing about the importance of doing well on tests. We find a large increase in the time spent on these activities in objective treatment schools.
Relative to a control group mean of 0.14 minutes of the 20-minute observation spent on test preparation activities, objective classes see a more than 5-fold increase, to a total of 0.76 minutes spent on these activities. We can reject equality of treatments at the 5% level along this dimension.

Together with the student outcomes, these classroom observations paint a picture of objective schools as ones that achieved test score gains by taking the path of least resistance for teachers: doing more test preparation and maintaining a stricter, less student-centered classroom. This then produces negative outcomes in other areas of students' human capital development, such as love of learning. Subjective classrooms, on the other hand, accomplish the same academic gains without negative effects on teacher practices or student socio-emotional development. This suggests that managers are able to prevent these distortionary behaviors, solving, at least to some extent, the multi-tasking problem.

One concern with classroom observation data is that teachers may worry the videos of their classrooms will be provided to their manager; for subjective-arm teachers this has more of a consequence than for the other treatment arms. We do several things to help alleviate these concerns. First, in the consent form and during the camera setup, we communicate to teachers that the videos are confidential and will only be reviewed by the research team. We also let them know that only data aggregated at the school level will be provided to the school system head office. Second, visits were a surprise within a two-month window, so teachers could not adjust their lessons beforehand. Third, we recorded several hours back to back for each teacher. We find teachers are most aware of (and responsive to) the camera in the first hour of taping; removing that data and repeating the analysis yields very similar results.

Attendance and Time at Work

We find that the subjective treatment results in a significant increase in the number of days a teacher is present at work relative to no incentives. Table 6 presents the results from the biometric clock-in/out data. Relative to a control group mean of 145 days, subjective-arm teachers are present an additional 6 days. We do not find an effect on hours spent at work for either treatment relative to the control, and we cannot reject equality of treatments on either outcome. Columns (2) and (4) restrict to a sample of teachers who were present in the school system in both terms and did not take any long leaves (health, maternity, etc.), to ensure the days-present result is not driven by these factors. Results are robust to this sample restriction.

How do Managers Implement the Subjective Incentive?

In the objective treatment schools there is less scope for heterogeneity: the implementation of the contract and employees' response are likely to be similar across schools and comparable to other experiments that used test-score-based performance pay. The subjective treatment arm, however, could vary substantially across schools and firms depending on the type of oversight managers have of employees, the oversight firms have of managers, and how managers themselves are incentivized. In this section, we unpack what types of teacher actions managers value, the extent to which managers are biased or show favoritism, and heterogeneity in treatment effects by manager quality.
To understand how managers use the subjective treatment arm, we draw on data from the endline teacher and manager surveys and managers' evaluation scores of their teachers.

What do managers value in rating teachers?

We use three approaches to help understand what types of teacher actions managers reward. In an ideal setting, we would randomize teacher actions to see how this affects managers' performance ratings of teachers. We are unable to do that exact exercise here; however, using a combination of detailed data and survey vignettes, we can accomplish something similar. Combined, these sources of evidence suggest that managers highly value teacher actions related to human capital development and are not just focused on administrative tasks or actions unrelated to student development.

Our first piece of evidence on what managers value in teachers comes from endline survey data from both teachers and managers. We asked both teachers and managers to respond to a hypothetical situation in which a teacher asks them for advice about how to achieve a higher raise in the following year. They are then asked to rate how much time the teacher should spend on different types of actions. Table A3 presents the data from this survey question. Column 2 shows teachers' responses about which actions would be most highly valued under the subjective contract; column 3 presents responses to the same question posed to managers. Both subjective-arm teachers and managers agree that improved pedagogy, like making lessons student-centered and tailoring lessons to students at different initial levels, would increase the subjective rating. However, managers put additional weight on spending time collaborating with other teachers. Neither subjective-arm teachers nor principals believe that more superficial administrative tasks, like volunteering at after-school events or meeting with parents, are important drivers of the subjective performance rating.

Our second piece of evidence also comes from the endline survey. We provide managers with a vignette describing a hypothetical teacher and ask them to provide a performance rating of that teacher. The vignette randomizes the hypothetical teacher's name and their rank in terms of value-added, classroom behavioral management, and attendance: each vignette describes a teacher who is in the [bottom/middle/top] 10% of teachers in terms of students' test score growth, in the [bottom/middle/top] 10% in terms of behavioral management, and in the [bottom/middle/top] 10% in terms of attendance and timeliness at work. Managers rated three such vignettes, with characteristics randomized across vignettes. Table A4 presents the results of this exercise.

One caveat on observational measures of effort here: there is a negative relationship between subjective ratings and hours spent at school. This relationship may be driven by the fact that certain grades and teaching positions have different requirements about the length of the workday, so it could be picking up that variation rather than teacher effort.

Favoritism and bias

A primary concern about subjective performance pay is whether managers are biased against certain employees or show favoritism toward preferred individuals. To assess whether this is a significant concern in this setting, we ask teachers at endline whether they felt their manager discriminated against certain groups or played favorites toward certain colleagues. (One concern with this approach is that teachers may be hesitant to provide honest assessments in a survey. To help minimize this concern, teachers' responses were anonymized, and we communicated this to teachers when they consented to the survey. We also asked the question several ways, including asking teachers to report such behavior about other schools or about the school system in general; this phrasing allows teachers to report problematic manager behavior while providing plausible deniability for their own manager.) Table A6 presents the results from these survey questions. On average, teachers in the subjective treatment arm are no more likely than teachers in the objective treatment arm to say that the contract unfairly favors certain teachers or that certain groups are discriminated against under the contract. Teachers also state that bias, gaming, and favoritism are not a significant concern under either contract.
Though teachers do not say that overt bias is a significant concern, one may worry that more subtle types of bias are at play. The primary type of bias we were concerned about in this setting is gender bias. In Pakistan, gender bias in employment is rampant (World Bank Group, 2018), and managers are more likely to be male than the employees they oversee. As part of the vignette survey questions, we include a test for subtle gender bias: in the vignettes, we randomize the hypothetical teacher's name to be a traditionally male or female Pakistani name. Table A4, column 3, presents the results of this test. We do not find that managers rate vignettes with female names lower. Both of these pieces of evidence suggest that favoritism and bias are not a substantial concern within the subjective treatment arm. Neither result perfectly measures whether any favoritism or bias occurred, but combined they provide suggestive evidence that favoritism and bias are not a first-order concern under this contract.

Heterogeneity in treatment effects by manager characteristics

On average, the subjective treatment arm appears to have been successful at improving student outcomes and teacher effort, but there may be heterogeneity in how successfully managers implement the contract. We test for heterogeneity in treatment effects along several dimensions. First, Table A7 presents heterogeneity in the subjective treatment arm by three manager characteristics: gender, age, and experience. We do not find significant differences in the effectiveness of the subjective treatment by these characteristics. Second, Table A7 also presents heterogeneity in treatment effects by several dimensions of manager "quality". We find that subjective performance pay is significantly less effective in schools where teachers believe their managers do not have an accurate perception of teacher effort. We measure this by asking teachers to rate how accurate their manager is in rating a fellow teacher. We find no effect of subjective performance pay on student test scores for managers in the top quintile of this inaccuracy measure. We do not find heterogeneity in treatment effects by World Management Survey overall manager score (Table A7, column (5)) or by the personnel management sub-score (column (6)). However, as discussed in section 2.2, because these data were collected from manager self-reports, we should be cautious in interpretation, as managers may overrate themselves on these survey questions. This suggests that while subjective performance pay is on average very successful at producing learning gains, these contracts may be ineffective in settings where employees do not trust their managers to implement them accurately.
Theoretical Framework

The experimental design is motivated by a model of moral hazard with multi-tasking, as presented in Baker (2002). This theoretical framework helps us rationalize the teacher behaviors and student outcomes we see as a result of each performance incentive. In this section, we summarize the framework and its key predictions, demonstrate how it translates to the teaching context, and map out how the experimental design connects to the model.

Moral Hazard with Multi-tasking

The firm, a school, produces a single outcome, human capital, $H(a, e)$, through a simple linear production function:

$$ H(a, e) = f \cdot a + e \tag{3} $$

Human capital is a function of an n-dimensional vector of actions teachers can take, $a$, and the n-dimensional vector of marginal products of those actions, $f$. Human capital is also a function of many other things outside the teacher's action set (environment, parental support, peers, etc.), which are captured by the noise term, $e$, which is mean zero with variance $\sigma^2_e$. Schools cannot perfectly observe all components of $a$, but they can observe some features of human capital (for example, test scores) and some actions (for example, teacher attendance). Schools construct a performance contract that pays teachers based on a performance measure, $P(a, \varphi)$, which could be a combination of observable outputs (test scores, student attendance, etc.) and/or actions (teacher attendance, lesson plans, etc.). The teacher's performance measure, and therefore their pay, is:

$$ P(a, \varphi) = g \cdot a + \varphi \tag{4} $$

The performance measure, $P(a, \varphi)$, is a function of the teacher's actions, $a$, and the marginal return of those actions on the performance measure, $g$; in effect, $g$ translates to a piece rate for each action. $\varphi$ captures everything outside the teacher's actions that affects the performance measure; it is mean zero with variance $\sigma^2_\varphi$. Two types of noise are captured by $\varphi$. First is noise coming from features of the performance measure which are outside the teacher's control: for example, if the performance measure is students' test scores, this could be the students' home environment. Second is noise coming from mis-measurement of a given action, $a_n$: for example, if the performance measure is teacher attendance, but principals keep error-ridden attendance records, this contributes to the noisiness of the performance measure.

The teacher's utility is a function of their pay and a quadratic cost of effort:

$$ U = E[P(a, \varphi)] - \tfrac{1}{2} \textstyle\sum_n a_n^2 \tag{5} $$

Teachers choose the optimal set of actions that maximizes their utility. Taking the derivative of Eq. 5, the optimal decision is to set each action equal to its piece rate: $a^*_1 = g_1$, $a^*_2 = g_2$, ..., $a^*_n = g_n$. Given the teacher's optimal action set, the average human capital produced by each teacher is:

$$ E[H] = f \cdot a^* = f \cdot g = \|f\| \, \|g\| \cos(\theta) \tag{6} $$

Average human capital is thus a function of the length of the marginal-product vector, $\|f\|$, the length of the piece-rate vector, $\|g\|$, and the alignment between the two vectors, $\cos(\theta)$. In other words, human capital is increasing in the steepness of the incentives and in how aligned those piece rates are with the human capital production function.

We now go beyond Baker (2002) by making one additional assumption relevant in our context. We can re-arrange the expression to show the effect that noise in the performance measure has on average human capital. Taking the variance of Eq. 4, we have $var(P) = \|g\|^2 var(a) + \sigma^2_\varphi$. Re-arranging, we can substitute this in for $\|g\|$ in Eq. 6.
Average human capital then is:

$$ E[H] = \|f\| \cos(\theta) \sqrt{\frac{var(P) - \sigma^2_\varphi}{var(a)}} \tag{7} $$

Here $\|f\|$ and $var(a)$ are constant across the two types of performance measures, subjective and objective, that we compare. In addition, due to the design of our subjective and objective incentives, $var(P)$ is also constant across the two schemes.

Theoretical Predictions

We are then left with two components of the performance measure that affect average human capital. The key predictions of the model are that average human capital produced by the school is:

• decreasing in performance measure distortion, $1 - \cos(\theta)$;
• decreasing in performance measure noise, $\sigma^2_\varphi$.

Distortion

Distortion captures the correlation between the piece rates for different actions and the marginal returns of those actions to human capital. In essence, do we pay teachers more for the actions which are more related to developing human capital? The more distorted a contract is, the more employees focus on actions that are less helpful toward firm outcomes.

Noise

Noise captures how much of the performance incentive is unrelated to the employee's actions. This can take the form of factors outside the employee's control affecting the performance measure (school resources, shocks, etc.) or of mis-measurement of employee actions, if the contract attempts to measure teacher actions. It is important to flag that, traditionally, noise enters optimal contract design by reducing risk-averse employees' utility, which requires firms to raise the fixed part of the salary to meet the participation constraint. We are not focused on that consequence of noise here, as we do not study employee entry or exit in this paper (a companion paper, Brown and Andrabi, 2020, studies employee sorting in response to these contracts). The effect of noise we focus on is instead equivalent to a decrease in the incentive scheme's average piece rate. Since $\sigma^2_\varphi = var(P) - \|g\|^2 var(a)$, and $var(P)$ and $var(a)$ are constant given the tournament nature of each incentive scheme, increasing $\sigma^2_\varphi$ directly decreases $\|g\|$. Increased noise therefore reduces the extent of the effort response, $a^*$. This effect of noise exists in any incentive scheme with a fixed variance, which includes all tournament or threshold-type incentives.
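To make the two comparative statics concrete, here is a small numerical illustration of Eq. 7 as reconstructed above; the parameter values are arbitrary and purely illustrative.

```python
import numpy as np

def avg_human_capital(f, theta, var_P, var_a, noise_var):
    """E[H] = ||f|| * cos(theta) * sqrt((var(P) - sigma_phi^2) / var(a))."""
    g_norm = np.sqrt(max(var_P - noise_var, 0.0) / var_a)
    return np.linalg.norm(f) * np.cos(theta) * g_norm

f = np.array([1.0, 0.5])   # marginal products of two teacher actions
var_P, var_a = 1.0, 1.0    # held fixed by the tournament design

# Holding alignment fixed, more noise lowers average human capital...
for s2 in (0.0, 0.25, 0.5):
    print(f"noise {s2:.2f} -> E[H] = {avg_human_capital(f, 0.2, var_P, var_a, s2):.3f}")

# ...and holding noise fixed, more distortion (larger theta) lowers it too.
for th in (0.0, 0.5, 1.0):
    print(f"theta {th:.1f} -> E[H] = {avg_human_capital(f, th, var_P, var_a, 0.25):.3f}")
```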
Understanding the Experiment within the Theoretical Framework

An important distinction between our experimental performance pay system and this simple model is that our treatments are performance pay tournaments, in which teachers are ranked relative to other teachers at their school and allocated to one of five performance categories based on their relative performance. As a result, this framework is a simplification of the teacher's problem in our experiment. However, the key predictions discussed above also hold in the more complicated tournament model, so we think it is illustrative of the mechanisms we discuss in the next section.

The theoretical framework identifies the key features of an incentive scheme that should affect how teachers respond and, as a result, the impact on human capital. Ex ante, it is not clear whether subjective or objective incentives would be more or less distorted in the teaching context. On the one hand, subjective incentives may solve the multi-tasking problem by prioritizing more than just measurable student learning. One of the key critiques of objective incentives is that teachers may focus on actions which enhance test scores (such as test-prep skills, memorization, etc.) but have small or zero effects on human capital (Muralidharan and Sundararaman, 2011). Subjective performance incentives would ideally penalize these types of behaviors in favor of more well-rounded teaching. On the other hand, it could be that managers prioritize the wrong actions: because they do not know what the human capital production function is, because they value only certain aspects of human capital and not others, or, most nefariously, because they weight actions which make their own jobs easier.

It is also uncertain whether subjective or objective incentives would be less noisy. Test scores are notoriously noisy measures of teacher effort (Chetty et al., 2014), and one of the most common complaints teachers have against test-score-based incentives is that they are mostly unrelated to teacher actions (Podgursky and Springer, 2007). Subjective performance pay could be less noisy than objective performance pay because managers can reward actions rather than outcomes. However, this requires managers to observe effort accurately; subjectivity could even introduce additional noise if managers bring bias or favoritism into their evaluations.

Our experiment connects to the model in two ways. First, in section 5.3, we explicitly test the two predictions of the model using exogenous variation within one of the treatment arms that varies the level of noise and distortion; we then see the effect of these mechanisms on firm outcomes. Second, in sections 5.2 and 5.4, we show that the difference in the reduced form effects of each contract can be explained through differences in noise and distortion across the two contracts.

Mechanisms

How can we square very different effort responses and different socio-emotional effects with similar test score effects across subjective and objective incentives? We argue that differences in the levels of noise and distortion across the two treatments help explain these outcomes. We structure our argument as follows. First, in section 5.1, we present the similarities between the two treatments, to help eliminate possible channels that could drive the difference in treatment effects. Second, in section 5.2, we highlight the differences between the systems: we show teachers believe subjective incentives to be less noisy and less distorted. Third, we provide evidence that noise and distortion do, in fact, affect outcomes: section 5.3 shows that noise and distortion are related to student outcomes as predicted in the theoretical framework, with more noise reducing the effect of incentives and more distortion diverting employee effort toward the distorted actions. We conduct these tests by exploiting heterogeneity in levels of noise and distortion within a given treatment, to isolate the effect of noise or distortion on outcomes. Finally, in section 5.4, we bring together the estimates from sections 5.2 and 5.3 to understand how much of the difference in the reduced form student effects can be explained by differences in noise and distortion.

Similarities between Treatments

In order to isolate the effect of the performance measure (percentile value-added versus manager rating), we hold a number of features constant between the two treatments. Both treatments are within-school tournaments. Both treatments provide a raise from 0-10%, with the same set of rank thresholds corresponding to raise amounts within that range.
Both treatments were introduced at the same time in schools and had a similar performance review timing: managers completed midterm feedback in June 2018 and final ratings in December 2018, and the objective score was based on the average of tests in June 2018 and January 2019. At endline, we surveyed teachers about their experience with their incentive scheme. We find no difference in teachers' reported experience along a number of dimensions. There is no difference in their responses to the following survey questions: (i) when teachers said they understood what was expected of them; (ii) awareness of the contract's main features; (iii) how frequently they thought about their contract; and (iv) whether the system unfairly favors certain types of teachers (age, gender, etc.). Figure 5 and Table A6 provide results for each of these survey questions, showing no statistical difference between teachers' responses by treatment.

Differences Across Treatments: Noise and Distortion

From here through section 5.4, we focus on two of the remaining differences between the treatments: noise and distortion. As highlighted in the theoretical framework, noise captures the extent to which a teacher's actions affect their incentive payment. Distortion captures the extent to which the actions with the largest marginal return to human capital are also the actions with a higher effective piece rate under the given performance measure. First, we show that the levels of noise and distortion differ across the treatments.

Noise

We measure noise using teachers' perceptions of the noisiness of their incentive treatment. To measure perceived noise, we ask teachers to agree or disagree (on a 5-pt scale) whether, under their contract, "their raise is out of their control", "those who work harder, earn more", and whether "I feel motivated to work harder". Figure 6 presents the average response to each question, with 1 being strongly disagree and 5 being strongly agree. Teachers in the subjective treatment feel their raise is more in their control, that hard work is rewarded, and that they are more motivated. The average difference is 0.14 sd across the three areas, and we can reject equality of treatments for all three questions at the 5% level.

Distortion

We measure distortion using endline survey data from teachers. We ask teachers to imagine a teacher who really wants to receive a higher raise at the end of the year and commits to working ten additional hours a week to increase their raise. We then ask how the teacher should allocate those ten hours across different activities, such as collaborating with other teachers, incorporating higher-order thinking skills into lessons, preparing practice tests, helping with extracurricular activities, etc. We group these 17 actions into four categories: administrative tasks (grading, helping with extracurriculars, monitoring duty), professional development (collaboration, training, improved English skills and content knowledge), pedagogy (use of student-centered and differentiated lessons), and test preparation (achieving certain grade targets). We find that teachers in subjective versus objective schools perceive some slight differences in which actions should be prioritized in order to increase their raise. Figure 7 and Table 7 present the differences in stated valuation of each area. Overall, teachers think those under the subjective contract should prioritize administrative tasks more and test preparation slightly less.
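As an illustration, the perceived-noise index and the four-category action grouping described above could be constructed roughly as follows. The column names, item coding direction, and category mapping are our assumptions, not the paper's actual variable definitions.

```python
import pandas as pd

def noise_index(survey: pd.DataFrame) -> pd.Series:
    """Standardized index of perceived contract noise from the three 5-pt items.

    Higher values = noisier contract, so the two agreement items are reverse-coded.
    Assumed columns: raise_out_of_control, hard_work_rewarded, feel_motivated.
    """
    items = pd.DataFrame({
        "out_of_control": survey["raise_out_of_control"],
        "not_rewarded": 6 - survey["hard_work_rewarded"],   # reverse-code
        "not_motivated": 6 - survey["feel_motivated"],      # reverse-code
    })
    return ((items - items.mean()) / items.std()).mean(axis=1)

# Hypothetical mapping of the 17 survey actions into the four categories.
CATEGORIES = {
    "grading": "admin", "extracurriculars": "admin", "monitoring_duty": "admin",
    "collaboration": "prof_dev", "training": "prof_dev", "english_skills": "prof_dev",
    "student_centered": "pedagogy", "differentiation": "pedagogy",
    "practice_tests": "test_prep", "grade_targets": "test_prep",
    # ... remaining actions omitted for brevity
}

def category_hours(alloc: pd.DataFrame) -> pd.DataFrame:
    """Sum each teacher's stated hour allocation within each category."""
    return alloc.rename(columns=CATEGORIES).T.groupby(level=0).sum().T
```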
We show in the next section that these actions have different implications for student outcomes.

Effect of Noise and Distortion on Outcomes

Noise

We showed that teachers believe there is less noise in the subjective performance measure. However, this does not tell us whether noise actually reduces the effectiveness of the incentive scheme. We showed theoretically that, with a fixed-variance incentive scheme, a noisier incentive scheme leads to a lower-powered incentive, but there is limited empirical evidence on this effect. To test whether noise affects outcomes, we exploit heterogeneity in noisiness within the subjective treatment.

Managers vary in the accuracy with which they assess teacher effort. Some managers observe lessons for each of their teachers every week; others sit down and review paper lesson plans; and some are more hands-off. To measure whether a manager has an accurate perception of what their teachers do, we ask teachers to answer the following question about three fellow teachers in their school: "The appraisal score their manager would give them is... [Too high/low by more than one raise category], [Too high/low by about one raise category], [Too high/low by less than one raise category], or [Accurate]". We then construct an average of these ratings per manager, capturing average perceived inaccuracy. On average, teachers believe their managers over- or under-rate their fellow teachers by 0.8 of an appraisal step (out of the five-step system shown in section 2.1). However, there is considerable heterogeneity: managers in the most inaccurate quintile are perceived to rate other teachers incorrectly by more than two steps.

More inaccurate managers may differ from their fellow managers in many ways (experience, age, school environment). However, manager accuracy should only affect the perceived noisiness of the incentive scheme in subjective treatment schools: in control or objective treatment schools, managers still rate their teachers but have no control over the incentive raise. Therefore, we use $ManagerRatingInaccuracy_j \times SubjectiveTreatment_i$ as the instrument for noise, controlling for $ManagerRatingInaccuracy_j$ and $SubjectiveTreatment_i$. We find that $ManagerRatingInaccuracy_j$ significantly predicts teachers' rating of the noisiness of their appraisal system in subjective but not objective/control schools, as we would expect. A 1 sd increase in manager inaccuracy increases beliefs about the noisiness of the contract by 0.1-0.4 sd in subjective schools. Table 8 presents the results from the first stage for data at the teacher and student level. Columns (2) and (4) add additional controls, including teachers' beliefs about the preference for different actions ("distortion") and teachers' beliefs about other non-noise features of the contract (timing, understanding, etc.). The coefficient on $ManagerRatingInaccuracy_j \times SubjectiveTreatment_i$ is very robust to the inclusion of these controls, suggesting that this instrument picks up differences in noise and not other features of the contract environment.

To test for the effect of noise on teacher and student outcomes, we use the following two-stage least squares specification:

$$ Y_{ij} = \alpha_1\, SubjectiveTreat_i + \alpha_2\, ManagerRatingInaccuracy_j + \alpha_3\, \widehat{Noise}_{ij} + \chi_{ij} + \varepsilon_{ij} \tag{8} $$

where $\alpha_3$ is the coefficient of interest and $Noise$ is instrumented using $ManagerRatingInaccuracy_j \times SubjectiveTreat_i$. $\chi_{ij}$ are controls, such as school, grade, and baseline controls when available for a given outcome.
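Mechanically, Eq. 8 can be estimated by manual two-stage least squares as sketched below; the array names are hypothetical, and this omits the clustered inference and weak-instrument-robust tests used in the paper.

```python
import numpy as np

def tsls_noise_effect(y, noise, subj, inacc, controls):
    """Manual 2SLS for Eq. 8 using inacc * subj as the excluded instrument.

    y: outcome; noise: endogenous perceived-noise index;
    subj: subjective-treatment dummy; inacc: manager rating inaccuracy;
    controls: (n, k) array of additional controls. All hypothetical arrays.
    """
    n = len(y)
    exog = np.column_stack([np.ones(n), subj, inacc, controls])
    instrument = inacc * subj

    # First stage: regress noise on the instrument plus all exogenous terms.
    Z = np.column_stack([exog, instrument])
    noise_hat = Z @ np.linalg.lstsq(Z, noise, rcond=None)[0]

    # Second stage: replace noise with its first-stage fitted values.
    X = np.column_stack([exog, noise_hat])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta[-1]  # alpha_3: effect of (instrumented) noise on the outcome
```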
We find that noise significantly reduces the effectiveness of performance incentives (Table 9). A 1 sd increase in the noisiness of the incentive scheme reduces teachers' hours worked by 13.2 hours per week and reduces test scores by 0.175 sd. We do not find an effect of noise on socio-emotional scores. Because our effective first stage has an F-statistic of less than 10, we present Anderson-Rubin (AR) test p-values, which are our preferred test given that they are robust to weak instruments in the just-identified case.

Distortion

Distortion is a measure of how correlated the marginal returns to human capital of different actions are with the effective piece rates for those actions. In order to measure distortion, we therefore need an estimate of the marginal returns to different actions. To do this, we again exploit heterogeneity across managers' preferences for different actions, combined with the subjective treatment. The idea behind this strategy is that managers have different preferences over actions: some state they want teachers to focus more on improving their lesson plans, others want teachers to help out more with administrative tasks, and so on. We interact those preferences with subjective treatment status (versus objective and control) to see the effect of preferences toward certain actions on student outcomes:

$$ Y_i = \alpha\, SubjectiveTreat_i + \sum_j \beta_j \left( PointsOnAction^i_j \times SubjectiveTreat_i \right) + \sum_j \delta_j\, PointsOnAction^i_j + \chi_i + \varepsilon_i \tag{9} $$

Here the coefficient of interest is $\beta_j$, which gives the effect of manager preference toward certain types of tasks on student outcomes. Actions are grouped into four categories: admin (grading, helping with extracurriculars, monitoring duty), professional development (collaboration, training, improved English), pedagogy (use of student-centered and differentiated lessons), and test prep (achieving certain grade targets). We also add additional controls to capture other features of the contract environment, such as noisiness and understanding of the contract. We find that several of the action categories are related to student outcomes, and these results are robust to controlling for other features of the contract environment (columns (2) and (4)).

Contribution of Noise and Distortion to Reduced Form Effects

Finally, we can pull the results together to understand the extent to which noise and distortion can explain the reduced form results of section 3.1. To do this, we decompose the total reduced form effect into a noise component, a distortion component, and an unexplained component, $\epsilon$:

$$ TotalEffect = \frac{\partial Y}{\partial Noise}\, dNoise + \frac{\partial Y}{\partial Distortion}\, dDistortion + \epsilon \tag{10} $$

The overall effect of subjective relative to objective incentives on test scores was close to zero (-0.006 sd, from Table 3). The effect of noise on test scores is -0.17 (Table 9), and there is 0.14 sd less noise in the subjective arm than in the objective arm (Figure 6). For the distortion component, we repeat the same approach for each of the four action categories (admin, professional development, pedagogy, and test prep): we take the difference between subjective and objective for each area (Table 7), multiply each category by the return of preference for that action on test scores (Table 10), and sum. In total, $\frac{\partial TestScore}{\partial Distortion} \cdot dDistortion$ is -0.03: subjective schools put slightly less focus on test scores. Combined, the positive effect of subjective incentives having less noise and the negative effect of their placing less focus on test scores almost exactly cancel. The remaining unexplained portion, $\epsilon$, is just 0.0002 sd, suggesting that noise and distortion are effective at explaining the student results.
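As a back-of-the-envelope check (not the authors' exact computation), the test-score decomposition in Eq. 10 works out as follows using the point estimates reported above.

```python
# Reduced-form gap: subjective minus objective effect on test scores (Table 3).
total_gap = -0.006

# Noise channel: effect of noise on scores (Table 9) times the noise gap
# (the subjective arm is 0.14 sd LESS noisy than the objective arm, Figure 6).
noise_component = (-0.17) * (-0.14)          # = +0.0238 sd

# Distortion channel: summed over the four action categories, as reported.
distortion_component = -0.03                 # subjective focuses less on tests

unexplained = total_gap - noise_component - distortion_component
print(round(unexplained, 4))                 # ~0.0002 sd: essentially zero
```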
We can repeat the same approach for socio-emotional skills. The overall effect of subjective relative to objective incentives on socio-emotional development was 0.0433 sd (Table 4). The effect of noise on socio-emotional skills is -0.06, and there is 0.14 sd less noise in the subjective arm than in the objective arm. Subjective-arm teachers also focus more on tasks related to socio-emotional skills; overall, $\frac{\partial SocioEmotional}{\partial Distortion} \cdot dDistortion$ is 0.011 sd. The remaining unexplained portion is 0.024 sd, or about half of the difference between the subjective and objective treatments. This is perhaps unsurprising given the results throughout this section: noise and distortion were much less related to socio-emotional skills than to test scores. This could be because there is in fact a weak relationship between them. Alternatively, we may be less successful at measuring socio-emotional skills, and we certainly have a harder time capturing which aspects of teachers' behavior are related to developing these skills. Better measurement in these areas is an important direction for future work.

Conclusion

In this paper, we provide evidence on the effect of subjective versus objective incentives for teachers. We find that both subjective and objective incentives increase test scores, but objective incentives have negative effects on socio-emotional development. These student outcomes make sense given the teacher behaviors we see under each incentive. In subjective treatment schools, teachers make small improvements in pedagogy and are involved in more professional development. In objective treatment schools, teachers distort effort toward test preparation: they spend much more time on practice tests and test strategies and use more punitive discipline. While there is heterogeneity in managers' application of the subjective treatment arm, we do not find evidence of widespread favoritism or bias.

We then try to understand the mechanisms underlying the reduced form effects. We show evidence that the two incentive schemes are similar along most dimensions except two: noise and distortion. We show teachers believe that the subjective incentive is less noisy and that it prioritizes both test and non-test student outcomes. Using heterogeneity within treatments, we isolate the effect of noise and distortion on student outcomes. Finally, we show that noise and distortion are able to explain a large portion of the reduced form test score effects, but a smaller fraction of the reduced form socio-emotional effect.

Figures and Tables

[Unless noted otherwise, all regressions include strata fixed effects; standard errors are clustered at the school level; 95% confidence intervals are shown; standard p-values in parentheses and randomization inference p-values in brackets; * p < 0.10, ** p < 0.05, *** p < 0.01.]

[Figure 1: Experimental timeline, showing treatment implementation and data collection activities.]

[Figure 2: Effects of each performance incentive treatment on student endline test scores (z-scores) relative to the control group (flat raises). Student-subject level observations; grades 4-13 in Math, Science, English, Urdu, and Economics; bars split by subject group, external question items (e.g., PISA, TIMSS), and prior-grade items. Controls for baseline student and school average test scores, grade, and subject.]
[Figure 3: Effects of each treatment on student socio-emotional outcomes (z-scores) relative to control: the average across all five dimensions and each individual dimension, from the January 2019 endline student survey. Controls for student's grade.]

[Figure 4: Effects of each treatment on teacher behavior rated from classroom videos: average CLASS score (Pianta et al., 2012; 7-pt scale), sub-areas of the rubric, and minutes of the observation spent on testing or test-prep activities. Teachers in grades 4-13, core academic subjects; teachers may be observed multiple times. Controls for grade and video coder fixed effects.]

[Figure 5: Teachers' endline responses about their incentive contract for the previous year. Panel A: beliefs about which actions fellow teachers take to increase their raise (5-pt scale, Strongly Disagree to Strongly Agree). Panel B: perceived bias for or against certain groups, such as female or older teachers (5-pt scale: 1 = strong bias against, 3 = no bias, 5 = strong bias in favor). Objective versus subjective treatment means.]
Figure B shows their responses to questions about whether certain groups are favored by the incentive scheme. • The red (blue) bars present the average response for teachers in the objective (subjective) treatment schools. The observations are at the teacher level and come from the endline survey of teachers. • In figure A, the outcome is a 5-pt scale from Strongly Disagree (1) to Strongly Agree (5). In figure B, the outcome is a 5-pt scale (1, lots of bias against; 3, no bias; 5, lots of bias in favor). • Standard errors are clustered at the school level. 95% confidence intervals are shown on the subjective bar comparing it to the objective treatment. Stars just above the bar show the significance of the subjective group relative to the objective. * p < 0.10, ** p < 0.05, *** p < 0.01.

Notes: This figure presents teachers' responses to questions about how they respond to their incentive. • The red (blue) bars present the average response for teachers in the objective (subjective) treatment schools. The observations are at the teacher level and come from the endline survey of teachers. • The questions are on a 5-pt scale from Strongly Disagree (1) to Strongly Agree (5). • Standard errors are clustered at the school level. 95% confidence intervals are shown on the subjective bar comparing it to the objective treatment. Stars just above the bar show the significance of the subjective group relative to the objective. * p < 0.10, ** p < 0.05, *** p < 0.01.

Notes: This figure presents teachers' responses to a hypothetical scenario in which they are advising a teacher on which actions to take to increase their raise under a given treatment. • The red (blue) bars present the average response for teachers in the objective (subjective) treatment schools. The observations are at the teacher level and come from the endline survey of teachers. • The questions are on a 5-pt scale from Strongly Disagree (1) to Strongly Agree (5). • Standard errors are clustered at the school level. 95% confidence intervals are shown on the subjective bar comparing it to the objective treatment. Stars just above the bar show the significance of the subjective group relative to the objective. * p < 0.10, ** p < 0.05, *** p < 0.01.

Notes: Data in panel A, columns (1) and (2), comes from administrative data collected from our partner school system. Data in panel B, columns (1) and (2), is from an endline survey conducted with 189 principals and vice principals and 5,698 teachers in our study sample. Data in panels A, B and C, columns (3) and (4), is from the World Management Survey data conducted by the Centre for Economic Performance (Bloom et al., 2015).

Notes: Data in panel A, columns (1) and (2), comes from administrative data collected from our partner school system. Data in panels B and C, columns (1) and (2), is from an endline survey conducted with 189 principals and vice principals in our study sample. Data in panels A and B, columns (3) and (4), is from the World Management Survey data conducted by the Centre for Economic Performance (Bloom et al., 2015). We restrict to the 270 schools located in the US from that sample.

Notes: This table presents the effects of each performance incentive treatment on student endline test scores. The outcome is the student's z-score on a given endline exam. The sample includes students tested in grades 4-13 in five subjects: Math, Science, English, Urdu, Economics. Column (1) includes all test subjects and question items. The observation is at the student-subject exam level. Column (2) restricts to question items which were from the previous grade. Column (3) restricts to question items drawn from external sources, such as PISA and TIMSS.
Column (4) restricts to math and science exams. Column (5) restricts to English, Urdu and Economics exams. All regressions include strata fixed effects and control for baseline student average test score, baseline school average test score, grade and subject. Values in parentheses are standard p-values. Values in brackets are randomization inference p-values. Standard errors are clustered at the school level. * p < 0.10, ** p < 0.05, *** p < 0.01.

Notes: This table presents the effects of each performance incentive treatment on student socio-emotional outcomes. The outcome is the student's z-score on a given socio-emotional dimension. Observations are at the student level and come from an endline survey of students in January 2019. Column (1) provides the average across all five dimensions of socio-emotional outcomes. Columns (2)-(6) provide each individual dimension. All regressions include strata fixed effects and control for the student's grade. Values in parentheses are standard p-values. Values in brackets are randomization inference p-values. Standard errors are clustered at the school level. * p < 0.10, ** p < 0.05, *** p < 0.01.

Notes: This table presents the effects of each performance incentive treatment on teacher behavior as rated from classroom videos. The unit of observation is the classroom observation. Teachers may be observed multiple times over the course of the intervention. Column (1) presents the average score on the CLASS rubric (Pianta et al., 2012), on a 7-pt scale. Columns (2)-(4) provide scores on sub-areas of the CLASS rubric. Column (5) provides the number of minutes during the observation that were spent on testing or test-prep activities. All regressions include strata fixed effects and control for grade and video coder fixed effects. Values in parentheses are standard p-values. Values in brackets are randomization inference p-values. Standard errors are clustered at the school level. * p < 0.10, ** p < 0.05, *** p < 0.01.

Notes: This table presents the effects of each performance incentive treatment on teacher attendance and time at work. The outcomes are the number of days present at work and the number of hours at work. Data come from biometric clock-in and clock-out data collected at all schools. The restricted sample removes teachers who took long leaves of absence or only worked at the school system for one of the two terms. All regressions include strata fixed effects and control for baseline school average test score, grade and subject. Values in parentheses are standard p-values. Values in brackets are randomization inference p-values. Standard errors are clustered at the school level. * p < 0.10, ** p < 0.05, *** p < 0.01.

Notes: This table reports teachers' responses to a hypothetical scenario in which they are advising a teacher on which actions to take to increase their raise under a given treatment. Data were collected as part of the endline survey, and observations are at the teacher level. Actions are grouped into four categories: administrative tasks, pedagogy, professional development, and test preparation. Table A3 provides teachers' weights for the full list of activities by treatment. * p < 0.10, ** p < 0.05, *** p < 0.01.

Notes: This table presents the relationship between manager rating inaccuracy and teachers' ratings of how noisy their contract was. The outcome is the teacher's rating of how noisy their contract was, as measured by an index of their responses to the three questions shown in Figure 6.
Columns (1) and (2) use data at the teacher level. Columns (3) and (4) use data at the teacher-student exam level. Student exam data are matched to all teachers who taught the student in the given exam subject for at least one term from January-December 2018. All regressions control for subject, class and manager inaccuracy squared. Columns (3) and (4) also control for school and student test baseline. Columns (2) and (4) add additional controls to pick up other non-noise differences across contracts. These controls include the weight placed on each of the four activity groups listed in Table 7, those values interacted with the Subjective treatment, when teachers said they learned about the treatment, and how often they received information about the treatment. Standard errors are clustered at the school level. * p < 0.10, ** p < 0.05, *** p < 0.01.

Notes: This table presents the relationship between teachers' ratings of the noisiness of their contract, instrumented by manager inaccuracy × Subjective treatment, and teacher and student outcomes. Columns (1) and (2) use data at the teacher level. Columns (3) and (4) use data at the teacher-student exam level. Student exam data are matched to all teachers who taught the student in the given exam subject for at least one term from January-December 2018. Columns (5) and (6) use data at the student level. All regressions control for subject, class, subjective treatment, manager inaccuracy, and manager inaccuracy squared. Columns (3) and (4) also control for school and student test baseline. Columns (2), (4) and (6) add additional controls to pick up other non-noise differences across contracts. These controls include the weight placed on each of the four activity groups listed in Table 7, those values interacted with the Subjective treatment, when teachers said they learned about the treatment, and how often they received information about the treatment. Standard errors are clustered at the school level. * p < 0.10, ** p < 0.05, *** p < 0.01.

Notes: This table presents the relationship between evaluation criteria interacted with treatment and student outcomes. Data are at the teacher level. All regressions control for the four categories of evaluation criteria and subjective treatment. Columns (2) and (4) add additional controls to pick up other non-distortion differences across contracts. These controls include the noise index, beliefs about whether the contract affects teacher competition or favors certain teachers, when teachers said they learned about the treatment, how often they received information about the treatment, and all of these outcomes interacted with subjective treatment. Standard errors are clustered at the school level. * p < 0.10, ** p < 0.05, *** p < 0.01.

Figure A1: Example Performance Criteria. Notes: This figure shows an example set of performance criteria a teacher would have set in collaboration with their manager at the beginning of the year. This list of criteria was located on their employment portal and was available to access throughout the year. Managers could set individual criteria for each of their employees. These ranged from 4 to 10 criteria spanning numerous aspects of the teacher's job description.

Figure A2: Example Midterm Information. Notes: This figure shows an example notification sent to teachers during the summer between the two school years. The notification gave teachers a preliminary performance rating based on the first term of the experiment.
Teachers received this information via email and as a pop-up notification on their employment portal. This example shows the notification that subjective treatment teachers would receive. Teachers in the objective treatment received midterm performance information based on their students' percentile value added from the first term. Teachers in the control schools received information about either their performance along the subjective criteria set by their manager or their students' percentile value added.

[Table excerpt: Teacher percentile (0-1) in: -0.003** / -0.017*** / -0.013***]

Notes: This table reports teachers' responses to a hypothetical scenario in which they are advising a teacher on which actions to take to increase their raise under a given treatment. Data were collected as part of the endline survey, and observations are at the teacher/manager level. * p < 0.10, ** p < 0.05, *** p < 0.01.

Notes: Column (2) includes the full sample of teachers, and column (3) just includes teachers for whom we conducted a classroom observation. Hours and days present are from biometric clock-in and clock-out data provided by the school system. Value-added is calculated using administrative test scores and endline test scores. The remaining variables are from classroom observations. The first 12 are the dimensions of the CLASS rubric, and the rest are additional elements of teaching not captured by the CLASS rubric. * p < 0.10, ** p < 0.05, *** p < 0.01.

Notes: The final column reports mean differences between treatment groups and whether any are statistically significant. The three "Is there any bias" questions are on a 5-pt scale (1, lots of bias against; 3, no bias; 5, lots of bias in favor). The remaining questions in panels A and B are on a 5-pt scale from 1 (strongly disagree) to 5 (strongly agree). Questions in panel C were on a scale from 1 to 8. Standard errors are clustered at the school level. * p < 0.10, ** p < 0.05, *** p < 0.01.

Notes: This table presents the treatment effects by manager characteristics. The row Interaction lists which characteristic is used as the interaction variable for a given column. Age, experience and gender are from administrative records. Manager inaccuracy is from teacher endline survey data. Management rating and Personnel management rating are from manager endline survey responses to World Management Survey questions. Standard errors are clustered at the school level. * p < 0.10, ** p < 0.05, *** p < 0.01.
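The specification repeated throughout the notes above, an outcome z-score regressed on treatment dummies with strata fixed effects, baseline controls, and standard errors clustered at the school level, reduces to a single regression call. The sketch below is an illustration only; every column name (z_score, objective, subjective, strata, school_id, and the baseline controls) is a hypothetical placeholder, and this is not the authors' code.

```python
# Minimal sketch of the repeatedly described specification: an endline z-score
# regressed on treatment dummies with strata fixed effects and baseline
# controls, with standard errors clustered at the school level.
# All column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

def treatment_effects(df: pd.DataFrame):
    model = smf.ols(
        "z_score ~ objective + subjective + C(strata)"
        " + baseline_student + baseline_school + C(grade) + C(subject)",
        data=df,
    )
    # Cluster-robust (school-level) standard errors
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["school_id"]})

# result = treatment_effects(df)
# print(result.params[["objective", "subjective"]])
```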
Utility of 18F-FDG and 11C-PBR28 microPET for the assessment of rat aortic aneurysm inflammation

Background
The utility of 18F-FDG and 11C-PBR28 to identify aortic wall inflammation associated with abdominal aortic aneurysm (AAA) development was assessed.

Methods
Utilizing the porcine pancreatic elastase (PPE) perfusion model, abdominal aortas of male Sprague-Dawley rats were infused with active PPE (APPE, AAA; N = 24) or heat-inactivated PPE (IPPE, controls; N = 16). Aortic diameter increases were monitored by ultrasound (US). Three, 7, and 14 days after induction, APPE and IPPE rats were imaged using 18F-FDG microPET (approximately 37 MBq IV) and compared with 18F-FDG autoradiography (approximately 185 MBq IV) performed at day 14. A subset of APPE (N = 5) and IPPE (N = 6) animals were imaged with both 11C-PBR28 (approximately 19 MBq IV) and subsequent 18F-FDG (approximately 37 MBq IV) microPET on the same day, 14 days post PPE exposure. In addition, autoradiography of the retroperitoneal torso was performed after 11C-PBR28 (approximately 1,480 MBq IV) or 18F-FDG (approximately 185 MBq IV) administration at 14 days post PPE exposure. Aortic wall-to-muscle ratios (AMRs) were determined for microPET and autoradiography. CD68 and translocator protein (TSPO) immunohistochemistry (IHC), as well as TSPO gene expression assays, were performed for validation.

Results
Mean aortic diameter increases at days 3 (p = 0.009), 7 (p < 0.0001), and 14 (p < 0.0001) were significantly greater for APPE AAAs compared to IPPE controls. No significant differences in 18F-FDG AMR were determined at days 3 and 7 post PPE exposure; however, at day 14, the mean 18F-FDG AMR was significantly elevated in APPE AAAs compared to IPPE controls on both microPET (p = 0.0002) and autoradiography (p = 0.02). Similarly, the mean 11C-PBR28 AMR was significantly increased at day 14 in APPE AAAs compared to IPPE controls on both microPET (p = 0.04) and autoradiography (p = 0.02). For APPE AAAs, inhomogeneously increased 18F-FDG and 11C-PBR28 uptake was noted preferentially at the anterolateral aspect of the AAA. Compared to controls, APPE AAAs demonstrated significantly increased macrophage cell counts by CD68 IHC (p = 0.001) as well as increased TSPO staining (p = 0.004). Mean TSPO gene expression for APPE AAAs was also significantly elevated compared to IPPE controls (p = 0.0002).

Conclusion
Rat AAA wall inflammation can be visualized using 18F-FDG and 11C-PBR28 microPET, revealing regional differences of radiotracer uptake on microPET and autoradiography. These results support further investigation of 18F-FDG and 11C-PBR28 in the noninvasive assessment of human AAA development.

Electronic supplementary material: The online version of this article (doi:10.1186/s13550-014-0020-z) contains supplementary material, which is available to authorized users.

Background
Abdominal aortic aneurysm (AAA) is a significant medical problem, with a high mortality rate, accounting for approximately 150,000 hospital admissions per year. AAA is the 10th leading cause of death in Caucasian men ages 65 to 74 years and accounts for nearly 16,000 deaths overall. Approximately 10% of males and approximately 2% of females over the age of 60 years have a known AAA diagnosis. More than 50% of patients will present with a ruptured AAA (rAAA) without a previous diagnosis of AAA. Over 36,000 AAA repairs are performed a year in the USA, highlighting the financial impact of this disease.
While clinically significant decreases in mortality following elective AAA repair have been documented, the mortality following open rAAA repair has remained high (50% to 75%) [1]. The primary factor considered for risk of human rAAA is the aortic diameter [2]. However, it is well documented that small AAAs (<5 cm) rupture, while many large AAAs (>8 cm) are incidentally discovered. Thus, there is no absolute diameter threshold at which AAAs rupture. This suggests that a new diagnostic modality needs to be developed to reliably identify an impending rupture. It is hypothesized that the final event leading to aortic rupture involves the degradation of collagen within the media, extending out through the adventitia. This occurs through a catalytic process induced by inflammatory cells in the aortic wall. Macrophages are a major source of key enzymes at play, particularly serine proteases, and are believed to play a critical role in this tissue degradation [3].

Inflammation of the vascular wall related to phagocytic macrophage activity can be demonstrated by increased uptake of 2-deoxy-2-[18F]fluoro-D-glucose (18F-FDG) PET in the arterial wall [4]. Accordingly, 18F-FDG PET has been used to assess AAA wall inflammation [5], and increased 18F-FDG uptake has been identified in symptomatic AAAs (pain on palpation of the AAA) and with rapid aortic enlargement [6]. Recently, we were able to show that increased 18F-FDG uptake is indeed related to an increased risk for rupture in the same established animal model utilized for this study [7].

Translocator protein (TSPO), formerly referred to as the peripheral benzodiazepine receptor, is an 18-kDa outer mitochondrial membrane protein highly expressed in phagocytic inflammatory cells, such as macrophages in the periphery and macrophage-like microglial cells in the central nervous system. It has been demonstrated that other inflammatory cells, including neutrophils, B cells, natural killer cells, as well as CD4- and CD8-positive cells, express TSPO to varying degrees [8]. The TSPO radioligand 11C-PBR28 ([methyl-11C]N-acetyl-N-(2-methoxybenzyl)-2-phenoxy-5-pyridinamine) has been used for neuroimaging in primates [9] and humans [10]. Other TSPO radioligands, including 11C-PK11195, have been utilized to evaluate macrophage presence and behavior in rodent and human carotid plaques [11,12].

In this study, 18F-FDG and/or 11C-PBR28 microPET were evaluated to determine whether increased radiotracer uptake could identify aortic wall inflammation in rat AAAs. We utilized an established model, in which AAA development is induced by active porcine pancreatic elastase (APPE) infusion compared to heat-inactivated PPE (IPPE) controls [7]. PET imaging results were further supported and validated by 18F-FDG and 11C-PBR28 autoradiography, as well as CD68 and TSPO immunohistochemistry (IHC) and cell culture experiments.

Methods
Male Sprague-Dawley rats (200 to 300 g) were obtained from Charles River Laboratories (Wilmington, MA, USA) and utilized for all experiments. Animal anesthesia was performed with a mixture of approximately 1.5% isoflurane and oxygen for all procedures. The core body temperature was maintained with a heating pad (37°C). All procedures as listed in Table 1 were approved by the University of Michigan Universal Committee on the Use and Care of Animals (protocol number 10430).
AAA model
AAAs (N = 24) were established utilizing active porcine pancreatic elastase (APPE, 12 U/mL), while control animals (N = 16) were exposed to heat-inactivated porcine pancreatic elastase (IPPE, 12 U/mL; APPE heated at 90°C for 45 min) as previously described [13]. After ventral abdominal wall incision, a customized polyethylene catheter (Braintree Scientific, Braintree, MA, USA) was introduced through an aortotomy, and PPE (12 U/mL) was instilled into the isolated aortic segment for 30 min. The exposed segment was dilated to a maximal diameter, and constant pressure was maintained with the use of a syringe pump. Using a video micrometer configured with NIS Elements software (Nikon, Melville, NY, USA), the aortic diameter was measured just distal to the crossing left renal vein and the proximal ligature, just proximal to the iliac bifurcation and the distal ligature, and midway between these two locations. During PPE exposure and aortic dilation, average increases in aortic diameter of 51.3 ± 2.3% and 48.2 ± 3.7% (p = not significant, NS) were observed for the APPE and IPPE groups, respectively. Upon reestablishment of segmental blood flow, mean aortic diameter increases of 47.0 ± 2.5% and 34.0 ± 3.1% (p = 0.003) were observed for the APPE and IPPE groups, respectively. At the time of sacrifice, animals were anesthetized with isoflurane, and the ventral incision was reopened. The abdominal aorta was dissected free from the surrounding tissues. Blood was collected from the inferior vena cava, and the aorta was then excised from the level of the left renal vein to the iliac bifurcation and processed.

Table 1. Study scheme: overview of the number of animals exposed to APPE or IPPE undergoing imaging.

Noninvasive aortic diameter measurements
Prior to performing a laparotomy, the intraluminal diameter of the aorta just distal to the left renal vein crossing, just proximal to the iliac bifurcation, and midway between these two locations was measured with ultrasound (US; 12 MHz, Zonare, Mountain View, CA, USA). Percentage increases in aortic diameter were determined considering the average baseline intraluminal aortic diameter and the maximum aortic diameter at days 3, 7, and 14 post PPE exposure. An aortic aneurysm was defined by a >100% increase in the aortic diameter compared to pretreatment measurements.

18F-FDG and 11C-PBR28 microPET
11C-PBR28 was synthesized as described in the literature [14,15]. At days 3, 7, and 14 after PPE exposure, sets of APPE and IPPE controls were imaged dynamically on a microPET R4 scanner (Concorde/Siemens Microsystems, Knoxville, TN, USA) for 90 min after IV injection of approximately 37 MBq 18F-FDG [16]. To avoid interference from radiotracer pooling in the inferior vena cava, 18F-FDG was injected intravenously via a right external jugular vein (EJV) cannula. Separate sets of IPPE control (N = 6) and APPE AAA (N = 5) animals underwent imaging with both radiotracers. First, approximately 19 MBq of 11C-PBR28 was injected via the right EJV catheter, and a 60-min dynamic microPET study was started. While keeping the animal under anesthesia and after near complete decay of 11C radioactivity (at 90 min post injection (p.i.) of 11C-PBR28), 37 MBq of 18F-FDG was injected, and a 90-min dynamic scan was started.

MicroPET image data reconstruction and analysis
All image data were corrected for attenuation by measured transmission scan, scatter, random events, and decay.
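Of these corrections, the decay correction is simple enough to state explicitly: a measured activity is scaled back to the reference time by a single exponential. The following is a minimal sketch; the 20.3-min half-life of 11C is taken from the autoradiography section below, while the 18F value of approximately 109.8 min is a standard constant assumed here rather than quoted from this paper.

```python
# Decay correction: scale a measured activity back to the reference time
# (e.g., injection). The C-11 half-life is stated in the text (20.3 min);
# the F-18 value is a standard constant assumed here, not from this paper.
import math

HALF_LIFE_MIN = {"C-11": 20.3, "F-18": 109.8}

def decay_correct(measured_activity: float, elapsed_min: float, isotope: str) -> float:
    lam = math.log(2) / HALF_LIFE_MIN[isotope]  # decay constant, 1/min
    return measured_activity * math.exp(lam * elapsed_min)

# Example: an activity measured 60 min after a C-11 injection must be scaled
# by about 2**(60/20.3), roughly 7.8x, to refer back to injection time.
# decay_correct(1.0, 60.0, "C-11")  # -> ~7.8
```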
Data were reconstructed using iterative ordered subset expectation maximization - maximum a posteriori (OSEM-MAP) [17], yielding a reconstructed image resolution of approximately 1.4 mm. To define volumes of interest (VOIs), the abdominal aorta was first identified on early bolus images (first 90 s), while summed transaxial late phase 11C-PBR28 (10 to 60 min p.i.) and late phase 18F-FDG (60 to 90 min p.i.) uptake data were used for further analyses. The total AAA or control aortic radiotracer uptake was normalized to the mean adjacent psoas muscle uptake (AMR_PET) for both 11C-PBR28 and 18F-FDG data sets using ASI Pro VM software (Siemens Medical Systems, Malvern, PA, USA) [18]. Normalization to muscle was chosen because the normal thoracic aorta was generally not assessable due to the limited field of view during dynamic microPET imaging and because the adjacent psoas muscle was available for normalization on both microPET and autoradiography.

In addition, maximum intensity projection (MIP) images of the abdominal aorta with iliac bifurcation were created by zeroing all voxels outside of the aorta VOI, then rotating the image volume about the Z-axis in increments of 11.25° (360°/32). At each rotation, the VOI pixels were weighted by a scale factor ranging linearly from 1.0 for the front-most voxel to 0.5 for the furthest voxel, and then the maximum intensity voxel was chosen for each pixel of the projection (a NumPy sketch of this projection scheme follows the autoradiography details below). Software to generate the MIP images was written using the IDL programming language (Exelis Visual Information Solutions, Boulder, CO, USA). Individual projections were volume rendered using Adobe After Effects CS5 software.

11C-PBR28 and 18F-FDG autoradiography
For autoradiography, 18F-FDG (approximately 185 MBq) was injected IV in five IPPE control and four APPE animals. 11C-PBR28 (approximately 1,480 MBq) was injected IV in five APPE and four IPPE controls. Circulation time prior to harvest was 60 min for 18F-FDG and 10 min for 11C-PBR28, respectively. Given rapid 11C-PBR28 kinetics [15], this earlier time point was chosen to optimize count statistics (20.3 min physical half-life of 11C), bearing in mind that whole-mount sectioning for autoradiography typically required considerable time (5 to 7 half-lives of 11C) prior to exposure of the phosphor imaging screen. The infrarenal abdominal aorta with aortic bifurcation and surrounding musculature and vertebrae were harvested en bloc and quickly frozen in OCT (Sakura, Torrance, CA, USA) at −80°C for approximately 20 min. A Leica cryomicrotome (Leica Microsystems Inc, Buffalo Grove, IL, USA) was used to obtain 20-μm sections for whole-mount autoradiography. Block face photographs were taken during sectioning to correlate radiotracer uptake on autoradiography with anatomical landmarks. Radioactivity was determined using a calibrated bio-image analyzer BAS-1800 (FUJIFILM Life Science, Stamford, CT, USA) after exposing sections for 2 and 4 h in the case of 11C-PBR28 and 18F-FDG, respectively. The number of photostimulated luminescence events per square millimeter (PSL/mm², mean ± SD), corrected for background, was measured using vendor-specific software (BAS-Reader) by drawing individual regions of interest (ROIs) for the aortic wall and psoas muscle, facilitated by comparison with block face photos for each section.
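Returning to the rotating MIP scheme described above: the authors implemented it in IDL, which is not reproduced here. The following Python/NumPy sketch is an illustrative reconstruction under stated assumptions (a 3D volume indexed (z, y, x) with the viewing direction along y); it shows the depth weighting and per-ray maximum, not the original code.

```python
# Illustrative re-sketch of the depth-weighted rotating MIP described above.
# Assumes a 3D volume indexed (z, y, x), already masked so that voxels
# outside the aorta VOI are zero, with the viewing direction along y.
import numpy as np
from scipy.ndimage import rotate

def depth_weighted_mips(vol, n_views=32):
    # Linear depth weights: 1.0 at the front-most voxel, 0.5 at the furthest
    weights = np.linspace(1.0, 0.5, vol.shape[1])[None, :, None]
    mips = []
    for k in range(n_views):  # 360/32 = 11.25 degree steps about the z-axis
        rot = rotate(vol, angle=k * 360.0 / n_views, axes=(1, 2),
                     reshape=False, order=1)
        mips.append((rot * weights).max(axis=1))  # maximum along each ray
    return mips
```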
For quantification, three autoradiographic coronal sections at the level of maximum aortic diameter were considered to calculate the mean autoradiographic aortic wall-to-psoas muscle uptake ratio (AMR_ARG) for control and AAA animals for further analysis.

Histology and immunohistochemistry
Tissues from the maximum diameter of each AAA and from a comparable region of each control aorta were taken for histologic analysis. Since the remaining tissue was utilized for molecular analyses, a direct spatial comparison between autoradiography and immunohistochemistry was not possible. Aortic tissue was fixed in 10% formaldehyde for 8 h, transferred and stored in 70% ethanol, and embedded in paraffin for sectioning (5 μm). Macrophage staining was performed with a mouse anti-rat CD68 primary antibody (1:67, Serotec, Raleigh, NC, USA); visualization of anti-CD68 was done with DAB (Dako North America, Inc., Carpinteria, CA, USA), and counterstaining was performed with hematoxylin 1 (Richard-Allan Scientific Co, Kalamazoo, MI, USA). TSPO IHC was performed with a mouse anti-rat TSPO primary antibody (1:100, Santa Cruz Biotechnology, Inc, Santa Cruz, CA, USA). After using an antigen retrieval solution (Vector Laboratories, Burlingame, CA, USA), the primary antibodies were detected using a Vectastain Elite Kit (Vector Laboratories, Burlingame, CA, USA). Stained sections were examined utilizing a Leica DMR microscope.

Macrophage cell culture
RAW 264.7 macrophages were grown to confluence on polystyrene 6-well plates in C-RPMI. Samples of 1.14 to 1.99 × 10^5 cells were incubated for 10 min with approximately 90 kBq of 11C-PBR28 and approximately 390 kBq of 18F-FDG (both at approximately 10 nmol/mL concentration), respectively. 11C-PBR28 and 18F-FDG cell uptake values were determined in macrophages stimulated with 10, 100, and 1,000 μg/mL lipopolysaccharide (LPS, Sigma-Aldrich Corporation, St. Louis, MO, USA) for 24 h and compared to nonstimulated cultures. The cells were washed three times in ice-cold PBS using a semi-automatic cell harvester (model 48LT; Brandel, Gaithersburg, MD, USA) before counting. All experiments were performed in triplicate.

qRT-PCR was performed utilizing 10 ng/well of cDNA. Primers and the SYBR Green Master Mix used for qRT-PCR were obtained from SABiosciences (Qiagen, Frederick, MD, USA). mRNA expression of TSPO (catalog no. Rn00560890) was compared with that of 18S, a housekeeping gene (catalog no. Mm03928990). The CFX96 Real-Time System and the CFX Manager software (version 2.1) were used to amplify target DNA and obtain the take-off values and melt curves. The following program was used on the Rotor-Gene: 95°C for 10 min; 40 cycles of 95°C for 15 s and 60°C for 60 s.

Data analysis
Statistical analyses were performed using Prism 5 (GraphPad Software, Inc., La Jolla, CA, USA). Quantitative results were analyzed by unpaired two-tailed unequal variance t tests (also known as Welch's correction) to account for possible unequal group variances [19]. Paired data were assessed using the Wilcoxon signed-rank test. Data are presented as mean ± standard error of the mean (SEM), when appropriate, and p values of less than 0.05 were considered statistically significant.

Results
Aortic diameters
The mean day 3 aortic diameter increases compared to pretreatment measurements for the APPE (N = 8) and the IPPE (N = 4) groups were 99.7 ± 5.5% and 64.4 ± 7.5% (p = 0.009), respectively. At days 7 and 14, the aortic diameter further increased significantly in the APPE group, to 185.6 ± 15.2% (N = 15, p < 0.0001) and 335.0 ± 30.8% (N = 24, p < 0.0001), respectively.
For the IPPE control group, the aortic diameter slightly decreased to 52.3 ± 5.0% (N = 11) and 34.6 ± 3.6% (N = 16) at days 7 and 14, respectively (Figure 1).

Figure 1: Aortic diameter percent increases. Active porcine pancreatic elastase (APPE) exposed animals (black circles, solid lines) developed abdominal aortic aneurysms (AAA) by day 3 post APPE exposure, and AAA diameters continued to increase over the 14-day time period. Heat-inactivated PPE (IPPE) exposed animals (white circles, dashed lines) did not develop AAAs (defined as a >100% increase in aortic diameter compared to pretreatment measurements).

11C-PBR28 and 18F-FDG microPET
Figure 2 displays representative 18F-FDG and 11C-PBR28 microPET images. Image analysis was complicated, as aortic wall uptake was generally lower than the radioactivity in bowel in both 18F-FDG and 11C-PBR28 scans. However, the blood pool activity seen on early phase imaging (0 to 90 s p.i.) revealed the location of the abdominal aorta and AAAs. On late phase images, AAAs were always visually identifiable on both 11C-PBR28 and 18F-FDG microPET data. However, a clear trend favoring one of the two tracers for visualization of AAAs was not observed. While 11C-PBR28 accumulated rapidly in the renal cortex, the renal excretion of radioactivity into the urine, as seen with 18F-FDG, was not observed with 11C-PBR28. Dynamic PET imaging, however, improved identification of extra-aortal structures, as radioactivity associated with bowel content and urine was variable over time. Due to unavoidable restrictions regarding the availability of radiotracers and scanner time, imaging could not be performed on all animals at every time point.

18F-FDG microPET at 14 days post PPE exposure (Figure 3) revealed a significantly greater APPE AAA AMR_PET (mean ± SEM 17.6 ± 1.2, N = 14) compared to IPPE controls (9.8 ± 1.2, N = 12; p = 0.0002). No significant differences in AMR_PET were noted at 3 and 7 days post PPE exposure between APPE and IPPE groups. At day 3 post PPE exposure, the mean 18F-FDG AMR_PET values for APPE AAAs (N = 7) and IPPE controls (N = 10) were 13.9 ± 1.0 and 15.8 ± 1.7 (p = NS), respectively. At day 7, the mean 18F-FDG AMR_PET values for APPE AAAs (N = 7) and IPPE controls (N = 9) were 12.0 ± 0.9 and 11.7 ± 1.2 (p = NS), respectively. Interestingly, the average 18F-FDG uptake (between 60 and 90 min p.i., in nCi/cm³) in the aorta of IPPE control animals was 1.9 times higher than muscle background (p < 0.02).

11C-PBR28 microPET data were not obtained at days 3 and 7 post PPE exposure. Time-activity data obtained at day 14 indicated that the 11C-PBR28 AMR_PET was fairly stable from 10 to 60 min after tracer injection and that the AMR_PET was higher in APPE AAAs compared to IPPE controls (Figure 4). In fact, when summing data between 10 and 60 min p.i. at day 14, the mean APPE AAA 11C-PBR28 AMR_PET (12.4 ± 1.6, N = 5) was significantly greater than that of IPPE controls (7.1 ± 1.0, N = 6) (p = 0.04). Also, the average 11C-PBR28 uptake (summed between 10 and 60 min p.i., in nCi/cm³) in the aorta of controls was 2.5 times higher than in muscle background (p < 0.001). Figure 5 shows early and late phase segmentations of the aortic VOIs obtained from representative APPE and IPPE control animals. As seen on anterior projections, the 18F-FDG and 11C-PBR28 distribution was homogeneous in controls, but clearly inhomogeneous within the AAA wall, often displaying increased uptake at the anterolateral aspect of the AAA in APPE animals.
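The ratios compared above are simple arithmetic on region means, followed by the unpaired Welch t test named in the Methods. A minimal sketch follows, with illustrative placeholder values rather than study data:

```python
# Wall-to-muscle ratio and group comparison, as described in the Methods.
import numpy as np
from scipy import stats

def amr(aortic_uptake, psoas_uptake):
    """Aortic wall-to-muscle ratio: VOI (or ROI) uptake over psoas uptake."""
    return np.asarray(aortic_uptake, dtype=float) / np.asarray(psoas_uptake, dtype=float)

def compare_groups(amr_appe, amr_ippe):
    """Unpaired two-tailed t test with Welch's correction (unequal variances)."""
    return stats.ttest_ind(amr_appe, amr_ippe, equal_var=False)

# Usage with placeholder numbers (not study data):
# t, p = compare_groups([17.2, 18.1, 16.5], [9.5, 10.1, 9.9])
```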
Additional movie files (scaled to SUV values ranging from 0 to 2) show the above cases from Figure 5 in greater detail (see Additional files 1, 2, 3, 4, 5, 6, 7 and 8).

18F-FDG and 11C-PBR28 autoradiography
Analysis of autoradiographic data was not impaired by nearby radioactivity and clearly identified increased 18F-FDG (Figure 6) and 11C-PBR28 (Figure 7) uptake in the aortic wall of APPE AAAs compared to IPPE control aortas. As identified on autoradiography, the aortic wall uptake of 18F-FDG and 11C-PBR28 was generally nonuniform in APPE AAAs and homogeneous in IPPE controls. In the case shown in Figure 7, mildly increased focal 11C-PBR28 uptake was noted due to inflammation related to the suture material. The mean AMR_ARG for 18F-FDG on day 14 was 9.7 ± 2.3 for APPE AAAs (N = 4) and 2.4 ± 2.0 for IPPE controls (N = 5), respectively (p = 0.02). The mean 14-day AMR_ARG for 11C-PBR28 was 28.9 ± 4.0 for APPE AAAs (N = 4) and 5.0 ± 0.2 for IPPE controls (N = 4), respectively (p = 0.01). On average, a 4-fold increase in the aortic wall uptake of 18F-FDG and a 5.8-fold increase in the aortic wall uptake of 11C-PBR28 were demonstrated by APPE AAAs compared to IPPE control aortas. While the mean AMR_ARG values for 18F-FDG (2.4 ± 2.0) and 11C-PBR28 (5.0 ± 0.2) in the aortic wall of IPPE control animals were numerically higher than muscle background (set to 1), the difference failed to reach significance for both tracers, which was likely related to the small number of animals investigated.

Macrophage cell culture
Figure 9 displays decay-corrected 11C-PBR28 and 18F-FDG RAW 264.7 macrophage cell uptake (expressed as percentage of injected dose/million cells) in response to LPS stimulation. The 11C-PBR28 uptake increased in a dose-dependent fashion, from 3.95 ± 0.51 in nonstimulated macrophages to 5.33 ± 0.38 at 1,000 ng/mL LPS. The 18F-FDG uptake also increased, from 0.12 ± 0.04 at 0 ng/mL LPS to 0.33 ± 0.09 at 1,000 ng/mL LPS. Using the same initial mass tracer concentrations (approximately 10 nmol/mL), the macrophage cell uptake of 11C-PBR28 was substantially (between 16 and 32 times) higher than that of 18F-FDG at every stimulation level (p < 0.001).

Discussion
Comparable aortic diameter increases during IPPE and APPE exposure demonstrate the consistency and reproducibility of the model. The greater degree of aortic diameter increase associated with the APPE group, immediately following dilation, compared with the IPPE group demonstrates the subsequent effect of the 30-min exposure to APPE. The ultimate aortic diameter increases at day 14 post APPE exposure are consistent with previous results for this model [7,13]. We interpret the decrease in aortic diameter following IPPE exposure in the control group over the 14-day period as a consequence of the reparative process that the rat aorta undergoes after the mechanical disruption caused by sham treatment. In this well-established animal model [13,20,21], it has been demonstrated that a wide variety of inflammatory cells, including neutrophils, T cells, mast cells, and monocytes, infiltrate the AAA wall during the acute and transitional phases of AAA development that occur over the first 3 and 7 days, respectively. Of these inflammatory cells, macrophages and T cells maintain a greater presence during the chronic inflammatory phase out to and beyond 14 days post APPE exposure [22].
By performing 18F-FDG and 11C-PBR28 microPET imaging and corresponding autoradiography, with supportive IHC, in APPE AAA and IPPE control animals, we tested whether noninvasive microPET imaging would allow the evaluation of the inflammatory response to PPE exposure. The assessment of rodent AAA development by small animal microPET is limited due to the small size of the rat aorta and the potential presence of confounding nearby radioactivity in bowel loops or ureters. We addressed these limitations in part by performing dynamic microPET and by injecting the radiotracers via the jugular vein catheter. As a result, a clear bolus of activity could be seen in the aorta, without interference from activity in the inferior vena cava, providing the necessary anatomical location of the aortic lumen for VOI definition.

Using this technique, we identified elevated 18F-FDG uptake in the aortic wall 14 days after APPE exposure compared to controls, while no significant differences were noted at 3 and 7 days. These results were further validated by autoradiography, performed 14 days post PPE exposure, demonstrating significantly greater 18F-FDG uptake in the aortic wall of APPE AAAs compared to IPPE controls. The decreasing 18F-FDG uptake over the course of time post IPPE exposure correlates with the reparative process represented by continued decreases in aortic diameter in controls. By contrast, at 14 days post APPE exposure, the increased 18F-FDG uptake correlated with the increasing aortic diameter, which is caused by an ongoing inflammatory response. Our inability to differentiate 18F-FDG uptake by IPPE controls and APPE AAAs at 3 and 7 days post PPE exposure likely reflects a combination of the inflammatory processes associated with the operative procedure and the transient mechanical aortic disruption that occurs within both groups.

Given that the 18F-FDG uptake in our model has not previously been assessed, comparisons with the literature are limited. Ogawa et al. observed greater aortic uptake on 18F-FDG PET imaging and determined greater 18F-FDG differential uptake ratios for the thoracic and abdominal aortas of Watanabe heritable hyperlipidemic rabbits that developed intimal thickening and plaque formation [23]. Furthermore, increased 18F-FDG uptake was correlated with increased macrophage populations in atherosclerotic plaques on IHC [23] and with predominant T lymphocytes in human AAAs undergoing scheduled surgical repair [24]. Recently, Courtois et al. identified increased 18F-FDG uptake associated with active inflammation characterized by infiltrates of proliferating leukocytes in the adventitia of AAAs [25]. We observed increased macrophage populations by CD68 IHC and increased 18F-FDG uptake ratios by autoradiography in AAA walls compared to control aortic walls, however, in separate groups. As measured by microPET, the 18F-FDG uptake in the aorta of sham-treated animals was significantly higher compared to muscle background, which was likely related to the inflammation induced by the surgical trauma.

TSPO binding by PET radioligands has been well investigated with regard to central nervous system inflammation [26-28]. In addition, we have recently shown the utility of 11C-PBR28 to visualize extracranial inflammation in acute and chronic animal models based on carrageenan and T cell-mediated adjuvant arthritis, respectively [15]. However, little is known about the utility of TSPO radiotracers to demonstrate aortic wall inflammation.
Here, we demonstrate that 14 days after APPE exposure, significantly increased 11C-PBR28 uptake is noted in the AAA wall on microPET and autoradiography compared to controls. In addition, we identified significantly increased TSPO expression in APPE AAA walls compared to IPPE control aortas using qRT-PCR. While TSPO and CD68 IHC as well as 18F-FDG and 11C-PBR28 autoradiography were performed in this study, a direct spatial comparison between the two was not available due to technical incompatibilities (lack of IHC antibodies suitable for frozen samples). However, spatial co-localization of the TSPO radioligand 3H-PK11195 and TSPO IHC has previously been verified by Gaemperli et al. [12]. Also, we previously identified a correlation of 11C-PBR28 uptake with CD68 staining scores in carrageenan-induced acute inflammation [15]. These observations provide further supportive evidence that the increased 11C-PBR28 uptake in APPE AAAs was indeed related to an increase in TSPO protein expression.

Sarda-Mantel et al. recently evaluated 18F-FDG and the TSPO radioligand 18F-DPA714 to assess AAA wall inflammation utilizing an aortic xenograft model involving orthotopic implantation of decellularized guinea pig abdominal aorta in rats [29]. While the time course for AAA development and the respective inflammatory response associated with this aortic xenograft model are quite different from those associated with the PPE model, their results also indicate a potential role for 18F-FDG and 18F-DPA714 in the assessment of AAA wall inflammation.

In our study, we observed a greater mean AAA 18F-FDG uptake compared to 11C-PBR28 uptake on microPET (expressed as AMR_PET: 17.6 ± 1.2 and 12.4 ± 1.6, respectively) at day 14 post APPE exposure, but a greater mean AAA 11C-PBR28 uptake compared to 18F-FDG on autoradiography (expressed as AMR_ARG: 28.9 ± 4.0 and 9.7 ± 2.3, respectively). It should be noted that a direct comparison of 11C-PBR28 and 18F-FDG is difficult due to important technical differences (mainly related to differences in voxel resolution) between microPET and autoradiography and profound differences in the AMR analytic technique between these two modalities (forced by the different physical half-lives of 11C and 18F). Given the rapid decay of 11C-PBR28, whole-mount autoradiography was limited to a subset of coronal slices cut through the center of the AAA, whereas the calculation of AMRs obtained from PET is based on VOIs including the entire aortic wall. In addition, while the longer physical half-life of 18F allowed for similar timing of 18F-FDG PET and autoradiography (at 60 min post injection), the short half-life of 11C required sectioning to start much earlier. Since we previously identified rapid kinetics of 11C-PBR28 in two other inflammation models [15], and since we saw a significant difference in the 11C-PBR28 AMR_PET as early as 10 min p.i., a 10-min circulation time was selected for 11C-PBR28 autoradiography. Other researchers have also opted for early time point autoradiography (at 20 min p.i.) when investigating atherosclerotic plaques with 11C-PK11195 [11]. Thus, AMRs obtained from PET (AMR_PET) and autoradiography (AMR_ARG) were timed differently and were not expected to necessarily display similar results. Nevertheless, both measurements (PET and autoradiography) identified increased 18F-FDG and 11C-PBR28 uptake in APPE AAAs compared to IPPE controls. The uptake of both tracers was nonuniform within AAAs, favoring the anterolateral aspects of aneurysms.
It remains to be seen whether this observation is specific to the PPE model. Similar to 18F-FDG, we observed increased 11C-PBR28 uptake (AMR_PET) in IPPE control aortas compared to muscle background. While elevated TSPO radiotracer uptake in healthy mouse arteries has been observed [11], we caution that the elevated 11C-PBR28 uptake in IPPE control aortic walls may primarily have been the result of the unavoidable surgical trauma.

TSPO ligands such as 11C-PBR28 may provide more specific (or relevant) information about inflammatory cell infiltration into the AAA wall than 18F-FDG. In humans, asymptomatic noninflammatory AAAs selected for surgical repair due to size criteria lack significantly increased 18F-FDG uptake on PET while, at the same time, focal 18F-FDG uptake is identified on autoradiography, co-localizing with focal accumulations of CD45-, CD3-, and CD20-positive leukocytes on IHC, with very little contribution of macrophages [24]. However, macrophages play an important role in the AAA wall prior to rupture [30]. Matrix metalloproteinases produced by macrophages (and B cells), degrading collagen and elastin in the aortic wall, have been linked with the continued enlargement of AAAs [3,31]. As mentioned above, TSPO radiotracers such as 11C-PBR28 are known for preferential accumulation in macrophage-like glial cells [26,28]. As shown in this study as well as on a prior occasion [15], 11C-PBR28 also displays elevated uptake in activated macrophages. Given that the macrophage cell uptake of 11C-PBR28 was at least one order of magnitude greater than that of 18F-FDG (measured at the same initial mass tracer concentration), one could speculate that the presence (or level) of 11C-PBR28 uptake in human AAAs may be of greater clinical relevance than that of 18F-FDG due to its more specific link to macrophage function. This elevated macrophage cell uptake of 11C-PBR28, particularly with activation, also raises the possibility that the partial-volume-effect-related limitations of 18F-FDG in the assessment of human AAAs [32] could potentially be overcome.

Due to different binding mechanisms, the binding affinity of various TSPO radioligands may vary significantly. In humans, it has now been clearly demonstrated that the TSPO protein exists as a monomer as well as part of a multimeric complex consisting of multiple TSPO monomers. Furthermore, human TSPO polymorphism significantly influences 11C-PBR28 binding potential [33,34]. To assess binding potential in humans, a simple blood test now evaluates the binding affinity of platelets for 11C-PBR28 [35]. While the existence of such a TSPO polymorphism has only been investigated in humans, its existence cannot be excluded in rats and may have inadvertently interfered with our results. However, despite human TSPO polymorphism, the potential of 11C-PBR28 PET for assessments of human AAA development, atherosclerosis, and vasculitis appears to be significant [36]. In addition, certain TSPO receptor ligands may exert an anti-inflammatory effect via steroid synthesis that could downregulate pro-inflammatory interleukin secretion. Torres et al. demonstrated decreased acute extremity and pulmonary inflammation in two murine models with administration of PK11195 and Ro5-4864, with specific reductions in IL-6 and IL-13 [37]. Therefore, TSPO radioligands could become useful not only as PET imaging agents to aid in diagnosis but also to monitor anti-inflammatory treatments targeting the TSPO protein.
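For completeness, the cell-uptake metric reported for the culture experiments (decay-corrected percentage of injected dose per million cells) can be computed as follows. The exact normalization used by the authors is not spelled out in the text, so this sketch, with hypothetical argument names, is an assumption about a conventional calculation rather than their protocol.

```python
# Decay-corrected uptake as a percentage of added activity per 10^6 cells.
# The normalization below is a conventional choice, assumed rather than
# taken from the paper; argument names are hypothetical.
import math

def pct_id_per_million_cells(cell_counts, added_counts, n_cells,
                             elapsed_min=0.0, half_life_min=20.3):
    # Correct the cell-associated counts back to the time the dose was added
    corrected = cell_counts * math.exp(math.log(2) / half_life_min * elapsed_min)
    return 100.0 * (corrected / added_counts) / (n_cells / 1e6)

# Example with placeholder values (not study data):
# pct_id_per_million_cells(cell_counts=500.0, added_counts=90000.0,
#                          n_cells=1.5e5, elapsed_min=30.0)
```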
Natural Variability in Parent-Child Puzzle Play at Home

Here, we observed 3- to 4-year-old children (N = 31) and their parents playing with puzzles at home during a Zoom session to provide insight into the variability of the kinds of puzzles children have in their home, and the variability in how children and their parents play with spatial toys. We observed a large amount of variability in both children's and parents' behaviors, and in the puzzles they selected. Further, we found relations between parents' and children's behaviors. For example, parents provided more scaffolding behaviors for younger children, and parents' persistence-focused language was related to more child attempts after failure. Altogether, the present work shows how, using methods of observing children at a distance, we can gain insight into the environment in which they are developing. The results are discussed in terms of how variability in spatial toys and spatial play during naturalistic interactions can help us contextualize the conclusions we draw from lab-based studies.

INTRODUCTION
Spatial skills are central for everyday functioning, allowing us to encode the features, locations, and orientations of objects, as well as mentally manipulate this information. Spatial skills not only make it possible to interpret maps and diagrams, but they are also important predictors of later achievement across diverse STEM disciplines (Wai et al., 2009; Uttal and Cohen, 2012). For decades, research has documented a significant and robust relationship between spatial skills and mathematics performance over the course of development (Smith, 1964; Guay and McDaniel, 1977; Brown and Wheatley, 1989; Casey et al., 1995; Shea et al., 2001; Wai et al., 2009; Pyers et al., 2010; Cheng and Mix, 2014; Verdine et al., 2016, 2017). As a result, identifying factors that might influence the development of spatial skills in early childhood has received a great deal of attention in the literature. For example, researchers have examined children's constructive play, or play with toys that involve the manipulation of objects in space, such as jigsaw puzzles, shapes, or construction blocks. A large body of research has reported a positive relationship between constructive play in childhood and both advanced concurrent spatial abilities (Connor and Serbin, 1977; Serbin and Connor, 1979; Caldera et al., 1999) and enhanced spatial skills later in development (Newcombe et al., 1983; Baenninger and Newcombe, 1989; Dearing et al., 2012; Levine et al., 2012; Nazareth et al., 2013; Jirout and Newcombe, 2015). Further, a handful of intervention studies have shown a causal relation between children's experiences with constructive play and a subsequent increase in various spatial skills (Casey et al., 2008; Bower et al., 2020; Schröder et al., 2020).

Importantly, such constructive play often occurs during interactions with parents. Thus, parents' behavior during such play may be important for developing spatial abilities as well. For example, Levine et al. (2012) found that parents used more spatial language, including words describing the spatial properties of objects (e.g., "big," "little," "flat," and "edge"), when their children were engaged with more challenging puzzles.
This finding is important because children who hear more spatial language perform better on spatial tasks (e.g., Szechter and Liben, 2004; Dessalegn and Landau, 2008; Casasola et al., 2009, 2020). Thus, exposure to language is one possible mechanism for how play with parents shapes children's developing spatial abilities.

Parents may also support children's emerging spatial skills during constructive play by giving feedback, structuring the task, and modeling ways to problem solve (Wood et al., 1976; Gauvain et al., 2002; Mulvaney et al., 2006; Ralph et al., 2020; Thomson et al., 2020). Children whose mothers provided more support or scaffolding during a spatial task performed better on a cognitive capability test that included measures of spatial ability (Mulvaney et al., 2006). Further, several studies have shown that parents who better communicate task objectives and provide appropriate feedback have children who perform better on spatial tasks and tests of spatial concepts (Casey et al., 2014; Lombardi et al., 2017). Thus, scaffolding is another mechanism by which parents may influence children's spatial development during play.

Altogether, a large and growing literature suggests that several factors, including constructive play, exposure to spatial language, and parent scaffolding, may all play a role in shaping the development of children's spatial skills. Importantly, many of these studies have been conducted outside of the home, typically in a lab setting, with specific constructive play toys and tasks provided to parents and children. Although such experimental control allows us to derive conclusions based on standard conditions, the sole use of such assessments is limited, as children's behavior, along with parents' behavior with their children, might differ in the lab when compared to this behavior at home. Moreover, the constructive toys provided for a study in the lab may differ from those with which children typically play. Indeed, parents themselves have a great deal of control over what types of spatial toys they make available for their children, and they have many options to choose from. A simple Google search for children's spatial toys produced over 5 million results, which can be narrowed down by the type of spatial toy in which a parent is interested, along with price and the age and gender of their child. And there is evidence that the types of toys with which children play might bring about specific types of behaviors. In fact, researchers have even suggested that gender differences in spatial abilities might be attributable to differences in the toys parents select for girls versus boys (Todd et al., 2016; Coyle and Liben, 2020).

The COVID-19 pandemic has put a number of constraints on researchers' ability to collect data with children in the lab and has, in some ways, necessitated new approaches to studying development. Here, we show how we used a videoconference platform (Zoom) to study spatial play at home from a distance, along with the spatial and constructive toys that parents typically choose for their children. The existing studies that have examined children and their parents playing with toys in the home have focused on the relationship between the frequency of spatial play and parent support (Levine et al., 2012), or parent language and children's performance on spatial tasks (Mulvaney et al., 2006; Pruden et al., 2011; Polinsky et al., 2017; Ralph et al., 2020). Here, we asked a different question.
Specifically, we sought to characterize the variability in various factors linked to spatial skills in children during their naturalistic play with the spatial toys they had at home. We explored variability in the types of puzzles families of 3- and 4-year-old children interact with in their homes, and the nature of those parent-child interactions during naturalistic play. We conducted the study over Zoom, and simply recorded parents and children as they played.

Participants
Children between the ages of 3 and 4 years and their parents were recruited via a Rutgers University-maintained database to participate in an online study investigating the development of spatial skills in children ages 3 and 4 years. Forty-two dyads participated in the study. Eleven were not included in our final sample due to either deviation from the protocol (N = 3) or lack of puzzles at home (N = 8). The final sample included 31 children (14 female, M_age = 44.6 months, SD = 6.32, range = 35.8-55.3 months) and their parents. All except for two parents presented as female. Families identified as White (N = 28), Asian (N = 2), or Mixed Race (N = 1). Across all racial categories, four identified as Hispanic or Latino (three were White and one was Mixed Race). All caregivers had earned a bachelor's degree, and 23 held advanced degrees. Our sample was middle class, with 22 families reporting an annual income above $100,000 and only one family reporting an annual income below $40,000. The Rutgers Institutional Review Board approved all procedures.

Procedure
Parents were invited to participate in an online study. Once an appointment was scheduled, families were emailed a link to a secure online survey via Qualtrics. This survey contained a consent form and an extensive questionnaire designed to describe the children's home play environment. This questionnaire was part of a larger study designed to quantify the number and kinds of spatial toys in the participants' homes, and most of it will not be reported here. In one section of the survey, parents were presented with sample photographs of jigsaw puzzles and puzzle boards and were asked if they had those or similar toys at home. Parents were then asked to submit photos of those toys. The photos were used to code properties of the puzzles parents and children played with during our study. One day prior to the study, participants received a reminder email informing them that they would be playing with puzzles. Parents were asked to select two puzzles from the ones they described in the survey for use during the study. The study itself was conducted on Zoom. On the day of the study, a researcher informed participants that they would be recorded playing with their child. Parents were asked to set up the camera at a high angle so that all the pieces and the playing space were in view and the researcher was able to look down at the participants' hands and all the pieces (see Figure 1). The researcher asked the parents to retrieve the previously selected puzzle(s). Parents and children were then instructed to play with each puzzle as they normally would for 10 min. If participants finished both puzzles before the 10-min mark, they were asked to retrieve additional puzzles. Thus, some children completed one puzzle during the 10-min session, while others completed up to five. If they did not complete the puzzle during the 10-min session, parents and children were given the option to finish.
The researcher turned off her camera during the play period so that the parent and child could no longer see the researcher observing, and the researcher did not interrupt the play period before the 10-min mark.

Coding

Coders watched the recorded play sessions to categorize the puzzles' difficulty and to identify instances of specific child and parent behaviors. Children's insertion attempts, parental scaffolding behavior, and parental language were all coded using the open-source behavioral coding software Datavyu.

Puzzle Difficulty

Parents chose puzzles that varied on a number of characteristics. One coder viewed all sessions and characterized all of the selected puzzles based on dimensions that might influence puzzle difficulty. There were five nested dimensions, each of which was assigned a value of 0 (easiest) or 1 (most difficult). The first dimension was puzzle type, which referred to whether the puzzle was a board puzzle (0) or a jigsaw puzzle (1) (see Figure 2A). Puzzles were further coded for whether or not they had a tray (0 if they did and 1 if they did not; Figure 2B). Puzzles that had a tray were then coded for whether they contained a background image that matched the puzzle piece (0) or no background image (1) (see Figure 2C). Puzzles that contained large pieces (i.e., pieces that were larger than the child's hands) were considered easier (0) than standard jigsaw puzzles (1) (see Figure 2D). Finally, puzzles were coded for whether or not they involved interlocking pieces (no interlocking = 0 and interlocking = 1; see Figure 2E). These dimensions were summed. For example, a jigsaw puzzle (1) with a tray (0) that contained a background image (0) with large (0) interlocking (1) pieces would receive a score of 2. The number of pieces in each puzzle was also coded from the videos of the play session and from the puzzle photos submitted through the Qualtrics questionnaire. If information about the number of pieces was missing, an online search was conducted to identify the puzzle and obtain the specifications from the manufacturer's website. A second coder coded 25 puzzles out of a total of 65, and reliability was calculated for all the classifications described above (κ = 1) and for the number of pieces (percent agreement = 96%). A puzzle difficulty composite score was then created by adding the binary values of all the coded difficulty dimensions and a code ranging from 1 to 5 based on the number of pieces the puzzle contained (i.e., 1 to 10 pieces received a score of 1; 11 to 20 pieces received a score of 2; 21 to 30 pieces received a score of 3; 31 to 40 pieces received a score of 4; and greater than 40 pieces received a score of 5). The final puzzle difficulty score ranged from 1 to 10, where a score of 10 was the most difficult.

Parent Behaviors

Two coders identified parent scaffolding events in the play session. Scaffolding events consisted of the sum of four different behaviors: (1) removing a piece that was placed in an incorrect space by the child, (2) helping by handing the child individual pieces or rotating pieces for the child, (3) pointing to or outlining a piece or a space in the puzzle, and (4) pointing to or outlining the pictorial representation of the puzzle. Inter-rater reliability was calculated for piece removal (κ = 0.85), helping (κ = 0.82), pointing to (κ = 0.81) or outlining (κ = 0.74) a piece or space, and pointing to/outlining a pictorial representation (κ = 0.92).
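The paper does not state how these inter-rater reliability coefficients were obtained; as a point of reference, Cohen's kappa can be computed directly from two coders' event labels. The base-R sketch below uses illustrative labels, not the study's data.

```r
# Cohen's kappa from two coders' categorical labels (labels are illustrative).
coder_a <- c("remove", "help", "point_piece", "help", "remove", "point_pic")
coder_b <- c("remove", "help", "point_piece", "point_piece", "remove", "point_pic")

cohen_kappa <- function(a, b) {
  lv  <- union(a, b)
  tab <- table(factor(a, levels = lv), factor(b, levels = lv))
  n   <- sum(tab)
  po  <- sum(diag(tab)) / n                      # observed agreement
  pe  <- sum(rowSums(tab) * colSums(tab)) / n^2  # chance-expected agreement
  (po - pe) / (1 - pe)
}

cohen_kappa(coder_a, coder_b)  # ~0.78 for these toy labels
```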
We created a total scaffolding score by summing the instances of each of these behaviors. In addition to scaffolding, we also coded instances where parents inserted a piece into the puzzle for the child (κ = 0.87). This final code was not included in the total scaffolding behavior score.

Parental Language

One coder transcribed all parents' utterances. We defined utterances as vocalizations that were separated by grammatical closure, intonation contour, or prolonged pausing of more than 2 s. Three raters then coded each utterance to assess whether it contained spatial language (percent agreement = 95%), praise (percent agreement = 95%), or persistence-focused language (percent agreement = 99%). Areas of disagreement were noted and resolved via discussion, ultimately resulting in consensus. Spatial language was coded using a coding scheme developed by Cannon et al. (2007). Spatial language included any mention of spatial dimensions, shapes, locations and directions, orientations and transformations, spatial features, and properties. Examples of utterances coded as containing spatial language are "where's the flat edge?", "but I think you might need to rotate it a little," and "this is a big puzzle." Utterances that contained more than one spatial word were not differentiated from those that contained only one spatial word. We only included spatial terms that were in reference to the construction of the puzzles and omitted terms that were unrelated to the puzzle (e.g., "Your blanket is under the bed") or unrelated to its construction (e.g., "Put it in/on the puzzle"). In addition to spatial language, which has been associated with children's spatial ability in previous research, we also coded praise and persistence-focused language, which have been linked to more general engagement and persistence in children (Kelley et al., 2000). Praise was coded using a coding scheme developed by Gunderson et al. (2013) and included utterances that positively evaluated the child or the child's actions (e.g., "You're good at puzzles"; "good job"), or utterances that expressed general positive valence toward the child but not directed at any specific action (e.g., "Awesome!"; "Yay!"). Persistence-focused language was coded using a coding scheme developed by Lucca et al. (2019) and consisted of utterances that were focused on trying or repeated attempts to complete a goal-directed action. Frequently, this consisted of phrases that explicitly referred to acts of trying (e.g., "You're trying so hard!").

Child Behaviors

First, a trained coder watched the play sessions and identified children's insertion attempts. An insertion attempt was defined as the first time the child took one puzzle piece and proceeded to either join it with one or more additional pieces or place it in an opening in a puzzle tray. An insertion attempt could be either successful, if the child placed the piece in the correct space, or unsuccessful, if the child failed to insert the piece correctly and proceeded to place the piece back down on the floor or table. Each time the child attempted to insert the same piece in any opening or location was counted as a single event, which ended when the child either successfully inserted the piece or placed it down. A second researcher coded 25% of the participants, and reliability was calculated for the event matching by both coders; reliability was calculated for both correct (κ = 0.88) and incorrect insertions (κ = 0.76).
After coding initial insertion attempts, a trained coder went back to each insertion attempt and counted the number of times the children unsuccessfully attempted to insert a single piece before either successfully inserting it or putting it down. An unsuccessful attempt was coded every time the child tried to insert the piece into a different place in the puzzle or in the same place but in a different orientation. A different orientation was defined as a rotation of the piece of more than 90 degrees. A second researcher coded 25% of the insertion instances for each participant. Reliability was calculated for the number of insertion attempts (κ = 0.81).

Data Analysis Plan

The main goals of this study were to describe the range of puzzles families selected for the play session, to examine parents' naturalistic behavior with their children at home while playing with each puzzle, and to examine the relation between parents' scaffolding and spatial language and children's behavior with the puzzles. Upon initial visualization of the data, we observed a great deal of variability in all of the variables we measured. Thus, instead of running a large number of inferential statistics, we primarily provide descriptive data of both parents' and children's behaviors with the puzzles that they chose to interact with at home. Then, we normalized our measures by totaling the number of behaviors in each 1-min interval and then averaging across those intervals, and ran a correlation matrix on puzzle difficulty level, parenting variables (e.g., parent scaffolding, number of parental insertion attempts, parental spatial language, parental persistence-focused language, and parental praise), and child variables (e.g., age, children's successful attempts, children's overall attempts, and attempts after failure). Finally, we ran a set of simple gender comparisons across all of the normalized data, given that gender differences in spatial abilities have been reported in previous research (Levine et al., 2005, 2016; Pruden et al., 2011).

RESULTS

Puzzle Difficulty

As mentioned above, the puzzles that participants typically played with in their homes varied widely, which is evident from the distribution of difficulty scores across puzzles (see Figure 3). The mean puzzle difficulty score was 6.56 (SD = 2.17), and the difficulty scores spanned nearly the entire coded range from 2 to 10. Only five participants played with puzzles with a relatively low difficulty score that ranged between 2 and 4; the majority of participants played with puzzles that had a difficulty score in the middle of the range (N = 17, between 5 and 7), and nine additional participants played with puzzles that were more difficult, ranging in score from 8 to 10.

Parent Language and Behaviors

The distribution of parents' scaffolding, use of spatial language, and praise is in Figure 4. Two things are immediately clear. First, parents were highly variable, with some parents exhibiting high levels of these behaviors and other parents exhibiting low levels of these behaviors. It is possible that some of the variability in the number of behaviors may be due to variation in the length of the session. Although parents and children were encouraged to play for 10 min, some dyads played for less and others played for longer (M = 9.77 min, SD = 1.6 min, range 5.6-12.1 min). To examine whether the length of the session was related to the frequency of parent or child behaviors, we conducted a series of correlations.
None of the relations between the duration of the session (in seconds) and parent or child behaviors were statistically significant (p's > 0.05). However, we normalized the data for all inferential statistics (see Section "Data Analysis Plan"). Second, the distributions for the parent behaviors are very similar, with relatively low levels of the behaviors occurring more frequently than relatively high levels of the behaviors. Further, there is some evidence that the same parents were exhibiting relatively high or relatively low levels of some combinations of these variables. For example, parent use of praise per minute was related to parent spatial language per minute, r(31) = 0.52, p < 0.05, and the relation between parent praise and parent use of persistence-focused language per minute was approaching significance, r(31) = 0.33, p = 0.07. This result suggests that there are effects of parental talk in general. Further, there were small, non-significant correlations between parent scaffolding behaviors and spatial language events per minute, r(31) = 0.28, p = 0.13, and praise, r(31) = 0.26, p = 0.17, suggesting that there were also parental behaviors specific to child behavior in this task. Interestingly, utterances containing persistence-focused language were relatively rare, M = 1.61 (SD = 1.61), ranging from 0 to 5 across the session as a whole. Fifteen parents did not produce any utterances with this type of language at all. To further understand parents' scaffolding behaviors, we examined the individual coded behaviors separately. Recall that we coded parents' removal of an incorrectly placed piece, handing or rotating pieces, pointing to or outlining puzzle space, and pointing to or outlining pictorial representations of the puzzle. Parents more often pointed to or outlined the pieces or the puzzle (M = 24.29, SD = 18.07) than rotated or handed their child a puzzle piece (M = 9.97, SD = 11.94). Some parents simply inserted pieces into the correct places in the puzzle for the child, M = 5.06 times per child (SD = 7.33). There were large individual differences in this behavior; 21 parents rarely, if ever, inserted a piece for their child (ranging from 0 to 3 pieces), whereas 10 parents inserted between 7 and 30 pieces for their children.

Child Behaviors

Children's behaviors were also extremely variable. The distribution of total attempts and successful attempts to insert a piece is in Figure 5. In terms of attempts, children ranged from making as few as 12 attempts to making as many as 81 attempts, suggesting individual differences in how interested children were in the puzzles. Children's successful insertions ranged from 1 to 41. The proportion of successful attempts ranged from 3 to 86%, again showing the extreme variability in children's behaviors. We also coded how many times children attempted an insertion following a failed attempt. On average, children made 14.61 such attempts (SD = 9.5), ranging from 2 to 37. Out of the 250 events where children tried to reinsert a piece upon failure, 66% eventually had a successful outcome.

Relations Among Variables

Next, we examined how parental behaviors were related to child behaviors during play. To account for the fact that participants' play time varied (M = 9.77 min, SD = 1.6, range 5.6-12.1 min), we normalized our measures by totaling the number of behaviors in each 1-min interval and then averaging across those intervals. Thus, our measures for these analyses were the number of behaviors or utterances per minute.
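To make the normalization concrete, the sketch below shows one way to convert raw event onsets into a per-minute rate by binning them into 1-min intervals and averaging. This is an illustrative reconstruction, not the authors' code, and the event times are simulated.

```r
# Per-minute normalization of event counts (simulated event onsets, in seconds).
set.seed(1)
session_len <- 9.77 * 60                        # session length in seconds
event_times <- sort(runif(24, 0, session_len))  # onsets of one coded behavior

breaks <- seq(0, ceiling(session_len / 60) * 60, by = 60)  # 1-min bins
counts <- as.numeric(table(cut(event_times, breaks)))      # events per bin
mean(counts)                                    # behaviors per minute
```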
First, we examined how our measures were related to child age. The only relation between parental behaviors and child age was a negative correlation between age and parent scaffolding, r(31) = −0.38, p < 0.01: parents provided more scaffolding behaviors for younger children. It is also noteworthy that there was a small, non-significant relationship between age and children's successful attempts, r(31) = 0.28, p = 0.12, with older children demonstrating more successful attempts than younger children. Interestingly, despite the wide variation in puzzle difficulty, we found that few child or adult behaviors were related to puzzle difficulty. There was no clear relation to child age, parental scaffolding, or parental language. The relation between puzzle difficulty and children's number of successful insertion events was approaching significance, r(31) = −0.32, p = 0.08. Not surprisingly, children were less likely to successfully insert a piece in more difficult puzzles. Note that we conducted a second set of correlations after removing the number of pieces from the difficulty score, as the number of pieces might have skewed the results. However, the results were the same. We also found that parents' and children's behaviors were related. In particular, the number of children's insertion attempts after failure was positively related to parents' persistence-focused language, r(31) = 0.46, p < 0.01, suggesting that children who tried more after failing had parents who encouraged them to be persistent. In contrast, although non-significant, children's successful attempts were negatively correlated with all four parenting behaviors, suggesting that, in general, children who had fewer successful attempts had parents who used more spatial language, praise, persistence-focused language, and scaffolding (see Table 1).

Gender Differences

Finally, to evaluate any gender differences, we ran a series of t-tests comparing boys to girls on each of our measured variables. There were no gender differences in terms of age (females M = 45.2, SD = 1.7; males M = 44.39, SD = 1.49) or difficulty of the puzzles (females M = 6.89, SD = 2.11; males M = 6.29, SD = 2.24). We did find a significant difference in the number of children's attempts after failure, t(29) = 2.19, p = 0.021, 95% CI [−1.16, −0.04], with girls (M = 1.64 attempts per minute, SD = 0.99) attempting to place puzzle pieces more often after failure than boys (M = 1.04 attempts per minute, SD = 0.51). Thus, girls appeared to be more persistent than boys in their puzzle play. Further, we found that the difference in the amount of parents' persistence-focused language directed to boys and girls approached significance, t(29) = 1.04, p = 0.066, 95% CI [−0.15, 0.50], with parents using more persistence-focused utterances with girls (M = 0.13, SD = 0.16) than with boys (M = 0.07, SD = 0.12). None of the other parent or child variables differed as a function of child gender.

DISCUSSION

A large body of research has reported a positive relation between constructive play with toys like puzzles and developing spatial skills in children (e.g., Casey et al., 2008; Levine et al., 2012; Jirout and Newcombe, 2015; Bower et al., 2020). However, most of these studies were somewhat constrained, involving constructive play in a lab and/or with a preselected and uniform set of constructive toys.
Although the COVID-19 pandemic has kept many researchers away from the lab, it has offered us the opportunity to develop strategies for studying some of our basic research questions from a distance, by using tools like Zoom to examine what parents and children do in their own homes. Here, for the first time, we recorded parents and children interacting with puzzles of their choice at home and provided a descriptive account not only of their behaviors, but also of their behaviors in relation to the puzzles with which they most typically interact. Importantly, because we used Zoom, we may have observed more naturalistic behaviors than if we had been present in the home with a video recorder and an experimenter in the room. The experimenter kept her camera off, and thus parents and children may have forgotten her presence. The most noteworthy finding from this descriptive study is the enormous variability we observed in both children's and parents' behaviors, and in the puzzles they selected for play. This study is the first of its kind to provide a detailed characterization of the kinds of puzzles children have in their homes, as well as of the variability in parents' and children's behavior while engaging in home puzzle play. The puzzles themselves varied on a number of dimensions that we coded for difficulty. Some of the puzzles were typical jigsaw puzzles with interlocking pieces, while others were puzzle boards that had pieces with shapes that fit into specific places on a tray. Some of the puzzles had oversized pieces, presumably making them easier to place, while others even had a colorful background that matched the background of the puzzle pieces themselves, making it possible for children to use perceptual cues like color to match the pieces to their correct location. Some children played with puzzles that had fewer than 10 pieces, while other children played with 40- or 50-piece puzzles. No two play sessions were quite alike. These differences in the puzzles that children actually play with every day provide a context for studies of children's puzzle play that have used a narrow set of puzzles. Researchers often assume that findings from the lab uncover processes involved in children's puzzle play that reflect developmental changes in spatial ability. However, the variability in the types of puzzles available in children's homes raises the possibility that participants in lab studies might differ substantially in their familiarity with the experimental stimuli. Besides variability in the puzzles, there was also a great deal of variability in both parents' and children's behavior when interacting with the puzzles. There were a large number of parents who engaged in very few scaffolding behaviors and produced very little spatial language, praise, and persistence-focused language during the parent-child interactions. Most parents fell somewhere in the middle of the range, but there were also parents who produced an incredibly large amount of these behaviors, some with over 60 scaffolding behaviors in a 10-min play session and upward of 30-40 praise and spatial language utterances. Further, parents who tended to use more spatial language also tended to use more praise and persistence-focused language, as evidenced by the significant correlations between these variables. Children's behavior also varied widely, with some of our participants attempting to place pieces into the puzzles fewer than 10-20 times, while almost a third of our sample produced more than 50 attempts.
Their accuracy varied just as widely: most of the children placed fewer than 20 pieces correctly in the 10-min session, but some placed more than 30. Older children tended to place more pieces correctly than younger children. Given this large amount of variability and our small sample size, it is unsurprising that we found few significant correlations between our variables. However, our results do suggest some basic patterns. Specifically, there were few relations with child age in our data, likely reflecting, in part, the relatively narrow age range we sampled. More surprising, despite the wide variation in puzzle difficulty, there was little relation between the level of puzzle difficulty and child age, child behavior, or parent behavior. Parents also showed some evidence of being sensitive to children's need for help. More persistence-focused language was related to more child attempts after failure. Interestingly, there was a hint that children's successful attempts were negatively correlated with all four parenting behaviors. If confirmed in a larger sample, this pattern would suggest that parents' language and scaffolding are related to children's success in puzzle play. Specifically, it is possible that parents recognized when children were having a difficult time and used more language and scaffolding to direct them. Likewise, it is also possible that parents' behavior impacted their children's behavior. Indeed, children who attempted to place more puzzle pieces after failure also tended to have parents who encouraged them more; thus, it is possible that parents' persistence-focused language drove children to try harder. Altogether, the variability we found in the puzzles themselves and in parent-child behaviors suggests that lab-based studies that impose a large number of constraints on children's behavior might not fully represent how children interact with spatial toys in their everyday environments. It is especially noteworthy that our sample was not particularly diverse. Indeed, most of our families were middle to high income, and even then, we had to eliminate eight families because they did not have two puzzles in their homes. Because our sample was not ethnically or economically diverse, our confidence in generalizing these findings to a wider population is limited; if anything, in a more diverse sample we are likely to see considerably more variability than reported here. Lower income families, for example, might not have as many puzzles at home as middle to higher income families, and as a result, children's behavior when engaging in spatial play might differ systematically by SES. Further, the puzzles we observed here, while variable, were all characteristic of toys in Western, industrialized countries. It is likely that the types of spatial toys available cross-culturally vary significantly, which could, in turn, affect the types of spatial play in which children engage. This is not to suggest that lab-based studies are not useful or important; indeed, they have provided the basis for even the current investigation. Imposing constraints on children's behavior allows us to narrow the focus of our research questions and ask more about the causal relations between variables. Further, it is important to acknowledge that the observational nature of this study was also limited in that the presence of the researcher, even with the camera off, may have changed parents' behavior in a way that is systematically different from completely naturalistic behavior.
Nevertheless, this work highlights the enormous amount of variability that exists in children's spatial play at home even in a very narrow sample, which has important implications for the conclusions we draw from lab-based studies that impose even more constraints on children's behavior. It is also important to note that, despite the large amount of variability reported here, some relationships documented in previous literature were also evident in the current sample, speaking to their robustness. For example, similar to our results, several studies have shown that parents provide more assistance to younger versus older children during puzzle-building tasks (Wertsch et al., 1980; Casasola et al., 2017), suggesting that parents might adjust their behavior to fit different children's needs. Finally, we found several gender differences suggesting that girls were more persistent than boys, making more attempts to place pieces into the puzzle after failure, and that parents used more persistence-focused language with girls than with boys and tended to choose somewhat more difficult puzzles for girls, although the latter difference was not significant. Gender differences in children's spatial ability and spatial play have also been reported in previous literature, usually attributing more advanced spatial skills to boys than girls, but these findings are controversial (Baenninger and Newcombe, 1989; Levine et al., 2012) and require further research.

CONCLUSION

In conclusion, despite its descriptive and non-causal nature, the current study informs us about the types of variability in spatial toys and spatial play we might expect in real-world settings and can help us contextualize the conclusions we draw from lab-based studies. Given the wide variability of puzzles available in children's homes, future research could examine how the different characteristics of puzzles determine the nature of the parent-child interactions and what aspects of these interactions support spatial skills development. Our study also suggests that more large-scale, naturalistic studies of children's spatial play in the home could be incredibly informative, providing us with important information about what types of spatial toys best promote the development of spatial skills, and how the types of toys interact with both child and parent characteristics over time.

DATA AVAILABILITY STATEMENT

The datasets and coding manuals presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found at: LoBue, V., Pochinki, N., Oakes, L., and Casasola, M. (2021). Natural variability in parent-child puzzle play at home. Databrary. Available at: https://nyu.databrary.org/volume/1334 (Accessed June 29, 2021).

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Rutgers University Institutional Review Board. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.

AUTHOR CONTRIBUTIONS

NP, VL, LO, and MC drafted the manuscript. NP collected the data. NP and DR oversaw coding. All authors designed the study and approved the final version of the manuscript.

FUNDING

This research and the preparation of this manuscript were supported by a grant from the National Science Foundation (DS 1823489) to MC, LO, and VL. The funding agencies had no role in the design of the study or the collection, analysis, and interpretation of data or in writing the manuscript, apart from their financial contribution.
Gene Expression Analysis of Yeast Strains with a Nonsense Mutation in the eRF3-Coding Gene Highlights Possible Mechanisms of Adaptation

In yeast Saccharomyces cerevisiae, there are two translation termination factors, eRF1 (Sup45) and eRF3 (Sup35), which are essential for viability. Previous studies have revealed that the presence of nonsense mutations in these genes leads to amplification of mutant alleles (sup35-n and sup45-n), which appears to be necessary for the viability of such cells. However, the mechanism of this phenomenon remained unclear. In this study, we used RNA-Seq and proteome analysis to reveal the complete set of gene expression changes that occur during cellular adaptation to the introduction of the sup35-218 nonsense allele. Our analysis demonstrated significant changes in the transcription of genes that control the cell cycle: decreases in the expression of genes of the anaphase-promoting complex APC/C (APC9, CDC23) and their activator CDC20, and increases in the expression of the transcription factor FKH1, the main cell cycle kinase CDC28, and cyclins that induce DNA biosynthesis. We propose a model according to which yeast adaptation to nonsense mutations in the translation termination factor genes occurs as a result of a delayed cell cycle progression beyond the G2-M stage, which leads to an extension of the S and G2 phases and an increase in the number of copies of the mutant sup35-n allele.

Introduction

Termination of protein synthesis occurs when the release factor recognizes a stop codon (UAA, UAG, or UGA) and stimulates a nascent peptide release (reviewed in [1]). About 11% of all human disease-associated genetic variants are nonsense mutations, resulting in the occurrence of a premature translation-termination codon (PTC) in the protein-coding gene sequence [2]. Translation of the PTC-containing mRNAs leads to synthesis of truncated, often dysfunctional, polypeptides that can have a dominant-negative or gain-of-function effect on gene function. More and more frequently, therapeutic approaches for disorders caused by nonsense mutations use drugs that promote PTC readthrough, forcing the translational machinery to recode an in-frame PTC into a sense codon in a process called nonsense suppression. Thus, nonsense suppression therapy refers to an effective means of preventing protein translation termination and alleviating disease symptoms by inducing PTC readthrough [3,4].

One of the most popular model organisms for studying translation termination and nonsense suppression is the baker's yeast Saccharomyces cerevisiae. In yeast, the mechanisms of adaptation to translation defects could be investigated using strains carrying mutations in one of the genes encoding translation termination factors, SUP45 (encoding yeast eRF1) [5] and SUP35 (eRF3) [6,7]. Both of these genes are essential in yeast, and deletion of either one leads to the death of the yeast cells. However, viable strains with nonsense mutations in both SUP45 [8] and SUP35 [9] have previously been isolated in our laboratory and extensively characterized.
It has been shown that nonsense mutations (denoted as sup35-n) lead to the formation of truncated proteins and a significant (up to 100-fold) decrease in the level of the full-length eRF3 protein. Previously, we demonstrated that the introduction of these sup35-n alleles is associated with gene amplification in the corresponding strains [10]. Specifically, the number of copies of the plasmid bearing the mutant alleles of the SUP35 gene was significantly increased compared to strains bearing plasmids with the wild-type allele. Furthermore, strains carrying the sup35-218 allele as a single chromosomal copy contained a duplication of a part of chromosome IV (bearing the SUP35 gene), while strains with other nonsense mutations had additional copies of chromosomes II (sup35-203), XI (sup35-240), or XIII (sup35-244, sup35-260).

In this work, we investigated the mechanisms of yeast adaptation to mutations in release factor genes using one of the sup35-n alleles, sup35-218. The sup35-218 mutation is located in the N-proximal part of the SUP35 gene and leads to a G > T substitution at the 541st nucleotide of the protein-coding sequence, resulting in the emergence of a PTC (TAA) at the 181st position. This leads to the formation of a truncated eRF3 protein that contains 180 amino acids, instead of 685, and completely lacks the functional C-domain. The N- and M-domains that are present in the truncated protein are not directly involved in the process of translation termination, in contrast to the C-domain. Hence, the truncated protein is incapable of stimulating translation termination. In the sup35-218 nucleotide sequence, the PTC is located in a "weak" nucleotide context, as the UAA codon is followed by a C nucleotide, which leads to ineffective translation termination at the tetranucleotide UAAC [11,12]. Consequently, cells carrying the sup35-218 allele contain up to 8% of full-length eRF3 due to nonsense suppression [9].

In the present study, we performed high-throughput transcriptome sequencing (RNA-seq) and proteome analysis to study the changes that occur in cells containing the sup35-218 mutation, and to uncover the possible mechanisms of mutant allele amplification. We have identified cell cycle regulation genes with changes in the level of transcription that may allow cells carrying a plasmid with a nonsense mutant allele to remain viable. Analysis of the RNA-seq and proteome data suggests a plausible molecular mechanism which could explain the basis of cellular adaptation to disturbances in the translation termination process.
Results

Global Transcriptional Changes in Yeast Strains with Nonsense Mutations in the Release Factor Genes

As mentioned earlier, yeast cells are capable of adaptation to nonsense mutations in the genes encoding release factors, SUP35 and SUP45. Such an adaptation can be observed when a plasmid carrying the wild-type SUP35 or SUP45 allele as a sole source of the corresponding release factor is replaced with another plasmid bearing a mutant allele. To study the mechanisms of yeast adaptation in such a system, cells of the U-14-D1690 strain with a deletion of the normal chromosomal copy of SUP35 and bearing a wild-type allele of the corresponding gene on a plasmid were transformed with a second plasmid bearing either a wild-type or mutant sup35-218 allele of the same gene (Figure 1A). Subsequently, strains containing two plasmids were grown on medium containing 5-FOA to lose the plasmid with the URA3 marker carrying the wild-type SUP35 allele. Following such plasmid shuffling, we performed a high-throughput transcriptome analysis of the resulting strains (denoted as "3") and the initial strain (denoted as "1") using RNA-seq.

Having obtained the RNA-seq data, we first analyzed the total differences in gene expression profiles between samples carrying mutant and wild-type SUP35 alleles at different stages of the experiment. To do this, we performed principal component analysis (PCA) of the normalized gene count matrix obtained after quantification of gene expression (see Section 4). PCA revealed that the samples perfectly clustered into three groups, corresponding to cells carrying the mutant sup35-218 alleles or wild-type alleles (at the beginning of the experiment and after the plasmid shuffling) (Figure 1B). A clear clustering of the samples into groups in the space of the first two principal components indicates the presence of reproducible differences in the gene expression profiles in the studied samples. Furthermore, the first of the principal components was sufficient to distinguish between cells bearing the SUP35 or sup35-218 allele while explaining as much as 69% of the gene expression variance. This result suggests that introduction of a nonsense allele of SUP35 has a pronounced effect on the gene expression profile of a yeast cell.

Next, we performed the differential gene expression analysis either at the initial stage (stage 1) of the experiment or after the plasmid shuffling (stage 3). When the gene expression profiles of cells carrying the sup35-218 allele at stage 3 were compared to those of cells carrying the wild-type SUP35 allele at stage 1, we discovered 1240 upregulated genes and 1341 downregulated genes for which the adjusted p-value was less than 0.05 (Figure 1C). At the same time, analysis of the DEGs identified between cells carrying the mutant allele and cells carrying the wild-type allele after the shuffling reduced the number of genes with increased expression to 1035, and of those with reduced expression to 1100 (Figure 1C). Similar trends were observed when only genes with an absolute log2FC value greater than 0.5 were selected, with a total of 1824 DEGs observed between stages 1 and 3, and 1366 between cells after plasmid shuffling. The number of DEGs was substantially lower if only the genes with a two-fold difference in expression were considered (Figure 1C, right panel); again, more DEGs were observed when the initial strain was used as a control group. A substantial difference in the number of DEGs observed with different sets of control samples suggests that the marker gene used for
plasmid maintenance (LEU2 or URA3) also significantly affects the global transcriptional profile of the cell. Given all of the aforementioned observations, we focused our attention on the set of DEGs identified when comparing cells after the plasmid shuffling and with absolute log2FC values greater than 0.5. In this set of genes, 817 genes had an increased expression and 1007 had a decreased expression, and a subset of 147 and 216 genes had at least two-fold increased and decreased expression, respectively (Figure 1D). Importantly, the set of genes with the most substantial increase in expression included SUP35 itself, as well as LEU2. These observations are in perfect concordance with the results of our earlier work, which showed an increase in the number of plasmid copies in cells bearing sup35-218 accompanied by significantly elevated expression of this allele detected using qPCR [10].

Identification of Biological Processes Involved in Adaptation to the Presence of sup35-218

To gain insights into the biological mechanisms driving the adaptation to the presence of nonsense mutations in SUP35, as well as the amplification of the mutant allele, we performed further functional analysis of the obtained list of DEGs. As the first step of such an analysis, we performed the Gene Ontology (GO) enrichment analysis for the lists of up- and downregulated DEGs separately. Among the GO biological processes, we discovered enrichment of upregulated DEGs with genes involved in the control of the cell cycle and carbohydrate metabolism (including glucose import) (Figure 2A). In concordance with these results, a significant overrepresentation of proteins active in the cell wall or as components of cyclin-dependent protein kinase complexes was detected when testing the cellular component enrichment (Figure 2B). Among the genes with reduced expression, overrepresentation of the genes involved in various biosynthesis processes, including the synthesis of amino acids, was found (Figure 2C). While the repression of biosynthetic processes coupled with activation of catabolism and the import of nutrients is expected during stress conditions, the upregulation of the cell cycle genes seemed a more interesting phenomenon. Hence, we next went on to investigate the changes in the yeast cell cycle in more detail.

To perform such an in-depth analysis of the effects of DEGs on the cell cycle, we used the Kyoto Encyclopedia of Genes and Genomes (KEGG, https://www.genome.jp/kegg/kegg2.html, accessed on 10 October 2023), which could be utilized to visualize the position and interaction of the studied genes in the entire pathway diagram. Visualization of the DEGs on a diagram of the yeast cell cycle regulation pathway revealed several interesting patterns (Figure 3). First of all, we noted an increase in the levels of transcription of CDC28, the most important regulator of the cell cycle, the transcription cofactor SWI6, as well as the MCM2 and ORC2 genes involved in DNA replication. Secondly, increased expression of genes encoding cyclins of the CLN and CLB families and subunits of the cohesin complex SMC3 and MCD1 was detected. Finally, besides an increase in the expression of genes primarily involved in DNA synthesis, a decrease in the transcription of genes involved in the metaphase-anaphase transition (CDC23, APC9) was noted.
Taken together, these observations suggest that the transition of the yeast cell to mitosis is impaired upon introduction of the nonsense mutant allele of SUP35. At the same time, proteins involved in cell cycle progression at earlier stages, as well as in DNA synthesis, are more active in cells bearing the sup35-218 allele compared to control cells. These results may provide a mechanistic explanation of the observed phenomena of adaptation (see Section 3).

In addition to the analysis of functions of the identified DEGs, the large number of DEGs suggests that certain transcription factors (TFs) play a key role in driving the observed global changes in the gene expression profile. To identify such TFs, we used the YEASTRACT (www.yeastract.com, accessed on 20 November 2023) tool.

Our analysis identified the enrichment of target genes of TFs associated with the biosynthesis of amino acids and nucleotides among genes with reduced expression (Supplementary Table S1) in cells bearing sup35-218. Specifically, overrepresentation of targets was observed for Bas1, which regulates genes of the purine and histidine biosynthesis pathways (55.63% of genes with reduced expression; p-value < 10^-15), and for Gcn4, a transcriptional activator of amino acid biosynthetic genes (93.52% of genes in the studied gene set; p-value < 10^-15). These observations are in good concordance with the results obtained using GO term enrichment analysis.

For the upregulated DEGs (Supplementary Table S2), we identified an enrichment of targets of transcription factors involved in metabolism (Ino2, Gcn4) and stress response (Rpn4 and Hsf). Moreover, we discovered that a significant number of the identified DEGs are targets of Rph1, Mig1, and Ste12, which are involved in many biological processes such as transcription, autophagy, invasive growth, and others.

Interestingly, an overrepresentation of the targets of several transcription factors involved in cell cycle regulation was found (specifically, Aft1, Yhp1, Mcm1, and Ume6). It was shown that Aft1, a protein which regulates iron homeostasis, has a role in chromosome stability, interacting with the kinetochore and promoting pericentric cohesin [13]. Yhp1 is a homeobox transcriptional repressor; it binds Mcm1 and early cell cycle box (ECB) elements of cell cycle regulated genes, thereby restricting ECB-mediated transcription to the M/G1 interval [14]. Ume6 is a histone deacetylase complex subunit and a key transcriptional regulator of early meiotic genes, which are involved in chromatin remodeling and transcriptional repression [15,16].

Figure 3. A diagram showing the changes in the expression of genes controlling yeast cell cycle progression. Visualization of the relationships of genes involved in cell cycle control using KEGG PATHWAY. Genes with increased expression in the presence of the sup35-218 allele are marked in red; genes with reduced expression are marked in blue. Color contrast reflects the degree of increase or decrease in expression. Cases where two genes with similar functions have inconsistent expression changes are highlighted with two colors (Yox1/Yhp1, Clb1/2, Dbf2/20). Genes where no significant changes in expression were found are highlighted in green.
Validation of High-Throughput Sequencing Results Using qPCR

Given that cell cycle progression is a finely regulated process, many of the identified DEGs corresponding to cell cycle regulators have relatively small changes in expression (1.4- to 2-fold, corresponding to log2FC values between 0.5 and 1). This prompted us to conduct additional validation of the discovered changes in expression with the qPCR method for the most significant cell cycle genes. In total, six genes were selected for validation, including three of the upregulated ones (FKH1, CDC28, and SUP35), two of the downregulated ones (CDC20 and CDC23), and one gene with no changes in expression according to the RNA-seq data (SUP45). Two of the four cell cycle genes that were chosen for validation had an absolute value of log2FC < 1 (CDC20 and CDC28).

For all of the genes selected for validation, the observed difference in median expression from qPCR corroborated the transcriptomic analysis, and statistically significant differences in expression were confirmed for four out of five DEGs (Figure 4). Thus, we confirmed a decrease in the expression of the CDC20 gene in cells bearing the sup35-218 allele. For another gene involved in the anaphase transition, CDC23, the difference in median expression was not statistically significant. In good concordance with the RNA-seq data, the median expression of CDC28, encoding the main kinase that regulates the cell cycle in yeast, was twice as high in the presence of a nonsense allele compared to cells containing the wild-type SUP35 gene. For the FKH1 transcription factor involved in the regulation of cyclin levels, we observed a significant increase in relative expression levels, which is also in line with the RNA-seq analysis. As expected, the SUP35 gene had increased mRNA levels in the cells harboring the sup35-218 allele, also demonstrating a solid upregulation in qPCR. Finally, the SUP45 gene, which was not differentially expressed according to both the RNA-seq data and our previous results, demonstrated no significant expression changes in the qPCR validation.

Figure 4. Shown are boxplots of the relative expression levels calculated using the ∆∆Ct method (see Section 4 for more details). ns, not significant; ** p-value < 0.05; *** p-value < 0.001 according to the Wilcoxon-Mann-Whitney rank sum test. At least seven biological replicates were used for each group.

Thus, we confirmed that, in the presence of the sup35-218 mutant allele in yeast cells, an increase in the transcription of the CDC28 and FKH1 genes and a decrease in the expression of the activator of the anaphase-promoting complex, CDC20, occurred. We suggest that these changes can affect the duration of cell cycle phases.
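For reference, the relative expression values in Figure 4 follow the standard 2^-∆∆Ct calculation with ACT1 as the reference gene. The R sketch below uses purely illustrative Ct values (not measured data) chosen to reproduce the roughly two-fold upregulation described for CDC28.

```r
# Illustrative 2^-ddCt calculation; Ct values are hypothetical, and ACT1 is
# the reference gene, as in the study.
ct <- data.frame(
  sample = c("SUP35_wt", "sup35_218"),
  CDC28  = c(22.1, 21.0),   # hypothetical quantitation cycles for the target
  ACT1   = c(18.0, 17.9)    # hypothetical quantitation cycles for the reference
)

dct  <- ct$CDC28 - ct$ACT1  # dCt: target normalized to ACT1 within each sample
ddct <- dct[2] - dct[1]     # ddCt: mutant relative to wild type
2^(-ddct)                   # relative CDC28 expression; here a 2-fold increase
```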
Proteome Analysis of Cells Harboring the sup35-218 Allele

To better understand the changes that occur because of disturbances in the translation termination process, we also evaluated changes in cells at the protein level. For this purpose, we performed proteome analysis of the same cells harboring the sup35-218 mutation. By examining the proteome, we identified 117 differentially produced proteins (56 with increased production and 62 with decreased production) (Figure 5A). However, our analysis of GO terms for differentially produced proteins showed no enrichment. We also assessed the similarity of the observed changes in the proteome and transcriptome (Table S3). For this purpose, we evaluated the correlation between the protein and transcript levels for each gene. We detected a statistically significant moderate correlation (Figure 5B) between the transcript levels and protein levels for both wild-type strains and strains with the sup35-218 mutation (wild-type strain: tau = 0.256, p-value < 0.001; strain with the sup35-218 mutation: tau = 0.238, p-value < 0.001). While a low degree of correspondence between the protein and transcript levels has previously been reported for yeast [17-19], the lack of differences in the correlation coefficients suggests that nonsense mutations in SUP35 do not disrupt the regulation of gene expression at the translational level.

Although a poor correlation was observed between the transcriptome and proteome, there were still several genes whose changes in expression levels were consistent with their protein levels. The genes RPC82, STU2, CDC28, and PST1 and their protein products were upregulated in both the transcriptome and proteome. At the same time, the expression and protein levels of FOL2, ERG26, ERG28, HMF1, and BNA1 were decreased (Table 1). Despite the low number of overlaps between the datasets, the functions of these nine proteins further support the conclusions made based on the functional annotation of the RNA-Seq results. In particular, two of the upregulated proteins control cell cycle and division (Cdc28 and Stu2); moreover, an increased abundance of Rpc82, which is responsible for tRNA biosynthesis, is consistent with the transcriptional downregulation of amino acid biosynthesis genes.

Effects of the sup35-218 Mutation on Cell Cycle Progression

Since the proteomic and transcriptomic analyses revealed several overlapping differentially expressed genes involved in the regulation of cell division, we continued further investigations of the cell cycle features of strains carrying the sup35-218 nonsense allele. To evaluate the effect of the sup35-218 nonsense allele on cell cycle progression, we compared the distribution of DNA content between unsynchronised cultures of wild-type and sup35-218 strains by flow cytometry (Figure 6).
We observed two similar clear peaks of fluorescence intensity in the wild-type strains, corresponding to the DNA content of cells with unreplicated (1C) and replicated (2C) DNA in unsynchronised cultures. In contrast, cells with the sup35-218 allele mostly did not demonstrate clear peaks of fluorescence intensity that would have corresponded to 1C or 2C. Additionally, the comparison of independent clones bearing sup35-218 with each other did not reveal any patterns in the distribution of DNA amount. Therefore, cells containing the sup35-218 nonsense allele were characterized by a greater heterogeneity in DNA content compared to the wild type, which may indicate the disruption of cell cycle progression in cells with the sup35-218 nonsense allele.

Discussion

In this study, we describe global changes in the gene expression profile of a yeast cell in response to the introduction of a nonsense mutation in the essential SUP35 gene encoding a release factor, eRF3. We discovered 1366 differentially expressed genes between cells bearing a wild-type SUP35 allele and a mutant sup35-218 allele (Figure 1). In-depth functional analysis of this set of DEGs suggests that several biological processes are involved in adaptation to such pronounced translation termination defects, with metabolic processes and the cell cycle among the major discoveries. While massive metabolic changes are expected during stress (e.g., [20,21]), the observed upregulation of certain cell cycle components attracts attention, as it might explain the gene amplification observed previously in strains bearing sup35-218 [10].

Normally, the yeast cell cycle can be divided into four phases. During the first of these phases, the growth phase (G1), there is significant cell growth and an increase in cell volume. This stage is followed by the synthetic (S) phase, during which the DNA replicates. After that, the cell enters the second phase of growth (G2), in which a bud appears; and the process ends with the mitotic phase (M), in which division occurs, giving rise to a daughter cell. S. cerevisiae cells, compared to fission yeasts, usually have a prolonged G1 phase; however, its length depends on conditions, nutrients, and the size of the cell [22]. Cell cycle progression is mainly controlled by proteins of the cyclin-dependent kinase (CDK; Cdc28 in yeast) family and their interaction with cyclins, whose expression fluctuates during the cell cycle [23]. Importantly, the cell cycle is regulated at the transcriptional, translational, and post-translational levels. In our work, we tried to use both transcriptomic and proteomic data to obtain more insights into the cell cycle changes observed in cells carrying sup35-218; however, the datasets showed a low degree of overlap between the sets of differentially expressed genes and proteins, making it harder to draw unambiguous conclusions regarding the exact state of cell cycle regulation in mutant cells.

The expression of the central coordinator of the major events of the yeast cell division cycle, CDC28, was found to be increased at both the transcriptomic and proteomic levels. Cdc28 activity controls the timing of mitotic commitment, bud initiation, DNA replication, spindle formation, and chromosome separation [24]. While the abundance of Cdc28 was substantially altered in strains carrying sup35-218, its impact on the phenotype could not be determined, in particular due to additional regulation of Cdc28 at the post-translational level (for example, via post-translational modifications) [25].
Additionally, we observed an increase in the transcription level of FKH1, which encodes the Forkhead homolog protein, whose forkhead-associated (FHA) domain acts in promoting ORC-origin binding and origin activity at a subset of origins in S. cerevisiae [26]. Previously, it was shown that an increased expression of the FKH transcription factors leads to an extended lifespan and improved stress response, and rescues APC mutant growth defects [27].

Besides CDC28 and FKH1, we discovered increased expression of ORC2, SWI6, and MCM2, primarily involved in DNA synthesis, and of genes encoding cyclins of the CLN and CLB families (CLN3, CLN2, CLB2, CLB5, and CLB6). Cyclins are regulated both at the transcriptional and post-transcriptional levels, and they are expressed in alternate phases of the cell cycle. The CLN family consists of Cln1, Cln2, and Cln3, which are expressed mostly in the G1 phase. The CLB-type cyclins, on the other hand, regulate later cell cycle stages, including DNA replication and the entry into mitosis. To prevent premature DNA replication, the activity of the first B-type cyclins to be expressed, Clb5/6, is inhibited through high levels of the cyclin-dependent kinase inhibitor Sic1 during G1 [28]. Notably, we did not identify cyclins as differentially abundant in the proteome; however, this finding is rather expected given the low abundance of these proteins.

In addition to changes in cyclin/CDK abundance, downregulation of the proteins that comprise the APC/C complex or interact with it (APC9, CDC23, and CDC20) was detected. The APC/C complex is one of the crucial regulators of mitosis progression. This complex has two main forms, APC/C-Cdc20 and APC/C-Cdh1. These two forms have overlapping as well as distinct substrates. In S. cerevisiae, APC/C-Cdc20 is required for the degradation of Pds1 (securin) and the B-type cyclin Clb5, whereas APC/C-Cdh1 promotes the degradation of Clb2 [29,30].

Apc9 is a nonessential component of the APC; however, deletion mutants of APC9 have delayed progression through mitosis [25]. Cdc23 was identified as a conserved subunit of APC/C [31], and it was shown that cdc23 mutants are defective in nuclear division in S. cerevisiae [32]. Previous studies showed that S. cerevisiae strains bearing a cdc23 mutation presented a metaphase-like arrest phenotype. In addition, yeast cells with a mutation in CDC23 also showed defects in both entering and exiting anaphase [33], suggesting that Cdc23 may play an important role in at least two stages of the cell cycle, the metaphase-to-anaphase transition and the telophase-to-G1 transition [34-37].

CDC20 expression is regulated at both the transcriptional and post-transcriptional levels [38]. Mouse embryos lacking Cdc20 function demonstrated metaphase arrest at the two-cell stage with high levels of cyclin B1, indicating an essential role of Cdc20 in mitosis that is not redundant with that of Cdh1 (Cdc20 homolog, activator of the anaphase-promoting complex) [39]. In the fission yeast Schizosaccharomyces pombe, the transcriptional silencing of Cdc20, an APC activator, was shown to cause cell cycle arrest in metaphase accompanied by high levels of cyclin B and the securin Pds1, which inhibits the separase Esp1. Most cells that remain in mitosis for a long time undergo apoptosis, but some of them skip cytokinesis and enter G1 with non-segregated chromosomes. This process, called mitotic slippage, increases genome instability and results in aneuploidy [40]. If a similar process can occur in S.
cerevisiae cells upon CDC20 downregulation, such slippage can provide a mechanistic basis for the observed amplification of sup35-n and sup45-n alleles [10].

Taken together, the elevated levels of cyclins (at the G1, S, and G2 stages) and the Cdc28 kinase, as well as the decrease in the expression of the anaphase-stimulating complex components, allow us to suggest that the amplification of plasmids or chromosomes detected in our previous work could occur due to a slowdown in the cell cycle (Figure 7). First, defects and delays in the synthetic phase could allow for intensive replication of plasmids. At the same time, a decrease in the efficiency of the APC/C and Cdc20 complex, in turn, could lead to disturbances in chromosome segregation and plasmid segregation. In earlier studies, it has been shown that mutations in the SUP35 gene influence the cell cycle and lead to the accumulation of large buds, disruptions in DNA synthesis, and G1-to-S phase transition arrest [41]. Such mutants were not capable of either continuing the cell cycle or copulating [42]. In 2002, Valouev et al. demonstrated the effect of eRF1 and eRF3 depletion on yeast cell morphology and cell cycle progression [43]. Our results of the flow cytometric analysis, which showed an accumulation of cells with non-1C DNA content in cultures harboring the sup35-218 mutation (Figure 6), differ from those obtained by Valouev et al. in eRF3-depleted cells. However, both our results and previously published data indicate that alterations in eRF3 abundance and/or activity lead to profound changes in DNA content, supporting the role of the observed changes in the expression of cell cycle regulators in mutant allele amplification. It is important to note, however, that cell cycle disturbances are usually studied using synchronized cultures, despite the facts that synchronization affects cell cycle progression heavily and that single cell behavior deviates from population behavior [44]. Thus, further studies using synchronized cultures could help to completely disentangle the complex changes in cell cycle regulation and their impact on gene amplification and adaptation.

Materials and Methods

Standard methods of cultivation and manipulation of yeast were used throughout this work. Yeast strains were cultivated at 26 °C in standard solid and liquid synthetic media (SC). The SC medium contained higher amounts of certain nutrients: 40 mg/L adenine, 20 mg/L L-histidine, 20 mg/L L-lysine, 20 mg/L L-methionine, 20 mg/L L-threonine, 20 mg/L L-tryptophan, 20 mg/L uracil, and 20 mg/L L-leucine. For strains bearing two plasmids (with URA3 and LEU2 marker genes), SC-UL medium was used that did not contain either uracil or L-leucine. SC medium containing 1000 mg/L 5-fluoroorotic acid (5-FOA) (Thermo Scientific, Waltham, MA, USA) was used for selection against cells bearing plasmids with the URA3 marker [48]. Yeast strains were incubated on 5-FOA medium for 4-5 days. The yeast transformation was carried out according to the standard protocol [49]. For flow cytometry, yeast strains were grown at 26 °C in YEPD medium supplemented with 40 mg/L adenine.
Cell Cycle Analysis by Flow Cytometry

For cell cycle analysis by flow cytometry, overnight cultures of individual clones were diluted into fresh YEPD medium to OD600 = 0.1 and incubated until they had passed 1-2 cell divisions (OD600 = 0.4). Sample preparation was conducted according to [50] with minor modifications: the incubation time in the RNase solution (50 mM Tris pH 8, 15 mM NaCl, 4 µg/mL RNase A (Merck, Burlington, MA, USA)) was 2-3 h. The samples were sonicated 4 times for 5 s on ice. After adding 10 µL of SYBR Green I solution (#PB025, Evrogen, Moscow, Russia), the samples were incubated for 0.5-1 h in the dark. The analyses were performed on a CytoFlex S (Beckman Coulter, Brea, CA, USA) at 488 nm through a 525/40 filter, collecting 50,000 events per sample. FlowJo 10 (BD Biosciences, Franklin Lakes, NJ, USA) was used for data analysis.

RNA-Seq Library Preparation and Sequencing

For RNA extraction, cultures were grown in 30 mL of SC-U or SC-L liquid medium until OD600 = 0.8-1.0. The cells were then harvested by centrifugation at 8000× g for 5 min and washed. Total yeast RNA was isolated using the GeneJET RNA Purification Kit #K0731 (Thermo Scientific, Waltham, MA, USA) according to the manufacturer's instructions. RNA concentration and quality were evaluated using a NanoDrop spectrophotometer (Thermo Scientific, Waltham, MA, USA). Sequencing libraries were prepared using the NEBNext Ultra II Directional RNA Library Prep Kit for Illumina (#E7765, NEB, Ipswich, MA, USA) and NEBNext Multiplex Oligos for Illumina (96 Unique Dual Index Primer Pairs Set 2) (#E6442S, NEB, Ipswich, MA, USA). Sequencing was performed on the Illumina HiSeq 4000 platform in paired-end mode with a read length of 150 nucleotides. A total of 10 samples were sequenced (three biological replicates for the initial U-14-D1690 strain, four replicates for cells bearing the wild-type SUP35 allele after plasmid shuffling, and three replicates for cells bearing the sup35-218 allele).

RNA-Seq Data Analysis

Raw RNA-seq reads were aligned to the yeast S288C (R64-3-1) genome using HISAT2 [51]. Gene expression was quantified using featureCounts [52]. Gene annotation was obtained from the Saccharomyces Genome Database (the same R64-3-1 version was used). Further analysis of the resulting gene count matrix was performed using R v4.1.1 [53]. For the principal component analysis of the gene expression profiles, the original count matrix was transformed using the rlog function in the DESeq2 package (v1.34.0) [54]. Differential expression analysis was performed using the default DESeq2 functions. Differentially expressed genes (DEGs) were defined as genes with FDR-adjusted p < 0.05 and log2FC > 0.5 (1.4-fold increase in expression) or log2FC < −0.5 (1.4-fold decrease in expression) for up- and downregulated genes, respectively. A Gene Ontology (GO) term enrichment analysis for the obtained DEGs was performed using the clusterProfiler package for R. For the analysis of the role of DEGs in the identified pathways, we used the Kyoto Encyclopedia of Genes and Genomes (KEGG) [55]. For the analysis of transcription factors that play a role in the observed transcriptional changes, the set of DEGs was analyzed using the YEASTRACT information system (www.yeastract.com, accessed on 20 November 2023).
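A minimal sketch of the DESeq2 workflow described above is given below. The objects `counts` (a gene-by-sample matrix from featureCounts) and `coldata` (a data frame with a `genotype` column) are hypothetical placeholders, and the two-level contrast is a simplification of the three sample groups used in the study.

```r
# Sketch of the differential expression step, assuming `counts` and `coldata`
library(DESeq2)

dds <- DESeqDataSetFromMatrix(countData = counts,
                              colData   = coldata,
                              design    = ~ genotype)
dds <- DESeq(dds)                        # default DESeq2 functions

# rlog transformation for the principal component analysis, as in the text
rld <- rlog(dds)
plotPCA(rld, intgroup = "genotype")

# DEG definition used in the paper: FDR-adjusted p < 0.05, |log2FC| > 0.5
res  <- results(dds, contrast = c("genotype", "sup35_218", "SUP35_wt"))
degs <- subset(as.data.frame(res), padj < 0.05 & abs(log2FoldChange) > 0.5)
```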
To calculate the correlation between transcript and protein levels, RNA-seq read counts were normalized to account for gene length and library size, and the data were log-transformed. The Kendall correlation coefficient was then calculated using the built-in function in R v4.1.1 [53].

RNA Extraction and cDNA Generation for qPCR

For RNA extraction, cultures were grown in SC-L liquid medium until OD600 of 0.8-1, after which the cells were harvested and washed. Total yeast RNA was isolated using the GeneJET RNA Purification Kit (Thermo Scientific, Waltham, MA, USA, #K0731) and treated with DNase I (RapidOut DNA Removal Kit, Thermo Scientific, Waltham, MA, USA, #K2981) according to the manufacturer's instructions. RNA concentration and quality were evaluated using a NanoDrop spectrophotometer (Thermo Scientific, Waltham, MA, USA). Purified RNA was reverse transcribed with the RevertAid RT Reverse Transcription Kit (Thermo Scientific, Waltham, MA, USA, #K1691). cDNA generation was performed under the following conditions: 25 °C for 5 min, 42 °C for 60 min, and termination at 70 °C for 5 min.

qPCR

The expression levels of target cell cycle genes were analyzed by quantitative PCR (qPCR) with EVA Green 2.5X PCR mix (Syntol, Moscow, Russia) according to the manufacturer's instructions. The reactions and quantification were performed on CFX96 instruments (Bio-Rad, Hercules, CA, USA). The quantitation cycle for ACT1 was used as a reference. Triplicate qPCRs were performed for each biological replicate. We verified the specificity of the amplification through the melting curves for all primer pairs used for controls and genes of interest and, in all cases, observed a single peak corresponding to a single PCR product. The ∆∆CT method [56] was used to measure the levels of transcription, and the resulting values were used to quantify the gene expression changes in cells containing the sup35-218 allele. The primer pairs used in this analysis are listed in Table 2.
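As a worked illustration of the ∆∆Ct calculation referenced above [56], the sketch below computes a relative fold change with ACT1 as the reference gene. All Cq values are hypothetical placeholders.

```r
# Delta-delta-Ct sketch; Cq values are illustrative, not measured data
cq_target_mut <- c(21.8, 21.9, 22.0)   # gene of interest, sup35-218 cells
cq_ref_mut    <- c(18.1, 18.0, 18.2)   # ACT1, sup35-218 cells
cq_target_wt  <- c(23.0, 22.9, 23.1)   # gene of interest, SUP35 cells
cq_ref_wt     <- c(18.0, 18.1, 18.0)   # ACT1, SUP35 cells

d_ct_mut <- mean(cq_target_mut) - mean(cq_ref_mut)   # delta-Ct, mutant
d_ct_wt  <- mean(cq_target_wt)  - mean(cq_ref_wt)    # delta-Ct, wild type
dd_ct    <- d_ct_mut - d_ct_wt                       # delta-delta-Ct

fold_change <- 2^(-dd_ct)   # relative expression in sup35-218 vs. SUP35
fold_change                 # values > 1 indicate increased expression
```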
Protein Extraction

Cells were grown and pelleted as for RNA extraction. The total proteome was isolated using the following methodology. Pelleted cells were resuspended in lysis buffer (30 mM Tris-HCl, pH 7.4; 150 mM NaCl; 10 mM PMSF) [57] with a cocktail of protease inhibitors (Protease Inhibitor Cocktail, Sigma P8215). Next, glass beads were added to the suspension. Cell lysis was performed with a FastPrep-24 homogenizer (MP Biomedicals) at a speed of 6.0 m/s, five times for 20 s each, with cells cooled on ice for 3 min between rounds. In the final step, the lysate was centrifuged for 10 min at 400× g and 4 °C. The protein concentration was measured with a Lumiprobe QuDye Protein kit (#15102, Lumiprobe, Hunt Valley, MD, USA). The average protein concentration was 5 µg/µL. The samples were then equilibrated to the minimum concentration, and 30 µg of protein was taken for analysis. The obtained samples were digested with chemically pure trypsin (Trypsin Gold, Mass Spectrometry Grade (Promega #V5280, Madison, WI, USA)) overnight. The peptides were then purified according to the StageTips protocol [58] and used for mass spectrometric analysis.

Proteome Analysis

The prepared proteomic libraries were analyzed using a timsTOF Pro 2 mass spectrometer (Bruker, Billerica, MA, USA). To analyze the obtained data, we used PEAKS Studio 11 software, with the proteome of the S. cerevisiae strain S288C (Uniprot ID UP000002311) as the reference database. Protein counts were analyzed with the limma package (v3.56.2) in R v4.1.1 [53]. Differentially produced proteins were defined as proteins with adjusted p-value < 0.05 and log2FC > 0.5 (1.4-fold increase in abundance) or log2FC < −0.5 (1.4-fold decrease in abundance) for up- and downregulated proteins, respectively.
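A minimal sketch of the limma-based differential abundance test described above is shown below. The object `prot` (a protein-by-sample matrix of log-transformed abundances) and the group labels are hypothetical placeholders.

```r
# limma sketch for differential protein abundance, assuming `prot`
library(limma)

group  <- factor(c("wt", "wt", "wt", "mut", "mut", "mut"),
                 levels = c("wt", "mut"))
design <- model.matrix(~ group)          # intercept + mut-vs-wt coefficient

fit <- lmFit(prot, design)               # linear model fit per protein
fit <- eBayes(fit)                       # moderated t-statistics

# Thresholds used in the paper: adjusted p < 0.05 and |log2FC| > 0.5
tab <- topTable(fit, coef = "groupmut", number = Inf, adjust.method = "BH")
dap <- subset(tab, adj.P.Val < 0.05 & abs(logFC) > 0.5)
```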
Code Availability

All code pertinent to the bioinformatic and statistical analyses presented in this work is available at https://github.com/mrbarbitoff/sup35n_expression_analysis, accessed on 4 June 2024. Raw and processed RNA-seq data have been submitted to the Gene Expression Omnibus (GEO) database (accession number GSE267888). Raw mass spectrometry data have been uploaded to the Proteomics Identification (PRIDE) database (accession number PXD052727).

Figure 1. Transcriptome profile of strains harboring the nonsense mutation sup35-218. (A) Scheme illustrating the experimental design used to study yeast adaptation to nonsense mutations in release factor genes. (B) Scatterplot representing principal component analysis of 10 yeast transcriptomes carrying different combinations of alleles of the SUP35 gene. (C) Comparative analysis of the number of differentially expressed genes, depending on the log2FC cutoff and the set of controls used. (D) Analysis of differential gene expression in cells at the 3rd stage of the experiment carrying the sup35-218 allele, compared to 3rd-stage cells harboring the SUP35 wild-type allele. Significantly differentially expressed genes (FDR < 0.05) are represented by red dots (upregulated) and blue dots (downregulated). Grey dots represent genes with unaltered expression.

Figure 2. Results of gene set enrichment analysis using Biological Process (BP) and Cellular Component (CC) terms from Gene Ontology. The analysis was performed using the clusterProfiler package for R. The dot size is proportional to the enrichment ratio, and the color gradient represents the adjusted significance level. (A) Genes with increased expression in cells bearing sup35-218 (biological processes). (B) Genes with increased expression in cells bearing sup35-218 (cellular component). (C) Genes with decreased expression in cells bearing sup35-218 (biological processes).

Figure 4. Validation of changes in expression of DEGs identified in the RNA-seq analysis using qPCR. Shown are boxplots of the relative expression levels calculated using the ∆∆Ct method (see Section 4 for more details). ns: not significant; ** p-value < 0.05; *** p-value < 0.001 according to the Wilcoxon-Mann-Whitney rank sum test. At least seven biological replicates were used for each group.

Figure 5. Proteome profile of strains harboring the nonsense mutation sup35-218. (A) Analysis of differential protein production in cells at the 3rd stage of the experiment carrying the sup35-218 allele, compared to cells harboring the SUP35 wild-type allele. Differentially produced proteins (FDR < 0.05) are represented by red dots (upregulated) and blue dots (downregulated). Grey dots represent proteins with unaltered production. (B) Evaluation of the correlation of gene and protein expression levels in strains carrying the wild-type SUP35 allele or the mutant sup35-218 allele (left and right plots, respectively).

Figure 6. Cell cycle analysis by flow cytometry, illustrating the DNA content of yeast cultures of the wild-type strain (L-14-D1690) or sup35-218 mutant cells (218L-14-D1690). Ten independent yeast cultures were grown and then prepared for flow cytometry as described in Section 4. Curves representing independent clones are shown in different colors. Locations of the peaks of the 1C and 2C cell populations are indicated by arrowheads.

Figure 7. A schematic representation of the possible changes in the cell cycle in cells carrying the sup35-218 mutation. (A) Changes in expression of genes responsible for the progression of cell cycle phases in cells carrying the sup35-218 allele (right) compared to cells carrying the SUP35 allele (left). Purple asterisks indicate checkpoints in cell cycle regulation. Genes with increased expression are depicted in red (as in Figure 3) and marked with an up arrow; genes with decreased expression are depicted in blue and marked with a down arrow. (B) Scheme of changes occurring during plasmid replication and segregation in cells carrying the sup35-218 allele (right) compared to cells carrying the SUP35 allele (left). Plasmids with wild-type SUP35 are shown in green; plasmids with the sup35-218 allele are shown in blue. Genes with changes in expression validated using qPCR are underlined.

Table 1. Genes with concordant changes in expression level in the transcriptome and proteome.

Table 2. Primers used for qPCR analysis in this work.
The Genetic Architecture of Depression in Individuals of East Asian Ancestry

Key Points
Question: Are the genetic risk factors for depression the same in individuals of East Asian and European descent?
Findings: In this genome-wide association meta-analysis of depression in 194 548 individuals with East Asian ancestry, 2 novel genetic associations were identified, one of which is specific to individuals of East Asian descent living in East Asian countries. There was limited evidence for transferability, with only 11% of depression loci previously identified in individuals of European descent reaching nominal significance levels in the individuals of East Asian descent.
Meaning: Caution is advised against generalizing findings about genetic risk factors for depression beyond the studied population.

A. China Kadoorie Biobank (CKB) cohort
The association analysis was adjusted for age, sex, principal components (PCs), and recruitment region. After filtering out variants with an effective sample size (Neff) < 50 2 and poorly imputed variants (info < 0.7), 10,834,708 variants were included in the downstream analyses.

B. China, Oxford and Virginia Commonwealth University Experimental Research on Genetic Epidemiology cohort (CONVERGE)
The CONVERGE cohort of Han Chinese women has been previously described 3. Briefly, ~5,000 cases of recurrent MDD (≥2 episodes), established with the CIDI using DSM-IV criteria, were analysed against an equal number of controls. Cases with a medical history of bipolar disorder, psychosis, mental retardation, and/or drug or alcohol abuse before their first depressive episode were excluded from the study. CONVERGE samples underwent whole-genome sequencing, as previously described 3. In brief, after genotype calling, two rounds of imputation were performed: first without a reference panel and then using the 1000 Genomes Phase 1 Asian haplotypes. Variants with (a) a P-value for violation of HWE < 10⁻⁶, (b) an information score < 0.9, and (c) MAF in CONVERGE < 0.5% were excluded from the GWAS, resulting in a final set of 5,987,610 SNPs. The GWAS was conducted with a mixed linear model including a genetic relationship matrix (FastLMM version 2.06.20130802) as a random effect and PCs from eigen-decomposition of this matrix as fixed effects. We further filtered the publicly available GWAS summary statistics by removing variants with Neff less than 50.

C. 23andMe cohort
The GWAS dataset of the personal genetics company 23andMe, Inc. (Sunnyvale, CA) included in this meta-analysis encompassed 2,729 depression cases and 90,310 controls of East Asian ancestry. All participants provided informed consent and answered surveys online according to 23andMe's human subjects protocol, which was reviewed and approved by Ethical & Independent Review Services, an AAHRPP-accredited institutional review board. As part of the medical history survey, participants were asked if they had ever received a clinical diagnosis of or treatment for depression (binary variable). DNA extraction and genotyping were performed on saliva samples by the National Genetics Institute (NGI), a CLIA-licensed clinical laboratory and a subsidiary of Laboratory Corporation of America. Samples were genotyped on one of five genotyping platforms. The v1 and v2 platforms were variants of the Illumina HumanHap550+ BeadChip, including about 25,000 custom SNPs selected by 23andMe, with a total of about 560,000 SNPs. The v3 platform was based on the Illumina OmniExpress+ BeadChip, with custom content to improve the overlap with the v2 array, with a total of about 950,000 SNPs.
The v4 platform was a fully customized array, including a lower-redundancy subset of v2 and v3 SNPs with additional coverage of lower-frequency coding variation, and about 570,000 SNPs. The v5 platform (68.4% of the samples in the East Asian dataset) is an Illumina Infinium Global Screening Array (~640,000 SNPs) supplemented with ~50,000 SNPs of custom content. This array was specifically designed to better capture global genetic diversity and to help standardize the platform for genetic research. Imputation was performed with Minimac3 using a reference panel combining the May 2015 release of the 1000 Genomes Phase 3 haplotypes with the UK10K imputation reference panel. The association testing was performed by logistic regression assuming additive allelic effects, adjusting for age, sex, the top five principal components to account for residual population structure, and indicators for genotyping platform to account for genotype batch effects. The association analysis and downstream quality control were conducted separately for genotyped and imputed SNPs. Genotyped GWAS results were filtered to remove: SNPs genotyped only on the v1 and/or v2 platforms (due to small sample size); SNPs on chrM or chrY; SNPs that failed a test for parent-offspring transmission; SNPs with fitted β < 0.6 and P < 10⁻²⁰ for a test of β < 1; SNPs with Hardy-Weinberg P < 10⁻²⁰ or a call rate < 90%; SNPs with genotype date effects (determined as P < 10⁻⁵⁰ by ANOVA of SNP genotypes against a factor dividing genotyping date into 20 roughly equal-sized buckets); SNPs with large sex effects (ANOVA of SNP genotypes, r² > 0.1); SNPs with probes matching multiple genomic positions in the reference genome; and variants with minor allele counts in the controls of less than 50. For imputed GWAS results, SNPs with poor imputation quality (rsq < 0.7), Neff less than 50, or strong evidence of a platform batch effect were excluded from the downstream analysis. The batch effect test is an F-test from an ANOVA of the SNP dosages against a factor representing the v4 or v5 platform (P < 10⁻⁵⁰). Across all results, further filtering removed SNPs with an available sample size of less than 20% of the total GWAS sample size, as well as logistic regression results that did not converge due to complete separation, identified by abs(effect) > 10 or stderr > 10 on the log-odds scale.

D. Taiwan Major Depressive Disorder (MDD) Study
MDD patients were included from a family study of mood disorders in Taiwan. The GWAS was performed using PLINK 1.9 and adjusted for 5 ancestry principal components. The GWA analysis was conducted separately by platform, with (1) the Affymetrix TWB2.0 array and (2) all other platforms combined. In the latter, variants significantly associated with genotyping platform (P < 0.005) were excluded from downstream analysis. We also used a stricter imputation threshold for filtering (info < 0.9 instead of 0.7).

E. Women's Health Initiative study (WHI)
The WHI study is a long-term national health study in the U.S. conducted in postmenopausal women enrolled either in a clinical trial or an observational study 5. We analysed data from 3,492 women of Asian ancestry who were genotyped as part of the WHI Population Architecture using Genomics and Epidemiology (PAGE) sub-study. These participants had agreed for their data to be included in the database of Genotypes and Phenotypes (dbGaP). The genotype and phenotype data were accessed via dbGaP study accession phs000200.v12.p3.
Depressive symptoms in the past week were assessed at the baseline visit with the 6-item Center for Epidemiological Studies Depression Scale (CES-D) form. Based on the definitions of Smoller et al. 6, participants with a score of 5 or more were considered depression cases, while participants not classified as currently depressed (6-item CES-D), without a medical history of depression (2-item Diagnostic Interview Schedule), and not on antidepressant therapy constituted the control group. The Asian WHI participants included in our analyses had been genotyped with the CardioMetaboChip as part of the NHGRI's PAGE project. Samples and variants with a call rate lower than 95%, typed variants with a difference in missingness rates between the case and control groups > 0.2, and variants with MAF < 0.05 were excluded from downstream analysis. A logistic regression analysis was performed (PLINK2), adjusting for age, sex, 20 PCs, and study subgroup.

F. Intern Health Study (IHS)
We also considered participants from the IHS, a multi-institutional longitudinal cohort study of medical interns in the U.S. The study design has been described previously 7. Depressive symptoms were measured with the PHQ-9 questionnaire, a self-report component of the primary care evaluation of mental disorders inventory. Subjects were asked to complete the PHQ-9 in the baseline survey, as well as at months 3, 6, 9, and 12 of their internship year. Participants with a PHQ-9 score of 10 or greater 8 during their internship were considered depression cases in this study. In total, 294 depression cases and 544 controls were included in this association study. IHS samples were genotyped on the Illumina Infinium CoreExome v1.0 or v1.1 array. Quality control and imputation were performed using the Ricopili Rapid Imputation Consortium Pipeline 9. Study samples were assigned to distinct ancestry groups based on PCs derived from the study samples combined with the 1000 Genomes reference panel. In brief, samples with a call rate < 98% or a gender mismatch between genotype and reported data were excluded. For duplicated samples and up to third-degree relatives, the sample with the higher call rate was selected. Variants with a call rate < 98% or a case-control missingness difference > 0.20 were also excluded prior to imputation. Genotypes were imputed to the Haplotype Reference Consortium (HRC) reference panel using EAGLE and IMPUTE2 for phasing and imputation, respectively. A logistic regression analysis was performed (PLINK2) on genotype dosages, adjusting for age, sex, and the first 20 PCs. Variants with MAF < 0.05 and imputation info score < 0.7 were excluded from downstream analysis, resulting in a dataset of 4,626,568 variants.

G. UK Biobank (UKB)
UKB is a well-characterized cohort of more than 500,000 individuals recruited in the UK between 2006 and 2010 with linked health and genetic data 10. A subset of participants has also completed the mental health questionnaire. We used a combination of hospital diagnoses (ICD-10 codes) and the lifetime CIDI to define our cases (A. prolonged feelings of depression OR prolonged loss of interest in normal activities; AND B. affected more than half of the day during the worst episode of depression; AND C. the frequency of depressed days during the worst episode was almost every day/every day; AND D. these problems interfered with your life/activities (study/employment, childcare and housework, leisure pursuits) somewhat/a lot).
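Because the CIDI-based case definition is a conjunction of four criteria, a boolean sketch may clarify how it combines. The sketch below assumes a hypothetical data frame `mhq` of questionnaire responses; all column names are placeholders, not actual UKB field names.

```r
# Hypothetical encoding of the lifetime-CIDI case definition described above
is_case <- with(mhq,
  (prolonged_depressed_mood | prolonged_anhedonia) &            # criterion A
  worst_episode_most_of_day &                                   # criterion B
  depressed_days_freq %in% c("almost every day", "every day") & # criterion C
  impairment %in% c("somewhat", "a lot")                        # criterion D
)
# Cases additionally include hospital ICD-10 depression diagnoses
is_case <- is_case | mhq$icd10_depression
```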
Gender mismatches, missingness/heterozygosity outliers, participants with excessive genetic relatedness, participants without quality control metrics, individuals who had withdrawn their consent, and up to 2nd-degree relatives (PC-Relate) were excluded before the analysis. UKB genotyping was conducted by Affymetrix using two similar arrays: the Applied Biosystems UK BiLEVE Axiom Array, comprising 807,411 genetic variants, and a bespoke UK Biobank Axiom array, comprising 825,927 genetic variants. All genetic data were quality controlled by the UKB bioinformatics team, at both the sample and marker level, resulting in a dataset of 488,377 samples and 805,426 variants from both arrays. The genetic data were subsequently imputed by UKB to over 90 million SNPs, indels, and large structural variants, using haplotypes of British, European, and diverse-ancestry populations. For this study, we used data imputed with both the HRC and the merged UK10K and 1000 Genomes Phase 3 reference panels 10. To assign individuals to ancestry groups based on their genetic information, we implemented the PC-AiR method to perform a PC analysis for the detection of population structure 11. A logistic regression analysis was performed on the imputed genetic dataset (PLINK2), adjusting for age, sex, genotyping array, and PCs calculated from the subset of genetically defined EAS participants. Downstream analysis was restricted to the subset of common (MAF > 0.05) and well-imputed (info > 0.7) variants. The analysis was conducted under UK Biobank application 51119.

H. Army Study To Assess Risk and Resilience in Servicemembers (Army STARRS)
Data from Army STARRS, a study conducted among army members in the USA, were also assessed in the current analysis. Army STARRS includes the New Soldier Study (NSS) and the Pre/Post Deployment Study (PPDS). Detailed information about the design of the study has been published previously 12. Depression outcomes were measured with the CIDI screening scales and evaluated for concordance with DSM-IV diagnoses within the Army STARRS clinical reappraisal study 13. The genotyping and imputation of Army STARRS NSS samples have been described previously 14. In brief, samples were genotyped using the Illumina OmniExpress and Exome array and were imputed to a multi-ancestry reference panel from the 1000 Genomes Project (phase 1). Samples and genetic variants with call rates of less than 95% and 98%, respectively, were filtered out. A logistic regression analysis was performed on common and well-imputed variants (PLINK2), adjusting for age, sex, and the first 20 PCs.

I. BioMe cohort
BioMe samples were genotyped with the Infinium Global Screening Array (GSA) BeadChip. Individuals whose population-specific heterozygosity rate surpassed ±6 standard deviations of the population-specific mean, individuals with a call rate of < 95%, individuals with discordant reported and genetic sex, and individuals with phenotypically intermediate sex were not considered in the analysis. In cases of duplicates, the sample of each pair with the lower missingness rate in the exomic data was preferentially excluded. Genetic variant exclusions included a call rate < 95% and HWE P < 10⁻⁵. The resulting dataset was imputed to the 1000 Genomes Phase 3 reference panel. The GWAS was performed with a binary mixed model (SAIGE). The first 20 PCs were calculated using PLINK (v1.9), and a genomic relationship matrix (GRM) was calculated using the KING (v1.4) software (--ibs).
The PCA and GRM calculations were restricted to common (MAF > 0.01) autosomal sites. Additionally, variants with MAF < 0.05 and info < 0.7 were excluded before the meta-analysis.

Data availability statement
Summary statistics for the combined EAS meta-analysis excluding the 23andMe study are available through the PGC website (http://www.med.unc.edu/pgc/downloads). The genome-wide summary statistics for CONVERGE and the European meta-analysis are also available on the PGC website. Uploading and sharing of individual genetic data from CKB are subject to restrictions according to the Interim Measures for the Administration of Human Genetic Resources, administered by the Human Genetic Resources Administration of China (HGRAC). Summary data, including allele frequencies and GWAS summary statistics, are available by application and restricted to research-related purposes. Other individual-level CKB data are available through www.ckbiobank.org, subject to completion of a Material Transfer Agreement, either through Open Access or on application. CKB data access is subject to oversight by an independent Data Access Committee. Analyses using CKB data were conducted under research approval 2018-0018. Data from 23andMe, Inc. were made available under a data use agreement that protects participant privacy. Please visit https://research.23andme.com/collaborate/#dataset-access for more information and to apply for access to the data. The raw genetic and phenotypic UK Biobank data used in this study, which were used under license (application number 51119), are available from http://www.ukbiobank.ac.uk/. The genotype and phenotype data for the WHI study can be requested via dbGaP study accession phs000200.v12.p3.

Genotyping
The genotyping of each study has been described previously 3,4,10,14,16. To optimise genome-wide coverage in EAS populations, genotyping was carried out using two custom-designed Affymetrix Axiom arrays in CKB and the Affymetrix TWB2.0 array for a subset of the Taiwan MDD study samples 1,4. CONVERGE used whole-genome sequencing with a mean depth of 1.7 3. More detail for all studies is provided in the study descriptions above.

Quality control
Quality control and association analyses were carried out separately for each study, as described in the study descriptions and Supplementary Table 2. Genotypes were imputed to the 1000 Genomes Project reference panel, except for IHS, where the Haplotype Reference Consortium (HRC) panel was used, and 23andMe and UKB, where the 1000 Genomes data were combined with the UK10K and HRC imputation reference panels, respectively. In the meta-analysis, we included only well-imputed variants (imputation accuracy > 0.7) with an effective sample size (Neff) of 50 or higher 2 in the larger datasets (CONVERGE, CKB, 23andMe), and with minor allele frequency (MAF) ≥ 0.05 in the other studies. For the Taiwan MDD study, an imputation accuracy threshold of 0.9 was used.

Meta-analysis
We performed a Z-score weighted meta-analysis using METAL 33 for 13,163,200 genetic variants (Supplementary Figure 1). For all meta-analyses, results were restricted to variants present in at least two studies. We also performed a Z-score weighted meta-analysis combining the results from our EAS analysis with the publicly available summary statistics from the largest published GWAS in EUR samples 17.
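For orientation, the sketch below shows the sample-size-weighted Z-score combination scheme that METAL implements; the per-study Z-scores and effective sample sizes are hypothetical placeholders.

```r
# Sample-size-weighted Z-score meta-analysis for one variant
z_meta <- function(z, n) {
  w <- sqrt(n)                    # weights proportional to sqrt(Neff)
  sum(w * z) / sqrt(sum(w^2))     # combined Z-score
}

z  <- c(2.1, 1.4)                 # study-level Z-scores (placeholders)
n  <- c(10275, 9303)              # effective sample sizes (placeholders)
zm <- z_meta(z, n)
p  <- 2 * pnorm(-abs(zm))         # two-sided meta-analysis p-value
```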
Variants associated at genome-wide significance in this trans-ancestry meta-analysis were considered novel if they were located more than ±250 kb from the lead variants of the published GWAS of depression in EUR samples and if their linkage disequilibrium (LD) with the lead variant was < 0.01 17. We calculated betas for the meta-analyses using the formula from Zhu et al. 18. Odds ratios were based on an inverse-variance weighted meta-analysis of the study betas, where for CONVERGE we used results from a logistic regression in PLINK instead of FastLMM.

Functional annotation and gene-based association analysis
We functionally annotated the lead variants and their proxies (r² ≥ 0.8). Gene-based association analysis was performed using MAGMA (v1.08), implemented in FUMA, with default settings 19,20. SNPs were mapped to 19,575 protein-coding genes from Ensembl build 85. Significance for the gene-based analysis was defined at the Bonferroni-corrected threshold (P = 2.6×10⁻⁶). We functionally annotated the lead SNPs in the genomic regions associated with increased risk for depression using HaploReg v4 21 and the Open Targets Genetics platform 22. Candidate genes for each locus associated with depression were selected based on their proximity to the lead variant and/or evidence of eQTL associations for a gene in that region. Open Targets Genetics interrogates various data sources to link genetic variation to gene expression. The GeneCards database was used to obtain summary information on the identified genes, while NCBI's PubMed database was used to interrogate literature related to gene function and associations with other human traits/diseases. We queried the identified variants and their proxies in PhenoScanner 23 and the NHGRI-EBI GWAS Catalog 24 to investigate trait pleiotropy.

Reproducibility of established depression loci
We assessed whether the associations of 102 established depression loci from the largest published EUR GWAS 17 were reproducible in samples of EAS ancestry. Since the lead SNP might be neither the causal variant nor correlated with it in other ancestry groups due to LD differences, we also formed credible sets that are likely to include the causal variant. These were based on all variants in LD with the lead variant of a locus (r² > 0.6), using an ancestry-matched reference (1000 Genomes Project v3 EUR samples). We then assessed whether any variant in the credible set displayed evidence of association in the target study. As these credible sets contained multiple SNPs, we used a p-value threshold of P < 0.01 to indicate reproducibility. While this threshold might not provide conclusive evidence of reproducibility for individual loci, we used it to test reproducibility rates across sets of loci. We estimated the number of associations out of the 102 established loci that were expected to replicate, accounting for the sample size of our study and the allele frequencies in EAS populations. First, we calculated the power 25 to observe an association in the EAS meta-analysis for each of the 102 loci at an alpha error of 0.05, using the effect estimate from the EUR discovery study 8, the allele frequency for EAS samples from 1000 Genomes, and the sample size available in the EAS meta-analysis. By summing these probabilities across the 102 loci, we derived the absolute number of associations out of the 102 that we were powered to observe if the effect estimates in EAS are consistent with those from the EUR studies.
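A hedged sketch of this per-locus power calculation follows. It uses a simple normal approximation, Var(β̂) ≈ 4 / (Neff · 2p(1−p)) under the Neff = 4/(1/Ncases + 1/Ncontrols) convention, which may differ from the exact method of the cited power reference; all effect sizes and frequencies are placeholders.

```r
# Approximate per-locus replication power and expected number of hits
locus_power <- function(beta_eur, eaf_eas, n_eff, alpha = 0.05) {
  se  <- 2 / sqrt(n_eff * 2 * eaf_eas * (1 - eaf_eas))  # approx. SE of beta
  ncp <- abs(beta_eur) / se                # expected Z in the EAS sample
  zc  <- qnorm(1 - alpha / 2)
  pnorm(ncp - zc) + pnorm(-ncp - zc)       # two-sided power
}

beta <- c(0.030, 0.025, 0.040)   # EUR effect estimates (placeholders)
eaf  <- c(0.30, 0.12, 0.45)      # EAS allele frequencies (placeholders)
# Expected number of replications = sum of per-locus power
expected_hits <- sum(locus_power(beta, eaf, n_eff = 100000))
```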
For benchmarking, we also assessed the reproducibility of these established loci in ancestry-matched cohorts, using independent EUR GWASs of depression with different sample sizes (BioMe, BioVU, FinnGen 26, 23andMe).

Heritability and genetic correlations
We estimated the SNP heritability (h²) for each depression phenotype in EAS (meta-analysed cohorts) using LD score regression (LDSC) 27. We also used bivariate GREML, implemented in the GCTA software 28, to estimate h² for the two large Chinese datasets, CONVERGE and CKB (symptom-based definition), which contribute the majority of samples in our analysis and for which genotype data were available. For this, we excluded related individuals and used hard calls for variants with call rate > 0.95 and MAF > 0.01. For this analysis, we used a variety of prevalence estimates, ranging from 6.5% 29 to 15% 30. To characterise the genetic architecture of depression, we estimated genetic correlations between depression in EAS and EUR studies. For clinical depression in EUR samples, we used the summary statistics from 45,396 cases with a DSM-based diagnosis of major depressive disorder and 97,250 controls from a meta-analysis of 33 independent cohorts included in the latest GWAS 17, excluding UKB and 23andMe. Additionally, we generated a symptom-based definition for EUR samples using the PHQ-9 questionnaire and a cut-off score of 10 31, yielding 6,510 affected individuals and 116,697 controls from UK Biobank 10,32. To assess the sharing of genetic risk factors for depression across the genome between the two populations, we estimated trans-ancestry genetic correlations using POPCORN 33. We estimated the genetic effect correlation, which compares effects independently of allele frequency differences between the two populations. LDSC was also used to estimate genetic correlations between different outcomes within each ancestry group. The default LD scores computed using 1000 Genomes EAS data were used as a reference for the LD estimates. We also assessed the genetic overlap with other traits using publicly available summary statistics (PGC, NHGRI-EBI GWAS Catalog) from EAS and EUR populations, using LDSC and POPCORN, respectively, as described above. We only present genetic correlation estimates where the standard error (SE) was less than 0.3. To aid interpretation of the trans-ancestry genetic correlations, we also gathered estimates for other traits. We extracted genetic correlations between EUR and EAS from publications [34-37]. Additionally, we used publicly available summary statistics from Biobank Japan 38,39 and EUR GWASs to estimate correlations for coronary artery disease (CAD) 40, breast cancer 41, and age at menarche 42 using POPCORN, as outlined above.

A novel locus at 7p21.2 was associated with depression at genome-wide significance in the analysis of the East Asia-based studies (Table 1). The lead SNP, rs10240457 (EAF = 0.646, beta for A allele = 0.028, SE = 0.005, P = 5.0×10⁻⁹), is intronic to AGMO (alkylglycerol monooxygenase). This gene cleaves the O-alkyl bond of ether lipids, which are essential components of brain membranes and function in cell signalling and other critical biological processes. We carried out a meta-analysis of the broad depression outcome in EAS and the largest GWAS of depression in EUR samples 17 (Figure 1B, Supplementary Figure 4). The lead variant at 1q25.2, rs7548487 (beta for A allele = −0.013, SE = 0.002, P = 1.29×10⁻⁸), is located in an intron of ASTN1 (astrotactin 1).
Astrotactin is a neuronal adhesion molecule required for glial-guided migration of young postmitotic neuroblasts in cortical regions of the developing brain 47. The C allele of the lead variant at 18q12.1, rs547488, had beta = 0.008 (SE = 0.001) and P = 3.3×10⁻⁸. It is located downstream of CDH2 (cadherin 2) and is nominally associated with the expression of CDH2 in the brain (UKBEC, P = 0.03; BrainSeq 48, P = 0.027). CDH2 encodes N-cadherin, which is broadly expressed in multiple tissues, has been shown to play a role in the development of the nervous system, and has been associated with neurodevelopmental disorders 49. The third locus is 22q13.31, with lead variant rs12160976 (beta for A allele = −0.009, SE = 0.002, P = 1.6×10⁻⁸).

Gene-based analysis
We also performed a gene-level aggregate test based on the meta-analysis summary statistics using MAGMA (v1.08), as implemented in FUMA 20. The ETS variant transcription factor 5 gene (3q27.2) was the only gene that passed the significance threshold (P = 6.9×10⁻⁶). It has previously been associated with depression risk in an EUR study 50.

Reproducibility
In addition to the comparisons described in the main manuscript, to rule out that the low reproducibility rates are due to differences in LD patterns between the ancestry groups, we created credible sets of SNPs that are likely to contain the causal variants and assessed their associations in the EAS data. Of the 102 credible sets, 13 (12.7%) contained variant(s) with P < 0.01 in the EAS association analysis of depression. We also assessed a high-confidence set of loci from the largest EUR meta-analysis that were replicated in an independent 23andMe dataset 8. Of the 86 loci available in the EAS meta-analysis, 13 (15.1%) of the credible sets contained a variant with P < 0.01.

eFigure 7. Genetic correlations between the clinical and symptom-based depression phenotypes in East Asians and other traits in Europeans. For this analysis, we used published summary statistics for schizophrenia, age at menarche, body mass index (BMI), and type 2 diabetes from European (EUR) GWASs (LDSC) and East Asian (EAS) GWASs (POPCORN). Colours correspond to the direction and strength of the genetic correlations (rgen). Statistically significant genetic correlations are indicated by a star (*).
Stress analysis of a second stage gas turbine blade under asymmetric thermal gradient

In this study, the main causes of the failure of a GE-F9 second stage turbine blade were investigated. The stress distribution of the blade, which has 6 cooling vents, was studied in three modes (full cooling, closure of half of the cooling channels, and no cooling). A three-dimensional model of the blade was built, and the fluid flow over the blade was studied using the FVM method. The stress distribution due to the centrifugal forces applied to the blade, the temperature gradients, and the aerodynamic forces on the blade surface was calculated with the finite element model. The results show that the highest temperature gradient, and as a result the highest stress value, occurs in the semi-cooling state in the areas near the blade root, whereas for the regions far from the root this holds for the full cooling mode. However, field observations showed that the failure occurred in the blade with the semi-cooling state (due to closure of some of the channels) in areas far from the root. It is discussed that the main factor in the failure is not the stress value being maximum, because in the full cooling mode (the state with the maximum stress values) the temperature of the blade is the lowest, and as a result the blade material shows better resistance to phenomena such as hot corrosion and creep.

Introduction
One of the important components of gas turbines is their moving blades, which are under mechanical and thermal stresses due to high-speed rotation and exposure to high temperatures. To improve turbine efficiency, the gas turbine inlet temperature should be increased [1,2]. On the other hand, the temperature of the turbine blades needs to be kept lower than a certain value because of the limitations of the material properties at high temperatures. To achieve ideal conditions in the design and manufacture of gas turbine blades, accuracy in the measurement of the temperature distribution of the blade is very important. In recent years, many studies have therefore been conducted to estimate the temperature and stress distribution of the turbine blade [3], the turbulence intensity of the streamline [4,5], and the Reynolds and Mach numbers [6,7], to experimentally study the effect of cooling temperatures and mass flow on the heat transfer distribution on turbine blades [8], and to examine swirl effects of unsteady vortices [9,10] as well as the tip and shape of the blade [11,12]. In addition to these experiments, numerical studies have been carried out using CFD codes developed based on the Navier-Stokes equations and boundary layer models. Among the numerical studies, we can note work on the physical effect of the flow in the cooling hole on heat transfer in the turbine blade [13] and the improvement and development of turbulence models in order to accurately predict heat transfer from the surface of the turbine blade with and without cooling [14]. However, most of the numerical simulations conducted on this subject are two-dimensional or do not apply near-real conditions, so a thorough assessment considering the real geometry and near-real boundary conditions needs to be done. In this study, a numerical analysis is conducted to estimate the thermal stresses due to the temperature gradient and the stresses caused by aerodynamic and centrifugal effects on a fractured blade. Figure 1 shows the fractured blade.
Geometry, boundary conditions and material properties
The model of the blade was designed using CAD software. Since there are several influential parameters, such as the complex geometry of the blade, the unsteady nature of the flow, the relative motion of the components, and turbulence, a proper mesh is needed to obtain accurate results. For this purpose, several meshes with different numbers of elements were built with the Gambit 2.4.6 software, and mesh independence was investigated. To analyze the fluid flow, the sector volume around the blade was modelled, considering the arrangement of the 93 adjacent blades of the second turbine stage. A total of 3,287,137 structured elements were used, 1,482,314 of which are for the blade body (Fig. 2) and the rest for the environment around the blade and the 6 cooling holes (Fig. 3). It should be noted that, as shown in Figure 2, a boundary layer mesh was used in the areas near the blade surface. The boundary conditions used in this study are based on measurements made at the power plant. According to Figure 4, the inlet mass flow rate is 4 kg/s at a temperature of 1149 K. For the cooling channels, the inlet cooling fluid mass flow rate from the compressor is 7.2 × 10⁻⁴ kg/s for each channel, and its temperature is 620 K. A periodic boundary condition is used for the lateral boundaries, and a pressure outlet boundary condition is used for the outlet surface. Since the flow around the turbine blade is turbulent, the k-ε model was used to simulate the flow. The convergence criterion was set to 10⁻⁶ for the residuals. The material properties of the GTD-111 nickel-base alloy are shown in Table 1.

Results and discussion
In theory, the thermal stresses and the static and dynamic forces that affect the performance of the turbine blades are as follows: the aerodynamic forces of the flow, the stresses resulting from the temperature gradient, and the forces due to the centrifugal effect. In this section, the effect of each parameter on blade performance is explained.

Fluid flow analysis
The uniform distribution of the force from the fluid pressure from the bottom to the top of the blade is of great importance. An unequal distribution of the force makes the gases flow over the blade with different speeds and pressures. The difference in rotational speed between the hub and tip of the blade reduces the relative speed of the gases at the tip, so less force is applied to the tip of the blade compared with the hub. Therefore, modern gas turbines use blades that have impulse action at the hub and reaction at the tip. Figure 5 shows the distribution of static pressure on the blade. The pressure drop required for the blade reaction appears at the tip and gradually changes to conditions without a drop for the impulse action at the hub. The high pressure at the tip makes the gases move toward the root of the blade; this effect opposes the centrifugal forces, which drive the gases toward the tip. As a result, a uniform force distribution occurs across the entire blade. Figure 6 shows the static pressure coefficient, which is

C_p = (P_x − P_ref) / (0.5 ρ V_b²),

where P_x and P_ref are the static pressure on the mid-surface of the blade and the reference pressure at the stagnation point, respectively, and V_b is the speed at the blade inlet, used as the reference for the dynamic pressure term. The difference between the results of the present study and those of reference [3] is due to the physical conditions of the flow.
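As a small numeric sketch of the pressure coefficient just defined, the snippet below evaluates C_p pointwise; the density, inlet speed, and pressure values are hypothetical placeholders, not measured plant data.

```r
# Pointwise static pressure coefficient along the blade mid-surface
rho   <- 3.5          # gas density, kg/m^3 (placeholder)
v_b   <- 180          # blade inlet speed, m/s (placeholder)
p_ref <- 9.8e5        # stagnation-point reference pressure, Pa (placeholder)
p_x   <- c(9.2e5, 8.9e5, 8.5e5)   # static pressures along mid-surface, Pa

cp <- (p_x - p_ref) / (0.5 * rho * v_b^2)   # pressure coefficient per point
cp
```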
Figure 7 shows the comparison of the heat transfer coefficient in the middle part of the blade in the present study with the mass and heat transfer coefficients in previous studies conducted by others [3,4,5,9,10]. The highest amount of heat transfer occurs due to the collision of the main flow with the stagnation point at the leading edge. The lowest heat transfer coefficient occurs due to the development of the thermal boundary layer at the trailing edge. The highest temperature is observed at the blade tip.

Heat transfer analysis
In this study, the Reynolds number is considered to be 1.07 × 10⁶. One of the reasons for the differing results in some parts is the difference between the blade geometry studied here and those studied by other researchers. Another factor affecting these results is the condition in which the flow passes over the blade surface. Since the effect of the temperature gradient on the stress distribution in the blade is much greater than that of the fluid pressure on the blade, and since, given the blades' working conditions, there is a probability of blockage in some cooling channels, the simulation is done in three modes for better comparison: (1) full cooling with 6 cooling channels; (2) cooling assuming the blockage of half of the cooling channels (Fig. 8); and (3) assuming the blockage of all cooling channels. To find the temperature distribution and the stress caused by temperature gradients in the blade, 4 equally spaced sections along the blade are considered, according to Figure 9, for all three cooling modes. Figure 10 shows the temperature distribution and the stress caused by temperature gradients in section 1 for all three modes. In the full cooling mode with 6 cooling channels, the maximum blade temperature is significantly lower than in the other two conditions. The highest temperature gradient occurs in the half-cooling mode and is equal to 74 K. The maximum stresses caused by the temperature gradient in this section are related to the half-cooling state. Figure 11 shows section 2, at a distance of one third of the blade length from the hub. With increasing height, the average temperature increases in all three modes, but the temperature gradient decreases at each section; as the temperature gradient decreases, the corresponding stresses also decrease. Figure 12 shows section 3, at a distance of two thirds of the blade length from the hub. The blade without cooling is subjected to relatively small temperature changes. In blades with cooling channels, the effect of the cooling fluid decreases with height, so the section temperature increases. The maximum temperature of section 3 occurs at the trailing edge of the blade in all three modes, and the highest temperature occurs in the mode without cooling. In this section, a reduction in the stresses caused by the temperature gradient can also be seen in all three modes. Figure 13 shows the section near the blade tip, i.e., section 4. Compared with all other sections, the maximum temperature occurs in this section, and in all cases the trailing edge of the blade is at a high temperature, which is due to the warming of the cooling fluid and its low impact at this height of the blade. The temperature difference between the three modes is also reduced here, and the thermal stress at this section is the lowest and is similar across the three states.
Figure 14 shows the temperature distribution on the wall in contact with the cooling fluid for the case with 6 cooling channels. The results show that with increasing height the cooling fluid gets warmer and the wall temperature rises.

Centrifugal force
This force is a function of the rotor rotation speed, the turning radius, and the mass of the blades. Since this force is harmful to the blades, designers always try to reduce the blade mass to minimize it. To calculate analytically the stress distribution caused by the centrifugal force along a blade with variable cross-section, we proceed as follows [16]:

σ(r) = F_r(r) / S_r(r),     (1)

where F_r is the centrifugal force and S_r is the cross-sectional area function of the blade, both defined as functions of the turning radius:

F_r(r) = F_bl(r) + F_b.     (2)

According to Figure 15, R_hub is the radius of the blade hub and R_tip is the radius of the blade tip. In relation (2), the parameter F_b is the centrifugal force of the shroud banding, which is taken to be zero for this blade due to the lack of a shroud banding part. F_bl(r) is the centrifugal force of the piece of the blade between section r and the blade tip radius R_tip at a rotational speed of 3000 rpm. To calculate F_bl for the airfoil, a longitudinal element is considered according to Figure 15, and the centrifugal force is obtained by integration:

F_bl(r) = ρ ω² ∫_r^{R_tip} S(ξ) ξ dξ.     (3)

To compute the integral in equation (3), the area function S(ξ) needs to be obtained. To evaluate it, the areas of forty different sections of the blade were obtained and interpolated, giving

S(ξ) = −3.02×10⁻⁵ ξ³ + 1.36×10⁻² ξ² − 9.9×10⁻³ ξ + 2.58×10⁻³.

Consequently, F_r can be obtained as

F_r(r) = −4.837×10³ r⁵ + 2.730×10⁶ r⁴ + 9.34×10⁵ r³ − 2.851×10⁶ r² + 2.022×10⁶ r.

Finally, having the area and force functions, the stress distribution function can be obtained in accordance with equation (1); it is shown by the solid line in Figure 16. To verify the results obtained by the numerical simulation, the stress distribution due to the centrifugal force was compared with the analytical results. The numerical results (the average stress in each section) are also illustrated by circles in this figure. The agreement between the results is good, and the differences can be attributed to the assumptions made in the analytical method: the normal stress distribution in each section is assumed to be constant, and the deformation of the blade is ignored. To examine the stress on the blade from the fluid, the results of the fluid analysis, including the temperature and pressure distributions on the blade, are applied as boundary conditions, and the stresses obtained in different sections of the blade are investigated by applying a 3000 rpm rotation to the blade. Figure 17 shows the overall stress applied to the blade in the half-cooling state. Field surveys of the second stage blade fracture surfaces show that the fractures occurred at section 2 and lower sections of the blade, started to grow, and finally caused the failure of the blade. The thermo-mechanical analyses also show that the maximum stress occurs at the hub. This seems to be because of the high temperature gradients in this area, due to the proximity of these sections to the inlet opening of the cooling fluid on the one hand, and the inability to deform as a result of the root constraint on the other.
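A numeric sketch of the analytical stress distribution σ(r) = F_r(r)/S(r), using the interpolated polynomials quoted above, is given below. The hub and tip radii and the assumed units (radius in metres) are placeholders, since the paper does not state them explicitly for these fits.

```r
# Evaluate the analytical centrifugal stress along the blade span
S_fun <- function(r) -3.02e-5 * r^3 + 1.36e-2 * r^2 - 9.9e-3 * r + 2.58e-3
F_fun <- function(r) -4.837e3 * r^5 + 2.730e6 * r^4 + 9.34e5 * r^3 -
                      2.851e6 * r^2 + 2.022e6 * r

# Hypothetical hub and tip radii for the second-stage blade
r     <- seq(0.6, 1.0, length.out = 50)
sigma <- F_fun(r) / S_fun(r)       # stress = force / cross-sectional area

plot(r, sigma, type = "l",
     xlab = "radius r (m)", ylab = "centrifugal stress (Pa)")
```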
Conclusion
The internal cooling of the GE-F9 turbine blade was presented and investigated in this paper. In the first step, the cooling air flow inside the channel was simulated and evaluated. In the next step, the normal stresses were evaluated in four sections using the results obtained from the fluid analyses and the effect of the cooling system on the temperature parameters. According to the results presented in this report, the following conclusions can be drawn:
- Using a cooling system with channels in the blade significantly reduces the temperature in all parts of the blade. The results showed that the maximum temperature occurs in the mode without cooling at all sections, and the temperature difference between the cooling and no-cooling modes is most tangible at the blade root; this difference decreases with increasing height. In general, however, the temperature increases with blade height, and the maximum temperature occurs in the upper part of the blade.
- Regarding pressure, in the three modes of 6-channel cooling, 3-channel cooling, and no cooling, there is no significant change in the pressure on the blades: the pressure is almost equal in all modes, and the highest pressure is at the stagnation point near the leading edge. Another important point is that the influence of this parameter is very low compared with the stresses caused by rotation and the temperature gradient.
- Investigating the four sections cut on the blade, it can be concluded that the stress gradually decreases with increasing distance from the blade root, and the maximum stress occurs in the section near the blade root. These stresses are caused by the blade rotation, the temperature gradient, and the fluid pressure on the blade. Comparing the three modes above, the highest stress is related to the 6-channel cooling mode, and the stress difference between the cooling and no-cooling modes is at its maximum at the blade root.
- In this study, the cooling channels were considered alternately closed; however, under the expected operating conditions, channels would be closed consecutively, which leads to a more intense temperature gradient.
- The blade in full cooling mode provides optimal operating conditions for the turbine blade compared with the half-cooling and no-cooling modes. Thanks to cooling, the blade's operating temperature range is decreased, and as a consequence the material can tolerate higher stresses.

Solutions to prevent such events
- Infiltration filters can be installed in the air flow path to the compressor to prevent the entrance of dust and impurities. These filters should be inspected periodically.
- Another solution is regular inspection of the hot paths of the turbine.

Nomenclature: F_b, centrifugal force of the shroud banding; F_bl(r), centrifugal force of the section of the blade between cross-section r and the blade tip radius R_tip.
Oral Microbiota Profile in Patients with Anti-Neutrophil Cytoplasmic Antibody-Associated Vasculitis

Microbiota have been associated with autoimmune diseases, with nasal Staphylococcus aureus being implicated in the pathogenesis of anti-neutrophil cytoplasmic antibody-associated vasculitis (AAV). Little is known about the role of the oral microbiota in AAV. In this study, levels of IgG antibodies to 53 oral bacterial species/subspecies were screened using immunoblotting in plasma/serum from pre-symptomatic AAV individuals (n = 85), matched controls, and patients with established AAV (n = 78). Saliva microbiota from acute AAV patients and controls was sequenced from 16S rDNA amplicons. Information on dental status was extracted from a national register. IgG levels against oral bacteria were lower in established AAV versus pre-AAV and controls. Specifically, pre-AAV samples had, compared to controls, a higher abundance of periodontitis-associated species, paralleling more signs of periodontitis in established AAV patients than in controls. Saliva microbiota in acute AAV showed higher within-sample diversity but fewer detectable amplicon sequence variants and taxa in its core microbiota than controls. Acute AAV was not associated with an increased abundance of periodontal bacteria but rather with species in, e.g., Arthrospira, Staphylococcus, Lactobacillus, and Scardovia. In conclusion, the IgG profiles against oral bacteria differed between pre-AAV, established AAV, and controls, and the microbiota profiles differed between acute AAV and controls. The IgG shift from the pre-symptomatic stage to established disease co-occurred with treatment with immunosuppression and/or antibiotics.

Introduction
The microbiome on and in our body is believed to have evolved along with us and to play a role in health and disease, nutrition modulation, prevention of pathogen invasion, and immune system education [1]. Advances in sequencing technology have facilitated culture-independent microbiome analyses showing that dysbiosis may result in excessive immune activation and tissue damage [2]. The importance of the microbiome in disease development and progression has been suggested for several autoimmune conditions, including rheumatoid arthritis (RA) [3], systemic lupus erythematosus [3], inflammatory bowel disease [4], and vasculitis [5]. However, the influence of the microbiome in systemic vasculitis remains unclear, and studies are limited. Anti-neutrophil cytoplasmic antibody (ANCA)-associated vasculitis (AAV) is a group of diseases characterized by ANCA production, excessive neutrophil activation, and small-medium vessel vasculitis [6]. Mucosal inflammation of the upper and lower respiratory tract [24].

Table 1 footnotes: ² 7 received corticosteroids before sampling, and 10 did so on the sampling day. ³ Cytotoxic maintenance drugs were methotrexate, azathioprine, mycophenolate mofetil, rituximab, and tacrolimus. ⁴ At sampling: 2 on methotrexate, 1 on azathioprine, 1 on rituximab, and 1 on cyclophosphamide. ⁵ Antibiotic treatment within the last 3 months predating sampling.

Pre-Symptomatic AAV Individuals and Matched Controls Screened for Immunoglobulin (IgG) Antibodies to Oral Bacteria
The process of identifying pre-symptomatic AAV cases has been presented in detail previously [25].
Briefly, the Cause of Death Register and the Swedish National Inpatient Register were used to identify individuals with AAV as a first diagnosis in the discharge summary and/or cause of death between 1987 and 2011 using the codes of the International Classification of Diseases (ICD)-9 (1987-1997; 446.4, 446E) and ICD-10 (1998-2011; M30.1, M31.3, M31.7). Identified personal identity numbers were linked to the registers of five biobanks in Sweden, and individuals aged ≥18 years with a plasma or serum sample donated >1 month but <10 years before symptom onset were included. Medical records were reviewed to identify the time-point for symptom onset and confirm the AAV diagnosis [24]. In total, 85 pre-symptomatic cases (mean age (standard deviation (SD)) 52.3 (16.7) years; 57.6% women) fulfilled the defined criteria (Table 1). One population-based non-AAV control was matched to each case for sex, age, sampling date, and biobank origin. Of the 85 pre-symptomatic AAV individuals, 11 also had available samples after AAV onset. Of all samples, 80% were serum and 20% plasma, with a similar proportion of each for cases and controls. In the pre-symptomatic AAV cases, 10.6% (n = 9) were myeloperoxidase (MPO)-ANCA positive (+), and 24.7% (n = 21) were proteinase 3 (PR3)-ANCA+. Of the 11 individuals with a follow-up sample, 27.3% were MPO-ANCA+ and 72.7% PR3-ANCA+. Before sampling, none of the pre-symptomatic individuals or controls had had any AAV-related treatment or antibiotics for at least 3 months. Established AAV Cases Screened for IgG Antibodies to Oral Bacteria Of the 96 patients diagnosed with established AAV at the Department of Rheumatology and Nephrology, University Hospital, Umeå, Sweden, 78 participated in this study with a plasma sample. Medical records were reviewed to identify the time-point for onset and the AAV diagnosis [24]. The mean (SD) age at sampling was 64.3 (19.1) years, 52.6% were women, and the disease duration (mean (SD)) was 9.7 (7.1) years. During the disease period, the established AAV patients were treated according to guidelines [26] (Table 1). Thus, during active disease, 88% were treated with pulses of corticosteroids and 81% with cyclophosphamide, and the remainder received azathioprine or methotrexate. In addition, 28% were given antibiotic prophylaxis (trimethoprim-sulfamethoxazole) during active disease. In 10% of the cases with a relapse, rituximab was added. In most cases, maintenance therapy was oral prednisolone (5-12.5 mg daily) plus a cytotoxic drug (e.g., methotrexate/azathioprine/mycophenolate mofetil). Information on dental status was retrieved from the Swedish quality register on caries and periodontitis (SKaPa, www.skapareg.se, accessed on 14 December 2021) [27]. Data on the number of teeth, cause of tooth loss, probing pocket depth (PPD), and caries or restoration per tooth (third molars excluded) were available from 2010 to 2020 [28]. For periodontal status, the number of teeth with PPD ≥ 6 mm was calculated as described previously [29], and for caries, the Decayed, Missing, Filled index, which gives the sum of caries-affected tooth surfaces, was used. Information on PPD was available for 61 established AAV cases, and for these, information from the dental visit closest to the AAV diagnosis was retrieved (median difference 3.0 years after AAV diagnosis; quartile limits 0 and 8.5 years).
Caries status was available for 59 cases and, similarly, the visit closest to the AAV diagnosis was kept (median difference 3 years after AAV diagnosis; quartile limits 0 and 9.0 years). Dental status was also compiled for three controls per case, matched for sex, age, and birth year, and for a group of patients with RA [30] with dental data from the same year as the RA diagnosis (n = 557, mean age (SD) 57.7 (15.9) years, 72.4% women, 56.4% ever smokers). Smoking status was classified as being a never or ever (current and former) smoker. Acute AAV Cases and Controls for Saliva Microbiota Sequencing Finally, 25 patients with a first acute attack of vasculitis (n = 18) or acute relapse (n = 7) were recruited (acute AAV) when they were admitted to the University Hospital in Umeå and Uppsala, Sweden, along with 23 healthy volunteers matching the sex and age of the cases. The mean age (SD) of the cases was 60.9 years (18.0), and 60% were women (Table 1). Of the patients, four reported having diabetes, one was a current smoker, and 36.4% were ever smokers, whereas among the controls, 30.4% were ever smokers, and none was a current smoker. Among the acute AAV patients, seven with relapse had been on a continuous low dose of prednisolone, and 10 received corticosteroids on the day of sampling. Furthermore, based on plasma samples, 20% were MPO-ANCA+, and 80% were PR3-ANCA+ (Table 1). In saliva samples, antibodies against MPO-ANCA were found in two MPO-ANCA+ patients, and none had PR3-ANCA antibodies. Of the individuals who had a first acute attack of vasculitis, 9 had received antibiotics for 1-10 days in primary care, and 5 of the individuals with an acute relapse were either on long-term antibiotics or had received them during the previous 2 weeks. Thus, 14 AAV individuals, but none of the controls, had received antibiotics within the previous 3 months. Whole stimulated saliva was collected for 3 min into ice-chilled tubes while the participant chewed on a 1-g piece of paraffin wax. Participants had refrained from eating or drinking for 2 h before sampling. All samples were stored at −80 °C until DNA extraction for microbiota analysis. In addition, serum/plasma samples from pre-AAV individuals were screened for IgG class ANCA using a high-sensitivity ELISA (ORGENTEC Diagnostika, Mainz, Germany) with the cut-off for positivity set by the manufacturer as ≥1. Furthermore, the pre-AAV serum/plasma samples and acute AAV saliva samples were analyzed using second-generation (capture-based) PR3-ANCA and MPO-ANCA ELISAs (SVAR Life Science, Malmö, Sweden) with cut-offs at 7 and 8 IU/mL, respectively, as previously described [25]. Saliva samples were diluted 10 times more than serum/plasma samples. Saliva Microbiota Sequencing DNA was extracted from 400 µL of saliva using the GenElute Bacterial Genomic DNA kit (Sigma-Aldrich Co, Stockholm, Sweden). In short, saliva was thawed on ice, centrifuged (5 min at 13,000× g), and lysed in a buffer with mutanolysin and lysozyme, followed by RNase and Proteinase K treatment. DNA was purified, washed, and eluted at room temperature in 150 µL elution buffer. DNA quality was assessed using a NanoDrop 1000 Spectrophotometer (Thermo Fisher Scientific, Uppsala, Sweden) and quantified using the Qubit 4 Fluorometer (Invitrogen, Thermo Fisher Scientific, Waltham, MA, USA). The DNA concentration ranged from 17.5 to 26.8 ng/µL.
The same DNA extraction method was applied to negative (Milli-Q Ultrapure water) and positive controls (a mock mixture of 14 oral bacterial species). Bacterial V3-V4 16S rDNA amplicons were generated by PCR amplification from saliva and from the positive and negative control DNA using the forward primer 341F (ACGGGAGGCAGCAG) and the reverse primer 806R (GGACTACHVGGGTWTCTAAT), as described by Caporaso et al. [33]. An equal amount of DNA (50 ng) was applied to each PCR reaction, and equal amounts of amplicon libraries were pooled before purification using AMPure XP beads (Beckman Coulter, Stockholm, Sweden). All samples were analyzed in one run by Illumina MiSeq 2 × 300 bp sequencing (Illumina, Stockholm, Sweden) at the Swedish Defense Research Agency research facility in Umeå, Sweden, including 5% PhiX and a 12.5 pM amplicon library. Acquired sequences were demultiplexed; the paired-end reads were merged; primers, chimeric and ambiguous sequences, and PhiX were removed; and amplicon sequence variants (ASV) were identified using the open-source software package DADA2 in QIIME2 (https://qiime2.org, accessed on 10 December 2021) [34,35]. ASVs were taxonomically classified against the expanded Human Oral Microbiome Database (eHOMD, http://www.homd.org, accessed on 10 December 2021) [36]. ASVs with ≥2 reads and 98.5% identity with a named species or unnamed phylotype in eHOMD were retained, and those with the same Human Microbial Taxon (HMT) ID number were aggregated. The mock (positive control) contained a mixture of Actinomyces odontolyticus, Bifidobacterium longum, Bifidobacterium dentium, Corynebacterium matruchotii, Gemella haemolysans, Haemophilus parainfluenzae, Lactobacillus fermentum, Lactobacillus vaginalis, Porphyromonas gingivalis, Rothia mucilaginosa, Streptococcus intermedius, Streptococcus mutans, Streptococcus sanguinis, and Streptococcus parasanguinis. All 14 mock-included species were detected in each batch, and no reads were generated for the negative controls (ultrapure water) using the same bioinformatic criteria as for the samples. In addition, targeted quantitative PCR analyses were performed in a QuantStudio 6 system (Applied Biosystems by Life Technologies, Carlsbad, CA, USA) using TaqMan Universal Master Mix (Applied Biosystems) and a predesigned TaqMan kit to detect and quantify Staphylococcus in saliva [37]. All samples were run in duplicate and quantified against a standard curve from 1 to 1 × 10⁻⁵ ng/µL DNA isolated from the Staphylococcus aureus CCUG64138 strain. Statistical Analyses SPSS (IBM Corp. version 27.0) and PAST 4 software packages were used for descriptive statistics, including means and medians, standard deviations (SD), 95% confidence intervals (95% CI), and proportions (%). Group differences in continuous variables were tested using the Mann-Whitney U test and in categorical variables using the chi-square test (χ²) or Fisher's exact test. Differences between the means (95% CI) of the number of teeth with PPD ≥ 6 mm and DMFS were evaluated using general linear modeling (GLM) adjusted for sex, age, and smoking, with sensitivity analyses for birth year and the time difference between dental recordings and AAV diagnosis. Binary logistic regression was performed with subgroups of interest as the dependent variable and the antibody response to each of the 68 bacterial strains, including sex, age, and sample type (plasma/serum), as covariates. Results are presented as bar graphs based on beta values (β) and standard errors (SE).
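As a concrete reference for the screening step just described, the following is a minimal sketch of a per-taxon binary logistic regression in Python with statsmodels. It is our illustration rather than the authors' code, and the column names (group, sex, age, is_serum, plus one IgG column per taxon) are assumptions about how such a table might be laid out.

```python
# Illustrative per-taxon logistic regression screen (not the authors' code).
# Assumed columns: 'group' (1 = case, 0 = control), 'sex', 'age',
# 'is_serum' (1 = serum, 0 = plasma), plus one IgG-level column per taxon.
import pandas as pd
import statsmodels.api as sm

def screen_taxa(df: pd.DataFrame, taxa: list) -> pd.DataFrame:
    rows = []
    for taxon in taxa:
        # group ~ IgG(taxon) + sex + age + sample type
        X = sm.add_constant(df[[taxon, "sex", "age", "is_serum"]].astype(float))
        fit = sm.Logit(df["group"], X).fit(disp=0)
        rows.append({"taxon": taxon,
                     "beta": fit.params[taxon],   # log-odds per unit IgG
                     "se": fit.bse[taxon],
                     "p": fit.pvalues[taxon]})
    return pd.DataFrame(rows).sort_values("p")
```

The beta and SE columns returned here correspond to the quantities shown in the paper's bar graphs.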
Microbiota diversity was evaluated for alpha-diversity using the Shannon diversity index (which considers abundance and evenness), the evenness index (which evaluates evenness), and the Faith PD index (a measure of biodiversity based on phylogeny). Beta-diversity was evaluated by Bray-Curtis dissimilarity (based on species abundances), the Jaccard index (based on presence), unweighted UniFrac (based on phylogenetic similarities of presence), and weighted UniFrac (based on phylogenetic similarities of abundances) using QIIME2 [35]. All tests were two-sided, and p < 0.05 was considered significant after controlling for multiple testing; adjusted p values are presented as the false discovery rate-derived q value. Multivariate analyses included unsupervised principal component analysis (PCA) for group separation and supervised orthogonal projection to latent structures-discriminant analysis (OPLS-DA) to evaluate binary classification. SIMCA P+ version 17.0 (Sartorius Stedim Data Analytics AB, Malmö, Sweden) was used. PCA evaluated participants' microbiota patterns using the identified genus or species abundance and presence. OPLS-DA was used to identify genera and species associated with being a control or vasculitis patient or with clinical treatments related to AAV. All variables were log-transformed using the SIMCA function "auto transform selected variables as appropriate" and scaled to unit variance when needed. K-fold cross-validation was performed by systematically removing every seventh observation and predicting the left-out observations (Q²-values and analysis of variance [ANOVA] of cross-validated residuals). The results were displayed in two-dimensional score loading plots projecting component 1 (t[1]), with the maximal separation of the observations, and the orthogonal component (to[1]), representing within-group variation. Multivariate partial least squares regression modeling estimates the explanatory and predictive power of many (and even co-varying) x-variables when modeled simultaneously against one or more outcomes (y-variable(s)). Variable importance in the projection (VIP) values reflect the significance of each x-variable in explaining the variation in y. VIP values are presented for the predictive components only, with VIP > 1 considered significant. For high-dimensional class comparisons of the microbiota of controls and vasculitis patients, the linear discriminant analysis (LDA) effect size (LEfSe) method was used [38]. GLM was used to evaluate PCA loading scores (t[1] or to[1]) in subgroups, either unadjusted or adjusted for potential confounding factors, i.e., sex, age, ever/never smoking, diabetes, hypertension, antibiotic treatment, and MPO and PR3 antibody profile. Eleven pre-AAV individuals had follow-up samples as established AAV patients, allowing for a longitudinal evaluation of the total and specific IgG during disease progression. In line with the cross-sectional findings, total IgG levels had declined in established AAV compared with the pre-symptomatic stage (p = 0.0028), as had levels for 22 bacterial species (p ≤ 0.010), of which 10 overlapped with those identified in the cross-sectional comparison: Aggregatibacter actinomycetemcomitans, Bifidobacterium longum, Filifactor alocis, Fusobacterium nucleatum subsp. nucleatum, Fusobacterium periodonticum, Lautropia mirabilis, Prevotella pleuritidis, Streptococcus sanguinis, Streptococcus oralis, and Tannerella forsythia. Figure 2 shows four examples.
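The q values reported in the following sections are FDR-adjusted p values. For reference, here is a minimal sketch of the Benjamini-Hochberg procedure that produces such q values; this is a standard implementation written for illustration, not taken from the authors' analysis code.

```python
# Benjamini-Hochberg FDR q-values for a vector of p-values (illustrative).
import numpy as np

def bh_qvalues(pvals):
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)                          # ascending p-values
    ranked = p[order] * m / np.arange(1, m + 1)    # p * m / rank
    q = np.minimum.accumulate(ranked[::-1])[::-1]  # enforce monotonicity
    out = np.empty(m)
    out[order] = np.clip(q, 0.0, 1.0)              # restore input order
    return out

# Example: screen of several taxa; taxa with q < 0.05 pass the FDR filter.
print(bh_qvalues([0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]))
```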
Dental Status in Established AAV Given elevated IgGs to several periodontitis-associated oral species in pre-AAV individuals but decreased levels in established AAV, we compared periodontitis severity between established AAV and matched control samples using deep PPD (≥6 mm) as a proxy. After adjustment for smoking, patients with established AAV had three times more deep pockets than controls matched for gender, age, and smoking (mean (95% CI) 1.3 (0.8, 1.8) in controls versus 3.9 (3.1, 4.8) in established AAV, p < 0.001). Additional adjustment for birth year did not affect the numbers, whereas adjustment for the time difference between dental examination and AAV diagnosis attenuated the difference somewhat, but it remained statistically significant (p = 0.020). Restricting the analysis to cases with the dental examination the same year as the AAV diagnosis (n = 20) and their matched controls yielded similar mean values as for the whole group, i.e., 1.3 versus 3.5 teeth with PPD ≥ 6 mm, p = 0.032. For comparison, the number of deep pockets in the established AAV group was compared with values in early RA patients, as RA has been reported as associated with deteriorated periodontal status. After adjustment for smoking, the RA group had four times more deep pockets than equally matched controls (mean (95% CI) 1.3 (1.2, 1.5) in controls versus 4.8 (4.5, 5.0) in RA, p < 0.001). Findings for caries-affected tooth surfaces did not differ between individuals with established AAV and matched controls (mean Decayed, Missing, Filled index (95% CI) 52 (46, 57) versus 49 (40, 59), p = 0.735). Microbiota Diversity Characterization in Acute AAV versus Controls Compared with controls, individuals with acute AAV had significantly higher between-sample diversity (beta-diversity) in their saliva microbiota regardless of whether the comparisons were between quantitative (Bray-Curtis distance matrix, q = 0.001; weighted UniFrac distance matrix, q = 0.018) or qualitative measures (Jaccard distance matrix, q = 0.001; unweighted UniFrac distance matrix, q = 0.006) (Figure 3a-d). In contrast, they had significantly less richness (Shannon index, q = 0.034) and evenness (evenness index, q = 0.042) than controls (Figure 3e,f) and tended to have fewer detectable ASVs (q = 0.101) (Figure 3g,h) in saliva, but phylogenetically their saliva microbiota did not differ from controls (Faith index, q = 0.942) (Figure 3i).
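To make the diversity measures behind these comparisons concrete, here is a minimal, self-contained sketch of Shannon diversity, Pielou evenness, and Bray-Curtis dissimilarity computed from raw ASV counts. These are the textbook definitions written out for illustration; the actual analyses were run in QIIME2.

```python
# Alpha diversity (Shannon, Pielou evenness) and Bray-Curtis dissimilarity
# computed from raw ASV count vectors; textbook definitions, illustrative only.
import numpy as np

def shannon(counts):
    p = counts[counts > 0] / counts.sum()      # relative abundances, zeros dropped
    return float(-(p * np.log(p)).sum())

def pielou_evenness(counts):
    s = int((counts > 0).sum())                # observed richness
    return shannon(counts) / np.log(s) if s > 1 else 0.0

def bray_curtis(x, y):
    return float(np.abs(x - y).sum() / (x + y).sum())

a = np.array([120, 30, 0, 5])
b = np.array([60, 60, 10, 0])
print(shannon(a), pielou_evenness(a), bray_curtis(a, b))
```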
To understand the nature of the beta-diversity in acute AAV samples, we evaluated the number of shared taxa ("core" microbiota) by sliding cut-off levels for the proportions of shared species. The analyses were done separately for the acute AAV and control groups. When all participants were expected to harbor a species, control samples had 16 species in the core microbiota compared with two species in samples from the acute AAV group (∆-14; Figure 4). The "core" microbiota consistently contained fewer species in the acute AAV samples than in control samples until the requirement was that 10% of the respective group members should harbor the species. At lower levels, the diversity was higher in the acute AAV group than in controls. LEfSe Identified Differences in Saliva Microbiota in Acute AAV versus Control Samples LEfSe analysis revealed that compared with matched controls, samples from acute AAV patients had lower abundances of several genera, most notably Haemophilus, Fusobacterium, Alloprevotella, Schaalia, and Leptotrichia (Figure 5a,b, LDA score > 2.0, p < 0.05). Additionally, they had higher relative abundances of Arthrospira, Cardiobacterium, Lactobacillus, Ruminococcaceae G-1, and Staphylococcus (Supplementary Table S3). These results were confirmed in non-parametric, univariate analyses. Illumina sequencing identified the Staphylococcus genus in 24% (n = 6) of the acute AAV cases but none of the controls (p = 0.023) (Supplementary Table S2). Sequencing detected Staphylococcus aureus in two AAV cases but none of the controls (p > 0.05). Sensitivity analysis (quantitative PCR) also detected Staphylococcus aureus in 6 AAV cases and in 2 controls (p > 0.05). Thus, species in the Staphylococcus genus tended to be more prevalent in saliva from acute AAV than from controls, whereas Staphylococcus aureus did not.
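The core-microbiota analysis above amounts to counting, for each prevalence cut-off, how many taxa are detected in at least that fraction of a group's samples. A minimal sketch follows; the boolean presence matrix is an assumed input format, not the authors' data structure.

```python
# Size of the shared ("core") microbiota at sliding prevalence cut-offs.
# `presence` is a samples x taxa boolean matrix for one group (assumed input).
import numpy as np

def core_sizes(presence,
               cutoffs=(1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1)):
    prevalence = presence.mean(axis=0)   # fraction of samples carrying each taxon
    return {c: int((prevalence >= c).sum()) for c in cutoffs}

# With matrices like the study's, core_sizes(controls)[1.0] -> 16 and
# core_sizes(acute_aav)[1.0] -> 2 would reproduce the counts reported above.
```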
Data-Driven Profiling of Saliva Microbiota in Acute AAV versus Controls Multivariate PCA modeling of the abundance and prevalence of taxa at the genus level (R² = 29%, Q² = 16%) indicated that a fraction of acute AAV patients clustered distinctly apart from the controls and the other AAV cases (Figure 6a). GLM using the PCA loading scores confirmed a significant difference between the sample groups for component 1 (p t[1] = 0.00021) but not component 2 (p t[2] = 0.051). Sensitivity analysis, adjusted for sex, age, and smoking, did not alter the significant difference for component 1 (p = 0.000041). Subsequent OPLS-DA analysis with AAV status as the dependent variable and the PCA set of independent variables generated a stable model that explained 68% and predicted 35% of the sample variation (p = 0.00091 in PLS CV-ANOVA) (Figure 6c). Important genera (VIP > 1.2) for the separation by component 1 that were increased in abundance and/or detection in acute AAV samples were Arthrospira, Fretibacterium, Lactobacillus, Scardovia, Staphylococcus, and Veillonellaceae [G-1], while 31 genera (including Aggregatibacter, Alloprevotella, Bergeyella, Butyrivibrio, Catonella, and Fusobacterium) showed reduced abundance and/or prevalence (Supplementary Table S4).
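OPLS-DA itself is implemented in the commercial SIMCA package. As an open-source approximation of the same idea, the sketch below fits a two-class PLS model with scikit-learn and computes VIP scores using the standard VIP formula; it is an illustration, not the authors' pipeline.

```python
# PLS-DA with VIP scores as an open-source stand-in for SIMCA's OPLS-DA.
# X: samples x taxa matrix (log-transformed, unit-variance scaled);
# y: 0 = control, 1 = acute AAV. Illustrative, not the authors' pipeline.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def plsda_vip(X, y, n_components=2):
    pls = PLSRegression(n_components=n_components).fit(X, y)
    t = pls.x_scores_                     # (n_samples, A) latent scores
    w = pls.x_weights_                    # (n_features, A) weights
    q = pls.y_loadings_.ravel()           # (A,) y-loadings
    ssy = np.sum(t ** 2, axis=0) * q ** 2 # y-variance captured per component
    wnorm = w / np.linalg.norm(w, axis=0)
    vip = np.sqrt(X.shape[1] * (wnorm ** 2 @ ssy) / ssy.sum())
    return pls, vip                       # flag taxa with VIP > 1.2 as influential
```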
Figure 6. Multivariate models of saliva microbiota in acute AAV patient and control samples run at the genus and species levels. PCA score plots show the position of each participant (dot) based on microbiota composition at the (a) genus level and (b) species level in the first and second components (t[1] and t[2]). OPLS-DA loading scatterplots are based on models including participant status as the dependent variable (t[1]) and the abundance and prevalence of genera (c) or species (d) detected in saliva samples as the independent block (x). The subgroup intra-variation is observed in the orthogonal loading score (to[1]). The model goodness-of-fit parameter, R², represents the fraction of the variance of the y variable explained by the model, whereas Q² suggests the model's predictive performance. Model validation by 7-fold CV-ANOVA is shown for models (c,d). p-values obtained by GLM analysis on PCA component loading scores (t[1] and t[2]) when comparing acute AAV and control samples are shown for models (a,b). Repeating the PCA analysis with the abundance and prevalence of taxa at the species level yielded results similar to those for the genus level, i.e., acute AAV samples separated from the controls in component 1 (p t[1] = 0.00001) but not component 2 (p t[2] = 0.221) in a modestly strong model (R² = 20%, Q² = 7%) (Figure 6b). Additionally, the OPLS-DA model comparing the acute AAV cases and controls was strong, with 88% explained and 25% predicted of the model variation (p = 0.015 in PLS CV-ANOVA) (Figure 6d). Acute AAV Subgroup Characterization The PCA score plot (Figure 6a) indicated that the acute AAV cases represented at least two subgroups. An OPLS-DA model restricted to the acute AAV cases explained 95% and predicted 80% of the bacteria profile variation among the acute AAV cases (p = 1.2 × 10⁻⁶) (Figure 7a). In this model, antibiotic treatment emerged as a significantly influential variable for clustering the acute AAV patients (VIP = 1.27). Sex, age, diabetes, hypertension, GPA vs. MPA AAV, PR3-ANCA or MPO-ANCA antibody profile, and glucocorticoid treatment were all non-influential (Figure 7b). The results were consistent between the use of genera and species for the OPLS-DA analysis. To follow up on the systematic impact of antibiotic exposure in the acute AAV cases, we reran the model including only the 11 cases who did not have any antibiotic exposure for the previous 3 months. This model confirmed enrichment of Staphylococcus aureus in the acute AAV cases (VIP 2.0) compared with the controls, as well as several caries-associated species with a VIP value > 1.2 (7 species in Lactobacillus, Streptococcus mutans, Scardovia wiggsiae, 2 species in Bifidobacterium, Prevotella denticola, and Veillonellaceae [G-1]). However, the 11 acute AAV cases that did not have antibiotic treatment for the previous 3 months also had higher abundances, compared with controls, of several species that have been reported as being associated with periodontitis, i.e., Aggregatibacter actinomycetemcomitans, 2 species in Actinomyces and Fusobacterium, Dialister invisus, Prevotella nigrescens, Tannerella forsythia, and 5 species in Treponema. A full list of taxa with VIP values > 1.2 is presented in Supplementary Table S5.
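The Q² and CV-ANOVA statistics quoted above come from SIMCA's 7-fold cross-validation. The sketch below shows only the Q² part of that workflow with scikit-learn; CV-ANOVA itself is SIMCA-specific and is not reproduced here.

```python
# Cross-validated Q2 for a PLS model, mimicking the 7-fold scheme in which
# one seventh of the observations is left out and predicted; illustrative only.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold

def q2_7fold(X, y, n_components=2):
    y = np.asarray(y, dtype=float)
    press = 0.0                                    # prediction error sum of squares
    for train, test in KFold(n_splits=7).split(X):
        m = PLSRegression(n_components=n_components).fit(X[train], y[train])
        press += float(((y[test] - m.predict(X[test]).ravel()) ** 2).sum())
    return 1.0 - press / float(((y - y.mean()) ** 2).sum())
```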
Discussion In this study, IgG responses to oral bacteria and saliva microbiome profiles were compared in individuals with AAV at three different stages of the disease. Although the total IgG response to the tested panel of oral bacteria did not differ between controls and pre-symptomatic AAV, the profiles did, as did the profiles and total IgG levels in cases with established AAV. Furthermore, the saliva bacteria profile in acute AAV cases differed distinctly from controls. The IgG profiles of 53 oral bacterial species/subspecies representing commensal and opportunistic taxa differed distinctly between the controls and the pre- and established AAV participants, respectively. Thus, IgG levels to several species that have been associated with periodontitis, such as F. alocis, T. forsythia, and A. actinomycetemcomitans, and species in Prevotella and Fusobacterium, were elevated in pre-AAV individuals but reduced in established AAV patients [39][40][41]. These species have previously been shown to be more prevalent in other autoimmune diseases [13,28,42,43], and based on this, a link has long been hypothesized between periodontitis and chronic inflammatory disease, such as RA [44]. In support of these findings, established AAV patients had more teeth with deep periodontal pockets than controls, but still with less pronounced signs than patients with established RA. In fact, oral manifestations, such as gingival inflammation (strawberry gingivitis), ulcerations, and tooth loss, have been reported to be present in 6-13% of GPA patients [45][46][47]. Additionally, an unexpected but noteworthy finding was that the established AAV status was characterized by lower serum IgG levels not only to periodontitis-associated bacteria but also to commensal bacteria in the oral core microbiota, i.e., species commonly found in all or almost all subjects, such as S. mitis, S. sanguinis, S. oralis, and R. mucilaginosa. Combined with the contrasting finding of enrichment of periodontitis-associated species in pre-AAV, the present findings support a hypothesis of different dysbiotic oral microbiomes in the pre- and established AAV stages due to host traits, disease, or treatment-related exposure. Unfortunately, the blood biobanks did not have samples suitable for the characterization of the oral microbiota. For this reason, we identified a group of consecutively included AAV patients at the acute stage of the disease, and a few with an acute relapse, and collected saliva for microbiota characterization.
The most prominent findings among all 25 acute AAV patients, i.e., with half of the group having had antibiotic treatment at or in the days before sampling, were lower saliva microbiota richness (alpha-diversity) but higher beta-diversity than in controls. Here, a combination of LEfSe analysis and multivariate modeling suggested increased detection prevalence of species in the genera Arthrospira, Cardiobacterium, Lactobacillus, Ruminococcaceae G-1, Staphylococcus, Fretibacterium, Veillonellaceae [G-1], and Scardovia. When the analysis was restricted to the 11 acute AAV cases who did not have antibiotic exposure for at least 3 months, higher abundance was confirmed for these (except Cardiobacterium and Ruminococcaceae G-1) and for more genera, such as Streptococcus and Treponema. A higher abundance of Streptococcus has been reported in active GPA [7], which was confirmed in the non-antibiotic acute AAV group, but antibiotic treatment appeared to abolish the enrichment despite Streptococcus still being the most abundant genus in the saliva of all participants (28%). Among the 11 non-antibiotic-exposed acute AAV cases, several species known to be associated with periodontitis, such as Aggregatibacter actinomycetemcomitans, Tannerella forsythia, and species in Fusobacterium and Treponema, were enriched compared to the controls, as were several key actors in the development of dental caries, namely Streptococcus mutans, Scardovia wiggsiae, bifidobacteria, and lactobacilli. When including the antibiotic-treated acute AAV cases, the enrichment of caries-associated species was still seen, but the enrichment of periodontitis-associated species was nullified. We lacked information on dental status in the acute AAV group, but the higher abundance of periodontitis-associated species was in line with more signs of periodontitis in established AAV cases, whereas no difference was seen in caries experience compared with controls. Information from other studies is lacking. The IgG findings in sera/plasma from the different stages of AAV did not parallel the microbiota findings in saliva from acute AAV cases. This may reflect that the samples were not from the same individuals and that the acute AAV group was comparably small. The IgG response to the test panel of oral bacteria species was significantly lower in established AAV samples than in pre-AAV and control samples. This finding may suggest that either the transformation from a pre-symptomatic stage to an established disease or the immunosuppressive treatment given to AAV patients induces an overall lowered IgG response to microorganisms, including oral bacteria. Patients in the established AAV group had been treated extensively during the active course of the disease, and at the blood sampling, 58% were still on oral glucocorticoids and 89% on cytotoxic drugs. Furthermore, 28% had been treated prophylactically with trimethoprim-sulfamethoxazole (an antibiotic mix) during the active course of the disease. Along this line, Kronbichler et al. reported that treatment with trimethoprim-sulfamethoxazole reduced bacteria diversity and disease relapses [48], and Rhee et al. showed that non-glucocorticoid immunosuppression normalized bacterial and fungal composition in the noses of GPA patients and reduced bacteria diversity in AAV patients compared with controls [8].
Whether the prophylactic treatment with antibiotics or the immunosuppressive treatment contributes to the lower IgG responses to oral bacteria in established AAV is not possible to distinguish in our study, partly because of the low incidence of the disease (limited number of available cases) and partly because many patients were on a combination of medications. We have recently reported that about 37% of symptom-free pre-AAV individuals were ANCA (PR3- or MPO-ANCA) positive in samples collected between 1 month and 10 years before AAV debut [25]. Thus, ANCA positivity is suggested as an early marker of disease development, similar to what was reported for antibodies against citrullinated proteins (ACPA) in pre-RA [49]. Further, we have also reported activation of the complement system as an early event in the progression of AAV [50]. The enhanced immune response to oral bacteria in pre-AAV versus controls may be part of a general immunological disturbance before the clinical manifestation of AAV, may potentially reflect an ongoing periodontal inflammatory process triggering the development of autoimmunity, or may indicate that the inflammatory progression in pre-AAV affects the bacterial profile. However, analyses stratified for the presence of ANCA as an early sign of disease progression did not support any of the suggested pathways. Previous studies have shown that 60-70% of PR3-ANCA-positive GPA patients are chronic carriers of Staphylococcus aureus in the nose, compared with 20-30% of unaffected individuals [14,51]. Accordingly, S. aureus has been suggested to be linked to AAV disease activity and relapse. Furthermore, antibacterial treatment has been reported to reduce these events [14,17]. We found that 24% of the acute AAV cases had detectable Staphylococcus in saliva versus none among the controls (p < 0.05), with higher abundances in non-antibiotic-treated acute AAV cases versus controls but not when antibiotic-treated cases were included. The latter finding agrees with findings by Rhee et al., who reported no difference in the relative abundance of S. aureus between nose samples from GPA cases and controls [8]. However, as with the IgG profiling, the microbiota findings, including a higher prevalence of Staphylococcus, do not allow for the distinction of causal associations and potential effects of acute-phase or long-term treatments. Thus, although saliva was collected in acute cases of AAV and the majority (19/25) had not yet entered AAV treatment, 14 of the 25 patients had received antibiotics within the previous 3 months, i.e., 5 with acute AAV who had a relapse with previous treatments, and 9 on suspicion of an infection. The uncertainty of the role of treatment versus the bacteria per se, and the finding that the controls formed a narrow cluster whereas the acute AAV cases were scattered into at least two subgroups in the score plot from the microbiota-based PCA, prompted further evaluations. As the scattering among the AAV patients suggested that underlying driving forces beyond AAV status were influential, a follow-up OPLS-DA was run among the cases only. This revealed treatment with antibiotics as a significant factor for the subgroup separation, and possibly glucocorticoid exposure as a candidate factor (though not statistically significant), whereas all other clinical data (autoantibodies, smoking habits) or comorbidities (diabetes, hypertension) were non-influential for the PCA subgroup separation in acute AAV. Thus, the results by Rhee et al.
of glucocorticoid effects on nasal microbiota [8] were not fully supported, but our limited group size calls for further evaluations. Taken together, our results emphasize the effects of acute-phase and long-term treatments on the microbiota; analyses carried out on individuals undergoing treatment should be interpreted with caution. The strength of the present study was the availability of samples from biobanks, which allowed the identification of individuals with pre-AAV and patients with established AAV, together with samples from controls. The biobank search yielded a unique identification of pre-AAV individuals not previously published and a comparably large group of established AAV patients. Furthermore, in this study, we could follow a subgroup of 11 AAV patients from a pre-symptomatic to an established stage of the disease. A weakness is the limited sample size, reducing the power for sensitivity analyses of treatments, such as antibiotics, corticosteroids, and/or cytotoxic drugs. An additional weakness is that the assessment of periodontitis was based on pocket depth alone; however, clinical attachment loss and bone loss are not yet systematically reported to the national register on dental status in Sweden. Another potential weakness is the limitation of the V3-V4 fragments in distinguishing phylogenetically close bacterial species, such as Streptococcus and Lactobacillus. To limit the effect of misclassification, we requested 98.5% similarity with the matched database sequences, accepted only species represented by at least two sequences, and performed analyses at both the genus and species levels. Furthermore, the 16S rDNA amplicon sequencing method is restricted to bacterial identification and does not identify fungi, protozoa, and viruses. Future studies should aim to target the potential causal roles of bacteria colonizing the oral cavity in AAV or other forms of vasculitis because many oral species are sources for gut-, pharyngeal-, and airway-colonizing species. Conclusions The conclusions from the present study are that the immune response to oral bacteria differed significantly between pre-AAV, established AAV, and controls, but with different dysbiosis profiles in the two AAV states. It was also concluded that periodontitis manifestations were more severe in AAV patients than in controls but did not reach the levels seen in RA patients. Whether these differences reflect variation in the microbiota per se or in the immune response cannot be distinguished from the present results. Similarly, it was concluded that saliva microbiota communities differed between AAV patients with acute disease and controls, but conclusions on causality cannot be drawn. Furthermore, our results suggest that immunosuppressive treatment after a manifest diagnosis reduces the bacterial overload identified at the pre-symptomatic stage. The increased immune response to oral microbiota may indicate clinically unmanifested AAV years before symptom onset. However, this needs to be evaluated in future human studies or experimental settings. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/microorganisms10081572/s1. Supplementary Table S1: Description of bacteria used in the checkerboard assay and culture conditions.
Supplementary Table S2: Prevalence and relative abundance of bacterial genera and species detected by multiplex sequencing in saliva from acute AAV cases (n = 25) and controls (n = 23). Supplementary Table S3: Results from linear discriminant analysis effect size (LEfSe) analysis of taxa identified in the saliva samples from acute AAV cases and controls. Orange color indicates taxa with increased abundance/prevalence in acute AAV and blue color taxa with increased abundance/prevalence in controls. Supplementary Table S4: List
v3-fos-license
2023-08-28T05:09:18.103Z
2023-08-25T00:00:00.000
261213373
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": null, "oa_url": null, "pdf_hash": "3450304a3df704b3787e5398939e09d1cd4b72aa", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44639", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "sha1": "3450304a3df704b3787e5398939e09d1cd4b72aa", "year": 2023 }
pes2o/s2orc
The natural history of Pelizaeus–Merzbacher disease caused by PLP1 duplication: A multiyear case series Key Clinical Message This study aimed to characterize the clinical features, developmental milestones, and the natural history of Pelizaeus–Merzbacher disease (PMD) associated with PLP1 gene duplications. The study examined 16 PMD patients, ranging in age from 7 to 48 years, who had a documented PLP1 gene duplication. The study examined and analyzed the medical and developmental histories of the subjects utilizing a combination of resources that included medical history questionnaires, medical record reviews, and a 31-point functional disability scale that had been previously validated. The data extracted from the medical records and questionnaires for analysis included information related to medical and developmental histories, level of ambulation and cognition, and degree of functional disability. The summation of findings among the study population demonstrated that the presenting symptoms, developmental milestones achieved, and progression of symptoms reported are consistent with many previous studies of patients with PLP1 duplications. All patients exhibited onset within the first year of life, with nystagmus predominating as the first symptom noticed. All patients exhibited delays in both motor and language development; however, many individuals were able to meet several developmental milestones. They exhibited some degree of continued motor impairment, with none having the ability to walk independently. All patients were able to complete at least some of the cognition achievements, and although not all were verbal, a number were able to use communication devices to complete these tasks. A critical tool of the study was the functional disability scale, which provided a major advantage in helping quantify the clinical course of PMD, and for several, we were able to gather this information at more than one point in time. These reported findings in our cohort contribute important insight into the clinical heterogeneity and potential underlying mechanisms that define the molecular pathogenesis of the disease. This is one of only a small number of natural history studies examining the clinical course of a cohort of patients with PLP1 duplications within the context of a validated functional disability scoring system. This study is unique in that it is limited to subjects with PLP1 gene duplications. This study demonstrated many commonalities to other studies that have characterized the features of PMD and other PLP1-related disorders but also provides significant new insights into the evolving story that marks the natural history. KEYWORDS: gene duplication, humans, mutation, myelin proteolipid protein, Pelizaeus-Merzbacher disease/genetics, Pelizaeus-Merzbacher disease/pathophysiology, phenotype
| INTRODUCTION In 1885 Friedrich Pelizaeus, a German physician, first identified a genetic disorder in five boys in a single family with nystagmus, spasticity of the limbs, and developmental delay. 1 Twenty-five years later, in 1910, Ludwig Merzbacher independently reexamined this family, described further the neuropathology of 14 affected individuals, and found that all affected members shared a common ancestor. 2 Together, Pelizaeus and Merzbacher identified this rare X-linked inherited white matter disorder. Pelizaeus-Merzbacher disease (PMD) is today recognized as part of a group of disorders caused by mutations in the PLP1 gene. The gene encodes both PLP, a major component of central nervous system myelin, and an alternatively spliced isoform, DM20, which is a minor component of both central and peripheral nervous system myelin. 3 Duplications of the PLP1 gene cause the majority (50%-75%) of PMD. Point mutations in the coding or splice site regions are found in most of the remaining patients, although a very small portion is caused by deletions of the PLP1 gene. 4,5 The disorder is thus genetically heterogeneous. Individuals with PLP1-related disorders are not only genetically heterogeneous but also clinically heterogeneous. They can, however, be loosely grouped into three main clinical phenotypes: (1) classic PMD, characterized by nystagmus, hypotonia, and delay in motor development with onset in the first year of life, 6,7 (2) connatal PMD, characterized by severe hypotonia and stridor with onset at birth and death within the first 10 years of life, 6,8 and (3) SPG2, characterized by a slowly progressive X-linked spastic paraparesis. 3,6,8 Classic PMD is usually caused by a duplication of the PLP1 gene within its locus on the X-chromosome and is typically described as slowly progressive; however, several patients have been observed to maintain a stable clinical course. 7 Variations in the size of PLP1 duplications, and whether or not the region encompassed includes flanking genes, may contribute to variability. Connatal PMD is often caused by point mutations within the PLP1 gene producing misfolding of PLP1, ER retention, and activation of the unfolded protein response (UPR). SPG2 is caused by PLP1 mutations that allow the protein to traverse the ER and become inserted in myelin.
Other intermediate phenotypes depend on the specifics of the nature and location of the PLP1 mutation. In order to facilitate future treatment interventions and to understand the natural history of PMD, in this work we have evaluated the clinical presentation and progression of a group of patients with PLP1 duplications. This was done both retrospectively, through chart review, and prospectively, by collecting clinical data. Taken together, these results will be useful for future treatment strategies, developing biomarkers, and timing of treatment interventions in this disease. | Editorial policies and ethical considerations This study was approved as an exempt study by the Wayne State University Institutional Review Board. Informed consent and assent, when applicable, were obtained from all participants. | Participants Individuals, or the parents or guardians of individuals, who have a PLP1-related disorder and an identified PLP1 pathogenic variant, were invited to participate in the study. Participants were identified through one of three mechanisms: (1) those who had previously participated in a PLP1-related disorders (PLP1-RD) study at Wayne State University/Detroit Medical Center (WSU/DMC) or who were participating in a different PLP1 study at the time of enrollment; (2) those who had been seen/were being seen at WSU/DMC for clinical care or who had contacted the primary investigator because of a diagnosis of PLP1-RD; (3) those who had genetic testing for PLP1-RD through the molecular genetics laboratory at AI Dupont in Wilmington, DE. Recruitment was conducted either through mail or in person (for those who were being seen at WSU/DMC or Dupont during the period of enrollment). Potential participants/parents/guardians were sent or given a study packet that included the consent form with HIPAA authorization, an assent form (when applicable), a medical history questionnaire, a family history questionnaire, medical record release forms, and a "decline to participate" form. Individuals who had already filled out the questionnaires as part of ongoing/previous PLP1-related disorders studies or clinical care instead received a follow-up "current medical history" questionnaire. For all potential participants, those who did not return the "decline to participate" form within 2 weeks were contacted by telephone by a study investigator to answer questions about the study. Participants were also asked to consider taking part in an optional long-term follow-up which involves completing a one-page "current medical history" questionnaire every 1-2 years. | Functional disability score The functional disability score (FDS) is a clinical scale that has been developed and previously validated 9 to analyze the clinical disability in patients with PLP1-related disorders. The clinical scale measures the ability of patients to perform routine tasks of daily living. The scoring system does not depend on any one neurologic sign but is a reproducible scale that can collate responses from a patient's caregiver.
The inter-rater reliability of the scoring system is greater than 95% among a small team of neurologists at Wayne State University School of Medicine who estimated the functional disability of a group of 20 patients with genetically confirmed PMD. 9 The FDS of this cohort of patients was determined by direct examination of the patient, interview with the caregiver of the patient, and/or written report of the caregiver. | Analysis Data were extracted from the medical records and questionnaires for analysis, including information related to medical and developmental histories, level of ambulation and cognition, and degree of functional disability. Medical and developmental histories, level of ambulation, and level of cognition were assessed based on questions from the medical history and current medical history questionnaires. The degree of functional disability was assessed based on the score from a 31-point functional disability scale (FDS) from 0 (lowest level of achievement) to 31 (highest level of achievement). 9 This score was derived from measures of nine areas of function: employment/education, speech, diet, dressing, toileting, drawing/writing, walking, sitting, and breathing (see Table 1). FDS scores were gathered from one of two mechanisms: (1) from chart review based on FDS scores previously obtained as part of prior participation in a PLP1-RD study or as part of clinical care for PLP1-RD or (2) from a series of questions asked in the medical history and current medical history questionnaires. | RESULTS Sixteen subjects/parents or guardians completed the medical history questionnaire (MHQ) and medical record requests (see Table 2). The average age of the study subjects at the time of completion was 22 years (range 7-48 years). Sixteen subjects had at least one functional disability scale score available. The average age at which the first functional disability scale (FDS1) score was obtained was 19 years (range 7-42 years; standard deviation 10.7 years). Nine of those individuals had at least two functional disability scale scores available. The average age at which the second functional disability scale (FDS2) score was obtained was 29 years (range from 14 to 48 years). The average number of years of follow-up from FDS1 to FDS2 was 5.4 years (range 4-7 years). Two individuals had a third functional disability scale score available. The average age at which the third functional disability scale (FDS3) score was obtained was 48.5 years (range 46-51 years). The average number of years of follow-up from FDS2 to FDS3 was 5.5 years. In total, three sibling pairs were included in this study. | Presenting symptoms For a majority of the subjects in this study, the first symptom identified was nystagmus (11 of 16, 68.8%). In most cases, nystagmus was an isolated symptom, but 3 of the 11 cases (27.3%) each had one additional symptom. These included seizures, "couldn't lift legs," and head lag in a baby that also had cleft lip and palate. In the remaining subjects, the presenting symptoms included "missed developmental milestones," "could not sit up," "head lag," "lower extremity tremors," and "strabismus." The average age at which the presenting symptom was noticed was 3.1 months (range from birth to 12 months; SD = 3.4 months). TABLE 1 Functional disability rating scale. 9 Education: 0 = no formal schooling; 1 = special school or special education classes; 2 = regular classes, but not at grade level; 3 = regular school...
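As an illustration of how the 31-point FDS total is assembled from its nine domains, the sketch below encodes the summation described above. The per-domain point ranges are defined in the published scale [9] and are not reproduced here, so the function only validates the overall 0-31 bound; domain names are our own identifiers.

```python
# Illustrative encoding of the 31-point FDS as the sum of nine domain scores
# (0 = lowest, 31 = highest achievement). Per-domain point ranges are defined
# in the published scale [9] and are deliberately not restated here.
FDS_DOMAINS = ("education_employment", "speech", "diet", "dressing",
               "toileting", "drawing_writing", "walking", "sitting",
               "breathing")

def fds_total(domain_scores: dict) -> int:
    missing = [d for d in FDS_DOMAINS if d not in domain_scores]
    if missing:
        raise ValueError(f"missing FDS domains: {missing}")
    total = sum(int(domain_scores[d]) for d in FDS_DOMAINS)
    if not 0 <= total <= 31:
        raise ValueError("FDS total must lie between 0 and 31")
    return total
```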
| Common features/symptoms Ninety-four percent (15/16) of the subjects were reported to have had nystagmus at some point in their life. The average age at which nystagmus was noticed was 2.3 months (range from birth to 9 months, SD = 2.5, n = 12 subjects for whom age of onset was available). All subjects reportedly had hypotonia (15 of 15). For most, the age of onset was between birth and 12 months (9/10 responses). For one individual, the onset of hypotonia occurred after a car accident when the individual was in his 30s. Sixty-three percent (10/16) of subjects had feeding problems. For the five subjects for which the age of onset was reported, it varied from less than 1 year of age to 31 years of age. Forty percent (6 of 15) of subjects had gastroesophageal reflux. Age of onset was reported for five subjects and ranged from birth to 15 years of age (average age 6). All subjects reported to have gastroesophageal reflux were also reported to have feeding problems; however, not all subjects who had feeding problems also had reflux. | Age at diagnosis The average age of diagnosis was 5.1 years, ranging from birth to 18 years. This includes diagnoses made prenatally or at birth because of a previously affected sibling. | Developmental milestones Table 3 shows what developmental milestones subjects achieved and, when available, the age at which they achieved them. (Table 3 note: ages are in months; +, achieved, age not available; WNR+, achieved within the normal range for the milestone; −, did not achieve; blank, missing data.) Of those who responded to the developmental milestone questions, all (12 of 12) reported that the affected individual was able to hold their head up and turn back to front (10 of 10). Ninety-one percent (10 of 11) were able to turn front to back. Fifty-four percent (6 of 11) were able to crawl either combat and/or belly style. Thirty-six percent (4 of 11) were able to pull to a sit, but only 17% (2 of 12) were able to sit alone. Thirty-one percent (4 of 13) were able to take their first steps, one "with help," one "with a kiddie walker," and one "with assistance and support." None (12 of 12) were able to stand alone, and none (11 of 11) were able to climb stairs. Thirty-three percent (4 of 12) were able to pedal a tricycle. Regarding toileting, 60% (9 of 15) had achieved toilet training. In terms of language development, 83% (10 of 12) demonstrated the ability to babble, 81% (13 of 16) were able to speak their first words, and 50% (6 of 12) were able to speak in sentences. | Ambulation None of the study subjects (0 of 16) were able to walk unassisted; however, none were bedbound. Ninety-four percent (15 of 16) reported that they currently use a wheelchair "all of the time" and the remaining individual reported using a wheelchair "most of the time." Thirty-eight percent (6 of 16) reported that they currently used braces "always," and 6% (1 of 16) reported that they currently used a walker "most of the time." Nineteen percent (3 of 16) reported using other devices (crutches, stander, and gait trainer). None of the participants (0 of 16) reported using a cane. Seventy-five percent (12 of 16) of subjects reported using a wheelchair starting at 0-10 years of age, 6% (1 of 16) starting at 10-20 years, and 6% (1 of 16) starting at 20-30 years.
Sixty-nine percent (11 of 16) reported using braces starting at 0-10 years of age; 25% (4 of 16) reported first using a walker at 0-10 years of age; and 6% (1 of 16) reported first using a walker at 10-20 years of age. | Cognition All subjects (15 of 15) were reported to know or respond to their names and were able to follow two-step commands. Ninety-three percent (14 of 15) could name two objects in the room, 86% (12 of 14) could add, and 77% (10 of 13) knew their address. At least some of the subjects required the use of communication devices to complete these tasks. Sixty-nine percent (11 of 16) were reported to be able to read. Responses regarding reading level varied widely and ranged from "a little" or "letters" up to a "12th-grade" reading level. The average change in FDS scores from FDS1 to FDS2 was −0.7 (range −6.5 to 7.5). Five individuals scored lower on FDS2 (average change −3.3), three scored higher (average change 3.5), and one remained unchanged. The average change in FDS scores from FDS2 to FDS3 was −3.75 (both scored lower). FDS individual scores: answers from the following nine individual categories included in the overall FDS were also analyzed. Answers from FDS1 were most often included in this analysis; however, the range of responses selected for each question included responses from any available FDSs (FDS1 through FDS3). Education/employment: regarding education, 75% of subjects (9 of 12) attended a special school or had special education classes (Table 5). Responses ranged from "regular school grade-appropriate for age (within 2 years)" to "special school or special education classes," with no participants selecting "no formal schooling." Concerning employment, the most often selected response was sheltered workshop (i.e., works at an institution dedicated to disabled employees) (50%, 2 of 4); however, responses ranged from a special job (i.e., works at a conventional workplace, but requires special supervision) to unable to work/homebound. From FDS1 to FDS2, five subjects' scores in education/employment increased, one decreased, and three did not change. Speech: the most often selected response was speech understandable, but with difficulty (37.5%, 6 of 16), with responses ranging from no verbal communication to detectable speech disturbance but easily understood. None of the participants selected normal speech. From FDS1 to FDS2, one participant's score for speech increased, three decreased, and six did not change. Diet: the most often selected response was "normal swallowing" (5 of 16), with responses ranging from "normal swallowing" to "tube feedings only." From FDS1 to FDS2, two participants' scores in diet increased, five decreased, and two did not change. Dressing: the most often selected response was total dependence (50%, 8 of 16), with responses ranging from independent with decreased efficiency to total dependence. From FDS1 to FDS2, three participants' scores in dressing decreased and six did not change. Toileting: the most often selected response was "total dependence" (9 of 16), with responses ranging from "normal" to "total dependence." From FDS1 to FDS2, one participant's score for toileting increased, three decreased, and five did not change. Drawing/writing: the most often selected response was can scribble but cannot draw or write letters (62.5%, 10 of 16), with responses ranging from can draw or write letters to cannot reach for and grasp a writing utensil. None of the subjects were reported to be able to write/draw normally for their age.
From FDS1 to FDS2, three participants' scores for drawing/writing increased, two decreased, and four did not change. Walking: the most often selected response was "wheelchair or bedbound" (56%, 9 of 16), with responses ranging from can walk a few steps, but needs adaptive aids or other support to wheelchair or bedbound. From FDS1 to FDS2, one participant's score for walking increased, three decreased, and five did not change. Sitting: the most often selected response was "cannot sit without support" (87.5%, 14 of 16). From FDS1 to FDS2, one participant's score for sitting decreased and the remaining eight did not change. Breathing: the most often selected response was normal breathing (62.5%, 10 of 16), with responses ranging from normal breathing to intermittent use of non-invasive respiratory support. None of the participants selected ventilator or constant respiratory support. From FDS1 to FDS2, one participant's score for breathing increased, one decreased, and seven did not change. | DISCUSSION This study aimed to characterize the clinical features, developmental milestones, and natural history of Pelizaeus-Merzbacher disease in a cohort of subjects, ranging in age from 7 to 48 years, who had a documented PLP1 gene duplication (PMD). We examined and analyzed the medical and developmental histories of subjects utilizing medical history questionnaires, medical record reviews, and a 31-point functional disability scale. 9 Characterizing the natural history, the extent to which the condition progresses over time, and the variability in both is important in providing genetic counseling and anticipatory guidance to parents and guardians of individuals with PMD. Understanding the natural history is also critical in the event that treatments to alter the disease course become available in the future. The presenting symptoms, developmental milestones achieved, and progression of symptoms reported in our cohort were consistent with many previous studies of patients with PLP1 duplications. All our patients exhibited onset within the first year of life, with nystagmus predominating as the first symptom noticed, consistent with previous reports. 7,8,10,11 In addition, most had nystagmus at some point in their lives and all had hypotonia, key characteristics of the classic PMD phenotype. Velasco Parra et al. 12 reported on seven Colombian patients with Pelizaeus-Merzbacher disease, ranging in age from 6 to 16 years and with various PLP1 pathogenic variants. Unlike in our cohort, in their series only 28.6% (2 of 7) had early onset nystagmus and only 57% (4 of 7) had hypotonia. However, in a recent cohort study of 111 Chinese individuals with PMD and various PLP1 pathogenic variants who were followed for a median of 53 months, 99.1% (110/111) presented with nystagmus and 83.8% (93 of 111) with hypotonia. 13 In our cohort, all of our subjects exhibited delays in both motor and language development; however, many individuals were able to meet several developmental milestones. Similar to previous studies, a subset of the PMD patients in our study were able to obtain head control, the ability to sit, and the ability to speak several words or sentences, and some were even able to walk with assistance. 7,8,11,13 All individuals exhibited some degree of continued motor impairment, with none of the participants having the ability to walk independently. We found that all individuals relied on the use of wheelchairs for most or all of their ambulation.
Like previous studies, the patients in our cohort seemed to exhibit large phenotypic variability. 11 This variability occurred not only within the cohort but also between siblings. In terms of cognitive achievement, previous studies have observed that individuals with PLP1 duplications often have some degree of intellectual disability, ranging from mild to severe. 7,11 All individuals in our cohort were able to complete at least some of the cognition achievements, such as knowing or responding to their name and following two-step commands. Although not all individuals were verbal, a number were able to use communication devices to complete these tasks. Additionally, many were able to read, although reading levels varied between individuals. By utilizing the functional disability scale, 9 we were able to quantify the clinical course of PMD, and for several individuals, we were able to gather this information at more than one point in time. The clinical course of PMD has previously been described as slowly progressive; however, to date, this has not been adequately characterized. In a study by Regis et al., 7 five patients were followed for a period ranging from 5 to 12 years. In that study, the clinical course remained stable for four patients, while one showed a mild worsening in the last year of follow-up. It is interesting to note that in our study population, there were individuals who scored both lower and higher on FDS2 versus FDS1 (as well as an individual whose FDS score remained unchanged). Given the limited number of individuals in our study with more than one FDS score, a comparison between FDS1 and FDS2 scores was not significant; however, our study did not demonstrate a progressive clinical course. Given the limited number of follow-up years in our study, it remains possible that our population is consistent with previous studies suggesting a slowly progressive disorder. Further research using a larger study population and FDS scores at additional time points would be necessary to characterize the clinical course more fully. Additionally, a pattern may exist whereby individuals with PMD gain skills for a period before deteriorating, as has been previously suggested 8 ; however, our data set did not allow us to look for any such potential patterns. | CONCLUSION This is one of only a small number of natural history studies examining the clinical course of a cohort of patients with PLP1 pathogenic variants and is unique in that it is limited to subjects with PLP1 gene duplications. This study demonstrated many commonalities with other studies that have characterized the features of PMD and other PLP1-related disorders, but it also offered some new insights into the natural history. There are several limitations of the current study. First, the size of the cohort is small (n = 16), with fewer individuals having completed a second or third FDS. This limitation is a reflection of the day-to-day demands that are required in providing time-intensive care for PMD patients. For this reason, many statistical analyses were not possible. Given the small sample size, it may be difficult to generalize or extrapolate a conclusion from this study to a larger population of PMD patients, particularly given the extensive variability observed within and between families. Future research utilizing a larger cohort will be necessary to further clarify the natural history and clinical course of PMD.
This work nonetheless provides a good foundation that opens a new window into the natural history of patients with PLP1 duplications. A second limitation arises from the potential for inconsistency between measurements, given that some FDS scores were completed via self-report while others were completed based on in-person physical examinations. Additionally, questionnaires were filled out based on the self-report of the subject or the report of a parent or guardian. An important strength is the ability to quantify and analyze the FDS through self-report. A third limitation was the small number of time points available for several of the participants, which limited our ability to comment on whether PMD was progressive. In addition, there were differences in the number of years of follow-up between participants, and initial questionnaires and FDS scores were gathered at a wide range of ages. Finally, there were several sibling pairs analyzed in the study. If the natural history and clinical course of PMD are assumed to vary less between members of the same family than they do between members of different families, the inclusion of multiple members of the same family in our study has the potential to bias our results. ACKNOWLEDGMENT The authors thank all the patients and their families for their participation in this study. FUNDING INFORMATION Some of the data used in this study were collected as part of a study funded by the European Leukodystrophy Association ELA (Grant # ELA 2011-02215). CONFLICT OF INTEREST STATEMENT John Kamholz, Jeremy Laukka, and Angela Trepanier declare that there is no conflict of interest.
v3-fos-license
2017-10-20T05:26:09.869Z
2015-09-18T00:00:00.000
22524729
{ "extfieldsofstudy": [ "Materials Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://aip.scitation.org/doi/pdf/10.1063/1.4931641", "pdf_hash": "75faafc2b97db2082cfecef4d88ad07c564843d1", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44640", "s2fieldsofstudy": [ "Physics" ], "sha1": "75faafc2b97db2082cfecef4d88ad07c564843d1", "year": 2015 }
pes2o/s2orc
Acousto-plasmofluidics: Acoustic modulation of surface plasmon resonance in microfluidic systems Daniel Ahmed,1 Xiaolei Peng,2 Adem Ozcelik,1 Yuebing Zheng,2,a and Tony Jun Huang1,a 1Department of Engineering Science and Mechanics, Department of Biomedical Engineering, The Pennsylvania State University, University Park, PA 16802 USA 2Department of Mechanical Engineering, Materials Science and Engineering Program, Texas Materials Institute, The University of Texas at Austin, Austin, TX 78712 USA a)Authors to whom correspondence should be addressed. Electronic mail: zheng@austin.utexas.edu (Y.B.Z.); junhuang@psu.edu (T.J.H.) (Received 1 July 2015; accepted 10 September 2015; published online 18 September 2015) We acoustically modulated the localized surface plasmon resonances (LSPRs) of metal nanostructures integrated within microfluidic systems. An acoustically driven micromixing device based on bubble microstreaming quickly and homogeneously mixes multiple laminar flows of different refractive indices. The altered refractive index of the mixed fluids enables rapid modulation of the LSPRs of gold nanodisk arrays embedded within the microfluidic channel. The device features fast response for dynamic operation, and the refractive index within the channel is tailorable. With these unique features, our "acousto-plasmofluidic" device can be useful in applications such as optical switches, modulators, filters, biosensors, and lab-on-a-chip systems. [http://dx.doi.org/10.1063/1.4931641] Localized surface plasmon resonances (LSPRs) are charge density oscillations which are confined to subwavelength conductive nanoparticles within an oscillating electromagnetic field.1-3 Associated with LSPR are sharp spectral absorption and scattering peaks as well as strong near-field electromagnetic enhancement. LSPR can play an important role in applications such as ultrasensitive spectroscopy,4 biosensing,3,5-7 imaging,8-10 nanophotonic devices,11-16 and medical diagnostics and therapy.17,18 Applications of LSPR require active tuning of the resonance wavelength as well as high sensitivity to local refractive index changes. Nanostructures of gold and silver are common for LSPR applications, and their sizes, shapes, and morphology have been optimized for such applications.19-26 For example, for gold nanospheres of LSPR extinction between 500 and 600 nm, the central wavelength of the absorption peak may be tuned over 60 nm by varying the diameter between 10 and 100 nm.20 The plasmonic properties of metallic nanostructures are defined upon fabrication. This fixed definition is a hurdle when dynamically reconfigurable functionalities are needed in LSPR-based devices. One solution is to build a multiplexed analysis platform, which exploits LSPR for high-throughput laboratory and clinical settings.3,4,8-10 Yu et al.
designed and fabricated gold nanorod based molecular probes with aspect ratios of 1.5, 2.8, and 4.5 for multiplexed identification of cell surface markers.27 In a similar strategy, a duplexed sensor featured patterned Ag nano-triangles of two different heights, yielding plasmon resonances at 683 and 725 nm.10,28 In these approaches, many samples are fabricated at once by multiplexing techniques, but the fabrication is complex and is suitable only for specific specimens. The fusion of plasmonics and microfluidics, known as plasmofluidics, has yielded lab-on-a-chip devices with the advantages of integration and reconfigurability.29-32 These plasmofluidic devices feature precise sample delivery and analysis, small sample volumes, and high integration.33-37 Moreover, plasmofluidics offers an unprecedented ability to tune a device's optical properties (including LSPRs) simply by changing fluids. Here, we demonstrate dynamic modulation of LSPR by integrating acoustics38-42 with plasmofluidics. We developed an acoustic-based micromixing technique: millisecond-scale mixing of two laminar streams with different refractive indices. Diffusion across the interface yields localized refractive indices and creates refractive index gradient profiles.33,43 A change in the environmental refractive index of Au nanodisks fabricated on a glass substrate is induced via acoustically driven oscillating microbubbles trapped in the sidewalls of the microfluidic channel. The change can be reversed by turning the acoustic field on and off, resulting in millisecond-scale, repeatable LSPR tuning. By selective mixing of input fluids with different refractive indices, we realized tailorable, millisecond-scale modulation of the refractive index. Figure 1(a) is a schematic of the experimental setup as well as the working principle of the acoustically driven plasmofluidic device. Gold nanodisks, each of diameter 180 nm and in arrays of period 320 nm, were fabricated on glass substrates by conventional nanosphere lithography: a template formed by the self-assembly of monodisperse nanospheres on flat surfaces acts as an etching/deposition mask, combined with two types of reactive ion etching processes.44,45 A single-layered, Y-shaped (two inlets and one outlet) microchannel (length: 5 mm, width: 120 μm, height: 50 μm) was fabricated of polydimethylsiloxane (PDMS) by soft lithography and a mold-replica technique.46 The microfluidic channel periodically featured rectangular cavities. These were activated with oxygen plasma and bonded to the glass substrate which supported the gold nanodisk arrays. A piezoelectric transducer (273-073, RadioShack, USA) mounted to the glass slide generated acoustic waves. Once liquid was injected into the channel, air bubbles became trapped within the cavities due to contact line pinning at the leading edge of the cavity when the channel is first filled. FIG. 1. (a) Schematic of the experimental setup.
The microfluidic channel and the piezoelectric transducer were bonded to a glass slide with Au nanodisk arrays (Inset: Au nanodisk arrays on glass substrates). (b) Laminar flow of liquids with different refractive indices (in the absence of acoustic waves). (c) Fluid of combinatorial refractive index, resulting from mixture of two fluids actuated by acoustic waves. When a trapped bubble was exposed to a uniform acoustic field of wavelength much larger than the bubble's diameter, the microbubble oscillated. Viscous damping in the microchannel displaced the oscillating fluid, which induced a steady flow around the air bubble, a phenomenon known as acoustic microstreaming.47 When the frequency driven by the transducer neared the resonance frequency of the trapped microbubble, the oscillation amplitude of the liquid-air interface was maximized.48-50 We exploited this phenomenon to realize rapid micromixing. Fig. 1(b) shows the laminar flow when the transducer was off, and Fig. 1(c) when on. The oscillating microbubbles disrupted the clear liquid-liquid interface and rapidly mixed the fluids. By regulating the fluids of different refractive indices, we modulated the LSPRs of the gold nanodisk arrays. To demonstrate acoustic mixing, we infused the two inlets with dye and buffer solutions, both at 5 μL/min. Once we established a laminar flow (Fig. 2(a)), the bubbles were actuated at the resonance frequency. The developed microstreaming from the oscillating bubbles disrupted the clear liquid interface and induced mixing (Fig. 2(b)). We observed upon mixing a significant change in refractive index. Before mixing, the refractive index corresponds to the Z-profile, and the gradient at the interface depends on the flow rate and the miscibility of the two liquids (Fig. 2(c)). Once the transducer was turned on, the mixing of the two fluids yielded a change in volumetric-average refractive index. With the knowledge of the refractive index changes induced by mixing, we predicted the modulation of the LSPRs. To demonstrate the micromixing-enabled modulation of LSPR, we used deionized water (H2O, with a refractive index of 1.33) and calcium chloride solution (CaCl2, with a refractive index of 1.44 at a concentration of 5 M). FIG. 2. (a) Laminar flow of CaCl2 solution and water in the absence of acoustic waves. (b) Mixing of ink and water in the presence of acoustic waves. (c) Modulation of refractive index change across the channel with the transducer turned on and off.
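The post-mixing index can be sanity-checked from the two inlet indices. The Python sketch below applies a simple volume-weighted (linear) mixing rule under the equal-flow condition stated above (both inlets at 5 μL/min); this is an approximation for illustration (the paper instead quotes the index of the diluted, 2.5 M CaCl2 solution), though it lands close to the quoted 1.384.

```python
# Rough estimate of the refractive index of the homogeneously mixed stream,
# assuming equal volumetric flow from both inlets and a simple
# volume-weighted (linear) mixing rule -- an approximation only.

def mixed_index(n1: float, q1: float, n2: float, q2: float) -> float:
    """Volume-weighted average refractive index of two merged streams."""
    return (n1 * q1 + n2 * q2) / (q1 + q2)

n_water = 1.33   # deionized water
n_cacl2 = 1.44   # 5 M CaCl2 solution
q = 5.0          # uL/min at each inlet (equal flow)

print(mixed_index(n_water, q, n_cacl2, q))  # ~1.385, close to the 1.384
                                            # quoted for diluted (2.5 M) CaCl2
```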
A micro-spectroscope (Spectrapro 2300i, Acton, USA) was used before mixing to detect the signal at the CaCl2 side. Figure 3 shows extinction spectra collected from the region of the nanodisk array located beneath the CaCl2 flow before and after switching the transducer on. The single peak arises from the in-plane dipole resonance of the Au nanodisk arrays. A blue shift occurred after mixing due to the decreased refractive index of the mixed liquid. To relate the peak shift to the micromixing-induced refractive index change, we first calibrated the sensitivity of the Au nanodisk arrays relative to the change in the surrounding refractive index. Calibration was realized by injecting liquids of known refractive indices into the channel and recording the corresponding extinction spectra. Plotting the peak wavelength as a function of the refractive index of the surrounding fluid, we found the sensitivity to be 120 nm/RIU (RIU: refractive index unit). Second, we calculated the refractive index before and after mixing as 1.44 (5 M) and 1.384 (2.5 M), respectively. The latter comes from the diluted CaCl2, assuming that the two flows are of the same volume and are homogeneously mixed. From Fig. 3, we measured the peak wavelength of the LSPR to be 691 nm and 684 nm before and after mixing, respectively. According to the two data pairs (refractive index: 1.44 and peak wavelength: 691 nm; refractive index: 1.384 and peak wavelength: 684 nm), we calculated the ratio of the shift in peak wavelength to the change in refractive index as 125 nm/RIU. The good match (deviation < 5%) between the calibrated sensitivity (120 nm/RIU) and the calculated ratio (125 nm/RIU) indicates well-controlled, predictable modulation of the refractive indices, and thereby of the LSPR. Next we calibrated the response speed. Reversible tuning of the LSPRs was achieved by switching the transducer on and off (Fig. 4). From the close-ups of parts of the curve (Figs. 4(b) and 4(c)), the response times for the "off" and "on" switching processes were estimated to be 270 ms and 250 ms, respectively. We employed a UV-Vis-IR spectrometer (USB4000, Ocean Optics, USA) to measure the dynamic process. There is an intrinsic signal delay associated with the integration process in the photodetector. The actual response time should be faster than what was measured, and a more accurate measurement technique based on photodiodes is under development.
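The two sensitivity figures quoted above can be reproduced with a short calculation. The sketch below is a minimal illustration using only the data pairs given in the text: it recovers the 125 nm/RIU ratio with a linear fit and compares the predicted shift against the independently calibrated 120 nm/RIU sensitivity. A real calibration would fit spectra for several liquids of known index.

```python
# Recovering the LSPR bulk sensitivity (nm per refractive-index unit, RIU)
# from peak-wavelength vs. refractive-index data. The two data pairs below
# are taken from the text.
import numpy as np

# (refractive index, LSPR peak wavelength in nm) after and before mixing
n = np.array([1.384, 1.44])
peak_nm = np.array([684.0, 691.0])

slope, intercept = np.polyfit(n, peak_nm, 1)   # linear model: peak = S*n + b
print(f"sensitivity S = {slope:.0f} nm/RIU")   # -> 125 nm/RIU

# Predicted blue shift for the mixing-induced index change, using the
# independently calibrated sensitivity of 120 nm/RIU:
S_calibrated = 120.0
print(f"predicted shift = {S_calibrated * (1.44 - 1.384):.1f} nm")  # ~6.7 nm
# vs. the measured 691 - 684 = 7 nm shift; 125 vs. 120 nm/RIU is within ~5%
```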
In summary, we have modulated the LSPR of metallic nanostructures by acoustically driven oscillations of sidewall-trapped microbubbles. Our work demonstrates that fluids within microchannels are effective active media for modulating LSPR, by means of efficient acoustic transduction. The modulation range and the spectral bands are flexible, and the response time is on the order of milliseconds. Our hybridized acoustic-microfluidic-plasmonic devices can lead to the development of many reconfigurable optical components such as switches and modulators. FIG. 4. (a) Time dependence of extinction efficiency for Au nanodisks embedded in the fluids when the transducer was turned on and off. (b,c) Close-ups of parts of the curve shown in (a).
v3-fos-license
2017-06-22T14:12:21.900Z
2012-06-18T00:00:00.000
9960802
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://cancerci.biomedcentral.com/track/pdf/10.1186/1475-2867-12-30", "pdf_hash": "8029293d53a675798840152d53c0aba219bbd048", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44641", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "sha1": "8029293d53a675798840152d53c0aba219bbd048", "year": 2012 }
pes2o/s2orc
Knockdown of PLC-gamma-2 and calmodulin 1 genes sensitizes human cervical adenocarcinoma cells to doxorubicin and paclitaxel Background RNA interference (RNAi) is a powerful approach in functional genomics to selectively silence messenger RNA (mRNA) expression and can be employed to rapidly develop potential novel drugs against a complex disease like cancer. However, naked siRNA, being anionic, is unable to cross the anionic cell membrane through passive diffusion; therefore, delivery of siRNA remains a major hurdle to overcome before the potential of siRNA technology can be fully exploited in cancer. pH-sensitive carbonate apatite has recently been developed as an efficient tool to deliver siRNA into mammalian cells by virtue of its high-affinity interaction with the siRNA and the desirable size distribution of the resulting siRNA-apatite complex for effective cellular endocytosis. Moreover, internalized siRNA was found to escape from the endosomes in a time-dependent manner and efficiently silence gene expression. Results Here we show that carbonate apatite-mediated delivery of siRNA against the PLC-gamma-2 (PLCG2) and calmodulin 1 (CALM1) genes has led to the sensitization of a human cervical cancer cell line to doxorubicin and paclitaxel, depending on the dosage of the individual drug, whereas no such enhancement in cell death was observed with cisplatin, irrespective of the dosage, following intracellular delivery of the siRNAs. Conclusion Thus, the PLCG2 and CALM1 genes are two potential targets for gene knockdown in doxorubicin- and paclitaxel-based chemotherapy of cervical cancer. Background Genes are transcribed into mRNAs and subsequently translated into proteins to carry out the major functions within a cell, and mutations in certain genes leading to their suppression or overexpression are usually responsible for both acquired and genetic diseases. Delivery of functional gene(s) or gene-silencing element(s) could be a potential option for restoring the normal functions of the cell. RNA interference (RNAi), which can selectively silence mRNA expression in the cell cytoplasm, can be utilized to develop new drugs against target therapeutic genes [1-5]. RNAi can be harnessed for selective gene inhibition via two different routes: 1) cytoplasmic delivery of short interfering RNA (siRNA) for directly breaking down the specific mRNA, and 2) nuclear delivery of gene expression cassettes to express a short hairpin RNA (shRNA), which is further processed by the cellular machinery to siRNA in the cytoplasm [6]. However, siRNA, a synthetic RNA duplex of 21-23 nucleotides, is more advantageous than shRNA because of the difficulty in the construction of a shRNA expression system [6] and the requirement of the expression system to overcome the nuclear barrier for shRNA expression [7]. siRNA in the cytoplasm of the cells incorporates into a multiprotein RNA-induced silencing complex (RISC) and is unwound into single-stranded RNAs by Argonaute 2, a multifunctional protein within the RISC, forming an antisense strand-associated RISC in order to guide and selectively degrade the complementary mRNA with the help of Argonaute 2 [8]. Perfect hybridization between the antisense strand of siRNA and the target mRNA leads to degradation of the mRNA near the center of the target-siRNA duplex [8]. However, the strong anionic phosphate backbone, with consequential electrostatic repulsion from the anionic cell membrane, is an obstacle to the passive diffusion of siRNA across the membrane [9].
The hydrophobic lipid bilayer could pose an additional barrier to the hydrophilic siRNA. Moreover, naked siRNA can be degraded by plasma nucleases and is even subject to renal elimination due to its small size before reaching the target site in vivo [10,11]. A number of existing non-viral vectors have been developed for intracellular siRNA delivery, with limited efficacy [8]. Usually, a non-viral vector, being cationic, can electrostatically bind an anionic siRNA to form a stable complex, thus protecting it from nuclease-mediated degradation, enabling it to cross the plasma membrane through endocytosis, and finally facilitating its endosomal escape [8]. Cancer is a complex disease responsible for millions of deaths worldwide, and despite remarkable efforts made in recent decades, only limited success has been achieved so far in curing various types of cancer. The clinical efficacy of current chemotherapeutic drugs is often limited owing to their toxic effects on normal cells, and patients can tolerate only doses which are therapeutically insufficient, thus leading to chemoresistance and subsequent tumor recurrence [12]. Since cancer is the result of overexpression or suppression of signaling pathways aiding cancer cell survival and proliferation, non-viral vector-mediated delivery of siRNAs specific for the genes of these pathways to cancer cells would be a potential treatment option that might additionally render cancer cells extremely sensitive to cytotoxic chemotherapy [11]. Among the signaling cascades, the MAP kinase, PI-3 kinase, and Ca2+-calmodulin pathways are extensively involved in the proliferation and survival of various cancer cells [13-15]. On the other hand, conventionally used chemotherapy drugs induce apoptosis of cancer cells by interfering with major cellular functions, which might have some cross-talk with the components of cell proliferation/survival pathways. siRNA-mediated knockdown of the genes encoding the enzymes of those pathways, therefore, might not only slow down the growth of cancer cells but also sensitize them to anti-cancer drugs. In the Ca2+-calmodulin pathway, stimulation of either G protein-coupled receptors or receptor tyrosine kinases by growth factors activates the phospholipase C (PLC) enzyme, which, in turn, hydrolyses the membrane phospholipid phosphatidylinositol 4,5-bisphosphate (PIP2) to diacylglycerol (DAG) and inositol (1,4,5)-trisphosphate (IP3). DAG activates PKC, while IP3 binds to its receptor on the endoplasmic reticulum, allowing diffusion of Ca2+ from the ER to increase intracellular [Ca2+] [16]. The released Ca2+ binds to calmodulin (CaM), and Ca2+/CaM functions as an allosteric activator of a considerable number of protein kinases regulating cell proliferation and apoptosis [17]. Recently, we have developed an efficient siRNA delivery system based on some unique properties of carbonate apatite: electrostatic affinity for binding anionic siRNA, the ability to prevent crystal growth so as to generate nano-sized particles for efficient endocytosis, and fast dissolution kinetics in acidic endosomal compartments to facilitate the release of siRNA from the particles as well as from the endosomes, leading to efficient silencing of reporter gene expression. Moreover, nanoparticle-assisted delivery of validated siRNA against cyclin B1 resulted in significant inhibition of cancer cell growth [18,19].
Here we show that carbonate apatite-mediated delivery of siRNA against the PLC-gamma-2 (PLCG2) and calmodulin 1 (CALM1) genes sensitized a human cervical cancer cell line (HeLa cells) to doxorubicin- and paclitaxel-induced cell death, depending on the doses of the drugs, while no such synergistic effect was observed with cisplatin, another commonly used chemotherapy drug. Results and discussion Roles of PLCG2 and CALM1 in the proliferation/survival of cervical cancer cells In order to investigate the potential roles of PLCG2 and CALM1 in the proliferation or survival of HeLa cells, which express both of the proteins [20-23], specific validated siRNA (10 nM) against PLCG2 or CALM1 mRNA was added together with Ca2+ (3 mM) to the bicarbonate-buffered DMEM prior to incubation at 37 °C for 30 min to form carbonate apatite/siRNA complexes. Figure 1 shows the cell viability, as assessed by MTT assay, following 48 h of consecutive incubation of HeLa cells with the apatite complexes carrying either anti-PLCG2 or anti-CALM1 siRNA. Almost 10% of the cells were killed due to the silencing of either PLCG2 or CALM1 gene expression, indicating that PLCG2, an upstream molecule, and CALM1, a downstream molecule, of the Ca2+-calmodulin pathway are critically involved in the proliferation or survival of HeLa cells. While both of the siRNAs were validated by the manufacturer (QIAGEN) using quantitative RT-PCR to confirm their knockdown efficiency of 82%, the relatively low efficacy of either treatment in killing cancer cells as compared to the free particles (positive control) was possibly due to the constitutive expression of the genes in spite of the cleavage of a substantial amount of the respective mRNAs, and to the active roles played by the MAP kinase and PI-3 kinase pathways in cell survival or proliferation. Influences of PLCG2 and CALM1 gene knockdown on cisplatin-induced cell toxicity Cisplatin is one of the most effective anti-cancer drugs for solid tumors, including ovarian, testicular, cervical, and small cell lung cancers [24,25]. Treatment of HeLa cells with 1 μM of cisplatin for 2 consecutive days caused 25% cell death compared with particles only (Figure 2), and almost the same level of cell death was observed for the treatment where both apatite/siRNA complexes and cisplatin were incubated together with the cells for the same period of time, suggesting an additive effect on cell death, probably owing to the lack of cross-talk(s) between the pathways of Ca2+-calmodulin signaling and cisplatin-mediated toxicity. On the contrary, the combined treatment with the apatite/anti-PLCG2 siRNA complex and a lower dose of cisplatin (200 nM) led to an enhancement of cell viability compared with apatite/anti-PLCG2 siRNA or cisplatin alone (Figure 3), indicating that cisplatin at that particular dose might activate another form of PLC [26] or activate the MAP kinase/PI-3 kinase signaling cascades, leading to enhanced cell growth in the absence of PLCG2. Silencing of the PLCG2 gene promoted more cell growth than silencing of the CALM1 gene at that lower dose of cisplatin, probably because PLCG2 is more upstream and therefore a more important regulator than CALM1 in Ca2+-calmodulin signaling. Influences of PLCG2 and CALM1 gene knockdown on doxorubicin-induced cell toxicity Doxorubicin is another chemotherapy drug widely used for the treatment of a variety of cancers, including cervical cancer [27,28].
Doxorubicin, which killed almost 50% of the cells at a 1 μM concentration of the drug (Figure 4), appears to be more potent than cisplatin, which killed 25% of the cells at the same dose (Figure 2), following continuous 2-day incubation with HeLa cells. Silencing of the PLCG2 gene following intracellular delivery of apatite/anti-PLCG2 siRNA clearly sensitized the cells to doxorubicin at that particular concentration (1 μM), killing more than 60% of the cells due to the synergistic effect of the drug and the gene knockdown. This could be due to the activation of the Ca2+-calmodulin pathway [29] by doxorubicin, an effect that might have hindered the cytotoxic effect of doxorubicin; therefore, targeted cleavage of PLC mRNA, or to some extent of calmodulin 1 mRNA, resulted in blocking of the Ca2+-calmodulin pathway and inhibition of cell growth or proliferation (Figure 4), thus synergistically enhancing cancer cell apoptosis in the presence of doxorubicin. A similar finding was observed after intracellular delivery of anti-PLCG2 siRNA and 200 nM of doxorubicin, whereas delivery of anti-CALM1 siRNA did not result in a synergistic effect in combination with doxorubicin (200 nM) (Figure 5), probably because of CALM1's location more downstream of PLCG2 in the pathway. Influences of PLCG2 and CALM1 gene knockdown on paclitaxel-induced cell toxicity Paclitaxel, a microtubule stabilizer, is used for the treatment of various cancers, including cervical cancer, in combination with cisplatin and other cancer drug(s) [30,31]. Figure 1 Effects of silencing of PLCG2 and CALM1 expression on cancer cell viability. 50,000 HeLa cells from the exponential growth phase were seeded in each of the wells of a 24-well plate the day before the siRNA/apatite complexes were prepared by mixing 3 μl of 1 M CaCl2 with 10 nM of siRNA in 1 ml of fresh serum-free HCO3- (44 mM)-buffered DMEM medium (pH 7.5) and incubating at 37 °C for 30 min. The medium containing the siRNA/apatite complexes, supplemented with 10% FBS, was added onto the rinsed cells before the cells were cultured consecutively for 48 h and the assessment of cell viability was carried out. Each experiment was done in triplicate and the data represent mean value ± SE (n = 3). As shown in Figure 6, 1 μM paclitaxel, when incubated with HeLa cells continuously for 2 days, caused the death of more than 70% of the cells, indicating that paclitaxel is the most effective of the three drugs used in the study. However, the combined treatment of the apatite/siRNA complexes possessing either anti-PLCG2 or anti-CALM1 siRNA and paclitaxel (1 μM) resulted in a reduction of the total cell death by almost 10%. This could be explained by the notion that silencing of the PLCG2 and CALM1 genes ends up downregulating Ca2+/calmodulin signaling and lowering the level of Ca2+/calmodulin-dependent protein kinases (CaMKs). Since CaMKs regulate microtubule dynamics by phosphorylation of the microtubule regulator stathmin [32], the overall effect of gene knockdown might cause disruption of microtubule dynamics, thus preventing paclitaxel from stabilizing all of the microtubules and from more effectively arresting the cell cycle for induction of apoptosis at that relatively higher concentration of the drug.
On the contrary, when the concentration of paclitaxel was lowered to 200 nM, silencing of PLCG2 or CALM1 gene expression was associated with a robust decrease in cell viability, demonstrating a synergistic effect of the drug action and the gene knockdown on cell proliferation or survival (Figure 7). Since Ca2+/CaM promotes cell proliferation by facilitating the G2/M transition, M phase progression, and exit from mitosis [15], while microtubule stabilization induces apoptosis by arresting the G2/M phase [33], silencing of either the PLCG2 or the CALM1 gene in the presence of paclitaxel resulted in complete arrest of the cell cycle. Figure 4 Effects of silencing of PLCG2 and CALM1 expression on viability of cancer cells under a higher dose of doxorubicin. 50,000 HeLa cells from the exponential growth phase were seeded in each of the wells of a 24-well plate the day before the siRNA/apatite complexes were prepared by mixing 3 μl of 1 M CaCl2 with 10 nM of siRNA in 1 ml of fresh serum-free HCO3- (44 mM)-buffered DMEM medium (pH 7.5) and incubating at 37 °C for 30 min. The medium containing the siRNA/apatite complexes, supplemented with 10% FBS, was added onto the rinsed cells either with or without 1 μM of doxorubicin before the cells were cultured consecutively for 48 h and the assessment of cell viability was carried out. Each experiment was done in triplicate and the data represent mean value ± SE (n = 3). Conclusions PLCG2 and CALM1 of the Ca2+-calmodulin signaling pathway are two potential targets for gene knockdown in doxorubicin- and paclitaxel-based chemotherapy of cervical cancer. Therefore, pre-clinical studies in animal models of cervical cancer should be carried out through tumor-targeted delivery of anti-PLCG2 or anti-CALM1 siRNA in combination with passively diffusible anti-cancer drugs. Figure 6 Effects of silencing of PLCG2 and CALM1 expression on viability of cancer cells under a higher dose of paclitaxel. 50,000 HeLa cells from the exponential growth phase were seeded in each of the wells of a 24-well plate the day before the siRNA/apatite complexes were prepared by mixing 3 μl of 1 M CaCl2 with 10 nM of siRNA in 1 ml of fresh serum-free HCO3- (44 mM)-buffered DMEM medium (pH 7.5) and incubating at 37 °C for 30 min. The medium containing the siRNA/apatite complexes, supplemented with 10% FBS, was added onto the rinsed cells either with or without 1 μM of paclitaxel before the cells were cultured consecutively for 48 h and the assessment of cell viability was carried out. Each experiment was done in triplicate and the data represent mean value ± SE (n = 3). Figure 7 Effects of silencing of PLCG2 and CALM1 expression on viability of cancer cells under a lower dose of paclitaxel. 50,000 HeLa cells from the exponential growth phase were seeded in each of the wells of a 24-well plate the day before the siRNA/apatite complexes were prepared by mixing 3 μl of 1 M CaCl2 with 10 nM of siRNA in 1 ml of fresh serum-free HCO3- (44 mM)-buffered DMEM medium (pH 7.5) and incubating at 37 °C for 30 min. The medium containing the siRNA/apatite complexes, supplemented with 10% FBS, was added onto the rinsed cells either with or without 200 nM of paclitaxel before the cells were cultured consecutively for 48 h and the assessment of cell viability was carried out. Each experiment was done in triplicate and the data represent mean value ± SE (n = 3).
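The reagent concentrations quoted in the legends follow from simple dilution arithmetic, as the sketch below illustrates; the 10 μM siRNA stock mentioned in the comment is a hypothetical example, not a value stated in the text.

```python
# Final Ca2+ concentration when preparing siRNA/carbonate apatite complexes:
# 3 uL of 1 M CaCl2 spiked into 1 mL of bicarbonate-buffered DMEM
# (the added 3 uL volume is neglected, as is conventional).
def final_conc_mM(stock_M: float, spike_uL: float, total_mL: float) -> float:
    """C1*V1 = C2*V2 dilution, returned in mM."""
    return stock_M * 1000.0 * (spike_uL / 1000.0) / total_mL

print(final_conc_mM(stock_M=1.0, spike_uL=3.0, total_mL=1.0))  # ~3.0 mM Ca2+
# siRNA is used at 10 nM in the same 1 mL volume; from a hypothetical
# 10 uM stock this would be a 1:1000 dilution, i.e., 1 uL per mL.
```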
Formation of siRNA/carbonate apatite complexes and transfection of cells Cells from the exponential growth phase were seeded at 50,000 cells per well into 24-well plates the day before transfection. 3 μl of 1 M CaCl2 was mixed with 10 nM of siRNA in 1 ml of fresh serum-free HCO3- (44 mM)-buffered DMEM medium (pH 7.5), followed by incubation at 37 °C for 30 min for complete generation of siRNA/carbonate apatite particles [18,19]. 10% FBS and (depending on the experimental conditions) 0.2 to 1 μM drugs (cisplatin, doxorubicin, paclitaxel) were mixed with the medium containing the siRNA/apatite complexes before the medium was added onto the rinsed cells. The cells were subsequently cultured for 48 h prior to the assessment of cell viability [18,19]. Cell viability assessment with MTT assay 30 μl of MTT solution (5 mg/ml) was added onto the cells in each well of the 24-well plate and incubated for 4 h at 37 °C. 0.5 ml of DMSO was added after removal of the medium from each well to dissolve the crystals, followed by incubation for 5 min at 37 °C. Absorbance was measured in a microplate reader at 570 nm with a reference wavelength of 630 nm. Each experiment was done in triplicate, with the data representing mean value ± SE (n = 3) and being statistically significant (p < 0.05).
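As an illustration of how such MTT readings might be reduced to the reported viability percentages, the sketch below background-corrects the 570 nm absorbance with the 630 nm reference, normalizes to an untreated control, and applies a generic unpaired t-test (the text states only significance at p < 0.05, not the specific test used); all absorbance values are invented placeholders, not study data.

```python
# Sketch of MTT viability analysis: background-correct absorbance
# (570 nm reading minus 630 nm reference), normalize to the untreated
# control, and compare two groups. ILLUSTRATIVE values only.
import numpy as np
from scipy import stats

def viability_percent(a570, a630, control_mean):
    corrected = np.asarray(a570) - np.asarray(a630)
    return 100.0 * corrected / control_mean

# Hypothetical triplicate readings; 0.74 is the mean corrected control A570.
control = viability_percent([0.82, 0.79, 0.85], [0.08, 0.07, 0.09], 0.74)
treated = viability_percent([0.51, 0.48, 0.55], [0.08, 0.09, 0.07], 0.74)

t, p = stats.ttest_ind(control, treated)  # generic unpaired t-test
print(f"control {control.mean():.0f}% +/- {stats.sem(control):.1f}")
print(f"treated {treated.mean():.0f}% +/- {stats.sem(treated):.1f}, p = {p:.3g}")
```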
v3-fos-license
2019-03-05T14:18:36.888Z
2019-02-27T00:00:00.000
67872728
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fnins.2019.00151/pdf", "pdf_hash": "527b9f6b516b5fae557b3c4f98399f45bb16083b", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44642", "s2fieldsofstudy": [ "Medicine" ], "sha1": "527b9f6b516b5fae557b3c4f98399f45bb16083b", "year": 2019 }
pes2o/s2orc
Human Muscle Progenitor Cells Overexpressing Neurotrophic Factors Improve Neuronal Regeneration in a Sciatic Nerve Injury Mouse Model The peripheral nervous system has an intrinsic ability to regenerate after injury. However, this process is slow, incomplete, and often accompanied by disturbing motor and sensory consequences. Sciatic nerve injury (SNI), which is the most common model for studying peripheral nerve injury, is characterized by damage to both motor and sensory fibers. The main goal of this study is to examine the feasibility of administering human muscle progenitor cells (hMPCs) overexpressing neurotrophic factor (NTF) genes, known to protect peripheral neurons and enhance axon regeneration and functional recovery, to ameliorate motor and sensory deficits in an SNI mouse model. To this end, hMPCs were isolated from a human muscle biopsy and manipulated to ectopically express brain-derived neurotrophic factor (BDNF), glial-cell-line-derived neurotrophic factor (GDNF), vascular endothelial growth factor (VEGF), and insulin-like growth factor (IGF-1). These hMPC-NTF were transplanted into the gastrocnemius muscle of mice after SNI, and motor and sensory functions of the mice were assessed using the CatWalk XT system and the hot plate test. ELISA analysis showed that the genetically manipulated hMPC-NTF express significant amounts of BDNF, GDNF, VEGF, or IGF-1. Transplantation of 3 × 10^6 hMPC-NTF was shown to improve motor function and gait pattern in mice following SNI surgery, as indicated by the CatWalk XT system 7 days post-surgery. Moreover, using the hot-plate test, performed 6 days after surgery, the treated mice showed fewer sensory deficits, indicating a palliative effect of the treatment. ELISA analysis following transplantation demonstrated increased NTF expression levels in the gastrocnemius muscle of the treated mice, reinforcing the hypothesis that the observed positive effect was due to the transplantation of the genetically manipulated hMPC-NTF. These results show that genetically modified hMPC can alleviate both motor and sensory deficits of SNI. The use of hMPC-NTF demonstrates the feasibility of a treatment paradigm, which may lead to rapid, high-quality healing of damaged peripheral nerves due to administration of hMPC. Our approach suggests a possible clinical application for the treatment of peripheral nerve injury. INTRODUCTION Peripheral nerve injury can occur in daily life due to mechanical damage resulting from traffic accidents, sports, or surgery. Such injury poses challenges for patients, ranging from minor discomfort to harm to quality of life (Clin, 2015). Peripheral neurons have the ability to reactivate their intrinsic growth capacity and allow regeneration to occur following injury (Xu et al., 2013). Nevertheless, the clinical outcome of this regeneration is often incomplete, as expressed in symptoms such as poor and abnormal sensibility, deficient motor function, cold intolerance, and pain (Lundborg, 2000). Sciatic nerve crush is one of the most common models of peripheral nerve injury. The sciatic nerve is the longest nerve in the human body, extending from the lower part of the spinal cord to the buttocks and down the legs (Glat et al., 2016). It comprises both motor and sensory fibers; therefore, the SNI model closely simulates general peripheral nerve damage in a simple, reproducible manner.
Neurotrophic factors (NTFs), including brain-derived neurotrophic factor (BDNF), glial-cell-line-derived neurotrophic factor (GDNF), vascular endothelial growth factor (VEGF), and insulin-like growth factor 1 (IGF-1), are molecules which enhance the growth and survivability of neurons. BDNF was found to be essential for peripheral nerve regeneration and remyelination after injury (Zhang et al., 2000). GDNF was shown to have a maintenance role for adult motor neurons (Naveilhan et al., 1997) and to prevent motor neuron degeneration following peripheral axotomy (Oppenheim et al., 1995; Yan et al., 1995; Hoozemans et al., 2009). VEGF was shown to support and enhance the growth of regenerating nerve fibers (Sondell et al., 2000; Lopes et al., 2011). IGF-1 was shown to exert important growth-supporting effects on regenerating peripheral nerves (Hansson et al., 1986; Kanje et al., 1989; Sjöberg and Kanje, 1989). Unfortunately, these findings have not yet led to a clinical treatment improving peripheral nerve repair. Muscle progenitor cells (MPC) are an easily accessible cell type, with well-characterized markers associated with various stages of differentiation (Sarig et al., 2010). MPC are also relatively simple to clone and manipulate in culture (Yaffe, 1968, 1969; Sarig et al., 2006). In a previous study, we showed that transplantation of a mixture of the rat myogenic cell line L8, genetically modified to express and secrete BDNF, GDNF, IGF-1, or VEGF (each population expressing a single NTF), has a strong synergistic effect on the regeneration of a damaged sciatic nerve in a rat model (Dadon-Nachum et al., 2012). The MPC mixture harboring the four NTFs together was shown to accelerate recovery of motor function, preserve the compound muscle action potential, and inhibit degeneration of the neuromuscular junctions. We further showed that direct intramuscular administration of a mixture of lentiviral vectors expressing the four NTFs significantly improved the recovery of axonal function in a mouse model of SNI (Glat et al., 2016). In the present study, we examined the effect of intramuscular administration of human muscle progenitor cells (hMPC) overexpressing the NTF genes BDNF, GDNF, VEGF, and IGF-1 in a mouse model of SNI. We demonstrate that transplantation of the hMPC-NTF, 1 day after sciatic nerve crush, can speed and ameliorate natural neuronal regeneration. Isolation of Primary Myoblasts A human muscle biopsy (one patient; 2 cm × 2 cm) was collected by an orthopedic surgeon during surgery performed for reasons unrelated to the biopsy for the current research. Written informed consent was obtained from the patient prior to surgery. Experimental work with the human muscle cells was approved by the Helsinki Committee of the Israeli Ministry of Health (Yaffe, 1968; Sarig et al., 2006, 2010). The muscle was minced with scissors, enzymatically dissociated at 37 °C with TrypLE (GIBCO 12604-013) for 30 min, and centrifuged at 2,500 rpm for 5 min (Yaffe, 1968; Sarig et al., 2006, 2010). The cells were collected, and trypsinization of the remaining undigested tissue was repeated three more times by adding fresh trypsin solution.
After centrifugation, the cells were suspended in proliferation medium BIO-AMF-2 (Biological Industries Ltd., Kibbutz Beit Haemek, Israel), collected, and filtered through a 70 µm cell strainer (Corning® 70 µm Cell Strainer, white, sterile, individually packaged, 50/case; product #431751; Sigma CLS431751-50EA) to yield a single-cell suspension. The cells were plated on uncoated flasks for 2 h to deplete the fibroblasts (and macrophages). Unattached cells were then collected and transferred to gelatin-coated flasks (gelatin from bovine skin; Sigma G9391, Type B, powder, BioReagent, suitable for cell culture) to yield myogenic cells.

Fluorescence-Activated Cell Sorting (FACS)

After the isolated myogenic cells were harvested from the tissue culture flasks, samples were incubated with anti-human CD56-phycoerythrin (PE) antibody (Merck KGaA, Darmstadt, Germany). The labeled cells were thoroughly washed twice in flow buffer (5% FCS, 0.1% sodium azide in PBS). Cells were suspended in 0.5 ml PBS and analyzed by a FACSCalibur™ flow cytometer using an argon ion laser adjusted to an excitation wavelength of 488 nm (FACS; Becton Dickinson Immunocytometry System, San Jose, CA, United States). An isotype control was performed with mouse IgG2b-PE (Miltenyi Biotec Inc., Auburn, CA, United States), and specific staining was measured from the cross point of the isotype with the specific antibody graph.

Gene Cloning and Lentiviral Preparation

Human BDNF, GDNF, IGF-1, and VEGF genes were amplified from pBluescript plasmids, which were purchased from the Harvard Institute of Proteomics, Boston, MA, United States, using the Plasmid Midi Kit (Qiagen, Valencia, CA, United States) (Dadon-Nachum et al., 2012; Glat et al., 2016). Each of the four genes was inserted into the destination plasmid under the Cytomegalovirus (CMV) promoter in a recombinant reaction. After incubation, the complexes were added to flasks containing 95% confluent hMPC cultured in an antibiotic-free medium and incubated at 37 °C in a 5% CO₂ incubator. Each flask was transfected with a single NTF expression vector. Six hours later, the cell medium was changed back to the standard growth medium. The lentiviral titer was determined using the Lenti-X™ p24 Rapid Titer Kit and the manufacturer's recommended procedure (Cat. No. 632200, Takara Bio USA, Mountain View, CA, United States).

ELISA Analysis

The hMPC were thawed and transduced with lentiviruses, each containing the BDNF, GDNF, IGF-1, or VEGF gene. Lentivirus containing green fluorescent protein (GFP) was used as a control. At the time of transduction, the cells were in passage 1 (P1). Transduction was done at a multiplicity of infection (MOI) of 50. The presence of each of the secreted NTFs in the isolated and frozen cell supernatant was quantified 24 and 72 h after transduction using enzyme-linked immunosorbent assay (ELISA) kits (RayBiotech, Norcross, GA, United States). The assays were conducted according to the manufacturer's protocols in duplicate, and absorbance was read at 450 nm using an ELISA reader (PowerWave X; BioTek Instruments, Winooski, VT, United States).

Animals

Mice were maintained in 12-h-light/12-h-dark conditions in individually ventilated cages (IVC) with ad libitum access to food and water. All experimental protocols were authorized by the Tel Aviv University Committee of Animal Use for Research and Education. Every effort was made to reduce the number of mice used and minimize their suffering.
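Returning briefly to the transduction step above: dosing by MOI reduces to simple arithmetic once a p24-based titer is known. A minimal sketch follows; the titer value is a hypothetical assumption, since no numeric titer is reported here.

```python
# Hypothetical illustration of MOI-based lentiviral dosing.
# The titer below is an assumed value, not a number from the study.
cells = 1e6               # hMPC to transduce
moi = 50                  # transducing units (TU) per cell, as in the protocol
titer_tu_per_ml = 1e8     # assumed titer from the p24 titration

tu_needed = cells * moi                    # total transducing units required
volume_ml = tu_needed / titer_tu_per_ml    # viral stock volume to add

print(f"Add {volume_ml * 1000:.0f} µl of viral stock for MOI {moi}")  # 500 µl
```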
Sciatic Nerve Crush Mouse Model

The sciatic nerve crush model was performed on eight-week-old male C57BL/6J mice (n = 56; Harlan, Jerusalem, Israel). Just prior to surgery, mice were anesthetized with a ketamine-xylazine mixture (100 mg/kg ketamine, 10 mg/kg xylazine). The left sciatic nerve was exposed, and a vessel clamp was applied for 30 s above the first branching of the nerve (Dadon-Nachum et al., 2012). A sham group of mice was included in which the sciatic nerve was exposed but not crushed.

Behavioral Analysis

CatWalk test

The CatWalk XT 10.6 system (Noldus Inc., Netherlands) was used to assess gait recovery and motor function after SNI (Neumann et al., 2009; Vandeputte et al., 2010). This test involves monitoring each animal when it crosses a walkway with a glass floor illuminated along the long edge. Data acquisition was carried out using a high-speed camera, and paw prints were automatically classified by the software. The performance of each mouse was recorded three times, to obtain approximately 15 step cycles per mouse for analysis. Paw prints of each animal were obtained 3, 7, and 13 days after surgery.

Hot-plate test

Antinociception in the SNI model was assessed by the hot-plate test (Polt et al., 1994) 6 days post-SNI. Animals were placed on a hot surface, which was maintained at 55 ± 0.5 °C. The time (in seconds) between placement and licking of the hind paws or jumping (whichever occurred first) was recorded as the response latency. A 20 s cut-off was used to prevent tissue damage.

In vivo Imaging

The CRI Maestro™ non-invasive fluorescence imaging system was used to follow the cells 2, 5, and 12 days following hMPC-GFP transplantation (the right sciatic nerve was crushed 1 day before cell transplantation, as described above). The area of interest was shaved, and mice were anesthetized using the ketamine-xylazine mixture and placed inside the imaging system. A band-pass filter appropriate for the fluorochrome of interest (GFP; Ex 445-490 nm, Em 515 long-pass filter; acquisition settings 500-720) was used for emission and excitation light, respectively. Mouse autofluorescence and undesired background signals were eliminated by spectral analysis and a linear unmixing algorithm.

Gastrocnemius Preparation and Neurotrophic Factor Measurements

Five days after SNI (4 days after cell transplantation), 3 × 10⁶ hMPC-NTF-treated mice (n = 3) and hMPC-GFP-treated mice (n = 3) were sacrificed using CO₂. Gastrocnemius muscles of both hind paws of each mouse were quickly removed in order to evaluate NTF secretion from the tissues. Tissues were snap-frozen in liquid nitrogen, then transferred to −80 °C until analysis.

Protein extraction

Tissues were thawed, and total protein was extracted as previously described. Protein concentration was determined using the bicinchoninic acid (BCA) kit (Thermo Scientific, Rockford, IL, United States).

Quantification of NTF levels using ELISA

The presence of each of the secreted NTFs was quantified using specific ELISA kits (RayBiotech, Norcross, GA, United States). The assays were conducted according to the manufacturer's protocols in duplicate, and absorbance was read at 450 nm using an ELISA reader (PowerWave X; BioTek Instruments, Winooski, VT, United States).

Statistical Analysis

The results are expressed as means ± standard error (SE). Statistical analysis was performed using unpaired Student's t-test for the direct comparison between two groups.
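The comparison just described (unpaired Student's t-test between two groups, results reported as mean ± SE) can be sketched in Python; the latency values below are hypothetical placeholders, not data from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical hot-plate latencies in seconds (placeholders only).
treated   = np.array([9.8, 11.2, 10.5, 12.0, 10.9])   # hMPC-NTF group
untreated = np.array([6.1,  7.4,  5.9,  6.8,  7.0])   # injured, no treatment

# Unpaired (two-sample) Student's t-test, as used in the study.
t_stat, p_value = stats.ttest_ind(treated, untreated)

# Means ± standard error, matching the paper's reporting format.
for name, grp in [("treated", treated), ("untreated", untreated)]:
    print(f"{name}: {grp.mean():.2f} ± {stats.sem(grp):.2f} s")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

The figure legends report one-tailed tests; with a two-tailed p-value as above, a one-tailed p can be obtained by halving it when the observed effect is in the hypothesized direction.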
Statistical analysis of data sets was carried out with the aid of GraphPad Prism for Windows (GraphPad Software, La Jolla, CA, United States).

Characterization of the Human Myogenic Cells

CD56 antigen neural cell adhesion molecule (NCAM) is a known cell surface marker of human myogenic cells (Capkovic et al., 2008). Therefore, CD56 expression on the muscle-derived cells was examined by FACS analysis following labeling with mouse anti-human CD56-PE antibody. A high percentage of cells expressed the CD56 marker from passages P0-P5. Figures 1C, 2C show that 92.94% and 90.19% of the isolated cells in P1 and P3, respectively, expressed the marker. Incubation without antibodies was used as a baseline (Figures 1A, 2A), and staining for non-specific mouse immunoglobulin G (IgG) isotype fluorescence was used as a control (Figures 1B, 2B). Microscopic inspection revealed that the majority of the cells had a myogenic morphology, and only a few fibroblasts were observed, confirming that a highly pure population of hMPC was established from the muscle biopsy.

Characterization of the Transfected Human Muscle Progenitor Cells

The expression and secretion of NTFs from hMPC-transfected cells were assessed using ELISA analysis. The genetically manipulated hMPCs were found to express high levels of BDNF, GDNF, VEGF, or IGF-1. According to the ELISA kits, levels of NTFs were found to be 495.1 ± 21.3 ng and 1500 ± 68.8 ng of BDNF, 325.5 ± 16 pg and 199.2 ± 10.5 pg of GDNF, 10906 ± 802.9 pg and 15709 ± 1093 pg of VEGF, and 1.13 ± 0.24 ng and 1.88 ± 0.04 ng of IGF-1 per million cells, 24 and 72 h after cell transduction, respectively. In contrast, levels of NTFs secreted from GFP-transfected hMPC were significantly lower (Figure 3).

Transplantation of Genetically Modified hMPCs Expressing NTFs Improved Motor Function and Gait Pattern in Sciatic Nerve Injury Mouse Model

To assess the effect of hMPC-NTF on recovery of nerve damage, mice were injected with cells 1 day after sciatic nerve crush surgery. The motor recovery effect was evaluated by parameters obtained from the CatWalk XT system on days 3, 7, and 13 after surgery. Figure 4 illustrates improvement in the gait pattern of the injured mice treated with 3 × 10⁶ hMPC-NTF by displaying two representative images of the left hind paw after sciatic nerve crush, assessed 7 days after injury. The bottom right image in Figure 4 shows an exemplary footprint from the injured mice group treated with 3 × 10⁶ hMPC-NTF, and the left image is an exemplary footprint from the untreated injured group.

FIGURE 4 | Illustration of gait pattern improvement following transplantation of genetically modified hMPCs expressing NTFs after SNI. Representative images of paw prints, acquired using the CatWalk XT system, 7 days after SNI without treatment (right) or with 3 × 10⁶ hMPC-NTF treatment (left).

The maximum tread intensity, at maximum contact, and the paw print area parameter were quantified. Data from all groups were normalized to the average data of the naïve control group (n = 6). The values obtained for both of these parameters were similar in control and sham groups. These values were found to be significantly lower in the injured groups as compared with the sham group, 3 days post-SNI, regardless of the treatment given (Figures 5, 6). However, 7 days post-surgery, the values were significantly different in the group of injured mice treated with 3 × 10⁶ hMPC-NTF, as compared to both of the other injured groups, either treated with 10⁶ hMPC-NTF or untreated (Figures 5, 6).
Recovery of motor function and gait pattern for the mice transplanted with 3 × 10⁶ hMPC-NTF was better at this time point, and their motor function and gait pattern resembled those of the sham group. Nevertheless, the motor function and gait pattern of untreated injured mice recovered to some extent due to spontaneous regeneration of the sciatic nerve. Notably, transplantation of 10⁶ hMPC-NTF did not have a significant effect on the injured mice. Thirteen days after surgery, the significant difference in motor function and gait pattern between the experimental groups disappeared due to spontaneous regeneration of the sciatic nerve.

FIGURE 5 | Transplantation of genetically modified hMPCs expressing NTFs improved motor function after SNI. Left hind paw maximum tread intensities, at maximum contact, were acquired using the CatWalk XT system on days 3, 7, and 13 post-SNI. These values were compared to those of naïve control mice, whose function was considered 100%. The data are presented as the relative mean function ± SEM of n mice per treatment group. * P < 0.05, ** P < 0.01, one-tailed t-test.

FIGURE 6 | Transplantation of genetically modified hMPCs expressing NTFs improved gait pattern after SNI. Left hind paw print areas were acquired using the CatWalk XT system 3, 7, and 13 days post-SNI. These values were compared to those of naïve control mice, whose function was considered 100%. The data are presented as the relative mean function ± SEM of n mice per treatment group. ** P < 0.01, one-tailed t-test.

Transplantation of Genetically Modified hMPCs Expressing NTFs Improved Sensory Deficits in the Sciatic Nerve Injury Mouse Model

Sensory fiber regeneration was evaluated using the hot-plate test on day 6 after SNI. Injured mice treated with 3 × 10⁶ hMPC-NTF were significantly less sensitive to hot-plate exposure than untreated injured mice, and their response resembled that of naïve control mice (Figure 7). These results indicate a palliative effect of the treatment on the injured paws.

FIGURE 7 | Transplantation of genetically modified hMPCs expressing NTFs improved sensory deficits after SNI. Nociceptive threshold of the left hind paw was tested by measuring latency of analgesic response in the hot-plate test. These values were compared to those of naïve control mice, whose function was considered 100%. The data are presented as the relative mean response ± SEM of n mice per treatment group. * P < 0.05, one-tailed t-test.

In vivo Imaging of Transplanted Cells Was Correlated With Behavioral Results

Using the CRI Maestro™ non-invasive fluorescence imaging system, hMPC-GFP were examined 2, 5, and 12 days after transplantation into the gastrocnemius muscle. As seen in Figures 8C,D, cells were present in the tissue 2 and 5 days after transplantation. After 12 days, it was no longer possible to detect the cells (Figure 8E). A negative control mouse (to which cells were not transplanted) was used to verify that the detected fluorescence was not due to autofluorescence (Figure 8B). Figure 8A schematically illustrates the posterior right limb of a mouse in order to assist in understanding the animal's position in the images mentioned above.

Intramuscular Injection of Genetically Modified hMPCs Expressing NTFs Resulted in Increased NTF Expression Levels

Using an ELISA analysis, the NTF expression levels in the gastrocnemius muscle of the 3 × 10⁶ hMPC-NTF-treated mice (n = 3) and hMPC-GFP-treated mice (n = 3) were assessed 4 days after cell transplantation.
According to the ELISA kits, levels of NTFs were found to be 4.92 ± 1.13 ng of BDNF, 4.16 ± 0.1 pg of GDNF, 7.44 ± 0.69 pg of VEGF, and 21.51 ± 5.91 ng of IGF-1 per mg protein extracted from the left hind gastrocnemius injected with hMPC-NTF. In contrast, NTF expression levels in the left hind gastrocnemius injected with hMPC-GFP were lower than in the mice injected with hMPC-NTF (significantly lower for GDNF, VEGF, and IGF-1). NTF expression levels in the untreated right hind gastrocnemius were mostly undetectable.

DISCUSSION

Peripheral nerve injury is a common condition, ranging from mild to severe injury. Within several weeks, a natural healing process begins to take place. Nevertheless, approximately 100,000 patients undergo peripheral nerve surgery in the United States and Europe annually, due to severe damage or continuous pain (De Albornoz et al., 2011). Functional recovery is frequently poor after peripheral nerve injury, and, except for surgical cases, the sole therapeutic options are palliative care, including painkillers and anti-inflammatory drugs to relieve the pain (Gordon et al., 2011). The objective of this study was to evaluate whether ectopic transplantation of human MPC expressing the NTF genes BDNF, GDNF, VEGF, and IGF-1 can alleviate sensory and motoric deficits identified in a mouse model of SNI.

Trophic activities in muscle and nerve were shown to increase after lesions and blockade of nerve activity (Brown et al., 1991; Houenou et al., 1991; Bedi et al., 1992; Danielsen et al., 1994). In addition, muscle extract was shown to potently prevent motor neuron degeneration (Oppenheim, 1985; Oppenheim et al., 1988). These results suggest that the presence of trophic factors in the nerve and muscle is important for motor neuron survival and nerve regeneration (Naveilhan et al., 1997). Furthermore, Rabinovsky et al. (2003) suggested that the actions of IGF-1 on peripheral nerve regeneration can be connected to both its neurotrophic and its myogenic effects. The intracellular signaling mechanisms stimulated by each of the NTFs (BDNF, GDNF, VEGF, and IGF-1) involve binding to a specific receptor and initiation of the PI3K/AKT signaling cascade, which promotes cell survival (Wilker et al., 2005; Wang et al., 2007; Karar and Maity, 2011; Chen et al., 2013). Nevertheless, motoneuronal upregulation of NTFs, including BDNF and GDNF, was found to occur within 7 days of injury and to progressively decline thereafter (Boyd and Gordon, 2003). This decline was suggested as a likely factor that correlates with the reduction of regenerative capacity after severe nerve damage (Gordon et al., 2011). In addition, considering the properties of NTFs and their positive effect on regeneration and motor neuron support, the attempt to test the provision of NTFs as a potential therapy for peripheral nerve injury is compelling.

It can be concluded that the results obtained in the present research, regarding the improvement of motor and sensory deficits of SNI using transplantation of hMPC-NTF into the gastrocnemius muscle of the injured limb, were the result of both the positive effects of the NTFs and their ongoing delivery through secretion by the MPC. These results reinforce those of two previous studies. The first showed the use of myogenic cells ectopically expressing the NTFs BDNF, GDNF, IGF-1, and VEGF in treating SNI (Dadon-Nachum et al., 2012). This was the first study to show the synergistic protective effect of the four NTFs in supplying a nurturing environment to the injured nerve.
However, since the study used rat cells, further work was needed to show that human cells can also provide the same therapeutic potential. The second study demonstrated that direct injection of viral vectors expressing the four NTF genes can accelerate the regeneration of the sciatic nerve after injury (Glat et al., 2016). However, lentiviruses are powerful tools that, in addition to being immunogenic, are potentially oncogenic and infectious and can cause other transformative changes to infected cells (Schlimgen et al., 2016). Therefore, they are not a currently applicable approach in the clinic.

The results presented in this paper further support and reinforce the findings of previous studies, suggesting a synergistic effect of the four NTFs for sciatic nerve reconstruction after injury. The use of MPC as a delivery system of the NTFs close to the site of damage has a myogenic effect that may, in and of itself, contribute to nerve recovery. It should be emphasized that the use of human cells as a treatment for mice shows that the treatment is effective even in an immunocompetent host. The use of hMPC-NTF demonstrates the feasibility of a treatment paradigm with safe biological characteristics for the human body, which can lead to rapid, high-quality healing of damaged peripheral nerves due to modifications resulting in an overexpression of NTFs.

AUTHOR CONTRIBUTIONS

RG, FG, TB-Z, UD, DY, and DO designed the experiments. RG and TB-Z performed the experiments. RL and AP contributed the muscle biopsy. RG, FG, DY, and DO wrote and edited the manuscript.

FUNDING

This study was partially funded by Stem Cell Medicine (SCM) Ltd. (Jerusalem, Israel).
Pro- and Anti-Tumoral Factors Involved in Total Body Irradiation and Interleukin-2 Conditioning in Adoptive T Cell Therapy of Melanoma-Bearing Rag1 Knock-Out Mice

In adoptive T cell therapy (ACT), the transfer of tumor-specific T cells is paralleled by the conditioning regimen to increase therapeutic efficacy. Pre-conditioning depletes immune-suppressive cells, and post-conditioning increases homeostatic signals to improve the persistence of administered T cells. Identifying the favorable immunological factors involved in a conditioning regimen is important to design effective strategies in ACT. Here, by using an ACT model of murine melanoma, we evaluate the effect of the total body irradiation (TBI) and interleukin-2 (IL-2) treatment combination. The use of a Rag1 knock-out strain, which lacks endogenous T cells, enables the identification of factors in a way that focuses more on transferred T cells. We demonstrate that the TBI/IL-2 combination has no additive effect in ACT, although each conditioning improves the therapeutic outcome. While the combination increases the frequency of transferred T cells in lymphoid and tumor tissues, the activation intensity of the cells is reduced compared to that of the sole TBI treatment. Notably, we show that in the presence of TBI, the IL-2 treatment reduces the frequency of intra-tumoral dendritic cells, which are crucial for T cell activation. The current study provides insights into the immunological events involved in the TBI/IL-2 combination in ACT.

Introduction

Adoptive T cell therapy (ACT) transfers tumor-specific T cells into cancer patients to augment the extent of cellular immune responses against the tumor. While it has shown promising results in some cancers, the therapeutic efficacy has been limited in most solid cancers [1-4]. To design effective strategies in ACT, critical immunological factors that affect the therapeutic outcome must be defined. Lymphodepletion pre-conditioning is the most common regimen that is paralleled by T cell transfer to increase therapeutic efficacy. Pre-conditioning is known for inducing various features, such as the expansion of "space" for grafted T cells [5,6], depletion of cytokine sinks [7,8], and elimination of immune-suppressive cells [9]. Due to the diverse biological consequences caused by pre-conditioning, factors that drive pro- and anti-tumoral effects in ACT are difficult to identify. Studying pre-conditioning after "variable reduction", e.g., the use of a Rag1 knock-out mouse strain, which is permanently lymphodepleted, may be an option to define factors that are not related to the effect of a large lymphoid space and endogenous T cells. The administration of homeostatic cytokines, such as interleukin-2 (IL-2), is frequently used in ACT as a post-conditioning regimen [10]. It increases the persistence of transferred T cells by sustaining homeostatic signals during the therapy. Notably, interleukins often exhibit pleiotropy and can contribute to both pro- and anti-tumoral effects [11-14]. Therefore, the consequence of a certain post-conditioning may vary depending on the immunological context that is induced by the pre-conditioning. To examine the feasibility of a combination of certain pre- and post-conditioning regimens in terms of synergy, a direct comparison of each regimen and the combination should be conducted. Here, we demonstrate the effect of total body irradiation (TBI) pre-conditioning and IL-2 treatment post-conditioning in ACT of melanoma.
By using lymphodepleted Rag1 knock-out mice, we focus on investigating the immunological changes induced by TBI. Additionally, the current study shows the consequences of the TBI/IL-2 combination regarding therapeutic outcomes and immunological profiles in lymphoid and tumor tissues. These findings will benefit endeavors to develop effective ACT strategies.

Adoptive Cell Transfer Model

On day 0, 2 × 10⁵ B16-F10 cells were injected subcutaneously into the back of Rag1 knock-out mice. Three days later, mice were exposed to nonmyeloablative (4 Gy) TBI using an X-RAD 320 (Precision X-Ray, Inc., North Branford, CT, USA). On day 5, activated CD8+ T cells were intravenously injected into the mice via the lateral tail vein. Recombinant human IL-2 (10,000 IU; Novartis, Basel, Switzerland) was intraperitoneally administered daily for 3 days. Mice were routinely monitored for tumor growth and survival. Tumors were measured using calipers, and the volume was calculated as 1/2 × length × width × height. Mice were euthanized when the tumor volume reached 2500 mm³ or the animals displayed specific signs of illness (e.g., severe weight loss and tumor ulceration).

Flow Cytometry

On day 14, lymph node, spleen, and tumor tissues were collected from euthanized mice for the subsequent analyses. A single-cell suspension of lymphoid tissues was prepared by gentle disruption of the inguinal tumor-draining lymph nodes (TdLNs) and spleen, followed by filtration through a 40 µm nylon cell strainer (Falcon, NY, USA). Cells were treated with red blood cell lysis buffer (eBioscience, San Diego, CA, USA) before the analysis. A tumor tissue-derived single-cell suspension was prepared using a Tumor Dissociation Kit (Miltenyi Biotec, Inc., Auburn, CA, USA) according to the manufacturer's instructions. Tumor tissues were cut into small pieces and transferred into a gentleMACS C tube containing the enzyme mix provided by the manufacturer. The tube was mounted on a gentleMACS dissociator (Miltenyi Biotec, Inc., Auburn, CA, USA) to dissociate the tumor tissue. The final single-cell suspension was obtained by filtration through a 40 µm nylon cell strainer.

Statistical Analysis

All statistical data were analyzed in Prism v5.01 (GraphPad, La Jolla, CA, USA). A two-tailed unpaired Student's t-test was used in the comparison of cell counts and frequencies. The log-rank (Mantel-Cox) test determined the significance of the difference in survival rates. p values less than 0.05 were considered significant, which is designated with asterisks (* p < 0.05; ** p < 0.01; *** p < 0.001).

TBI and IL-2 Treatment Independently Improve the Efficacy of Adoptive T Cell Therapy

We investigated the therapeutic effect of TBI and/or IL-2 treatment in ACT of melanoma. The melanoma cell line B16-F10 was subcutaneously inoculated on the back of the Rag1 knock-out mice, which are deficient in T and B cells. Pre-conditioning was conducted with 4 Gy TBI 2 days before adoptive T cell transfer. Ex vivo primed Thy1.1 pmel-1 CD8+ T cells (Pmel-1) were infused on day 5, and some mice were treated daily with IL-2 for 3 days as a post-conditioning regimen (Figure 1A). Mice treated with Pmel-1 alone were able to control tumor growth until 10 days but then rapidly lost the tumor-suppression effect (Figure 1B,C). More than half of the mice treated with both Pmel-1 and IL-2 displayed well-controlled tumor growth until 20 days. The TBI/Pmel-1 treatment further increased the efficacy, extending the period to more than 30 days.
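The tumor-growth and survival comparisons above rest on two quantitative pieces from the Methods: the caliper volume formula and the log-rank (Mantel-Cox) test. The study performed these in Prism; the following is a minimal Python sketch of the same computations using the lifelines package, on invented placeholder data.

```python
import numpy as np
from lifelines.statistics import logrank_test

def tumor_volume(length, width, height):
    """Caliper volume as defined in the study: 1/2 * length * width * height."""
    return 0.5 * length * width * height

print(tumor_volume(10.0, 8.0, 6.0))  # mm^3 -> 240.0

# Hypothetical survival data (days); event=1 means endpoint reached,
# event=0 means censored. These numbers are placeholders, not study data.
days_tbi   = np.array([45, 60, 80, 100, 100])
event_tbi  = np.array([ 1,  1,  1,   0,   0])
days_untx  = np.array([20, 25, 28, 30, 33])
event_untx = np.array([ 1,  1,  1,  1,  1])

# Log-rank (Mantel-Cox) test, as in the study's survival analysis.
res = logrank_test(days_tbi, days_untx,
                   event_observed_A=event_tbi, event_observed_B=event_untx)
print(f"log-rank p = {res.p_value:.4f}")
```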
Intriguingly, the combined TBI, Pmel-1, and IL-2 treatment did not improve the therapeutic effect compared to the TBI/Pmel-1 treatment. Regardless of IL-2 treatment, tumors began to grow in all mice in the two groups 1 month after tumor inoculation. Survival of the mice was also improved following TBI treatment independently of IL-2 treatment (Figure 1D). Notably, the mice that survived more than 30 days presented with vitiligo as a symptom. One third of the Pmel-1/IL-2 treatment group and all mice in the TBI-treatment groups displayed vitiligo at 50 days post therapy (Figure 1E). These results show that the transferred Pmel-1 increased the therapeutic efficacy particularly when it was accompanied by TBI. The effect was confirmed by suppressed tumor growth, increased survival rate, and the presentation of vitiligo, a positive prognostic factor [15,16]. Although the extent of efficacy was lower than with TBI, IL-2 treatment enhanced the anti-melanoma effect of Pmel-1. However, despite expecting the highest anti-tumor effect with the TBI/IL-2 combination, no additive effect was observed in the treated animals.

Figure 1. Anti-melanoma effect of total body irradiation and interleukin-2 treatment in adoptive T cell therapy. (A) Schematic drawing of the experiment. Rag1 knock-out mice were subcutaneously inoculated with B16-F10 melanoma and treated with activated Pmel-1 as a form of adoptive T cell therapy. Pmel-1 stimulated for 2 days was administered into the mice on day 5. On day 3, some mice were exposed to 4 Gy total body irradiation (TBI). The interleukin-2 (IL-2) treatment group was injected daily (intraperitoneally) with 10,000 IU IL-2 on day 5 to day 7. (B) Tumor growth rate measured for 100 days. Each symbol and error bar indicate the mean and standard error of the mean (s.e.m.) of the tumor size in the same group. (C) Tumor growth rate of each mouse is indicated. (D) Kaplan-Meier curves showing the survival rate of the mice. (E) Representative images of the surviving mice in the TBI + Pmel-1 and TBI + Pmel-1 + IL-2 groups on day 80. Arrows indicate the tumor inoculation sites. UnTx (untreated) group, n = 7 mice; Pmel-1 and TBI + Pmel-1 groups, n = 5 mice per group; Pmel-1 + IL-2 and TBI + Pmel-1 + IL-2 groups, n = 6 mice per group. ns, not significant; * p < 0.05; ** p < 0.01; *** p < 0.001.

TBI/IL-2 Combination Increases the Proportion of Transferred T Cells in Lymphoid Tissues

Considering that the Rag1 knock-out strain lacks endogenous T cells that drive strong adaptive immune responses, the key player of this model is likely to be the transferred Pmel-1. To elucidate the reason why the TBI/IL-2 combination had no additive effect, we investigated the Pmel-1 changes in these animals. On day 14, when the subjects displayed an active short-term immune response, the cell population in the TdLN and spleen was collected and analyzed (Figure 2A). First, we examined the magnitude of TBI-induced lymphodepletion and IL-2-induced lymphoproliferation. The total cell count of the TdLN significantly decreased when ACT was paralleled by TBI. Compared to the group with the sole Pmel-1 transfer, the cell count of the TBI/Pmel-1-treated group was reduced by about 50% in the TdLN and spleen (Figure 2B). The addition of IL-2 was insufficient to recover the TBI-induced cell destruction. Next, we checked the graft rate of Pmel-1 regarding the cell frequency modified by TBI and/or IL-2 treatment using flow cytometry (Figure 2C,D). TBI significantly increased the proportion of Pmel-1 in the TdLN and spleen (Figure 2D,E), indicating that the TBI-induced cellular space was refilled with the transferred Pmel-1. The increased proportion was mainly due to the depletion of other tissue-resident cells in response to TBI, since there was only a minor increase in the total Pmel-1 count in these tissues (Figure 2E). IL-2 treatment also affected the proportional increase in Pmel-1 in the absence and presence of TBI.
Taken together, these data show that although TBI pre-conditioning and IL-2 treatment did not increase the genuine cell number of the transferred Pmel-1 in lymphoid tissues, the increased frequency implied an additive effect of the combination on the immunological profile of the subjects. At the same time, however, these results do not provide a clue to the absence of an additive effect between the TBI and IL-2 treatments.

TBI/IL-2 Combination Increases Tumor-Infiltrating Pmel-1 While Decreasing the Activation Intensity

The quantitative feature of tumor-infiltrating T cells accounts for the magnitude of anti-tumor responses. Therefore, we focused on the tumor-infiltrating Pmel-1 to identify the factor associated with the reduced additive effect in the TBI/IL-2 combination. Using the same conditions as for the lymphoid tissue analysis, the tumor tissues on day 14 were dissociated into single cells and analyzed using flow cytometry (Figure 3A,B). The TBI increased the frequency of the tumor-infiltrating Pmel-1, as was similarly observed in the lymphoid tissues (Figure 3C,D). Notably, IL-2 and TBI contributed to the proportional change, respectively, and the additive effect between the treatments was clearly observed in the result. We found that nearly half of the tumor-infiltrating live cells were Pmel-1 in the TBI/IL-2 combination group. Considering the differences in the tumor-reactive Pmel-1 frequency in the lymphoid and tumor tissues, there being no difference in the therapeutic effect between the TBI and TBI/IL-2 groups was contradictory (Figure 1B,C). Intriguingly, the following qualitative analysis of the tumor-infiltrating Pmel-1 resulted in an unexpected finding. We checked the PD-1 expression level on Pmel-1 among the groups and found that the TBI group had a higher proportion of PD-1+ T cells compared to the TBI/IL-2 combination (Figure 3E,F). PD-1 is one of the representative inhibitory receptors in T cells, and its expression at early timepoints (hours to days after antigen encounter) reflects appropriate activation [17]. Given that the tumor-infiltrating Pmel-1 was analyzed on day 14, which was 9 days after adoptive transfer, the significant reduction in the PD-1+ proportion by the TBI/IL-2 combination implied insufficient activation. To elucidate the reason for this consequence, we investigated the detailed immunological features in the following experiments.
TBI and IL-2 Combination Alters the Immune Cell Landscape in Melanoma-Bearing Rag1 Knock-Out Mice

The purpose of pre-conditioning is to remove immune-suppressive cells before the adoptive transfer of tumor-reactive T cells. As all pre-conditioning regimens currently in use are not target-specific, diverse immune cell subsets that are not immune-suppressive are also likely to be affected by the regimens. Therefore, we sought to identify significant changes in the subsets that may have roles in anti- and pro-tumoral effects, by which the absence of the TBI/IL-2 combination additive effect can be explained. The Rag1 knock-out strain lacks B and T cells, and its immune cell population mainly consists of natural killer (NK) cells and other myeloid lineage cells. We investigated TBI/IL-2-induced changes in these subsets by conducting multiparametric flow cytometry analysis of the cells isolated from spleen and tumor tissue. Tumor tissues were analyzed on day 14, when the size was relatively small (<100 mm³) and ideal for investigating immune cell reconstitution. We defined several immune subsets from the spleen and tumor by using CD8b, Thy1.1, PD-1, NK1.1, CD11b, CD11c, MHC-II, CD80, Ly6G, Ly6C, F4/80, and CD206 antibodies (Figures 4A and 5A). The frequency of each subset within the lymphoid/myeloid cells (Lin+; positive for Thy1.1, NK1.1, CD11b, and/or CD11c) was compared among the groups. As observed in the previous results (Figures 2E and 3D), the ratio of Pmel-1 increased in the lymphoid and tumor tissues when the animals were treated with the TBI/IL-2 combination (Figures 4B and 5B). The decreased proportion of PD-1+ Pmel-1 in the TBI/IL-2-treated tumor was also consistent with the former data (Figures 3F and 5B), indicating that these groups have an immunological status equivalent to that of the experimental setting shown in Figures 2 and 3. In the spleen, the presence of TBI significantly decreased the frequency of neutrophils and monocytes regardless of additional IL-2 treatment (Figure 4B). In contrast to these subsets, TBI increased the ratio of splenic macrophages, particularly enriching the CD206+ M2 subtype, as observed in a previous report [18]. Notably, dendritic cells (DCs), a crucial component in T cell activation, showed a predominant alteration in the ratio of conventional DC types 1 and 2 (CD11b− cDC1 and CD11b+ cDC2) sub-populations. Compared to the IL-2 treatment group, the TBI and TBI/IL-2 treatment groups increased the cDC1 to cDC2 ratio from 0.39 to 8.29 and 6.61, respectively (Figure 4C,D).
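The per-subset numbers just described are frequencies within the Lin+ gate plus a simple cDC1-to-cDC2 ratio. The sketch below shows that arithmetic on an invented event-count table; the counts are placeholders chosen only to loosely echo the reported ratios, not the study's data.

```python
import pandas as pd

# Hypothetical flow cytometry event counts within the Lin+ gate
# (Thy1.1/NK1.1/CD11b/CD11c-positive cells); invented numbers.
counts = pd.DataFrame(
    {"IL-2":     {"Pmel-1": 1200, "cDC1":  90, "cDC2": 230, "MoDC": 400},
     "TBI":      {"Pmel-1": 2600, "cDC1": 480, "cDC2":  60, "MoDC": 150},
     "TBI+IL-2": {"Pmel-1": 3400, "cDC1": 200, "cDC2":  35, "MoDC": 120}}
)

# Frequency of each subset among Lin+ cells, per treatment group (%).
freq = counts / counts.sum(axis=0) * 100
print(freq.round(1))

# cDC1-to-cDC2 ratio, the spleen statistic highlighted above.
print((counts.loc["cDC1"] / counts.loc["cDC2"]).round(2))  # 0.39, 8.0, 5.71
```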
Taken together, we observed the effect of TBI and IL-2 treatments on immune cell populations in lymphoid and tumor tissues. TBI not only altered the frequency of DCs but also reshaped the subset distribution in spleen and tumor tissue. Notably, compared to the sole TBI treatment, the TBI/IL-2 combination significantly decreased tumor-infiltrating DCs, while particularly reducing the population size of the cDC1 subset. Given that cDC1 plays an important role in tumor-specific T cell activation [23,24], these features may affect Pmel-1 activity in TBI/IL-2-treated mice. The decrease in neutrophils and monocytes in the spleen and Mo-MDSCs in the tumor after TBI treatment are other factors that can contribute to the anti-tumor efficacy in these groups, though no significant difference was observed between the TBI and TBI/IL-2 treatment groups. Discussion In the current study, we investigated the combination of TBI pre-conditioning and IL-2 post-conditioning regarding the therapeutic effect and immunological profile in ACT of murine melanoma. As expected from the previous studies that described the anti-tumor effect of these conditioning regimens [10,25], both TBI and IL-2 treatments improved the anti-tumor activity and survival rate of Pmel-1-infused mice. Intriguingly, the combination of regimens failed to improve the therapeutic outcome in contrast to the expected synergy between TBI and IL-2 conditioning. We found that while the frequency of Pmel-1 increased in the lymphoid and tumor tissues, the activation intensity of tumor-infiltrating Pmel-1 was reduced after the combination. Multiparametric flow cytometry analysis revealed alterations in the immune cell populations by the TBI/IL-2 combination. Modification of the frequencies of spleen-resident neutrophils, monocytes, macrophages, and tumorinfiltrating Mo-MDSCs were associated with TBI treatment regardless of IL-2 treatment. Notably, by comparing the TBI and TBI/IL-2 treatment groups, we identified the quantity and subset distribution of DCs as potential factors that affect Pmel-1 activation. A decrease in the frequencies of whole DCs and the cDC1 subset in tumor tissue suggests an explanatory mechanism of how the TBI/IL-2 combination reduces the additive effect in ACT of murine melanoma. DCs play a key role in the 'Cancer-Immunity Cycle' by activating T cells with tumorspecific TCRs [26]. Notably, the effect is context-dependent as the sub-populations consist of cDC1, cDC2, MoDC, and plasmacytoid DC (pDC), which are involved in various pathways of immune activation and suppression [27,28]. cDC1 is crucial in activating tumor-specific CD8 + T cells [23,24], whereas cDC2 promotes the activation of CD4 + T cells and has been known for driving anti-tumoral activities [29,30]. MoDC, which is frequently observed in tumors, is implicated in immunosuppression [31,32]. Since the current study setting lacks tumor-specific CD4 + T cells, an increased cDC1 to cDC2 ratio induced by TBI in the spleen was a better prognostic factor. Most importantly, the altered DC subset distribution in the tumor is likely to account for reduced Pmel-1 activation, since the mice that underwent TBI without IL-2 treatment showed the highest frequency of cDC1 and the lowest frequency of MoDC. Nevertheless, further study must be conducted to corroborate the hypothesis considering the controversies in MoDC function [33]. 
Additionally, considering the inhibitory role of PD-1 even in the early phase of T cell activation [17], an "insufficiently" activated Pmel-1 by the TBI/IL-2 combination could likely function better than a sufficiently activated Pmel-1 in the TBI group. A detailed longitudinal study may help explain the genuine consequences of the enhanced activation by TBI.

In addition to DCs, we found that TBI changed the landscape of diverse immune cells that drive pro- and/or anti-tumoral functions in lymphoid and tumor tissues. Neutrophils were a subset significantly affected by TBI in the spleen. A recent study by Veglia et al. showed that spleen-resident neutrophils developed PMN-MDSC-like characteristics after tumor inoculation in a mouse model [34]. Despite no significant difference in tumor-infiltrating PMN-MDSCs among the groups, we observed a 2- to 3-fold reduction in splenic neutrophils after TBI. Considering the immune-suppressive activity of PMN-MDSCs in cancer [20,21], this may account for the enhanced therapeutic efficacy by TBI. A reduction in the frequency of monocytes paralleled by the increase in splenic macrophages was another feature of TBI-treated mice. Given that the cells mainly consisted of M2 macrophages, which were associated with immune suppression [35], TBI-induced sterile inflammation and the subsequent enrichment of the M2 subtype could be a factor that drives the pro-tumoral effect in TBI treatment. Mo-MDSCs, a strong mediator of immune suppression [19,22], showed a ~10-fold reduction in frequency following TBI conditioning. Considering that ~20% of lymphoid/myeloid cells in the tumor were Mo-MDSCs in the group without TBI treatment, this alteration was likely to increase the anti-tumor effect in the current study.

Rag1 knock-out mice, the only strain used throughout this study, lack main components of the lymphoid lineage, such as T and B cells. Therefore, we could investigate the changes in various immune-related components apart from the dominant effect of the lymphodepleted space and endogenous T cells. Concomitantly, however, this means that the results found in the current study cannot be directly translated into immune-competent settings of other mouse models and human cancers. The key factor in this issue is the absence of T cells, which play essential roles in pro- and anti-tumoral activities. Endogenous CD8+ T cells that have polyclonal TCRs are a component that contributes to anti-tumor activity through the 'Cancer-Immunity Cycle' [26]. In the cycle, the killing of cancer cells by Pmel-1 results in the release of various tumor antigens and subsequently primes other endogenous T cells with a diverse TCR repertoire in lymphoid tissues. The use of the Rag1 knock-out strain and Pmel-1 in this study removed the benefit of a diverse arsenal of endogenous T cells, thus promoting cancer immune escape from single epitope-specific Pmel-1. Additionally, the lack of lymphoid cells likely widened the disparity in the tissue-intrinsic profile between the LN and spleen. For instance, the current setting showed that the proportion of non-Pmel-1 was higher in the spleen than in the lymph node (Figure 2E; 20-70% in the TdLN and 90-99% in the spleen). This feature likely resulted in the increased Pmel-1 count in the TBI-treated spleen, in contrast with the TdLN, by providing additional supplementary signals from myeloid subsets.
Regulatory T cells (Tregs), a subset of CD4+ T cells that maintain immune homeostasis and self-tolerance, are a strong suppressor of tumor-specific T cell responses and are thus an important pro-tumoral component of the immune system [36,37]. Since Tregs are absent in the current study, the effect of TBI and IL-2 on Pmel-1 may be different under immune-competent conditions. These limitations should be carefully considered before interpreting the results.

TBI intensity determines the extent of myeloablation that the subject undergoes during ACT. Previous reports showed that the intensity of lymphodepletion correlated with the efficacy of ACT, since high-intensity lymphodepletion removed endogenous cells with potential inhibitory activities [38]. However, high-dose radiation (around 10 Gy) induces myeloablation, thus requiring CD34+ hematopoietic stem cell (HSC) transplantation [39]. Therefore, we used 4 Gy TBI, which causes nonmyeloablative lymphodepletion and dispenses with HSC transplantation. Although we observed various changes in the immune profiles after low-dose TBI in this study, high-dose TBI and an HSC graft may lead to different consequences regarding synergy with IL-2 because of the mechanistic disparity [39]. Other modifications, such as local irradiation, can also result in a better outcome, as local irradiation targets tumor-infiltrating immune cells, which mostly consist of immune-suppressive populations. For instance, local tumor irradiation can spare splenic M1 macrophages with anti-tumor activity [40,41], which were depleted by TBI in the current study.

Conclusions

We showed that non-myeloablative TBI and IL-2 treatment independently contributed to ACT efficacy, whereas the combination failed to induce an additive effect between the regimens. The underlying factors related to this outcome included alterations in the DC sub-populations and insufficient activation of transferred Pmel-1. This highlights the importance of sufficient antigen presentation even if the tumor-specific T cells are abundant in the recipient. Lastly, the other immunological changes induced by TBI provide insights into the development of an effective ACT strategy in future studies.

Informed Consent Statement: Not applicable.

Data Availability Statement: All data supporting the findings of this study are available within the article.
Ror2 Signaling and Its Relevance in Breast Cancer Progression

Breast cancer is a heterogeneous disease and has been classified into five molecular subtypes based on gene expression profiles. Signaling processes linked to different breast cancer molecular subtypes and different clinical outcomes are still poorly understood. Aberrant regulation of Wnt signaling has been implicated in breast cancer progression. In particular, Ror1/2 receptors and several other members of the non-canonical Wnt signaling pathway were associated with aggressive breast cancer behavior. However, Wnt signals are mediated via multiple complex pathways, and it is clinically important to determine which particular Wnt cascades, including their domains and targets, are deregulated in poor-prognosis breast cancer. To investigate activation and outcome of the Ror2-dependent non-canonical Wnt signaling pathway, we overexpressed the Ror2 receptor in MCF-7 and MDA-MB231 breast cancer cells, stimulated the cells with its ligand Wnt5a, and we knocked down Ror1 in MDA-MB231 cells. We measured the invasive capacity of perturbed cells to assess phenotypic changes, and mRNA was profiled to quantify gene expression changes. Differentially expressed genes were integrated into a literature-based non-canonical Wnt signaling network. The results were further used in the analysis of an independent dataset of breast cancer patients with metastasis-free survival annotation. Overexpression of the Ror2 receptor, stimulation with Wnt5a, as well as the combination of both perturbations enhanced invasiveness of MCF-7 cells. The expression-responsive targets of Ror2 overexpression in MCF-7 induced a Ror2/Wnt module of the non-canonical Wnt signaling pathway. These targets alter regulation of other pathways involved in cell remodeling processing and cell metabolism. Furthermore, the genes of the Ror2/Wnt module were assessed as a gene signature in patient gene expression data and showed an association with clinical outcome. In summary, results of this study indicate a role of a newly defined Ror2/Wnt module in breast cancer progression and present a link between Ror2 expression and increased cell invasiveness.
Keywords: Wnt signaling, ror2, network integration, module, breast cancer, metastasis

Abbreviations: Basal, basal-like; DEGs, differentially expressed genes; ER+, estrogen receptor positive; FDR, false-discovery rate; Her2, ERBB2-overexpressing; KM, Kaplan-Meier; LumA, luminal A; LumB, luminal B; MFS, metastasis-free survival.

INTRODUCTION

Breast cancer is a heterogeneous disease with respect to pathological characteristics, molecular profiles, and prognoses. Gene signatures derived from gene expression profiles proved to be useful to separate breast cancers into distinct molecular subtypes. Based on the PAM50 gene signature, five subtypes have been defined: basal-like (Basal), ERBB2-overexpressing (Her2), luminal A (LumA), luminal B (LumB), and normal-breast-like breast cancer (1,2). They have been associated with significant differences in clinical outcome in terms of developing distant metastasis and overall survival (3). Furthermore, these subtypes vary in activation states of multiple signaling pathways, among them the Wnt signaling pathway. Aberrant regulation of Wnt signaling has been implicated in breast cancer progression (4), and expression of a number of important Wnt pathway members has been shown to be altered in different molecular subtypes (5). However, Wnt signals are channeled through several distinct cascades. Activation of the canonical, β-catenin-dependent Wnt pathway is characterized by the accumulation of β-catenin in the cytosol and its translocation to the nucleus. Subsequent transcription changes determine cell survival and proliferation (6). In contrast, alternative non-canonical Wnt pathways mediate β-catenin-independent signals. Multiple non-canonical Wnt ligands bind receptors, such as Ror1, Ror2, Ryk, and several members of the Frizzled receptor family, in a rather promiscuous way. Three main cascades can be distinguished: Wnt/Ror signaling, Wnt/Ca²⁺ signaling, and Wnt/planar cell polarity (PCP); however, these cascades are greatly intertwined (7). For example, the Wnt5a ligand can bind Ror1/Ror2 tyrosine kinase receptors, which activate Jun-N-terminal kinase (Jnk). Subsequently, this initiates transcription via the c-Jun transcription factor and can inhibit β-catenin-dependent Wnt signaling. Moreover, Wnt5a can also traffic signals toward the PCP cascade via RhoA, Rac, and Cdc42. The outcome of non-canonical Wnt signaling in general is linked to cytoskeletal rearrangements and changes in cell motility (7-10). Several particular non-canonical pathway members have been associated with aggressive breast cancer subtypes.
For instance, Wnt5a and Wnt5b were found to be overexpressed in basal-like MDA-MB-231 cells compared to less aggressive LumA MCF-7 cells, and their expression levels were also elevated together with Ror1/Ror2 in breast cancer brain metastases (11). Furthermore, breast cancer patients expressing Ror1 and Ror2 have been reported to show poor survival (12,13). However, the specific outcomes of distinct Wnt signaling pathways triggered by a particular ligand-receptor binding are still poorly understood in the context of breast cancer. Here, we aim to further investigate activation and outcome of Ror2-dependent non-canonical Wnt signaling in breast cancer. To that end, we used the weakly invasive, estrogen receptor positive (ER+) breast cancer cell line MCF-7 as a model system for intervention experiments. The Ror2 receptor and the ligand Wnt5a were chosen as non-canonical Wnt pathway members for the perturbation of MCF-7 cells. To explore the effect of these perturbations, the invasive capacity of the cells was measured and the mRNA of the cell lines was profiled. The RNA sequencing (RNA-Seq) data were further analyzed in a bioinformatic framework by integration with existing Wnt signaling networks. The resulting Ror2/Wnt module was further explored in independent gene expression data of breast cancer patients in order to verify the involvement of non-canonical Wnt signaling in metastasis development (for an overview of the experimental procedure/workflow steps, see Figure 1).

Patient Gene Expression Data

The breast cancer patient data are a collection of 10 public microarray datasets measured on Affymetrix Human Genome HG-U133 Plus 2.0 and HG-U133A arrays. The datasets were retrieved from the Gene Expression Omnibus (GEO) (14) data repository, accession numbers GSE25066, GSE20685, GSE19615, GSE17907, GSE16446, GSE17705, GSE2603, GSE11121, GSE7390, and GSE6532. Each dataset was processed using the RMA probe-summary algorithm (15), and only samples with metastasis (or distant relapse)-free survival annotation were selected. The datasets were combined on the basis of HG-U133A array probe names, and quantile normalization was applied over all datasets. Breast cancer molecular subtypes for the patient samples were predicted by fitting a single sample predictor as implemented in the genefu R-package (16,17) at a prediction strength threshold of 0.5, using the PAM50 intrinsic gene list (1).

Wnt Network Models

Four previously published network models (18) represent distinct Wnt signaling cascades: canonical Wnt signaling, non-canonical Wnt signaling, inhibition of canonical Wnt signaling, and regulation of Wnt signaling pathways. Briefly, these models were constructed based on data from multiple pathway databases as directed signaling graphs, with nodes corresponding to genes and edges corresponding to activation or inhibition processes. The network models can also be utilized as simple gene sets consisting of the respective graph node labels.

Immunohistochemistry

To investigate the hormone receptor status, immunohistochemistry was performed. Cells were centrifuged and washed in PBS. They were then resuspended in PBS (1 × 10^6 cells/ml); 200 µl were placed on the slide, centrifuged at 800 rpm for 5 min, and then dried. Estrogen receptor (ER), progesterone receptor (PR), and Her2 status were determined from the routine histopathological workup using immunohistochemical staining.
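Returning to the patient-data processing described at the start of this section: the dataset-merging step relies on quantile normalization, which can be sketched in a few lines of base R. This is a generic rank-based version under the assumption that expr is a genes x samples matrix of RMA-summarized values; it is not the paper's exact code.

    # Quantile normalization: force every sample (column) onto a common
    # reference distribution, here the mean of the per-rank sorted values.
    quantile_normalize <- function(expr) {
      ranks <- apply(expr, 2, rank, ties.method = "min")  # simplified tie handling
      means <- rowMeans(apply(expr, 2, sort))             # reference distribution
      normalized <- apply(ranks, 2, function(r) means[r])
      rownames(normalized) <- rownames(expr)
      normalized
    }

After this transform, every sample shares the same empirical distribution, which is what makes the combination of expression values across the 10 datasets meaningful.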
The monoclonal mouse anti-human ERα antibody (#1D5) as well as the monoclonal mouse anti-human PR antibody (#636; both DAKO, Denmark) were used at a dilution of 1:100, and the rabbit monoclonal Her2 antibody (#SP3, Thermo Scientific, UK) at a dilution of 1:200. For all three antibodies, a standardized immunohistochemical staining technique was performed, including a 90-min heat epitope retrieval using the immunostainer followed by a 45-min incubation with the specific antibody.

RNA Deep Sequencing

Library preparation for RNA-Seq was performed using the TruSeq Stranded Total RNA Sample Preparation Kit (Illumina, RS-122-2201) starting from 1,000 ng of total RNA. Accurate quantitation of cDNA libraries was performed using the QuantiFluor dsDNA System (Promega). The size range of the final cDNA libraries was determined by applying the SS-NGS-Fragment 1-6,000 bp Kit on the Fragment Analyzer from Advanced Analytical (320 bp). cDNA libraries were amplified and sequenced using the cBot and the HiSeq2000 from Illumina (SR; 50 bp; 35 million reads per sample). Sequence images were transformed with the Illumina BaseCaller software to bcl files, which were demultiplexed to fastq files with CASAVA v1.8.2.

Quantitative Real-Time PCR (qRT-PCR)

Total RNA from empty vector (pcDNA) and Ror2-overexpressing (pRor2) cells was extracted using the High Pure RNA isolation kit (Roche). For each sample, 1 µg of RNA was transcribed into cDNA with the iScript cDNA synthesis kit (Bio-Rad). Gene expression was measured by SYBR green detection on the ABI PRISM 7900HT system (Applied Biosystems) from 10 ng cDNA per reaction with gene-specific primers. Data were analyzed with the SDS software version 2.4 (Applied Biosystems), and target gene expression was quantified with the ΔΔCt method after normalization to the two housekeeping genes HPRT1 and GNB2L1. Primer sequences are as follows:

Statistical and Bioinformatic Analyses

RNA-Seq Processing and Differential Analysis

RNA sequencing data were first quality checked via FastQC (Babraham Bioinformatics). The reads were then mapped against the reference genome GRCh37 with the STAR RNA-Seq alignment tool (23), while incorporating database information from Ensembl ver. 37.73 during the reference indexing step. Gene-level abundances were estimated using the RSEM algorithm (24). Further processing steps were performed using the edgeR (25) R-package. Non-expressed genes were filtered out by keeping the genes with at least one count-per-million reads in at least three samples. Differential genes between different conditions were identified by fitting negative binomial generalized linear models (26). Gene p-values were adjusted for multiple testing using the Benjamini-Hochberg method (27), resulting in false-discovery rate (FDR) values; significantly differentially expressed genes (DEGs) were considered at the FDR < 0.05 level. The raw RNA-Seq data have been submitted to the GEO repository under the accession number GSE74383 for the MCF-7 conditions and under the accession number GSE96637 for the MDA-MB231 conditions.

Gene Set Enrichment and Network Integration Analyses

Differential targets identified in the analysis of RNA-Seq data were further subjected to enrichment and network integration analyses. To test enrichment of pathways, a simple gene set approach was applied (28). In particular, over-representation of the common target genes was tested using Fisher's exact test (29), whereas rank-based enrichment testing of a full list of gene p-values was performed using the Wilcoxon rank-sum test.
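The two enrichment tests named above are standard; a minimal R sketch of both follows, with made-up placeholder counts and vectors (none of these numbers come from the paper):

    # Over-representation: 2x2 table of DEG targets vs. gene set membership
    #                 in set   not in set
    #   targets         a          b
    #   non-targets     c          d
    a <- 40; b <- 2028; c <- 60; d <- 10000          # illustrative counts only
    fisher.test(matrix(c(a, b, c, d), nrow = 2, byrow = TRUE),
                alternative = "greater")$p.value

    # Rank-based test on a full list of gene p-values:
    # in_set / out_set: -log10 p-values of genes inside and outside the gene set
    wilcox.test(in_set, out_set, alternative = "greater")$p.value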
The network integration analysis steps were performed as described in Ref. (30). In brief, the common targets were first mapped onto the nodes of the non-canonical Wnt signaling network model, and the nodes induced by the mapped targets were used as terminal nodes for the Steiner tree analysis as implemented in the SteinerNet R-package (31). In this analysis, the Steiner tree, a minimal-size subgraph connecting all terminal nodes, is searched within the undirected network based on a shortest-path approximation, and so-called Steiner nodes are introduced to ensure connectivity. All nodes of the Steiner tree were used to extract an induced subnetwork containing all original directed edges. For visualization purposes, the range for the node color coding was limited to a ±2-fold change.

Clustering and Survival Analyses

For the analysis of public gene expression data of breast cancer patients, complete-linkage hierarchical clustering was performed based on Pearson correlation as the distance measure. When multiple probes corresponded to a single gene, the probe with the highest average expression level was used to represent the gene in the clustering analysis. The patient samples were clustered based on a gene signature originating from the network integration analysis. Distinct patient clusters within the dendrogram were identified using the dynamic hybrid cut algorithm implemented in the function cutreeDynamic from the dynamicTreeCut R-package (32). The clusters were detected in a bottom-up manner based on the dendrogram shape and the correlation dissimilarity information among the patients. The minimum cluster size parameter was set as 12.5% of the patients when the whole dataset was clustered and 25% of the patients when only patients of a particular molecular breast cancer subtype were clustered. Resulting patient clusters were subjected to a Kaplan-Meier (KM) analysis of metastasis-free survival (MFS). KM curves were compared using a log-rank test implemented in the survival R-package (33). When plotting the KM curves, only the first 15 years were visualized.

Clustering and Survival Analyses Based on Random Signatures

For control purposes, the significance of 1,000 random signatures was investigated in the same manner as the original signature from the network integration analysis. Random signatures were generated by sampling 76 genes 1,000 times from a pool of 4,140 KEGG pathway genes. The gene pool was created by merging all KEGG pathway gene sets and by limiting the pool to the unique genes represented by an HG-U133A array probe in the patient gene expression dataset. Subsequently, all steps of hierarchical clustering, detection of clusters in the dendrogram, and KM analysis of MFS were repeated for each random signature.

RESULTS

To investigate the effect of non-canonical Wnt signaling on cancer progression and downstream signaling, the Wnt5a ligand and the membrane receptor Ror2 were chosen as the non-canonical Wnt pathway members to be perturbed. In particular, MCF-7 cells were stably transfected either with an empty vector (pcDNA) or with a Ror2 overexpression construct (pRor2), and successful transfection was confirmed by flow cytometry (Figure 2A), qRT-PCR (Figure 2B), and western blotting (Figure 2C). Overexpression of Ror2 led to an activation of non-canonical Wnt signaling in MCF-7 cells, with an increase in PKC and RhoA expression as well as Jnk phosphorylation (Figure 2C; Figure S1 in Supplementary Material). Wnt5a stimulation increased total JNK levels in control cells; however, it had no additional stimulatory effect in Ror2-overexpressing cells.
Moreover, elevated expression of Ror2 also increased MCF-7 invasiveness (Figure 2D). The same effect was observed when empty vector cells were stimulated with Wnt5a. Interestingly, the combination of both, Ror2 overexpression and additional stimulation with its ligand Wnt5a (pRor2 + Wnt5a condition), further enhanced cancer cell invasion compared to Ror2 overexpression alone (Figure 2D). This suggests that at least a part of the pro-invasive effect of Wnt5a is mediated through the Ror2 receptor. Since it has been reported that under distinct stimulation MCF-7 cells can change their phenotype from hormone receptor positive to triple negative (34), we investigated whether this is true for Ror2 overexpression and might explain the gain in cell invasiveness. However, we did not detect any changes in the hormone status of the cells as analyzed by immunohistochemistry (Figure 2E). Therefore, we decided to characterize in depth the gene expression profiles of the cells with induced Ror2 overexpression and Wnt5a stimulation in order to identify novel targets which might be involved in the increased invasiveness of the cells. The following four conditions were selected for further analysis by RNA-Seq: control MCF-7 cells with the empty vector (pcDNA), cells stimulated with Wnt5a (pcDNA + Wnt5a), cells with stable overexpression of Ror2 (pRor2), and a combination of both perturbations (pRor2 + Wnt5a).

mRNA Profiling Reveals Targets of Ror2 Overexpression

To quantify the gene expression changes linked with the observed pro-invasive effects of Wnt5a and Ror2 in MCF-7 cells, each of the four conditions was profiled in three replicates using RNA-Seq. The library size of the sequenced samples ranged from 35 to 55 million reads (Table 1). In the differential analysis, gene expression profiles of the different conditions were compared to identify downstream targets of the distinct perturbations. To this end, five comparisons (Table 2) were performed to identify DEGs. The two comparisons testing for the effect of the Wnt5a stimulation, with and without the presence of the overexpressed Ror2 (pcDNA vs. pcDNA + Wnt5a and pRor2 vs. pRor2 + Wnt5a), yielded rather low numbers of significant DEGs (Figure 3A; Table S1 in Supplementary Material). The single significant DEG detected in both comparisons was MUC5AC. These low numbers of DEGs indicate that Wnt5a stimulation had only a moderate effect on the gene expression changes of the MCF-7 cell line and suggest that it mediates its pro-invasive effects rather at the protein level. The three comparisons that tested for the impact of the Ror2 overexpression (pcDNA vs. pRor2, pcDNA vs. pRor2 + Wnt5a, pcDNA + Wnt5a vs. pRor2 + Wnt5a) demonstrated the strongest effects, resulting in 2,860, 3,729, and 3,022 DEGs, respectively (Table S2 in Supplementary Material). Stable targets of Ror2 overexpression were determined in a Venn analysis as the overlap of these three differential gene lists, which resulted in 2,068 common targets (Figure 3B). We selected the top five genes from this overlap to validate the observed gene expression changes by qRT-PCR. Indeed, we were able to detect a significant upregulation of FAT1, VIL1, HNF4G, and WIPF1 in Ror2-overexpressing MCF-7 cells compared to control cells, whereas the upregulation of LCP1 was not confirmed (Figure 3C).
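The differential analysis just described follows the edgeR workflow from the Methods (negative binomial GLMs, Benjamini-Hochberg-adjusted FDR < 0.05). A minimal R sketch for one comparison plus the Venn-style overlap, with illustrative object names that are our assumptions rather than the paper's code:

    library(edgeR)

    # counts: gene x sample matrix of RSEM-estimated counts; group: condition factor
    y <- DGEList(counts = counts, group = group)
    keep <- rowSums(cpm(y) >= 1) >= 3            # >= 1 cpm in >= 3 samples
    y <- calcNormFactors(y[keep, , keep.lib.sizes = FALSE])
    design <- model.matrix(~ group)
    y <- estimateDisp(y, design)
    lrt <- glmLRT(glmFit(y, design), coef = 2)   # negative binomial GLM test
    res <- topTags(lrt, n = Inf)$table           # includes BH-adjusted FDR column
    degs <- rownames(res)[res$FDR < 0.05]

    # Stable Ror2 targets = overlap of the three Ror2 comparisons (2,068 in the paper)
    common <- Reduce(intersect, list(degs_cmp1, degs_cmp2, degs_cmp3))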
Ror2 Targets Are Enriched in the Non-canonical but Not in the Canonical Wnt Gene Set

To explore the gene list of 2,068 Ror2 targets in the context of different Wnt signaling cascades, we performed enrichment analysis. The four Wnt models representing distinct Wnt signaling pathways were used as gene sets for enrichment testing. To further scrutinize the contribution of the upregulated and downregulated genes to the enrichment, the targets were sorted based on positive and negative fold-changes into three lists: all, up, and down. Two Wnt pathways were detected as over-represented in the list of all targets: Non-canonical Wnt signaling and Regulation of Wnt signaling (Table 3). Whereas the Non-canonical Wnt signaling gene set was significant for the all target list as well as for the upregulated targets, the Canonical Wnt signaling gene set was not significant for any target list.

Ror2 Targets Affect Cell Remodeling Processes and Cell Metabolism

We further investigated the target list in an enrichment analysis beyond the context of Wnt signaling, exploring other signaling and metabolic processes altered by Ror2 overexpression. We tested pathway gene sets from the KEGG database and identified 16 pathways enriched in the all list, 18 pathways enriched in the up list, and no pathway enriched in the down list (Figure 4). This resulted in a collection of 26 enriched pathways.

Network Integration Reveals a Ror2/Wnt Module: A Ror2-Expression-Responsive Subnetwork of the Non-canonical Wnt Pathway

The list of 2,068 targets was further used for network integration analysis. The results of the Wnt pathway enrichment analysis suggested activation of non-canonical Wnt signaling in the gene expression data. Therefore, the non-canonical Wnt network model was chosen for the subsequent network integration in order to identify a module induced by the Ror2 overexpression targets. The underlying non-canonical Wnt model is a signaling network of 489 nodes representing pathway genes interconnected by activation and inhibition edges. First, the 2,068 target genes were mapped onto the nodes of the non-canonical Wnt network, which resulted in 66 induced nodes. To link these induced nodes within the network structure, the Steiner tree algorithm was employed. In this step, 18 connecting nodes, so-called Steiner nodes, were introduced that do not embody differential targets. Subsequently, the induced subnetwork including all original edges between the 84 nodes was extracted (Figure 5; Table S3 in Supplementary Material). This subnetwork represents the module of the non-canonical Wnt pathway regulated by the overexpressed Ror2 receptor (hereinafter referred to as the Ror2/Wnt module). The Ror2/Wnt module revealed several important Wnt pathway members: the differentially regulated ligand WNT11, the receptors FZD5 and FZD4, and the signal transducer DVL1, as well as WNT5A, DVL2, and CD36 as Steiner nodes interconnecting the differential targets.

Predicted Breast Cancer Molecular Subtypes Show Metastatic Differences

We considered the members of the Ror2/Wnt module to be candidate genes of the non-canonical Wnt pathway that confer an aggressive phenotype to MCF-7 breast cancer cells after Ror2 overexpression. Therefore, we further aimed to assess the impact of the Ror2/Wnt module genes in the clinical context of metastatic breast cancer. To this end, we first collected available breast cancer patient expression profiles annotated with MFS follow-ups. Ten public gene expression datasets of patient samples were assembled into a compendium dataset.
Annotations of a metastasis event with time-to-metastasis (or distant recurrence/distant relapse) information were compiled for 2,075 patients. In this cohort, the molecular breast cancer subtypes were predicted using the PAM50 gene signature. For 1,724 patients (out of 2,075), one of the following subtypes was assigned: Basal, LumA, LumB, or Her2 (Table 4). As no sample was predicted as the normal-breast-like subtype above the prediction strength threshold, we did not consider this subtype for further analyses. As the molecular breast cancer subtypes are known to have different prognoses, we investigated differences in MFS among the predicted subtypes as a quality benchmark step. We identified the highest 5-year MFS rate of 0.92 for the LumA patients, whereas for the Basal-like and Her2 subtype patients the rates were the lowest: 0.74 and 0.61, respectively (Table 4). We further tested the KM curves of the predicted patient groups and showed prognostic significance of breast cancer subtypes in terms of developing metastasis (Figure S2 in Supplementary Material).

Translation of Ror2/Wnt Module Genes to Breast Cancer Patient Data

The identified 84 genes in the Ror2/Wnt module were used as a pathway-based gene signature to assess prognostic power for metastasis development in breast cancer. Out of these 84 genes, 76 could be mapped to the patient expression data. Expression levels of these 76 genes of the Ror2/Wnt module were utilized for the correlation distance-based clustering analysis (Figure 6A). To determine the number of clusters in the patient dendrogram, the dynamic hybrid algorithm was employed and identified four distinct clusters. These four clusters exhibited significant differences in MFS (Figure 6B), with Cluster 3 (magenta) having a markedly worse prognosis. This cluster comprised a majority of Basal subtype patients; however, all clusters contained mixtures of two or more subtypes (Figure 6C). The mixed distribution of breast cancer subtypes across the four patient clusters in the dendrogram motivated us to explore the MFS within the individual subtypes with regard to the Ror2/Wnt module expression patterns. Therefore, we performed subtype-specific patient clustering followed by KM analysis of MFS, based on the same Ror2/Wnt module-based gene signature as previously used for the whole cohort. Clustering and cluster-detection analyses revealed two clusters in each of the patient groups of the LumA, LumB, and Basal subtypes (Figure 7; Figures S3-5 in Supplementary Material). The Her2 subtype was not included due to the relatively small number of patients. For the LumA and Basal subtypes, the patient subgroups showed significant differences in MFS (p = 0.0377 and p = 0.0145, respectively). In contrast, the two subgroups detected within the LumB subtype showed no difference in MFS (p = 0.9775). LumA subtype patients grouped in the cluster of 312 samples (light-green) had a better prognosis than the 404 patients in the second LumA cluster (deep-sea-blue). In the Basal subtype-specific analysis, the two clusters exhibited a significant difference, with the KM curve of the smaller patient cluster (115 samples, magenta) showing a worse metastasis prognosis than that of the bigger cluster of 174 patients (brown). Furthermore, we compared the performance of the Ror2/Wnt module gene signature to the prognostic performance of random signatures in the same data (the clustering-plus-log-rank pipeline used throughout these comparisons is sketched below).
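A condensed R sketch of that pipeline, using the packages named in the Methods; the matrix and vector names, and the wrapper around the steps, are illustrative assumptions rather than code from the paper:

    library(dynamicTreeCut)
    library(survival)

    # expr: genes x patients matrix; sig: signature gene IDs;
    # mfs_time / mfs_event: follow-up time and metastasis indicator
    signature_pvalue <- function(expr, sig, mfs_time, mfs_event, min_frac = 0.125) {
      x  <- expr[intersect(sig, rownames(expr)), ]
      d  <- as.dist(1 - cor(x))                  # Pearson correlation distance
      hc <- hclust(d, method = "complete")       # complete linkage
      cl <- cutreeDynamic(dendro = hc, distM = as.matrix(d), method = "hybrid",
                          minClusterSize = round(min_frac * ncol(x)))
      sd <- survdiff(Surv(mfs_time, mfs_event) ~ factor(cl))   # log-rank test
      pchisq(sd$chisq, df = length(unique(cl)) - 1, lower.tail = FALSE)
    }

    # Original signature vs. 1,000 random signatures of the same size:
    p_orig <- signature_pvalue(expr, module_genes, mfs_time, mfs_event)
    p_rand <- replicate(1000, signature_pvalue(expr, sample(kegg_pool, 76),
                                               mfs_time, mfs_event))
    mean(p_rand < p_orig)   # fraction of random signatures outperforming the original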
This comparison step was taken in order to investigate the prognostic superiority of the original signature over random ones and thus to ascertain its clinical relevance. One thousand gene signatures of the same size as the original (76 genes) were randomly sampled from the pool of 4,140 genes from KEGG pathways. For each signature, the analysis pipeline of hierarchical clustering, automatic detection of patient clusters, and KM analysis of MFS was executed, yielding a log-rank p-value that describes the significance of the difference in MFS between the detected patient groups. The same set of 1,000 random signatures was applied to the whole cohort as well as to the LumA and Basal subsets. The resulting p-values were log-transformed (−log10) and visualized together with the corresponding p-values of the original Ror2/Wnt module gene signature (Figure 8). Of the 1,000 random signatures, 63.2, 34.9, and 69.9% were detected as significantly prognostic (p < 0.05) in the whole cohort, the LumA group, and the Basal group, respectively. Further, we checked the percentage of random signatures that performed better than the original Ror2/Wnt module-based signature: in the whole patient cohort, 9% of the random signatures were more strongly associated with MFS than the original signature (p = 9.53e−05). In the groups of the LumA and Basal subtypes, 30.5 and 42.9% of random signatures outperformed the original (p = 0.0377 and p = 0.0145), respectively.

Knockdown of Ror1 in MDA-MB231 Cells Decreases Non-canonical Wnt Signaling

In order to investigate whether Ror2 and Wnt5a also have pro-invasive effects in triple-negative MDA-MB231 cells, we overexpressed Ror2 in the cells and confirmed successful transfection by qRT-PCR (Figure 9A). However, cell invasion assays showed that, in contrast to MCF-7, MDA-MB231 cells are already highly invasive and cannot be stimulated any further, either by Ror2 overexpression or by the addition of Wnt5a (Figure 9B). Interestingly, MDA-MB231 cells do not express any endogenous Ror2 (Figure 9C); however, it has been shown previously that they instead express its family member Ror1, which is important for the invasive phenotype of the cells (21). Therefore, we performed a stable knockdown of Ror1 in these cells (Figure 9D). While this had no effect on canonical Wnt signaling (Figure 9E), RhoA levels and JNK phosphorylation were decreased in the knockdown cells (Figure 9F). Similar to the MCF-7 cells, we were interested in a large-scale identification of the downstream gene expression changes, and therefore these two conditions were selected for RNA-Seq: MDA-MB231 cells transfected with a non-silencing shRNA (shCTL) and cells transfected with a Ror1 shRNA (shRor1). The library size of the sequenced samples ranged from 47 to 60 million reads (Table 5). However, in the differential gene expression analysis comparing shCTL vs. shRor1 samples, we identified only two significant DEGs: the proto-oncogene AGR2 and a gene for the uncharacterized protein RP11-1012A1.4. Despite this low number of DEGs, we aimed to further explore the entire lists of measured genes in the context of distinct Wnt signaling cascades using rank-based gene set enrichment procedures. We detected significant enrichment of the Non-canonical Wnt signaling gene set (p = 0.035) and the Inhibition of canonical Wnt signaling gene set (p = 0.002), whereas the Canonical Wnt signaling (p = 0.056) and Regulation of Wnt signaling (p = 0.870) gene sets were not significant.
This indicates that although the expression changes after Ror1 knockdown were only moderate, the decreased invasiveness could be associated with altered activity of non-canonical Wnt signaling.

DISCUSSION

Activation states of canonical and non-canonical Wnt signaling pathways in breast cancer have so far eluded detection. Based on the previous results of Klemm et al. (11), we hypothesized that the non-canonical Wnt pathway is critical for the progression of breast cancer. Here, we performed pathway interventions at the ligand and membrane receptor level in order to elucidate the mechanism and outcome of this signaling cascade. In particular, we used the weakly invasive, ER-positive MCF-7 cells, which were transfected with an empty or Ror2 overexpression vector and optionally stimulated with recombinant Wnt5a in parallel. At the phenotypic level, the major consequence of the individual as well as combined perturbations was increased cell invasion. Therefore, to explore the large-scale effects of these perturbations at the gene expression level, the mRNA of the cells was sequenced and DEGs were identified. Studies on the role of Wnt5a in breast cancer have reported contradictory evidence, with Wnt5a either enhancing or suppressing invasiveness of different breast cancer cells (35). Here we showed that Wnt5a has a clear pro-invasive effect on the MCF-7 cells. However, the numbers of differential targets of Wnt5a stimulation identified in the RNA-Seq data were rather low (up to 11 genes). As we could not observe major changes in gene expression, the Wnt5a ligand could potentially mediate the signals leading to the phenotypic changes by activation of proteins in the PCP pathway (36), as opposed to the transcription of new genes. The top-ranked differentially expressed target, which was upregulated after Wnt5a stimulation, is the ROR2 gene. Wnt5a is known to bind Ror2 (37), and this Ror2 upregulation suggests a possible positive feedback loop. The single differential gene common to both comparisons testing for the Wnt5a stimulation effect in MCF-7 was MUC5AC, mucin 5AC. This gene has been studied in the context of colorectal (38) as well as pancreatic cancer (39), and in the latter cancer type its expression was associated with tumor growth. However, to the best of our knowledge, expression of MUC5AC has so far neither been linked to invasive breast cancer nor been reported as a potential target of Wnt5a signaling. The Ror2-overexpressing cells also showed a significant increase in their invasiveness compared to the control cell line. Similar observations have been made not only for MCF-7 but also for Her2-positive SK-BR-3 cells (21), thus pointing to a general effect of Ror2 on breast cancer cell invasiveness. However, invasion of triple-negative MDA-MB231 cells was not enhanced further by Ror2 overexpression, probably due to their already high invasive potential. In contrast, knockdown of Ror1, which is highly expressed endogenously in these cells, diminished non-canonical Wnt signaling, as shown by western blotting, and reduced the invasive potential of the cells (21). Although by RNA-Seq we did not detect any major gene expression changes between MDA-MB231 control and Ror1 knockdown cells, we identified enrichment of the non-canonical Wnt signaling gene set, which could be associated with the previously observed decrease in MDA-MB231 cell invasiveness.
Interestingly, both Ror1 and Ror2 have been suggested as receptors for Wnt5a (40) and have been linked to breast cancer progression, and their expression was previously observed in breast cancer brain metastases (11). Combined treatment of MCF-7 cells with both Wnt5a stimulation and Ror2 overexpression exhibited an even stronger pro-invasive effect than the single perturbations. We assume that, in the presence of both the ligand and its receptor, the non-canonical Wnt5a/Ror2 signaling cascade was highly stimulated (37), which then further drove the invasiveness. However, without Wnt5a stimulation the MCF-7 cells express no or low levels of Wnt5a (11), and no changes were detected in WNT5A gene expression levels after Ror2 overexpression. This raises an intriguing question as to which other ligand could have mediated the signaling via Ror2 and the subsequent cell invasion in the Ror2-overexpressing cells. An interesting candidate could be Wnt11. Although it is unclear whether the Wnt11 protein was present at the time of perturbation, the WNT11 gene was subsequently transcribed and could potentially act as a ligand mediating non-canonical signals via Ror2. However, Wnt11/Ror2 signaling has been described only in zebrafish (41) and is not known in humans so far. Nevertheless, Wnt11 itself has been reported to be involved in tumor progression of several cancer types (42-45). At the transcriptomic level, the three differential comparisons that tested for the Ror2 overexpression targets demonstrated, in contrast to the low number of genes affected by the Wnt5a exposure, a stronger effect of this perturbation. The overlap of the three gene lists revealed 2,068 common DEGs, which represent stable targets of Ror2 overexpression independent of whether the Wnt5a stimulation was present or not. We consider these common targets to be candidate genes that confer the invasive phenotype to MCF-7 breast cancer cells. To gain further insight into the biology underlying this fairly long list of expression-responsive Ror2 targets, we performed enrichment and network integration analyses. To test the enrichment of Wnt signaling gene sets and KEGG pathways in the common target list, the over-representation analysis approach was utilized. In the context of the four different Wnt gene sets, the one representing the Non-canonical Wnt signaling pathway was detected as significant in the all target list as well as in the sublist of only upregulated targets. This suggests that the Ror2 overexpression induces activation of non-canonical Wnt signaling, which is in accordance with the upregulation and/or activation of several non-canonical Wnt proteins that we observed in the MCF-7 Ror2-overexpressing cells by western blotting. Besides the non-canonical Wnt gene set, the Regulation of Wnt signaling gene set was significantly over-represented in the target list, which indicates that Ror2 overexpression also modulates the activity of pathways acting upstream of the Wnt ligands. Enrichment analysis of KEGG gene sets suggests that the observed increase of cell invasiveness induced by Ror2 could be driven via activation of signaling pathways such as regulation of the actin cytoskeleton (46), the chemokine signaling pathway (47,48), and ECM-receptor interaction (49). Furthermore, the detection of multiple metabolic pathways supports the evidence of a regulatory connection between non-canonical Wnt signaling and cancer cell metabolism (50,51).
Although the Wnt signaling pathway gene set from KEGG was not found enriched, this gene set does not differentiate between the canonical and non-canonical Wnt branches and is therefore less specific. In contrast, the Calcium signaling pathway, which shares functional overlap with β-catenin-independent Wnt signaling (52), was identified as significantly enriched in the upregulated as well as all target lists, which also points toward the activation of non-canonical Wnt signaling. As the results of the enrichment and western blot analyses indicate an induction of the non-canonical Wnt pathway, we further utilized the previously constructed non-canonical Wnt signaling network model (18) for the integration of Ror2 targets. We chose an approach of direct projection of the targets onto the signaling network nodes combined with Steiner tree analysis. The identified differentially regulated subnetwork can be considered a non-canonical Wnt module responsive to the signals channeled via the Ror2 receptor. Into this Ror2/Wnt module, Steiner nodes were introduced, which do not embody expression-responsive targets; however, they represent important connector genes that play a central role in the network (53). When we focused on the Steiner nodes within the module, we found several of these genes to have already been investigated in the context of aggressive breast cancer, such as CD36 (54), CSNK1D (55), WNT5A along with DVL2 (56), and PPARGC1A (57). In summary, this Ror2/Wnt module highlights the importance of non-canonical Wnt signaling, and in addition to the Ror2 targets it reveals further key genes relevant for breast cancer progression. Furthermore, we were interested in whether this pathway module is indeed associated with the observed increased invasiveness of the breast cancer cells. To explore this association in a clinical context, we applied the Ror2/Wnt module genes as a prognostic signature in the MFS analysis of a patient cohort. To this end, the expression profiles of patients were collected across 10 public datasets, and metastasis event and follow-up annotations were compiled, creating one large compendium dataset. The breast cancer molecular subtypes (LumA, LumB, Her2, and Basal) were predicted within this cohort using the PAM50 signature. However, predicted classifications based on gene signatures should be regarded with caution (58). Therefore, to check the reliability of this stratification at a basic biological level, we investigated the four predicted subtype groups in the MFS analysis. The results are consistent with the relapse-free survival observed in the study of Parker et al. (1), which confirms the association of breast cancer subtypes with different metastatic potentials. Along these lines, regarding the 5-year MFS, the LumA subtype showed the best prognosis, as expected (59), whereas the Her2 subtype was found to show the worst prognosis, followed by the Basal subtype patients. The Ror2/Wnt module-based gene signature used for the clustering analysis of patient expression profiles revealed four patient subgroups with varying prognosis. The patient cluster with the worst prognosis contains a major proportion of the Basal subtype patients, which is consistent with an increased likelihood of metastasis development in triple-negative breast cancers (60). Also, the study of Smid et al. (5) suggested that different activation states of Wnt signaling are associated with different molecular subtypes.
However, each subtype was distributed across two or more clusters, which indicates that these clusters do not simply mirror the biology underlying the breast cancer molecular subtypes. Therefore, we applied the Ror2/Wnt module-based gene signature within the individual subtypes to explore whether the subtype-specific differences in metastasis development may be associated with varying expression levels of the module genes. In both the LumA and Basal subtypes we found two patient subgroups significantly differing in MFS. Within the Basal subtype, multiple subgroups have previously been identified and linked to remarkable biological differences (61,62). Here, we further demonstrated that the expression of the Ror2/Wnt module genes has prognostic power in this breast cancer subtype. The luminal patients, in contrast to the aggressive subtypes, are characterized by continuous relapses occurring in later years (63), which is reflected in the proximity of the two KM curves of the LumA subgroups within the initial years. In contrast to the LumA and Basal subtypes, in the LumB subtype the expression patterns of the module genes are not associated with metastasis development. Although these results demonstrated the prognostic potential of the Ror2/Wnt module gene signature, the study of Venet et al. (64) has suggested that random gene signatures are also able to separate breast cancer patients into groups with significantly different outcomes. Therefore, to ascertain the clinical relevance of the results, we compared the performance of the original Ror2/Wnt module gene signature to randomly generated signatures of the same size sampled from a pool of KEGG database genes. The original signature proved to be more strongly associated with metastasis outcome than the median random signature in the whole cohort as well as in the LumA and Basal subgroups. While in the whole cohort the original signature ranked among the top 9% of the random signatures, in the LumA and Basal subtypes 30.5 and 42.9% of the random signatures, respectively, were more strongly related to metastasis-free survival than the original. Therefore, in view of the subtype-specific results, the actual clinical utility of the Ror2/Wnt module-based signature seems rather limited. Nevertheless, the analysis of patient data showed an association of the signature with metastasis development complementary to the results of the invasion assays, thus providing relevant insight into non-canonical Ror2/Wnt signaling. In conclusion, in this study we explored the effects of Wnt5a and Ror2 perturbations in the ER-positive breast cancer cell line MCF-7 at the phenotypic and gene expression levels. We demonstrated that the overexpression of the Ror2 receptor, as well as the stimulation with Wnt5a and the combination of both perturbations, enhances cancer cell invasion. The expression-responsive targets of Ror2 induce a module of the non-canonical Wnt signaling pathway. Furthermore, these targets alter the regulation of further pathways involved in cell remodeling processes and cell metabolism. Moreover, we showed in the gene expression data of breast cancer patients that the Ror2/Wnt module-based gene signature is associated with metastasis-free survival. In summary, these results indicate an important role of non-canonical Wnt signaling mediated via the Ror2 receptor in breast cancer progression.

AUTHOR CONTRIBUTIONS

MB performed the bioinformatic analyses and wrote the manuscript. KM, FK, and AB designed and performed the wet lab experiments.
AW was involved in the RNA-Seq data preprocessing. TP, CB, and AB contributed clinical expertise. FK, TB, and AB co-conceived and oversaw the study. All authors read and approved the final manuscript.
Zeeman splitting via spin-valley-layer coupling in bilayer MoTe2

Atomically thin monolayer transition metal dichalcogenides possess coupling of spin and valley degrees of freedom. The chirality is locked to identical valleys as a consequence of spin-orbit coupling and inversion symmetry breaking, leading to a valley analog of the Zeeman effect in the presence of an out-of-plane magnetic field. Owing to the inversion symmetry in bilayers, the photoluminescence helicity should no longer be locked to the valleys. Here we show that the Zeeman splitting, however, persists in 2H-MoTe2 bilayers, as a result of an additional degree of freedom, namely the layer pseudospin, and spin-valley-layer locking. Unlike in monolayers, the Zeeman splitting in bilayers occurs without lifting the valley degeneracy. The degree of circularly polarized photoluminescence is tuned with magnetic field from −37% to 37%. Our results demonstrate the control of this degree of freedom in bilayers with a magnetic field, which makes the bilayer a promising platform for spin-valley quantum gates based on magnetoelectric effects.

In monolayer group VI transition metal dichalcogenides (TMDs) such as MoS2 and WSe2, broken spatial inversion symmetry leads to finite but opposite Berry curvature and magnetic moment in the two valleys [1-3]. Together with strong spin-orbit interaction, the broken symmetry enables the coupling of spin and valley degrees of freedom, which gives rise to a series of exotic valley effects, such as the valley Hall effect [4,5], the valley optical selection rule [6-9], and valley Zeeman splitting [10-15]. In bilayer TMDs, the layers are rotated by 180° with respect to each other, leading to the recovery of inversion symmetry. It is therefore natural to ask whether the above-mentioned valley chirality still persists in bilayer TMDs. When the interlayer coupling is much smaller than the spin-orbit interaction, a bilayer can be regarded as two decoupled monolayers, with the layer pseudospin leading to a spin-valley-layer coupling. This can potentially be utilized as a platform for spin-valley quantum gates with magnetic and electric control [16]. To this end, the spin-layer-locking-induced valley Hall effect [17], spin-polarized bulk bands [18], the valley optical selection rule [19], and electric control [20] have been experimentally investigated. In this work, we demonstrate the Zeeman splitting persisting in bilayer 2H-MoTe2 due to spin-valley-layer locking by means of polarization-selective magneto-photoluminescence. The circularly polarized photoluminescence of opposite helicity shows spectral splitting in the presence of an out-of-plane magnetic field despite the inversion symmetry of the bilayer system. Our study shows that the magnetic field has an important role in the toolbox for exploring the rich interplay between the real spin and the valley and layer pseudospins in bilayer TMDs. The magnetic control, together with the electric control demonstrated previously, paves the way for quantum manipulation of the spin, valley, and layer degrees of freedom in bilayer TMDs [16].

Results

Sample characterization. We perform our experiments on 2H-MoTe2, a layered semiconductor with a hexagonal lattice. With decreasing number of layers, the indirect bandgap of bulk MoTe2 turns into a direct bandgap [21,22]. Berry curvature and orbital magnetic moments can be studied through the polarization-selective emission of photoluminescence.
Monolayer and bilayer 2H-MoTe2 have a relatively small bandgap among the TMDs, and their photoluminescence emission lies in the near-infrared range around ~1.1 eV. A reversible structural phase transition between the hexagonal and the stable monoclinic phase has been reported in bulk single-crystalline MoTe2 [23], and a semiconductor-to-metal electronic phase transition has been demonstrated by thinning down bulk MoTe2 or straining MoTe2 thin films [24]. These features make MoTe2 a flexible material suitable for valley-based optoelectronic applications. An optical image of the studied sample is shown in Fig. 1a, where the monolayer (1L) and bilayer (2L) can be easily identified by their optical contrast. The flakes are mechanically exfoliated using adhesive tape and then transferred onto a silicon wafer with a 300 nm thick thermally grown SiO2 layer. The as-prepared samples are kept under vacuum to prevent oxidation and deliquescence. The crystal structure of a bilayer AB-stacked MoTe2 is shown in Fig. 1b. The bilayer has inversion symmetry, in contrast to monolayers. Monolayer and bilayer MoTe2 have an exciton energy of ~1.1 eV, which can be extracted experimentally by photoluminescence (PL) spectroscopy. We utilize a homemade fiber-based confocal microscope setup for the micro-PL experiments (Fig. 1c); the details of our experimental setup are given in the "Methods" section. The excitation and collection polarizations are controlled by a series of polarizers and quarter-wave plates. Below, we refer to co-polarization (cross-polarization) when the quarter-wave plates are configured for the same (opposite) handedness. To further confirm the number of layers in our sample, we perform Raman spectroscopy of the monolayer, bilayer, and multilayer at room temperature, as shown in Fig. 1d. The observed modes are consistent with previous reports [21,22].

[Figure 1 caption (partial): b The two layers are rotated in-plane by 180° relative to each other. c Optical setup for the polarization-resolved PL spectroscopy. The optical components are: achromatic lenses (AL1-3), polarizers (P1 and P2), half-wave plates (HWP1 and HWP2), quarter-wave plates (QWP1 and QWP2), a short-pass filter (SPF), a long-pass filter (LPF), and a beam splitter (BS). The sample is placed in a helium bath cryostat with an out-of-plane magnetic field in a Faraday geometry; the green arrow shows a negative magnetic field. d Raman spectroscopy of the MoTe2 monolayer, bilayer, and multilayer; A1g, B¹2g, and E¹2g label the different Raman modes.]

The exciton peak shows a linear power dependence, whereas the trion peak shows a sub-linear dependence with I_PL ∝ I_ex^0.8, where I_PL and I_ex are the intensities of the photoluminescence and the excitation, respectively (Supplementary Fig. 2).

Polarization-resolved magneto-photoluminescence spectroscopy. After the sample characterization, we demonstrate the Zeeman splitting in PL spectroscopy and the control of PL polarization by magnetic field; following these, we discuss the origin of our observations. Figure 2a and b shows the polarization-resolved PL spectra of monolayer and bilayer MoTe2 under external magnetic fields of −7, 0, and +7 T perpendicular to the sample plane at 2 K. The monolayer PL shows peak A1 (B1) with an energy of 1.187 eV (1.164 eV). The bilayer shows emissions at 1.154 and 1.136 eV (peaks A2 and B2 in Fig. 2b). Peak A2 (A1) is attributed to the optical transition of the neutral exciton state in the 2L (1L) MoTe2. Peak B2 (B1) corresponds to the transition of the charged exciton (trion) state in the 2L (1L) [27].
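An aside on the power-dependence measurement mentioned above: the exponent is conveniently estimated from a log-log linear fit, as in this short R sketch (the data vectors are placeholders, not measured values):

    # I_ex: excitation intensities; I_pl: integrated PL peak intensities (measured)
    fit <- lm(log(I_pl) ~ log(I_ex))
    coef(fit)[2]   # slope: ~1 for the exciton peak, ~0.8 for the trion peak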
From Figure 2, we can make two main observations. First, at zero magnetic field, the PL emission peaks are at the same position for σ+ and σ− detection. At −7 T (+7 T), however, the peaks blueshift (redshift) for σ+ (σ−) detection, which indicates an energy splitting. Second, the magnitudes of the peaks for σ+ and σ− detection under magnetic field also differ, which manifests itself as a magnetic-field-dependent PL polarization. Here, the degree of PL polarization can be defined as η_PL = (I_σ+ − I_σ−)/(I_σ+ + I_σ−), where I_σ± are the intensities of the σ± PL components. To further illustrate these results, the spectral splitting and PL polarization for the neutral excitons are quantitatively depicted in Fig. 2c and d, respectively. The Zeeman splitting of an optical transition is fit with ΔE = gμ_B B, where g is the g-factor associated with the magnetic moment in the system, μ_B is the Bohr magneton, and B is the magnetic field. The energy splitting of the exciton state (peak A1 of the 1L and A2 of the 2L) depends linearly on the magnetic field, with slopes of −243 ± 3 and −274 ± 6 μeV/T, corresponding to g(A1) = 4.21 ± 0.06 and g(A2) = 4.73 ± 0.11. Although our experiment shows a finite valley polarization in bilayer MoTe2 with near-resonant excitation (Supplementary Note 2; Supplementary Fig. 4), here we focus on PL polarization only with off-resonant excitation. In Fig. 2c and d, the PL polarization averaged over σ+ and σ− excitation is shown; the PL polarization of the 1L and 2L depends linearly on the magnetic field. We fit the relationship between the PL polarization η_PL and the magnetic field B with η_PL = βB, where β is a coefficient of (3.82 ± 0.04) × 10^-2 T^-1 for peak A1 of the 1L and (5.25 ± 0.28) × 10^-2 T^-1 for peak A2 of the 2L. The fit results for the trion (peak B2) are shown in Supplementary Fig. 3. In addition, we have measured the temperature dependence of the g-factor and PL polarization for both monolayer and bilayer MoTe2. As shown in the insets of Fig. 2c and d, although the g-factor of the monolayer exciton stays around 4, the g-factor of the bilayer varies from 4.73 to 2.54 when the temperature changes from 2 to 70 K. When the temperature increases, the PL polarization tends to decrease for both the monolayer and the bilayer, as shown in Fig. 2d. The Zeeman splitting in MoTe2 monolayers was already reported and is attributed to the lifting of the valley degeneracy in the band structure due to the breaking of time-reversal symmetry in the presence of a magnetic field, the so-called valley Zeeman splitting, or valley splitting for short [10-13]. The main observation here is that such Zeeman splitting still persists in the bilayer, where it can no longer simply be considered valley Zeeman splitting. Below, we focus on the physical origin of such splittings, as well as on the magnetic-field-dependent PL polarization. In monolayer TMDs, the spin and the valley pseudospin are effectively coupled by spin-orbit coupling and broken inversion symmetry [1]. Bilayer TMDs possess another degree of freedom, viz., the layer pseudospin [16]. In a bilayer, the Hamiltonian at the ±K points can be expressed in a two-band approximation in terms of the bandgap Δ, the spin-orbit coupling λc (λv) of the conduction (valence) band, and the interlayer coupling t⊥ of the layers. The strong coupling between the valley pseudospin (τz), the layer pseudospin (σz^c,v), and the real spin (sz) is a distinguishing feature of bilayers. The layer Pauli operators σz^c (σz^v) are expressed in the basis of the d_z² (d_x²-y² ± id_xy) orbitals.
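As a quick numerical cross-check of the g-factors quoted above (the slopes are the fitted values from the paper; the Bohr magneton in μeV/T is a physical constant):

    mu_B <- 57.88        # Bohr magneton, μeV/T
    c(243, 274) / mu_B   # ~4.20 and ~4.73, consistent with g(A1) = 4.21 and g(A2) = 4.73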
The interlayer hopping t⊥ vanishes for the conduction band due to the symmetry of the d_z² orbitals. When the spin-orbit coupling strength λv is much larger than the interlayer hopping amplitude, holes are primarily confined to either the upper or the lower layer, which can be labeled with layer pseudospin up |u⟩ or down |l⟩. Figure 3a depicts the energy level diagram at zero magnetic field, emphasizing the spin-valley-layer locking in the bilayer. At a given energy in a given valley, the different layers carry opposite spins. The lowest-energy single-particle optical transitions giving rise to the excitonic resonances for the different valleys, layers, and spins are also shown in Fig. 3a. As the spin is conserved in the optical transition (singlet exciton), spin-valley-layer locking leads to the emission helicity being locked to the spin degree of freedom in both valleys. Upon diagonalizing Hv, the hole energies shift to ±√(λ² + t⊥²), and the new eigenstates are admixtures of the (d_x²-y² ± id_xy) orbitals of the upper layer and the (d_x²-y² ∓ id_xy) orbitals of the lower layer. Unlike the case of the monolayer, where the helicity of emission is tied to the valley degree of freedom, optical transitions of either helicity are present in both valleys for bilayers. In the absence of a magnetic field, all four optical transitions depicted in Fig. 3a are degenerate. When an out-of-plane B-field is applied, the conduction and valence band energies are shifted in accordance with the respective magnetic moments, as shown in Fig. 3b. The conduction band states have a contribution only from the spin, as the d_z² orbitals do not carry any orbital magnetic moment, whereas the valence band states have an orbital magnetic moment (intracellular contribution) stemming from the d_x²-y² ± id_xy orbitals in addition to the spin contribution. In the ideal case without a substrate effect, spatial inversion symmetry is restored for bilayers [28], which makes the intercellular contribution vanish. With a possible substrate effect [29], there can still be asymmetry in the bilayer, which might still introduce the intercellular term. The spin Zeeman shift can be written as Δs = 2szμ_B B. As Δs has the same value for the conduction and valence bands, it does not contribute to the net energy shift. Thus, the intracellular contribution, which differs for the two bands, causes a measurable shift in the optical transition energies. In the limit of negligible interlayer coupling, the valence band is mainly composed of d_x²-y² ± id_xy orbitals with m = ±2, whereas the conduction band has m = 0. This intracellular contribution leads to a valley Zeeman splitting with a g-factor of 4 in monolayer TMDs. The bilayer case is in stark contrast with this, as can be seen from Fig. 3b: although the valley degeneracy is not lifted, each valley experiences a splitting of the emission helicity (σ+/σ−) due to the intracellular contribution. In other words, whereas there is a lifting of the degeneracy of σ+/σ− emission in bilayers in the presence of a B-field, it does not imply a valley Zeeman splitting, as the emission helicity is no longer tied to the valley degree of freedom. Instead, the helicity of emission is tied to the spin degree of freedom. A g-factor of 4 is thus expected for the bilayer Zeeman splitting as well; however, due to the finite interlayer hopping, the valence band states are no longer purely d_x²-y² + id_xy or d_x²-y² − id_xy but an admixture of the two. The exact eigenstates of Hv at the K-valley (τz = 1) with spin up (sz = 1) are given by u+ = (cos θ/2, sin θ/2)^T and u− = (sin θ/2, cos θ/2)^T in the layer basis, where cos θ = λ/√(λ² + t⊥²).
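To make the algebra concrete, here is a small numerical sketch. The explicit 2×2 valence block below, Hv = [[λ, t⊥], [t⊥, −λ]] in the layer basis, is our assumption consistent with the eigenstates quoted above, not a matrix reproduced from the paper; the parameter values are those derived in the next paragraph.

    lambda <- 135                                   # valence spin-orbit splitting, meV
    t_perp <- sqrt((lambda + 33/2)^2 - lambda^2)    # from 2*sqrt(lambda^2 + t^2) - 2*lambda = 33 meV
    t_perp                                          # ~69 meV

    Hv <- matrix(c(lambda, t_perp,
                   t_perp, -lambda), nrow = 2, byrow = TRUE)
    eigen(Hv)$values                 # +/- sqrt(lambda^2 + t_perp^2): shifted hole energies

    cos_theta <- lambda / sqrt(lambda^2 + t_perp^2)
    4 * cos_theta                    # predicted g-factor ~3.56, as quoted in the text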
Thus, the magnetic moment of the valence band states reduces from m = ±2 to m̃ = ±2λ/√(λ² + t⊥²). This would imply a Zeeman splitting g-factor of 4λ/√(λ² + t⊥²). From recent reports of the A-B splitting of monolayers, we get λ of ~135 meV [30,31]. Assuming the B exciton has the same energy for the monolayer and the bilayer [31], and that the difference of the exciton peak positions is 2√(λ² + t⊥²) − 2λ = 33 meV, we get an interlayer coupling of t⊥ = 69 meV and a g-factor of 3.56. The difference between the predicted and the experimental value of the g-factor might have several origins. First, it can come from intercellular components arising from inversion symmetry breaking due to the substrate effect. In addition, the intracellular contribution from other orbitals (e.g., p-orbitals for the conduction band) would need to be considered to calculate the precise value of the g-factor [35]. We note that the change of the g-factor with temperature is much larger for the bilayer than for the monolayer. We speculate that the temperature dependence for the bilayer arises from the change in the interlayer distance with temperature, just as the lattice constant changes with temperature [32-34]. A systematic understanding of the temperature dependence of the g-factor is very interesting in its own right and is left for future investigations. Finally, we discuss the magnetic field dependence of η_PL shown in Fig. 2d. As the PL polarization is primarily independent of the excitation polarization, we can conclude that there is fast spin relaxation, which leads to the creation of both σ+ and σ− excitons upon excitation. At zero field, conversion of σ+ to σ− and vice versa is equally likely, leading to emission of both helicities, as dictated by time-reversal symmetry. At finite B-field, the emission intensity of the lower-energy peak is always larger. This is true even when the polarity of the B-field is reversed, implying that the higher-energy exciton is transformed into the lower-energy exciton of opposite emission helicity on a timescale comparable to the exciton lifetime. If we assume that the interlayer coupling is suppressed due to the large spin-orbit coupling, the conversion of a σ+ exciton to a σ− exciton and vice versa requires flipping of both the spin and valley degrees of freedom, as shown in Fig. 4a. The spin angular momentum required for such a process is possibly provided during scattering with residual charge carriers present in the sample due to accidental doping.

[Figure 3 caption: Origin of the Zeeman splitting in the bilayer. a Single-particle energy states at the ±K points of a bilayer TMD at zero magnetic field and in the presence of interlayer hopping t⊥. The spin-valley-layer locking results in optical selection rules such that, in both valleys, the spin degree of freedom is locked to the emission helicity. b Schematic diagram of the Zeeman splitting in bilayer MoTe2 under a positive magnetic field at the ±K points. The red (blue) transition indicates PL emission with photon energy E+ (E−) and circular polarization σ+ (σ−). The upper and the lower layer in the same valley have opposite spin. Green and gray arrows show the magnetic moment contributions of the spin and the atomic orbital to the Zeeman splitting, respectively. In the presence of a positive magnetic field, the spin contributions cancel, whereas the intracellular (atomic orbital) contribution leads to E+ < E− in both valleys.]

Although at zero B-field such spin-flip-induced conversion of exciton helicity can occur in both directions, at finite B-field conversion to the lower-energy exciton is energetically favorable.
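As a rough plausibility check of this energetic argument, consider a fully thermalized two-level toy model, which is our simplification; the paper's Supplementary Note 3 instead uses a rate model. In this limit the polarization would be η_PL = tanh(Δ_B/2k_BT):

    g <- 4.73                      # bilayer exciton g-factor (from the fits above)
    mu_B <- 5.788e-2               # Bohr magneton, meV/T
    k_B <- 8.617e-2                # Boltzmann constant, meV/K
    B_T <- 7; temp_K <- 2          # field (T) and temperature (K)

    delta_B <- g * mu_B * B_T              # Zeeman splitting, ~1.9 meV
    tanh(delta_B / (2 * k_B * temp_K))     # ~1: near-complete polarization if thermalized

The measured |η_PL| of only ~37% at 7 T is far below this fully thermalized limit, consistent with the discussion that follows: the spin-flip, recombination, and phonon rates are comparable, so the excitons recombine before reaching thermal equilibrium.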
To explain the dependence of η_PL on B, we assume that the spin-flip process is energy conserving, whereas the energy relaxation via phonons primarily conserves spin. Although a spin flip via phonons is possible in the presence of spin-orbit coupling, it is usually slower than spin-conserving processes [36]. As shown in Fig. 4b, at finite field, a spin flip can occur from the higher-energy exciton to the excited states of the lower-energy exciton band at the same energy, which then relax to the lowest-energy states via phonons. The reverse process must first involve phonon absorption followed by a spin flip, due to the absence of opposite-spin states for the lowest-energy exciton. As the phonon absorption is suppressed by the Boltzmann factor exp(−Δ_B/k_B T) for a Zeeman splitting of Δ_B, the intensity of PL from the lowest-energy exciton is dominant. The quantitative dependence of η_PL on B depends on the spin-flip rate γs, the exciton lifetime γl, and the phonon relaxation rate γph, which appear to be comparable to each other in bilayer TMDs (Supplementary Note 3; Supplementary Fig. 5).

Discussion

In summary, we have experimentally demonstrated the Zeeman splitting in bilayer TMDs and discussed its origin in spin-valley-layer coupling. Electrical control of the orbital magnetic moment, as demonstrated previously [2,20], together with the magnetic control demonstrated here, forms a complete toolbox for controlling valley and layer pseudospins. The magnetoelectric effect arising from the interplay between electric and magnetic fields is the natural next step toward quantum gates or quantum entanglement between the spin, valley, and layer degrees of freedom in bilayer platforms [16]. The optical Stark effect, acting by means of a pseudomagnetic field, has been demonstrated to control the coherence of valley pseudospins [37-39]. Real magnetic control of the bilayer, as demonstrated here, combined with pseudomagnetic methods, provides access to manipulating the coherence in the bilayer system.

Methods

Spectroscopy experiment setup. The Raman spectra are taken at room temperature with an excitation wavelength of 532 nm using a commercial WITec confocal Raman spectrometer. We use a homemade fiber-based confocal microscope for the polarization-resolved PL spectroscopy. The excitation wavelength is 795 nm (1.560 eV) for off-resonant excitation and 1040 nm (1.192 eV) for near-resonant excitation. Polarizers and quarter-wave plates are installed on the excitation and detection arms of the confocal microscope for polarization-selective excitation and PL detection. The PL emission is directed by a multi-mode optical fiber into a spectrometer (Princeton Instruments) with a liquid-nitrogen-cooled infrared camera for recording the spectra. The sample is loaded into a magneto-cryostat (a Cryomagnetics closed-cycle cryostat (CMag) for the off-resonant experiments and a Quantum Design Physical Properties Measurement System (PPMS) for the near-resonant experiments) and cooled down to 2-4 K. The magnetic field is applied perpendicular to the sample plane, ranging from −7 to +7 T (CMag) or −9 to +9 T (PPMS).

Preparation of MoTe2 thin flakes.
Preparation of MoTe2 thin flakes. The MoTe2 single crystals are synthesized by chemical vapor transport using iodine as the transport agent. A scotch tape-based mechanical exfoliation method is used to peel thin flakes from the bulk crystal onto a degenerately doped silicon wafer covered with a 285 nm layer of thermally grown silicon dioxide. Optical microscopy (Olympus BX-51) is used to identify thin flake samples of different thickness via optical contrast.

Data availability. The data that support the findings of this study are available from the corresponding authors on request.

Fig. 4 Origin for PL polarization in bilayer. a The requirement for the switching of light helicity can be thought of as an interconversion between circularly polarized excitons, σ+ex and σ−ex. The emission helicity is tied to the spin of the electron in the exciton, which is formed out of electron-hole pairs in both valleys/layers. The hole spin is labeled by thick vertical arrows and, in our convention, is opposite to that of the electron. Owing to spin-valley-layer locking, both spin and valley need to be flipped for the switching of helicity in the absence of interlayer hopping. b At positive magnetic field, the selective conversion of σ−ex to σ+ex is energetically favorable due to the Zeeman splitting.
Shot lot transportation by road transport

In the freight turnover of our country, the transportation of perishable products and shot (small) lots occupies a most important place. The specific peculiarities of perishable products make their transportation one of the most critical tasks and impose higher requirements on the technical means of mobile refrigeration. The main tasks for mobile refrigeration are to improve the quality of transportation of perishable goods, labor productivity, and capital productivity; to reduce current material costs per unit of transported cargo; and to make more effective use of available engineering tools while developing new, economically efficient ones more rapidly.

Introduction
Comprehensive control of the carrying equipment is an important prerequisite for its effective use and a guarantee of competitiveness. The provision of real-time information is of paramount importance. Advanced technologies, such as GPS (GLONASS) positioning and mobile data transfer, make it possible to determine the location of a vehicle, control loading, and so on. Collecting this information, processing it in computer databases, and maintaining and evaluating the data through a web resource allows the operator to influence the situation quickly and reliably. This is the basis for the indispensability of telematics systems even today. In the transport system of the Russian Federation, these telematics resources are not yet used by all companies. Consequently, meeting the needs of the national economy and of the population for transportation, and creating the conditions for the development of a common economic space across the entire territory, is being hindered. Telematics includes systems for reviewing data, including key data on the vehicle, for the purpose of planning and optimization in real time. Monitoring of the carrying equipment includes current information, notably the temperature indicator, which characterizes the conditions in vehicles transporting frozen or refrigerated goods and ensures compliance with the IFS and HACCP standards in the industry. The efficiency of the carrying equipment or fleet is improved by transmitting data from CAN/FMS (truck) as well as from EBS (trailer/semi-trailer): primarily information on fuel consumption, the temperature inside the isothermal van, tire condition, and many other indicators. This provides the potential to reduce operating costs. Transmitting the information to the customers' parallel IT systems similarly prevents loss of data [1]. It also confirms the safety of the transported goods (control of door opening and locking). The choice of equipment and the degree of use of the corresponding systems in carrying equipment depend on the tasks set in the logistics and forwarding operations of the enterprise, ranging from elementary determination of the location of road transport resources to the maximum amount of information and its integration into the enterprise's information systems.

Materials and methods
To ensure the required efficiency in management, information resources must be integrated along two directions. In the first direction, the task is to ensure the consistency of the planning systems and the executive systems. In the second direction, the task is the integration of the complex that forms the requirements for the corresponding executive systems. Information systems belonging to different groups differ both in their functional subsystems and in their sustaining subsystems. Functional subsystems differ in the list of tasks to be solved.
Results
Sustaining subsystems differ entirely in their elements, in particular the information, technical, and mathematical ones. Some information systems have their own specificity. Planning systems, for instance, are formed at the management level and are intended for the implementation of decisions of a long-term, mainly strategic, direction. This capability undoubtedly increases the quality of transportation, in particular the preservation of the cargo in terms of quantity. Goods transported in isothermal vans are pre-cooled or preheated; they can also be thermally untreated. The mandatory temperature regime during transport is maintained by the following factors:
• Thermal insulation materials and the special construction of the vehicle and its surfaces;
• Controllable ventilation systems;
• A controllable direct cooling/heating system.
The implementation of temperature control ensures a high level of competitiveness of transportation, in particular the preservation of cargo quality. In order to maintain the quality of perishable commercially transported goods, the following requirements must be imposed on isothermal vehicles:
• Maintaining the required temperature and appropriate humidity inside the loaded isothermal van, regardless of external factors;
• Allowing the isothermal rolling stock to move at the maximum permitted speeds while ensuring smooth running, in order to reduce damage to the transported cargo;
• Automating the operation of the climatic installations and the monitoring of temperature and humidity, with these installations being reliable and simple in both repair and maintenance.
Isothermal vehicles are classified, depending on the type of cargo transported, into universal and specialized. Isothermal vehicles also differ in the way they cool or heat the cargo:
• Refrigerators cool autonomously by means of steam or refrigeration compressor units;
• Iceboxes contain containers for stacking ice or a prepared mixture of salt and ice;
• Thermoses are insulated, with no cooling or heating equipment.
Perishable goods are transported by rail, water, and road, and shot lots also by air. The largest volume is carried by road, mostly in small consignments. It is here that temperature control is weakest, which means a low level of cargo quality preservation and large losses [2]. The National State Standard requires a heat calculation, as well as a ventilation calculation, when planning the transportation of perishable goods [3,4]. For the purpose of continuous monitoring, it is proposed to equip vehicles and warehouses with a temperature and humidity recorder: a compact device, similar in function to the familiar thermometer, which is able to ensure:
• Microclimate monitoring;
• Preservation of cargo quality, ensuring competitiveness: if, for example, the driver periodically turns off the climate system to save fuel or electricity, this will be reflected in the information on the web resource and will reliably allow the operator to influence the situation; in the same way, deviations will be recorded for timely management decisions.
The temperature sensor is located directly in the freezer compartment. The information is available in the driver's cab and on the body shell, is stored on the server, and can also be duplicated. Thus, control is carried out not only along the cargo route, but also at the places of loading and unloading [5].
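As an illustration of the kind of deviation monitoring just described, the following Python sketch checks a stream of temperature readings against an allowed band and reports any deviations for later review on the web resource. The reading format, the band limits, and the function name are hypothetical placeholders, not part of any standard telematics API.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Reading:
    timestamp: datetime
    vehicle_id: str
    temperature_c: float   # temperature inside the isothermal van

def find_deviations(readings, low_c=-20.0, high_c=-18.0):
    """Return readings outside the allowed band (e.g., for frozen goods).
    The limits here are placeholder values; real limits depend on the
    cargo and the applicable standard."""
    return [r for r in readings if not (low_c <= r.temperature_c <= high_c)]

# Example usage with fabricated readings
readings = [
    Reading(datetime(2021, 6, 1, 8, 0), "truck-01", -19.2),
    Reading(datetime(2021, 6, 1, 8, 10), "truck-01", -15.7),  # climate system off?
]
for r in find_deviations(readings):
    print(f"{r.timestamp} {r.vehicle_id}: {r.temperature_c} degC out of range")
```

In a real system, such checks would run server-side on the data transmitted from the recorder, so that both en-route deviations and those occurring during loading and unloading are logged for management decisions.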
If a spare vehicle is available, it is possible to deliver shot lots to external customers, in particular from the logistics center to the proposed compensator warehouse for short-term storage, with the following services: unloading from the isothermal van; storage within the compensator warehouse; formation of the required batch; loading of the required batch into the isothermal van of the vehicle; and delivery to the destination, followed by unloading at the premises of the customer. Distribution to several consignees, where necessary, with unloading from the isothermal van at each consignee, should also be a compulsory service [6,7].

Discussion
The key aspect of this interaction with the recipients of perishable cargo is a real-time delivery schedule with the possibility of adjustments. The schedules will form transport contours that take into account the existing road network and its load and traffic intensity. The transport contours will be designed to serve the participants of shot lot transportation in a timely and high-quality manner. Their advantages lie in the formation of transport links, both horizontal and vertical, between the regional logistics center and consumers. Stable transport links will have to meet the conditions of infrastructure functioning in shot lot transportation. In real time, this makes it possible to optimize the total costs associated with the delivery of goods to customers and consumers.

Conclusion
Based on this research, it follows that in order to reduce non-productive downtime, a number of measures should be implemented when creating a specialized terminal or equipped room:
• Grouping cargoes organizationally, namely bringing large consignments to the assembled load at the end of the accumulation and small ones at the beginning of the accumulation;
• For the most coordinated action within the existing enterprise, implementing an automated control system (ACS) such as Cargo Express; this will help to predict and plan cargo operations, which will reduce non-productive downtime while waiting for cargo operations;
• Improving the technical facilities at the customers themselves for unloading and loading perishable goods, in order to increase productivity.
Advances in Platelet-Rich Plasma Treatment for Spinal Diseases: A Systematic Review

Spinal diseases are commonly associated with pain and neurological symptoms, which negatively impact patients' quality of life. Platelet-rich plasma (PRP) is an autologous source of multiple growth factors and cytokines, with the potential to promote tissue regeneration. Recently, PRP has been widely used in the clinic for the treatment of musculoskeletal diseases, including spinal diseases. Given the increasing popularity of PRP therapy, this article examines the current literature on the basic research and emerging clinical applications of this therapy for treating spinal diseases. First, we review in vitro and in vivo studies evaluating the potential of PRP in repairing intervertebral disc degeneration, promoting bone union in spinal fusion surgeries, and aiding neurological recovery from spinal cord injury. Second, we address the clinical applications of PRP in treating degenerative spinal disease, including its analgesic effect on low back pain and radicular pain, as well as its ability to accelerate bone union during spinal fusion surgery. Basic research demonstrates the promising regenerative potential of PRP, and clinical studies have reported on the safety and efficacy of PRP therapy for treating several spinal diseases. Nevertheless, further high-quality randomized controlled trials will be required to establish clinical evidence for PRP therapy.

Introduction
Spinal diseases, including spinal degenerative diseases, the ossification of spinal ligaments, spinal deformities, and spinal cord injury (SCI), cause pain and neurological symptoms. These greatly affect patients' activities of daily living (ADL) and quality of life (QOL). Low back pain (LBP) is one of the most common complaints of patients with spinal diseases. Disorders of the intervertebral disc, facet joint, sacroiliac joint, and lumbar nerve root can cause LBP, which often becomes chronic and intractable. These disorders are generally treated with medications and rehabilitation, but often with limited efficacy [1]. The development of new, more effective treatments for chronic LBP is desirable. Lumbar spinal stenosis and lumbar degenerative spondylolisthesis can cause pain, numbness, and muscle weakness in the lower extremities. Conservative treatments, such as medication and rehabilitation, have a certain degree of effectiveness [2]. When conservative treatments are less effective, surgical treatment is recommended; however, the further development of conservative treatments is desirable. In the surgical setting, spinal fusion is often indicated in cases of high instability or degeneration. Spinal fusion is often performed in conjunction with bone grafting, but the grafted bone may fail to fuse, resulting in a pseudoarthrosis. The incidence of pseudoarthrosis in long instrumented posterior spinal fusion for adult spinal deformities is estimated to be from 25 to 35% [3], and the establishment of methods to enhance bone union is therefore desirable.

Table 1. Bioactive proteins released from α-granules.

PRP Classification
There are a wide variety of methods used for the purification of PRP; depending on the centrifugation conditions and extraction method, the concentrations of platelets, white blood cells, and growth factors vary. Additionally, there are many commercially available kits that aim to efficiently purify highly stable PRP, but the quality of the purified PRP varies depending on the kit used. This is one of the obstacles to increasing the efficacy of PRP therapy.
There are two main PRP purification methods: the open and closed techniques. In the open technique, the blood is in contact with the environment of the working area during PRP purification; pipettes and tubes are sterilized separately and used in the purification process. In contrast, the closed technique uses commercially available equipment and kits, and the blood and PRP are not exposed to the environment during the preparation process [22]. The open technique has the advantage of being low cost, but there is a risk of bacterial contamination. The closed technique carries a lower contamination risk but is more costly; additionally, certain kits provide a lower yield in terms of platelet concentration. Kushida et al. [23] compared seven systems and evaluated the purified PRP in detail. Centrifugation was performed twice in four of the systems and once in three of the systems, with each system following its original protocol for PRP preparation. PRP was separated by tube centrifugation in four systems, by gel separation in two systems, and by fully automated centrifugation in one system. The required whole blood volume ranged from 8 to 60 mL, the final volume of PRP ranged from 0.6 to 3 mL, and the average platelet concentration of PRP varied widely from 8.8 × 10⁴/µL to 152.1 × 10⁴/µL, depending on the system. Although PRP containing more than a certain concentration of platelets tends to have higher concentrations of platelet-derived growth factor-AB (PDGF-AB), the relationship was not always directly proportional. The concentrations of transforming growth factor beta-1 (TGF-β1) and vascular endothelial growth factor (VEGF) vary widely from system to system. Platelet concentration ratios from less than 2-fold to an 8.5-fold increase over baseline have been reported; however, reports suggest a 3- to 5-fold increase is desirable [23,24]. A certain concentration of platelets is necessary because a low platelet concentration tends to reduce the amounts of growth factors. As mentioned above, the content of PRP is considered to have a significant impact on treatment efficacy, and evaluating the content and quality of PRP is essential to validate its efficacy. DeLong et al. proposed a classification system based on Platelet concentration, Activation (or not), and White blood cell (leukocyte) concentration (the PAW classification), which can be used to quickly evaluate the PRP preparations used in multiple studies and in clinical practice [25]. The activation of PRP refers to two main processes: degranulation, which releases growth factors from the platelets, and the cleavage of fibrinogen, which turns the liquid plasma into a solid clot or membrane [26]. Exogenous activation techniques for PRP have been used in in vivo and clinical studies. PRP is usually activated by the addition of calcium chloride and/or thrombin, by freezing and thawing, or by exposure to collagen [11]. In a systematic review and meta-analysis, activated PRP was reported to be more effective for improving pain and functionality in patients with knee OA compared with non-activated PRP [27]. Additionally, Gentile reported that non-activated PRP was more useful for hair growth than activated PRP [28]. When PRP is injected into soft tissue, activation prior to administration is not always necessary because native collagen type I acts as the activator [28]. Various basic and clinical studies have reported on the role of leukocyte content in the efficacy of PRP, but no consensus has been reached [29].
High concentrations of leukocytes may negatively affect PRP therapy, as leukocytes (especially neutrophils) act as inflammatory mediators. Nevertheless, leukocytes play an important role in the wound healing process, and their bactericidal activity has been reported to be beneficial for the treatment of bedsores and extensive soft tissue injuries [25]. Jia et al. reported that the presence of leukocytes in PRP may stimulate an inflammatory response at the cellular level [30]. Yan et al. reported that leukocyte-poor PRP (Lp-PRP) induced significantly more tendon regeneration than leukocyte-rich PRP (Lr-PRP) in animal studies [31]. The results of clinical trials on patellar tendonitis [32], Achilles tendinopathy [33], and lateral epicondylitis [34] suggest that there was no difference in treatment outcomes between the Lr-PRP and Lp-PRP groups. Dohan et al. used a simpler classification into Pure Platelet-Rich Plasma (P-PRP), Leukocyte- and Platelet-Rich Plasma (L-PRP), and Pure Platelet-Rich Fibrin (P-PRF), depending on whether the preparations are plasma or fibrin products and whether they contain white blood cells [35]. PRF is purified by collecting blood in dry glass or glass-coated plastic tubes and immediately centrifuging it at low speed. PRF preparations have a high-density fibrin network, meaning they can be handled as a solid material [36]. Mishra et al. proposed dividing PRP preparations into eight categories based on white blood cell (WBC) count, activation or lack thereof, and platelet concentration (subtype), as follows [37]. Type 1: increased WBCs without activation; Type 2: increased WBCs with activation; Type 3: minimal or no WBCs without activation; Type 4: minimal or no WBCs with activation. Subtype A contains a platelet concentration at or above five times the baseline; subtype B contains a platelet concentration less than five times the baseline. This classification is simple and best reflects the characteristics of PRP; the present review uses it to evaluate the PRP used in each study. Considering that PRP varies in content and efficacy depending on the purification method, it is important to consider which PRP preparation is used in each study.
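Because the Mishra scheme is purely rule-based, it can be captured in a few lines of code. The sketch below is an illustrative Python rendering of the rules as stated above (increased vs. minimal WBCs, activated vs. not, and a 5× platelet-concentration threshold for the subtype); the function and argument names are our own, not part of the original classification paper.

```python
def mishra_classification(increased_wbc: bool,
                          activated: bool,
                          platelet_ratio: float) -> str:
    """Classify a PRP preparation by the Mishra scheme.

    increased_wbc  -- True if the preparation has increased WBCs
    activated      -- True if the PRP was exogenously activated
    platelet_ratio -- platelet concentration relative to baseline blood
    """
    if increased_wbc:
        prp_type = 2 if activated else 1    # Types 1 and 2: increased WBCs
    else:
        prp_type = 4 if activated else 3    # Types 3 and 4: minimal/no WBCs
    subtype = "A" if platelet_ratio >= 5.0 else "B"
    return f"{prp_type}{subtype}"

# Example: a leukocyte-poor, non-activated preparation at 3x baseline
# is type 3B, matching the preparation described by Jian et al. below.
print(mishra_classification(increased_wbc=False, activated=False,
                            platelet_ratio=3.0))  # "3B"
```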
Basic Studies on PRP for Intervertebral Disc Degeneration (Table 2)
Since the study examining the effects of PRP on intervertebral disc (IVD) cells was first reported in 2006 [38], several in vitro studies have been published. Many studies have used human IVD cells, while others have used porcine, bovine, and rabbit cells to investigate the effects of PRP on cell growth and matrix metabolism [11]. Akeda et al. reported that PRP releasate increased the activity of the extracellular matrix metabolism of porcine nucleus pulposus and anulus fibrosus cells cultured in alginate beads [38]. Concurrently, Chen et al. concluded that growth factors in PRP, including transforming growth factor-beta1, could effectively act as a growth factor cocktail to promote the proliferation and differentiation of human nucleus pulposus cells and tissue-engineered NP formation [39]. In terms of molecular mechanisms, Kim et al. reported that PRP was effective in reducing the expression of the proteolytic enzymes matrix metalloproteinase-3 (MMP3) and cyclooxygenase-2 (COX-2), which were increased by the stimulation of inflammatory cytokines, in human intervertebral disc cells [40]. Xu et al. recently reported that PRP secreted exosomal miR-141-3p to activate the Keap1-NF-E2-related factor 2 pathway, which was found to prevent IVD degeneration [41]. In addition, PRP-derived exosomes were reported to alleviate IVD degeneration-associated inflammation by regulating the ubiquitination and autophagic degradation of the NLRP3 inflammasome [42]. Thus, exosomes have recently attracted attention in relation to PRP function, as they have in the study of mesenchymal stem cells (MSC) [43]. These mechanisms are summarized in Figure 1.

Several in vivo studies have been conducted in which PRP was injected into degenerated IVDs in animal models after the 2006 study by Nagae et al. reported the efficacy of PRP in IVD degeneration in a rabbit IVD model [44]. In the majority of papers, IVD degeneration models have been created in rabbits using a needle puncture to verify the effects of PRP. Obata et al. reported that PRP releasate could activate IVD cells and improve their microenvironment in rabbit IVD degeneration models [45]. Meanwhile, Chen et al. evaluated the therapeutic potential of MSC and/or PRP in a miniature porcine IVD degeneration model induced with chymopapain [46]. Using a rat IVD degeneration model with needle puncture, Gullung et al. reported that earlier interventions with PRP in the IVD degeneration process were more beneficial than interventions when the IVD was severely degenerated [47]. In 2017, Li et al. conducted a meta-analysis of PRP animal studies and reported that IVD administration of PRP led to histological improvement in IVD degeneration and increased the magnetic resonance imaging (MRI) T2 values within the IVD, which suggests that IVD degeneration was improved; the authors concluded that PRP had great potential for clinical application against IVD degeneration [48]. Although the final goal is to build up valid clinical evidence to establish PRP as an effective treatment for IVD degeneration, we should determine the molecular mechanisms of PRP in greater detail and provide patients with higher quality PRP by continuing in vitro and in vivo studies of PRP (Table 2).

Basic Studies on PRP for Spinal Fusion (Table 2)
PRP is expected to be one of the therapeutic agents capable of enhancing spinal fusion. However, the efficacy of PRP for spinal fusion remains controversial on the basis of preclinical studies [49-54]. Most previous studies have reported the results of in vivo studies, but, to the best of our knowledge, only one in vitro study has assessed the effect of PRP on osteoblasts [54]. In that study, the pharmacological activity of growth factors in freeze-dried PRP was maintained even after four weeks of storage [54]. Kamoda et al. reported that PRP was beneficial for both posterolateral lumbar fusion and lumbar interbody fusion in a rat model [51,53]. Meanwhile, in middle-sized animal models, including rabbit, porcine, and sheep models, PRP was reported to have no stimulating effect on spinal fusion [49,50,52]. Further basic studies on the effect of PRP on spinal fusion are needed to reach a more accurate conclusion.

Basic Studies of PRP in Other Spine Research Areas
There are relatively few basic studies on PRP for treating SCI [55-59]. The rat SCI model was used in all of these studies [55-59]. In 2017, Salarinia et al. reported the positive effects of intrathecal PRP on nerve regeneration after SCI [55]. An additional study by Chen et al. suggested that intrathecal PRP stimulated angiogenesis, enhancing axonal regeneration after SCI in rats [56]. In 2020, Salarinia et al. reported that combining PRP with MSCs synergistically promoted their therapeutic effects in SCI [57]. Recently, Behroozi et al. concluded that human umbilical cord blood-derived PRP had the potential to reduce neuropathic pain in SCI by altering the expression of ATP receptors, and could induce motor function recovery and axonal regeneration after SCI [58,59]. However, no evidence has yet been reported in basic studies on the superiority of PRP over other treatments.

Clinical Application of PRP for Intradiscal Therapy
A clinical study on PRP for intradiscal therapy was first reported in 2016 by Tuakli-Wosornu et al. [60].
Since then, 13 clinical studies or case reports have been published (Table 3): two randomized controlled studies, five prospective cohort studies, three retrospective cohort studies, and three case reports. Most of the target conditions were discogenic LBP; however, one study each targeted lumbar disc herniation (LDH) [61] and cervical degenerative disc disease [62]. Lr-PRP was used in five studies, and Lp-PRP in eight studies. According to the Mishra classification [37], type 1 was found in five studies, type 3 in four studies, and type 4 in three studies. Soluble releasate isolated from activated PRP (PRP-releasate), rather than PRP itself, was used in two studies [63,64]. Classification by Mishra et al. [37] thus reveals that a wide variety of PRP preparations have been utilized for intradiscal treatments. Regarding the isolation method, PRP isolation kits were used in nine studies, and PRP was manually isolated in two studies [63,64]. In all reported studies, PRP was administered intradiscally into the targeted discs, and the follow-up period varied from 3 months to 6.57 years. Tuakli-Wosornu et al. [60] conducted a prospective, double-blinded, randomized controlled study to determine the efficacy of PRP in symptomatic degenerated IVDs. Participants who received intradiscal PRP showed significantly greater improvements in functional rating index (FRI), numeric rating scale (NRS), and North American Spine Society (NASS) satisfaction scores compared to those who received a contrast agent during the eight weeks post-injection. A randomized, double-blind, active-controlled clinical trial was conducted to evaluate the efficacy and safety of an intradiscal injection of PRP-releasate compared with corticosteroid (CS) injection in discogenic LBP patients [64]. This clinical study by Akeda et al. [64] showed a clinically significant improvement in the extent of LBP, evaluated using a visual analog scale (VAS), in both the PRP-releasate and CS groups at 8 weeks post-injection; however, no significant differences were found between the groups. Nevertheless, PRP-releasate injection therapy was reported to be safe and to maintain improvements in LBP, disability, and QOL during the 60-week follow-up. Four prospective cohort studies revealed that a single injection of PRP or PRP-releasate induced significant improvements in pain, disability, and quality of life (QOL) during the observational period (from 3 to 12 months) [63,65-67]. Among them, one study by Jian et al. [65] reported that improvements in NRS and Oswestry Disability Index (ODI) scores were positively correlated with the platelet concentration of the PRP (Mishra classification: 3B). Two clinical studies evaluated the long-term effect of PRP or PRP-releasate treatment in patients with discogenic LBP. Both studies reported that the treatments had a safe and efficacious impact on improving LBP and LBP-related disability during the five to nine years of follow-up [68,69]. Recently, Jiang et al. [61] retrospectively evaluated the effect of transforaminal endoscopic lumbar discectomy (TELD) with PRP injection for patients with lumbar disc herniation. They reported that TELD with PRP treatment significantly improved LBP and LBP-related disability and MRI findings, and lowered the recurrence rate of LDH compared with the control (TELD without PRP treatment) group. Kawabata et al.
[70] evaluated the safety and efficacy of PRP administration in two discogenic LBP patients with Modic type 1 change, known to be an MRI biomarker of LBP [71]. They reported that PRP injection into targeted discs with Modic type 1 change was safe and showed a tendency to alleviate LBP. In summary, intradiscal injection therapy of PRP for degenerative disc disease is safe and shows promise for improving pain, disability, and QOL.

Clinical Application of PRP for Spinal Fusion Surgery
The clinical application of PRP for spinal fusion surgery was first reported in 2003 by Weiner and Walker [74]. Since then, 17 clinical studies have been conducted (Table 4): five randomized controlled studies, two nonrandomized studies, six prospective cohort studies, and four retrospective cohort studies. The patients in 16 studies received lumbar spinal surgeries (10 posterolateral lumbar fusions [PLFs] and six interbody fusions). Only one study examined a cervical spinal surgery, anterior cervical discectomy and fusion [75]. Lr-PRP was used in 11 studies, and Lp-PRP in four studies. According to the Mishra classification [37], type 2 was found in 11 studies and type 4 in four studies. PRP isolation kits were used in six studies, and PRP was manually isolated in 11 studies. PRP was activated before surgery in all studies. The bone fusion rate was assessed in all studies using radiography and/or computed tomography (CT). The follow-up period varied from 6 to 34 months. Six studies (35.3% of the total) reported that the use of PRP significantly increased the bone fusion rate compared to the control group; however, in seven studies, the use of PRP showed no significant improvement in the bone fusion rate. Furthermore, two studies reported that the use of PRP in PLF surgery decreased the bone fusion rate. Kubota et al. [76] conducted a prospective randomized controlled study with a 2-year follow-up to evaluate the efficacy of PRP after PLF surgery. Sixty-two patients who underwent one- or two-level instrumented PLF for lumbar degenerative spondylosis with instability were stratified into either the PRP (31 patients) or control (31 patients) group. PRP-treated patients underwent surgery using local autograft bone. This clinical study showed that the bone fusion rate at the final follow-up was significantly higher in the PRP group (94%) than in the control group (74%). Moreover, they reported that the area of fusion mass was significantly larger in the PRP group than in the control group, and the mean period necessary for fusion in the PRP group was shorter than that in the control group. Imagama et al. [77] reported on the efficacy of PRP regarding the rate and extent of bone fusion in PLF surgery using autologous local bone graft and PRP, and on the safety of PRP application, over a follow-up period of 10 years. Local application of PRP combined with autologous local bone had a positive impact on early fusion in lumbar arthrodesis. They also reported that there were no adverse symptoms or events related to PRP, including seroma, and no massive bone formation or deep infection visible on MRI over the 10-year follow-up. In contrast, two studies reported that the use of PRP in PLF surgery decreased the bone fusion rate. Weiner and Walker [74] reported a retrospective cohort study that evaluated the bone fusion rate in PLF surgery using autograft bone combined with PRP. The fusion rate for the control group was 91% (24 of 27), while the fusion rate for the PRP group was 62% (18 of 32). They concluded that bone fusion rates using autograft bone alone were significantly higher than those using autograft combined with PRP (p < 0.05).
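The significance of the Weiner and Walker comparison (24/27 vs. 18/32 fused cases) can be checked with a standard contingency-table test. The Python sketch below uses Fisher's exact test via SciPy as an illustration; the statistical method actually used in the original study is not specified here, so this is a plausibility check rather than a reproduction.

```python
from scipy.stats import fisher_exact

# Rows: control (autograft alone), PRP group; columns: fused, not fused
table = [[24, 27 - 24],
         [18, 32 - 18]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
# odds ratio ~6.2, p ~0.01, consistent with the reported p < 0.05
```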
Acebal-Cortina [78] conducted a prospective, controlled, blinded, non-randomized study to analyze whether adding PRP to a mixture of local autograft plus tricalcium phosphate and hydroxyapatite (TCP/HA) would improve the fusion rate in PLF surgery. They reported that correct fusion was seen in 93% of cases (37 of 40) in the control group, whereas in the PRP group, correct fusion was seen in 75% of cases (50 of 67). They concluded that the addition of PRP to a mixture of autologous bone graft plus TCP/HA decreased the fusion rate of PLF. Sys et al. [79] conducted a prospective randomized controlled study to assess the radiological effect of PRP when added to autograft iliac crest bone in mono-segmental posterior lumbar interbody fusion. PRP was produced using an isolation kit and activated with thrombin (1000 U/mL in 10% CaCl2). The cages were then filled with autologous bone chips and steeped in the plasma-thrombin solution until clotting visually occurred (approximately 10 min). However, the authors concluded that adding PRP in posterior lumbar interbody fusion did not lead to a substantial improvement or deterioration when compared with autologous bone alone. The assessment of clinical outcomes, such as the visual analog scale (VAS) for LBP, leg pain, and leg numbness, the ODI, and the Short-Form 36, was performed in eight studies [75,76,79-84]. All studies reported that there was no clear benefit in terms of clinical outcomes when PRP was used in spinal surgery. In summary, the effectiveness of PRP in spinal fusion surgery is limited. Whether the addition of PRP to autologous bone grafts increases the bone fusion rate remains controversial, and there were no differences in clinical outcomes between the PRP and control groups.

Clinical Application of PRP for Intraarticular Therapy of Facet or Sacroiliac Joint Pain
A clinical study of PRP for the treatment of lumbar facet joint syndrome was first reported in 2016 by Wu et al. [91]. Since then, three clinical studies or case reports have been published (Table 5): one randomized controlled study and two case series. Most of the targeted conditions were lumbar facet joint pain; however, in one study addressing chronic LBP, multiple-site injection was performed. Lr-PRP was used in two studies, and Lp-PRP in one study. According to the Mishra classification [37], type 1 was found in two studies and type 4 in one study. PRP isolation kits were used in one study, and manual isolation in two studies [91,92]. PRP was administered into the facet joint under X-ray fluoroscopic control in all the reported studies. The follow-up period varied from three to six months. Wu et al. [92] conducted a prospective, double-blinded, randomized controlled study to determine the efficacy of PRP in lumbar facet joint syndrome. Both PRP injection and local anesthetic (LA)/corticosteroid (CS) injection were found to be effective, easy, and safe for the treatment of lumbar facet joint syndrome after six months of follow-up. However, autologous PRP produced better outcomes than LA/CS in terms of the duration of treatment efficacy. One case series [93] evaluated multiple-site injections (intradiscal, facet joint, and/or epidural space) of PRP for the treatment of chronic LBP and reported significant improvement after injection.
A clinical study of PRP for the treatment of sacroiliac joint pain was first reported in 2016 by Navani et al. [94]. Since then, seven clinical studies or case reports have been published (Table 6): two randomized controlled studies, one non-randomized controlled study, and four case series. Lr-PRP was used in five studies, and Lp-PRP in one study. According to the Mishra classification [37], type 1 was found in four studies, type 2 in one study, and type 3 in one study. Regarding the isolation method, PRP isolation kits were used in five studies, and PRP was manually isolated in two studies [94,95]. PRP was administered into the sacroiliac joint under ultrasound guidance in four studies and under fluoroscopic guidance in three studies. The follow-up period varied from three months to four years. Singla et al. [95] conducted a prospective, randomized, open-label, blinded-endpoint (PROBE) study to determine the efficacy of PRP in 40 patients with sacroiliac joint pain. The reduction in pain intensity and the improvements in functional disability were significantly greater and lasted longer in the PRP group compared to the steroid group. In contrast, Chen et al. [96] conducted a prospective, randomized, double-blinded clinical trial in 26 patients with sacroiliac joint pain and a positive diagnostic block. The results showed that both the PRP and corticosteroid groups had improvements in pain and function; however, the steroid group had a significantly greater response and more responders than the PRP group. Eldin et al. [97] conducted a non-randomized controlled trial to compare platelet concentrates (PRP and platelet-rich fibrin [PRF]) in injectable form for sacroiliac joint dysfunction. This study showed a clinically significant improvement in the extent of LBP, evaluated by a visual analog scale (VAS), in both the PRP and PRF groups; however, the reduction in pain intensity lasted longer in the PRF group than in the PRP group. Four case series revealed that a single injection of PRP induced significant improvements in pain, disability, or QOL during the observational period (from 6 to 48 months) [94,95,98,99]. In summary, injection therapy with PRP for patients with facet joint or sacroiliac joint pain is safe and useful for improving pain, disability, and QOL.

Clinical Application of PRP for Epidural Therapy
Regarding epidural injection therapy for spinal symptoms, eleven studies have been reported, including two randomized controlled trials (RCTs), four prospective cohort studies, three retrospective cohort studies, and two case series (Table 7). The targeted symptoms were LBP and radicular pain; however, one study targeted cervical pain [101]. In terms of the epidural injection approach, a transforaminal approach was used in five studies, an interlaminar approach in two studies, both transforaminal and interlaminar approaches in one study, and a caudal (sacral hiatus) approach in three studies. PRP was injected into the epidural space with or without additional sites of PRP injection, including intradiscal, intraarticular, or intraosseous injection. The follow-up period varied from 3 to 35.7 months. Lr-PRP was used in two studies, and Lp-PRP in five; four studies did not describe the PRP characteristics in detail. According to the Mishra classification system [37], type 1 was found in two studies, type 3 in two studies, and type 4 in three studies; no classification could be assigned in the remaining studies.
PRP was isolated using a commercially available kit in three studies and manually in eight studies. Ruiz-Lopez et al. [102] conducted a randomized, controlled, double-blinded study comparing Lr-PRP and corticosteroid administered via caudal epidural injection for chronic low back pain (LBP). Patients whose LBP, with or without radiculopathy, had lasted for at least three months were randomly assigned to receive an epidural injection of Lr-PRP (n = 25) or corticosteroid (n = 25) into the S3-4 epidural space under fluoroscopic guidance. At one month after the epidural injection, both the corticosteroid and Lr-PRP groups showed a significant reduction in VAS; however, the Lr-PRP group showed sustained improvement at six months after treatment, whereas the VAS in the corticosteroid group increased again and returned to the baseline level six months after treatment. Furthermore, all domains of the Short Form 36-Item Health Survey (SF-36) after treatment were significantly higher in the Lr-PRP group than in the corticosteroid group. The authors concluded that both autologous Lr-PRP and corticosteroids for caudal epidural injections are equally safe and therapeutically effective in patients with chronic LBP, and that Lr-PRP is superior to corticosteroids in achieving a longer duration of the analgesic effect and improved quality of life. Xu Z, et al. [103] conducted an RCT to compare the efficacy and safety of transforaminal injections of PRP (n = 61) and steroid (n = 63) in patients suffering from LBP with unilateral radicular pain due to lumbar disc herniation. Significant improvements in VAS, ODI, and other parameters were observed in both groups after one month and were maintained for one year. There were no significant differences in any of the assessments between the steroid and PRP treatment groups. Bise et al. [104] conducted a prospective cohort study to compare the short-term (6 weeks) therapeutic effect of PRP versus corticosteroid delivered by an interlaminar approach in patients with prolonged unilateral radicular pain. Patients received a prednisolone injection (n = 30) or an Lr-PRP injection (n = 30). At six weeks post-injection, both treatments equally and significantly decreased the numerical rating scale and ODI scores without any major complications. PRP injection into multiple sites for patients with chronic LBP has also been reported [93,101,105,106]. Kirchner et al. [93] retrospectively reported that intradiscal, intra-articular facet, and transforaminal epidural injections of PRP under fluoroscopic control significantly decreased VAS scores in 86 patients with chronic LBP over 6 months. They also showed that minimal clinically important differences for NRS and ODI were achieved in 47 patients with chronic LBP after intradiscal, epidural, and intraosseous PRP injection [101]. In summary, epidural injection of PRP has shown safety and efficacy for the treatment of LBP and radiculopathy. The analgesic effect of PRP on LBP developed more slowly but lasted longer compared with corticosteroid injections.

Clinical Application of PRP for Spinal Cord Injury
Several in vitro and in vivo studies have shown the regenerative effects of PRP on SCI; however, only one clinical case series has reported the efficacy of the administration of PRP and bone marrow aspirate concentrate (BMAC) in SCI patients [111]. Shehadi et al. performed intrathecal and intravenous co-administration of PRP and BMAC in seven patients with SCI (age range: 22-65 years) as a salvage therapy.
Injury levels ranged from C3 through T11, and the elapsed time between the injury and the salvage therapy ranged from 2.4 months to 6.2 years. They reported improvements in ODI in several patients and concluded that intrathecal/intravenous co-administration of PRP and BMAC caused no significant complications and may have had some clinical benefit.

Future Perspectives
There are still open questions regarding the mechanism of action of PRP. For example, in future studies, the key bioactive molecules that exert the biological effects should be identified among the functional components of PRP in order to understand the molecular mechanisms of tissue repair. This would increase the reliability of PRP in clinical use. The efficacy of PRP for other pathologies of spinal disease, including lumbar canal stenosis, postoperative pain due to surgical tissue damage, and cervical spine diseases, should also be verified in future clinical applications. The application of allogenic and/or stem cell-derived platelets [112,113] for PRP should be considered in order to obtain equivalent therapeutic effects and to promote its commercialization in the future.

Conclusions
In this paper, based on previously published basic and clinical studies, we reviewed the effects of PRP on pathological spinal conditions, including degenerative disc disease, spinal fusion, spinal cord injury, LBP, and radicular pain. Because our primary aim was to provide a comprehensive review of the current literature on the basic mechanisms and emerging clinical applications of PRP for the treatment of several spinal diseases, we did not perform individual meta-analyses of the efficacy of PRP for these pathological spinal conditions. The basic studies clearly suggest that PRP is effective for treating degenerative disc disease. However, we cannot draw conclusions about the effect of PRP on spinal fusion and spinal cord injury, because different studies have reported opposing results and the number of studies is insufficient. In the future, to enhance the clinical efficacy of PRP for degenerative disc disease, more detailed basic studies are needed to further clarify the molecular mechanisms of PRP. Meanwhile, for spinal fusion and spinal cord injury, higher quality basic studies are required to determine the effect of PRP. In clinical studies, PRP has the advantage of being safe and easily applied in a clinical setting, since PRP is derived from autologous blood; however, because of individual differences in the concentration and function of platelets, it is difficult to standardize PRP treatment. In addition, it is even more difficult to determine the effect of PRP because it lacks uniform characteristics due to the variety of purification methods used. Therefore, we assessed the PRP used in each clinical study using Mishra's classification to determine the effect of PRP more accurately. In this review, intradiscal injection therapy of PRP for degenerative disc disease is considered safe and effective. In contrast, the effect of PRP in spinal fusion surgery may be limited. For facet joint or sacroiliac joint pain, injection therapy with PRP appears safe and useful, although patient selection was a challenge in certain studies. In addition, epidural injection of PRP also showed safety and efficacy for LBP and radiculopathy, but future studies will need to include more eligible patients and more narrowly defined injection sites.
Taken together, PRP has the potential to be a breakthrough treatment for several spinal diseases. However, to establish PRP therapy as an evidence-based treatment, large-scale double-blind randomized trials with appropriate patient selection and homogeneity of PRP components are required in the future.
Possible impacts of a future grand solar minimum on climate: Stratospheric and global circulation changes

Abstract
It has been suggested that the Sun may evolve into a period of lower activity over the 21st century. This study examines the potential climate impacts of the onset of an extreme "Maunder Minimum-like" grand solar minimum using a comprehensive global climate model. Over the second half of the 21st century, the scenario assumes a decrease in total solar irradiance of 0.12% compared to a reference Representative Concentration Pathway 8.5 experiment. The decrease in solar irradiance cools the stratopause (∼1 hPa) in the annual and global mean by 1.2 K. The impact on global mean near-surface temperature is small (∼−0.1 K), but larger changes in regional climate occur during the stratospheric dynamically active seasons. In Northern Hemisphere wintertime, there is a weakening of the stratospheric westerly jet by up to ∼3-4 m s−1, with the largest changes occurring in January-February. This is accompanied by a deepening of the Aleutian Low at the surface and an increase in blocking over Northern Europe and the North Pacific. There is also an equatorward shift in the Southern Hemisphere midlatitude eddy-driven jet in austral spring. The occurrence of an amplified regional response during winter and spring suggests a contribution from a top-down pathway for solar-climate coupling; this is tested using an experiment in which ultraviolet (200-320 nm) radiation is decreased in isolation of other changes. The results show that a large decline in solar activity over the 21st century could have important impacts on the stratosphere and regional surface climate.

Introduction
Electromagnetic radiation from the Sun is a fundamental source of energy for the terrestrial climate system. Therefore, changes in solar activity have the potential to influence global climate. The Sun's output varies on a number of characteristic time scales. In the context of Earth's climate, the most frequently studied of these is the approximately 11 year (Schwabe) solar cycle, which is typically associated with a maximum to minimum change in total solar irradiance (TSI) of ∼1 W m−2 or ∼0.07%. The Sun's output is also known to vary on longer time scales; however, characterizing these variations requires a much longer record of solar activity. Direct measurements of sunspot numbers extend back to 1610 [Ribes and Nesme-Ribes, 1993], but proxy records must be used to reconstruct solar activity further back in time. Steinhilber et al. [2008] compiled a record of the solar modulation potential, Φ, for the last 9300 years. This is a measure of the shielding of the Earth from galactic cosmic rays by the Sun's magnetic field and is derived from cosmogenic radionuclide data from ice cores. Lockwood [2010] showed that when smoothed to remove the signal of the 11 year cycle, the Φ record exhibits "grand maxima" and "grand minima" with a time scale of ∼100-200 years. The period of relatively high solar activity over the last ∼50 years has coincided with a so-called "grand solar maximum," and the period of low solar activity during the late seventeenth century, known as the Maunder Minimum (MM), is believed to have coincided with a "grand solar minimum." Abreu et al. [2008] conducted a statistical analysis of the Φ record and deduced that the current grand maximum is only likely to persist for up to another 15-36 years, after which the Sun would be expected to evolve toward a state of lower output.
It has been suggested that the amplitude and persistence of the recent 11 year solar cycle 23 minimum and the relatively low cycle 24 maximum may be indicative of the onset of a grand solar minimum [Lockwood, 2011]. However, the time scale and amplitude of such a grand solar minimum are unpredictable and highly uncertain. Barnard et al. [2011] used the Φ record to construct a range of possible future scenarios for solar activity based on past variations [see also Lockwood, 2010]. Given the fundamental role of solar energy in the climate system, a period of low solar activity may have important ramifications for the state of both the stratosphere and troposphere, and it is these aspects which are the focus of this study. It has been found, for example, that colder UK winters tend to occur more frequently during periods of low solar activity. Jones et al. [2012] examined the impact of a range of possible future TSI scenarios on global mean surface temperatures using a simple energy balance climate model. They found that a descent into MM-like conditions over the next ∼70 years would only decrease global mean surface temperatures by up to ∼0.2 K, with some uncertainty depending on the assumed reconstruction of past TSI. Feulner and Rahmstorf [2010] reached similar conclusions about the impact on global surface temperature using an intermediate complexity model and two scenarios for a decline in TSI of 0.08% and 0.25% relative to 1950 levels. These results make clear that even a large reduction in solar output would only offset a small fraction of the projected global warming due to anthropogenic activities. This has been further emphasized by Meehl et al. [2013], who used a comprehensive climate model to show that a 0.25% decrease in TSI in the mid-21st century would only offset the projected anthropogenic global warming trend by a few tenths of a degree.
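As a rough check on the magnitude of these global-mean responses, a zero-dimensional energy-balance estimate can be written in a few lines. The Python sketch below converts a fractional TSI reduction into a radiative forcing via ΔF = (1 − α)ΔTSI/4 and multiplies by an assumed transient climate response parameter; both the planetary albedo and the sensitivity value are illustrative assumptions, not numbers taken from the studies cited above.

```python
# Zero-dimensional energy-balance estimate of surface cooling from a TSI drop
ALBEDO = 0.3        # planetary albedo (assumed)
SENSITIVITY = 0.4   # transient response, K per (W m-2) (assumed)

def surface_cooling(delta_tsi_frac, tsi=1361.0):
    """Approximate global-mean cooling (K) for a fractional TSI reduction."""
    delta_tsi = delta_tsi_frac * tsi               # W m-2 at top of atmosphere
    forcing = (1.0 - ALBEDO) * delta_tsi / 4.0     # spherical geometry factor
    return SENSITIVITY * forcing

for frac in (0.0008, 0.0012, 0.0025):
    print(f"TSI drop {frac:.2%}: ~{surface_cooling(frac):.2f} K cooling")
```

Under these assumptions, a 0.12% TSI reduction gives ∼0.1 K of global-mean cooling and a 0.25% reduction gives roughly 0.2-0.3 K, consistent with the small global-mean responses reported in the literature discussed above.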
[2013] also found regionally dependent temperature changes in the tropical East Pacific in response to a sudden decrease in TSI, where an initial warm anomaly transitioned to a cold anomaly after around a decade. This is broadly similar to the East Pacific response to the 11 year solar cycle identified in some studies [e.g., Meehl et al., 2009]. In addition to the effects of changes in solar irradiance, there has also been discussion around the possible climate impacts of changes in solar energetic particle fluxes. For example, Seppälä et al. [2013] analyzed reanalysis data and found a stronger Arctic polar vortex under high solar geomagnetic activity and changes in surface temperature that resemble the positive phase of the North Atlantic Oscillation (NAO) [Seppälä et al., 2009]. Such effects will not be considered in this study because solar particles are not currently represented in the climate model employed here; further research is therefore required to test whether this may also play a role in climate in the case of a decline in solar activity. In this study, we investigate the possible climate impacts of a descent into a deep grand solar minimum over the 21st century using a comprehensive stratosphere-resolving coupled atmosphere-ocean climate model. While some studies have focused on the surface response to a grand solar minimum-like forcing, we provide further context by analyzing the effects on the stratosphere and their relationship to the surface changes. The focus of this study is on changes in the stratospheric and tropospheric circulations. A separate paper [Ineson et al., 2015] examines the European wintertime surface response in more detail. The remainder of the paper is structured as follows: section 2 describes the model and experiments carried out, section 3 describes the results of the core solar minimum experiment, section 4 assesses the role of a top-down mechanism for enhanced regional effects, and section 5 summarizes our findings.

The Global Climate Model

Experiments have been conducted using the Met Office's "high-top" HadGEM2-CC climate model, which is one configuration of the HadGEM2 model suite [Martin et al., 2011]. The model is described in detail by Hardiman et al. [2012] and Osprey et al. [2013] and participated in the Coupled Model Intercomparison Project Phase 5 (CMIP5) [Jones et al., 2011]. The model has 60 levels in the vertical domain with an upper boundary at ∼84 km and is run at N96 horizontal resolution (1.25° × 1.875°). It includes orographic and nonorographic gravity wave drag schemes and simulates a realistic quasi-biennial oscillation (QBO) [Scaife et al., 2002]. The atmosphere is coupled to the Hadley Centre ocean model, which has 40 vertical layers and 1° × 1° resolution (increasing in the tropics), and a sea ice scheme as described by Johns et al. [2006]. The model also includes an interactive carbon cycle. The atmospheric model uses the Edwards and Slingo [1996] radiative transfer scheme, which has been updated to use the correlated k method for calculating transmittances [Cusack et al., 1999]. In the configuration used here, the radiation code has six bands in the shortwave spectral region covering the intervals 200-320 nm, 320-690 nm (ozone only), 320-690 nm (ozone and water vapor), 690-1190 nm, 1190-2380 nm, and 2380-10000 nm. The radiation scheme also employs updates to the treatment of shortwave absorption by ozone as described by Zhong et al. [2008].
Each experiment consists of a three-member ensemble run from 1 December 2005 to 1 January 2100 with atmospheric and oceanic initial conditions taken from three "historical" all-forcings HadGEM2-CC simulations. All experiments include time-varying well-mixed greenhouse gases (CO2, CH4, N2O, and chlorofluorocarbons) and aerosols as specified by the Representative Concentration Pathway 8.5 (RCP8.5) scenario [Meinshausen et al., 2011]. This is a high greenhouse gas forcing scenario in which atmospheric CO2 concentrations increase from ∼380 ppm in 2005 to ∼970 ppm in 2100. HadGEM2-CC does not include interactive chemistry, and thus, ozone is prescribed as a zonally averaged latitude-height-time field using the SPARC AC&C ozone data set [Cionni et al., 2011]. This data set was recommended for use in CMIP5 and includes the recovery of the ozone layer over the 21st century due to declining abundances of ozone-depleting substances and a climate change trend according to the SRES A1b greenhouse gas scenario. The original ozone data set did not include a solar cycle component for the future period, so this was added for the HadGEM2-CC CMIP5 simulations (see section 2.3 for details). Unless otherwise stated, the figures presented in sections 3 and 4 show averages over the three ensemble members.

Specification of TSI and Spectral Solar Irradiance

To explore the possible impacts of a future decline into a grand solar minimum, we use the HadGEM2-CC RCP8.5 experiment submitted to the CMIP5 archive as a baseline (denoted RCP8.5_ref). This experiment assumes a sinusoidal 11 year solar cycle in TSI over the 21st century, with a constant amplitude based on solar cycle 23 and a fixed long-term background (see black line in Figure 1a). The spectrally resolved irradiances are apportioned into the model's radiation bands by integrating the 1 nm fluxes provided for CMIP5 (http://solarisheppa.geomar.de/cmip5), which are derived from the Naval Research Laboratory Spectral Solar Irradiance (NRLSSI) model [Wang et al., 2005]. As specified by the data set, monthly mean TSI and spectral solar irradiance (SSI) values are used from 1882 onward and annual mean values prior to this. The irradiance in the 200-320 nm spectral band in this experiment is shown by the black line in Figure 1b. For reference, the 11 year solar max-min change in this spectral band over the 21st century is ∼0.7% in the RCP8.5_ref experiment. We note that this represents the smallest change in solar UV irradiance indicated by the current uncertainty range [Ermolli et al., 2013]. The solar perturbation experiment, denoted "RCP8.5_solmin," includes a modified future TSI trend shown by the blue line in Figure 1a. This scenario is equivalent to the most extreme grand solar minimum case examined by Jones et al. [2012] (see their Figure 1) and is based on the analogue forecasts of Barnard et al. [2011]. The spectrally resolved irradiances for this scenario are calculated by extrapolating second-order polynomial regressions of the irradiances in each of the six spectral bands against TSI over the period 1860-2009. The changes in TSI are thus apportioned across the spectrum using the assumption that the NRLSSI spectral data for the historical period would scale for the assumed future solar minimum scenario. The irradiance in the 200-320 nm band in this experiment is shown by the blue line in Figure 1b. The scenario corresponds to an average reduction in UV irradiance over the period 2050-2099 of 0.85% compared to RCP8.5_ref.
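To make the band-wise extrapolation concrete, a minimal sketch is given below; the function name, array shapes, and use of NumPy are illustrative assumptions rather than code from the study.

```python
# Sketch of the SSI apportionment: for each of the six shortwave bands,
# fit a second-order polynomial of band irradiance against TSI over
# 1860-2009, then evaluate the fit at the scenario TSI values.
import numpy as np

def extrapolate_ssi(tsi_hist, ssi_hist, tsi_future):
    """tsi_hist: (n_years,); ssi_hist: (n_years, 6) band irradiances in
    W m-2; tsi_future: (m,) scenario TSI. Returns (m, 6) band irradiances."""
    ssi_future = np.empty((len(tsi_future), ssi_hist.shape[1]))
    for band in range(ssi_hist.shape[1]):
        coeffs = np.polyfit(tsi_hist, ssi_hist[:, band], deg=2)
        ssi_future[:, band] = np.polyval(coeffs, tsi_future)
    return ssi_future
```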
Treatment of Solar Ozone Response

Since HadGEM2-CC does not include interactive chemistry, stratospheric and tropospheric ozone are prescribed using a modified version of the CMIP5-recommended SPARC AC&C ozone data set [Cionni et al., 2011]. The modifications include a vertical extrapolation of the ozone data above 1 hPa to coincide with the upper levels of the model domain. The original CMIP5 ozone data set did not include a solar cycle component in the future. It has therefore been added by regressing the ozone mixing ratios at each latitude and height for the historical period onto the terms representing solar forcing (O3^sol), equivalent effective stratospheric chlorine (O3^Cl), a seasonal cycle (O3^seas), and a residual term (O3^res):

$$O_3 = O_3^{\mathrm{sol}} + O_3^{\mathrm{Cl}} + O_3^{\mathrm{seas}} + O_3^{\mathrm{res}}$$

The solar regression term is then added to the ozone field for the future period. A cosine latitude extrapolation of the solar term is also made over high latitudes since the signal in the original data set only extended to ±60° latitude. The magnitude of the solar max-min ozone response in the RCP8.5_ref experiment is ∼4% in the tropical upper stratosphere. Although this magnitude is toward the upper end of estimates from observations, it is still within the plausible range [Gray et al., 2009; see also Schmidt et al., 2013]. No changes are imposed at pressures higher than ∼100 hPa. The solar ozone response term is also included in the RCP8.5_solmin experiment, with the magnitude adjusted to account for the modified future TSI trend. RCP8.5_solmin therefore includes a representation of the ozone response to solar variability [e.g., Haigh, 1994], which amounts to a decrease in ozone at the tropical stratopause of ∼6% for the period 2050-2099 (see Figure 2). A summary of the experimental setups is provided in Table 1:

Table 1. Summary of the experiments.
- RCP8.5_ref. TSI/SSI: CMIP5 recommendations, with SSI specified according to Wang et al. [2005]. Ozone: SPARC AC&C data set [Cionni et al., 2011] with solar cycle regression term included [see Osprey et al., 2013].
- RCP8.5_solmin. TSI/SSI: same as in RCP8.5_ref but assumes a large (∼0.12%) transient decrease in TSI over the 21st century, which is distributed across the model's six spectral bands. Ozone: same as in RCP8.5_ref but with the solar cycle regression term altered to be consistent with the assumed future TSI trend.

For reference, Table 2 gives the differences in shortwave irradiances in the six spectral bands over the period 2050-2099. The analysis in section 3 focuses on the differences between RCP8.5_solmin and RCP8.5_ref for this period, and unless otherwise stated, significance testing is carried out using a two-sided Student's t test for 3 × 50 years = 150 years under the assumption that each data point (e.g., a detrended monthly or seasonal mean) can be considered as an independent sample. In section 3.3.2, the location of the midlatitude jet is identified by a spline interpolation of the seasonal mean zonal mean zonal wind (ū) onto a 0.1° latitude grid and locating the maximum wind speed at 850 hPa between 30° and 70°. Jet shifts are then computed as the differences between the latitudes of the ū maxima. The changes in Southern Annular Mode (SAM) index in the same section are measured as the difference in zonal mean mean sea level pressure (MSLP) between 40-60°S and 70-90°S.
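The three diagnostics just defined can be made concrete with a minimal sketch; the function names and array layouts below are illustrative assumptions, not code from the study.

```python
# Minimal sketch of the section 2 diagnostics; function names, array
# layouts, and the use of NumPy/SciPy are illustrative assumptions.
import numpy as np
from scipy import stats
from scipy.interpolate import CubicSpline

def significant_95(x_solmin, x_ref):
    """Two-sided Student's t test on 3 ensembles x 50 years = 150
    detrended samples per experiment, treated as independent."""
    _, p_value = stats.ttest_ind(x_solmin, x_ref)
    return p_value < 0.05

def jet_latitude(lats, ubar_850, lat_min=30.0, lat_max=70.0):
    """Latitude of the 850 hPa u-bar maximum, located on a 0.1-degree
    grid via spline interpolation (pass -70.0/-30.0 for the SH jet)."""
    spline = CubicSpline(lats, ubar_850)  # lats must be ascending
    fine_lats = np.arange(lat_min, lat_max + 0.05, 0.1)
    return fine_lats[np.argmax(spline(fine_lats))]

def sam_index(lats, mslp_zonal_mean):
    """Difference in zonal mean MSLP (hPa) between 40-60S and 70-90S."""
    mid = mslp_zonal_mean[(lats <= -40.0) & (lats >= -60.0)].mean()
    polar = mslp_zonal_mean[(lats <= -70.0) & (lats >= -90.0)].mean()
    return mid - polar
```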
Temperature Changes

Figure 3a shows the differences in annual mean zonal mean shortwave heating rates (K d−1) between the RCP8.5_solmin and RCP8.5_ref experiments for the period 2050-2099. The grey shading indicates where the differences are not statistically significant at the 95% confidence level. At the tropical stratopause, the decrease in shortwave heating rates has a peak of ∼−0.4 K d−1. This localized minimum is partly related to the structure of the imposed ozone changes, which have a peak near the tropical stratopause (see Figure 2). The magnitude of the decrease in heating rate drops off rapidly with increasing latitude and decreasing altitude. There is some hemispheric asymmetry in the heating rate anomalies, with larger changes found in the Northern Hemisphere, and also some small localized increases in heating, both of which are also related to the structure of the imposed ozone changes (see Figure 2). Figure 3b shows the corresponding differences in annual mean zonal mean temperature (T; K). The changes in temperature are closely related to the shortwave heating rate response shown in Figure 3a. There is cooling across most of the stratosphere and mesosphere which peaks at ∼1.5 K near the tropical stratopause. This can be compared to the stratospheric cooling due to climate change in the RCP8.5_ref experiment of ∼18 K at 1 hPa (2060-2099 versus 1960-1999). The upper stratospheric cooling is comparable to the solar max-min temperature change found by Frame and Gray [2010], despite the fact that our TSI perturbation is approximately 1.5 times the typical amplitude of the 11 year solar cycle. However, a recent study by Mitchell et al. [2014] has found differences in the detailed magnitude and structure of the upper stratospheric temperature response to the 11 year solar cycle across multiple reanalysis data sets. There is also the suggestion of a weak secondary temperature maximum in the tropical lower stratosphere, similar to that identified in several reanalysis data sets [Crooks and Gray, 2005; Frame and Gray, 2010; Mitchell et al., 2014]. Gray et al. [2009] suggested that the 11 year cycle in ozone may be an important factor in determining the structure of the temperature response in the tropical lower stratosphere. This would appear to be consistent with the imposed changes in ozone, which include a decrease in tropical lower stratospheric ozone. However, there are substantial uncertainties in estimates of the structure and amplitude of the lower stratospheric temperature signal because this region is strongly influenced by QBO variability and volcanic eruptions and the observational record is not long enough to adequately separate the signals [Chiodo et al., 2014]. Figure 4 shows the vertical profile of differences in annual and global mean T. The maximum cooling occurs at 1 hPa with a magnitude of 1.2 K and decreases rapidly in magnitude above and below this level. From 10 to 50 hPa the cooling is roughly constant in height with a magnitude of ∼0.3 K. There is cooling throughout the troposphere, which increases with altitude from ∼0.1 K at the surface to ∼0.25 K near the tropopause. The change in global mean 1.5 m temperature (T1.5m) for the period 2050-2099 is −0.13 K. This is broadly consistent with the energy balance model results of Jones et al. [2012]. The results in this section show that evolving into a grand solar minimum over the 21st century has the potential to enhance stratospheric cooling trends due to increasing carbon dioxide concentrations, but as has been highlighted in other recent studies [Jones et al., 2011; Meehl et al., 2013], such a decline would have only a small impact on any anthropogenic global warming trend.
Stratospheric Changes

A solar cycle influence on the high-latitude stratosphere has been identified in reanalysis data and climate models [e.g., Kuroda and Kodera, 2002; Matthes et al., 2006; Ineson et al., 2011; Mitchell et al., 2014]. Given the interhemispheric differences in the generation of planetary wave activity in the troposphere, which partly determines the mean strength and unforced variability of the winter polar vortices, it is perhaps unsurprising that the dynamical responses to external forcings, such as the QBO and solar variability, tend to be different in the two hemispheres [e.g., Anstey and Shepherd, 2014]. In the Northern Hemisphere (NH), studies have shown a time-averaged solar cycle signal in the high-latitude stratosphere consisting of a poleward and downward propagation of zonal wind and temperature anomalies over the winter season [Kuroda and Kodera, 2002]. The main mechanism proposed to explain the propagation and amplification of these anomalies invokes wave-mean-flow interactions [e.g., Kodera et al., 2003; Ineson et al., 2011]. In contrast, the extratropical circulation response to an external forcing in the Southern Hemisphere is often manifested around the time of the spring breakup of the polar vortex [e.g., Kuroda and Kodera, 2005]. We now discuss the stratospheric circulation response to the imposed decline in solar activity.

Northern Hemisphere

Figures 5a-5e show monthly mean T differences between the RCP8.5_solmin and RCP8.5_ref experiments for October to February averaged over the period 2050-2099. The shading is as in Figure 3. In the NH, there is a relative warming of the Arctic lower stratosphere in February. Since the direct radiative tendency of the decrease in solar irradiance would be to cool the stratosphere, the high-latitude warming is indicative of a dynamical response to the solar perturbation. Further analysis (not shown) shows that there is an increase in wave driving (i.e., Eliassen-Palm flux divergence) in the high-latitude middle and lower stratosphere [cf. Ineson et al., 2011], particularly in January, and the associated dynamical heating acts against the radiatively driven cooling. Figures 5f-5j show equivalent plots to Figures 5a-5e for differences in zonal mean zonal wind. The warming of the Arctic polar vortex in boreal winter is coincident with a weakening of the stratospheric westerly jet. There is an easterly anomaly in the region of the jet core (∼1 hPa) of up to 3-4 m s−1 in January-February. A weaker easterly anomaly, more confined to the middle and upper stratosphere and the mesosphere, is also present in October-November, but the differences in December are not highly statistically significant, probably in part due to the large interannual variability during NH midwinter. It has been suggested that changes in solar irradiance may impact on the timing of major sudden stratospheric warming events (SSWs) [see, e.g., Gray et al., 2004], which occur in the Arctic stratosphere during boreal winter. Thus, some of the changes in ū in Figures 5f-5j may reflect changes in the frequency or timing of SSWs. Figure 6 shows the wintertime distribution of SSWs in each experiment for the period 2050-2099 using the definition based on a temporary reversal of ū at 10 hPa and 60°N to easterlies. The results for the RCP8.5_ref experiment have been previously discussed by Mitchell et al. [2012].
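A minimal sketch of the SSW definition quoted above is given below; the 20-day separation between events is an illustrative simplification (the operational definition must also exclude the final spring warming), and the function name is assumed.

```python
# Sketch of the SSW definition used for Figure 6: an event is flagged when
# the zonal mean zonal wind (u-bar) at 10 hPa, 60N temporarily reverses
# from westerly (positive) to easterly (negative) during winter.
import numpy as np

def count_ssws(ubar, min_separation=20):
    """ubar: daily zonal mean zonal wind at 10 hPa, 60N for one extended
    winter (m/s). Returns the number of distinct wind reversals."""
    events, last_event = 0, -min_separation - 1
    for day in range(1, len(ubar)):
        crossed = ubar[day - 1] >= 0.0 and ubar[day] < 0.0
        if crossed and (day - last_event) > min_separation:
            events += 1
            last_event = day
    return events
```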
The histograms suggest that although the average number of SSWs in RCP8.5_solmin remains similar to that in RCP8.5_ref (0.83 year−1), there is a slight decrease in the occurrence of SSWs in March and an increase in January-February. However, these changes are not highly statistically significant according to a t test, which partly reflects the large interdecadal variability in SSWs, as noted by Butchart et al. [2000]. Consequently, longer simulations would be required to make robust conclusions about whether a decline in solar activity would impact on the frequency or timing of SSWs.

Southern Hemisphere

In the Southern Hemisphere (SH), the plots of monthly mean T in Figures 5a-5c show a relative warming of the Antarctic lower stratosphere by up to ∼1 K during the SH dynamically active season (October-December). This reflects a decrease in the equator-to-pole temperature gradient and is coincident with a weakening of the climatological westerly jet throughout the stratosphere by up to ∼3 m s−1 (Figures 5f-5h). In midwinter (June, July, and August; JJA), there is an easterly anomaly in the midlatitudes (30-60°S) of up to ∼3 m s−1 (see Figure 7), which reflects a weakening of the westerlies on the equatorward flank of the stratospheric jet. These changes in circulation which extend throughout the stratosphere are in contrast to the response in austral summer (January-February), where the subtropical easterly anomaly is mainly confined to the upper stratosphere and mesosphere.

Tropospheric Changes

Circulation changes in the stratosphere during the dynamically active seasons, such as those described in section 3.2, can impact on the underlying troposphere via stratosphere-troposphere dynamical coupling (see, e.g., Gerber et al. [2012] for an overview). Furthermore, a number of studies have shown a potential influence of solar variability on the tropical Pacific Ocean and the El Niño-Southern Oscillation (ENSO) [e.g., Meehl et al., 2009]. We now discuss the changes in the tropospheric state in the grand solar minimum experiment.

Northern Hemisphere

Figure 8a shows the seasonal mean tropospheric ū changes in the NH in December, January, and February (DJF). The shading denotes the differences between the RCP8.5_solmin and RCP8.5_ref experiments, and the contours show the climatology of the latter for reference. The hatching denotes where the differences are not statistically significant at the 95% confidence level. There is a barotropic dipole change in ū in the region of the midlatitude jet, with a westerly anomaly between 30 and 45°N and an easterly anomaly between 50 and 70°N. This feature shows a peak-to-peak ū dipole change of 0.36 m s−1 at 850 hPa. The dipole ū response is comparable to the climate change signal in DJF in the NH (2060-2099 versus 1960-1999), which shows a strengthening of the westerlies in the jet core by 0.5 m s−1 and a weakening of the westerlies on the poleward flank by ∼0.9 m s−1 (not shown). Thus, while the impact of the decline in solar activity on global near-surface temperature is relatively small, its effects on the midlatitude circulation amount to a considerable fraction of the uncertainty due to future greenhouse gas trends [see also Ineson et al., 2015]. A significant NH tropospheric ū response is not found outside of boreal winter, which suggests a role for a top-down influence of changes in the stratospheric circulation on middle- and high-latitude climate.
Figure 8b shows a polar stereographic map of the differences in DJF MSLP (hPa) between the RCP8.5_solmin and RCP8.5_ref experiments. The green lines encompass regions where the differences are statistically significant at the 95% confidence level. A more negative Arctic Oscillation (AO) index is characterized by lower pressure in the midlatitudes and higher pressure over the polar cap, which corresponds to a weakening of the climatological equator-to-pole pressure gradient and anomalously easterly flow across Europe and the Atlantic sector [Thompson and Wallace, 1998]. The pattern in Figure 8b suggests a more negative AO index, although the response is not highly statistically significant and the structure over the North Atlantic does not strongly resemble the NAO. There is a deepening of the Aleutian Low, which has also been identified during solar minimum conditions in observations [e.g., Roy and Haigh, 2010; Gray et al., 2013]. Blocking episodes have been highlighted as an important aspect of variability in the North Atlantic circulation [e.g., Shabbar et al., 2001; Woollings et al., 2010b]. Previous studies have identified variations in blocking frequency associated with the 11 year solar cycle [Barriopedro et al., 2008; Woollings et al., 2010a]. The solar-blocking signal identified in these studies consists of an increase in Euro-Atlantic blocking during solar minimum, with the precise magnitude of the changes being somewhat sensitive to the metric used to define solar activity (e.g., F10.7 cm radio flux or open solar flux), but is typically around ∼8-10% of total blocked days. Figure 9 shows differences in the ensemble mean DJF blocking frequency (as a percent of total blocked days) between the RCP8.5_solmin and RCP8.5_ref experiments. The blocking index used here is based on temporary reversals in the meridional gradient of potential temperature on the dynamical tropopause which must persist for at least 5 days and is identical to that used by Woollings et al. [2010a] and Anstey et al. [2013]. The general pattern of an increase in Euro-Atlantic and Pacific blocking at high latitudes and a decrease over the Mediterranean is consistent with the results of previous studies [e.g., Woollings et al., 2010a], but the magnitudes of the differences are several times smaller.

Figure 9. The difference in ensemble mean DJF blocking frequency (as a percent of blocked days) between the RCP8.5_solmin and RCP8.5_ref experiments for the period 2050-2099. Blocking events are defined using a metric based on potential temperature on the dynamical tropopause, consistent with that used by Woollings et al. [2010a]. The stippling indicates where the differences are significant at the 95% confidence level.
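To illustrate the blocking index described above, a minimal sketch follows; the sign convention (negative gradient indicating a reversal) and the array layout are illustrative assumptions, and the full index of Woollings et al. [2010a] also specifies the latitude band and gradient definition.

```python
# Sketch of the blocking index: at each longitude, flag days on which the
# meridional gradient of potential temperature on the dynamical tropopause
# is reversed, and keep only episodes persisting for at least 5 days.
import numpy as np

def blocked_days(theta_gradient, min_persistence=5):
    """theta_gradient: (n_days, n_lons) daily gradient values.
    Returns a boolean mask of days belonging to blocking episodes."""
    reversed_days = theta_gradient < 0.0
    blocked = np.zeros_like(reversed_days, dtype=bool)
    for lon in range(reversed_days.shape[1]):
        run = 0
        for day in range(reversed_days.shape[0]):
            run = run + 1 if reversed_days[day, lon] else 0
            if run >= min_persistence:
                # mark the whole persistent episode as blocked
                blocked[day - run + 1 : day + 1, lon] = True
    return blocked
```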
Like many CMIP5 models, HadGEM2-CC has biases in its representation of NH blocking, the main features of which are a lack of blocking events at high latitudes and too much blocking at lower latitudes [Anstey et al., 2013]. It is possible that the underlying model biases could impact on the simulation of a solar-blocking connection [e.g., Scaife et al., 2011]. However, the reanalysis-based studies described above have mostly focused on the late twentieth century period, when there was an immediate correlation between solar variability and the NAO (and, by proxy, blocking events), but the solar-NAO relationship at zero lag has been shown to be considerably weaker over a longer record [Roy and Haigh, 2010]. Gray et al. [2013] showed that over the longer period 1870-2010 the strongest correlation was at lags of 3-4 years, but the HadGEM2-CC model was unable to reproduce this behavior, leading Scaife et al. [2013] to suggest that there may be deficiencies in the representation of midlatitude ocean-atmosphere coupling in the model. Despite these outstanding questions, our results are consistent with the findings of other studies which have highlighted a solar influence on NH blocking and the NAO. The results in this section show that there is a coherent change in the NH extratropical circulation in response to the decline in solar activity which extends from the upper stratosphere to the surface [cf., e.g., Ineson et al., 2011; Gray et al., 2013].

Southern Hemisphere

As was shown in Figure 5, the SH high-latitude zonal wind anomalies in December extend throughout the stratosphere and are accompanied by dipole changes in the troposphere in the region of the midlatitude jet. Figure 10 shows differences in the seasonal mean ū between the RCP8.5_solmin and RCP8.5_ref experiments for the JJA (Figure 10a) and October, November, and December (OND) (Figure 10b) seasons. In JJA, there is a small poleward shift in the midlatitude jet, with a peak-to-peak dipole change in ū at 850 hPa of 0.32 m s−1. The strongest signal is a weakening of the westerlies on the equatorward flank of the jet. In OND, there is an equatorward jet shift of ∼0.5° latitude, with a peak-to-peak ū dipole of 0.87 m s−1. Projected future trends in the position of the SH midlatitude jet have been shown to be sensitive to the recovery of the Antarctic ozone hole and increases in greenhouse gas concentrations [Son et al., 2008]. In austral winter, the trend in jet position is largely determined by the greenhouse gas forcing [Barnes et al., 2014]. In the baseline RCP8.5 experiment, there is a poleward shift in the jet of 2.5° (2.7 m s−1 ū dipole) in JJA and 2.0° (3.5 m s−1 ū dipole) in OND (2060-2099 versus 1960-1999). The change in jet latitude in OND is coincident with a more negative Southern Annular Mode (SAM) index of ∼1.4 hPa, which offsets the positive SAM trend in this season in the RCP8.5_ref experiment by ∼25% (see Figure 11). Interestingly, the circulation changes in JJA are of the opposite sign to what is typically associated with stratosphere-troposphere dynamical coupling (weaker stratospheric westerlies lead to a more equatorward tropospheric jet). Such a seasonal dependence of the sign of the SH jet shift has been identified in other studies. Varma et al. [2011] found that the response to a constant 2 W m−2 reduction in TSI consisted of a poleward jet shift in JJA and an equatorward jet shift in DJF in a coupled model without a well-resolved stratosphere. The annual mean response was dominated by the signal in DJF (i.e., an equatorward jet shift). Thresher [2002] found observational evidence for a seasonal cycle in the SH surface solar response over the late twentieth century, which would be consistent with the findings of Varma et al. [2011]. However, the findings of modeling studies may be sensitive to, e.g., the inclusion of a well-resolved stratosphere, and thus, this highlights the need for further research to better understand the response of the SH circulation to stratospheric changes and its dependence on season.

Figure 11. The difference in SAM index (hPa) between 2060-2099 and 1960-1999 for the RCP8.5 and RCP8.5_solmin experiments compared to the historical experiment. The SAM index is defined as the difference in zonally averaged MSLP between 40-60°S and 70-90°S. The whiskers show 5-95% confidence intervals.
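For orientation, the two figures quoted above fix the size of the baseline trend: if a −1.4 hPa SAM change offsets the RCP8.5_ref OND trend by ∼25%, the implied baseline trend is

$$\Delta \mathrm{SAM}_{\mathrm{ref}} \approx \frac{1.4\ \mathrm{hPa}}{0.25} \approx 5.6\ \mathrm{hPa}.$$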
Tropics

In addition to the proposed top-down mechanisms for an amplified surface response to solar forcing in the midlatitudes, some studies have proposed an additional bottom-up mechanism operating in the tropical Pacific. This involves coupled air-sea feedbacks in response to small changes in surface heating and results in an anomalous sea surface temperature (SST) pattern that resembles ENSO [White et al., 1997; Meehl et al., 2008]. However, there has been some disagreement as to whether the observed response corresponds to a warm or cold ENSO phase at solar maximum, and Roy and Haigh [2010] further showed that the apparent solar-ENSO connection could be due to aliasing onto unconnected ENSO variations. As described in section 1, Meehl et al. [2013] identified an ENSO-like response in model simulations of a persistent solar minimum, with relatively warm East Pacific SSTs during the first decade after a reduction in TSI was imposed, followed by colder SSTs in the second decade. However, the interdecadal changes were not found to be highly statistically significant. Figure 12 shows the differences in DJF sea surface temperatures between the RCP8.5_solmin and RCP8.5_ref experiments. There is weak cooling (∼0.1 K) across much of the tropical Pacific; the change in area-averaged (15°N-15°S, 150°E-90°W) temperature is −0.075 K. However, there is no indication of a local amplification in the ENSO region. Our simulations therefore do not lend support to the existence of a solar-ENSO connection. This is in contrast to the results of Meehl et al. [2013], although their simulated ENSO-like response to a persistent solar minimum was weaker than that found for 11 year cycle variations [Meehl et al., 2009]. However, the experiments do show enhanced cooling over the North Pacific, which is consistent with the deepened Aleutian Low [e.g., Schneider and Cornuelle, 2005]. There is also a band of stronger cooling across the SH midlatitudes, which may be partly related to changes in the midlatitude jet (see section 3.3.2).

Figure 12. The difference in DJF sea surface temperature (K) for the period 2050-2099 between the RCP8.5_solmin and RCP8.5_ref experiments. The solid black contours denote 0.1 K intervals. The light grey shading denotes regions that are not statistically significant at the 95% confidence level.

Sensitivity to UV Forcing

The RCP8.5_solmin experiment shows enhanced regional surface responses to a decline in solar activity, particularly in the middle and high latitudes. As described in section 1, one proposed mechanism for such localized effects involves the impact of changes in shortwave heating rates on the stratospheric circulation and subsequent surface impacts via stratosphere-troposphere coupling. The potential for this mechanism to contribute to the response to solar forcing in the Northern Hemisphere was demonstrated by Ineson et al. [2011]. They imposed a perturbation in 200-320 nm radiation (i.e., in UV radiation alone) in a model and found a more negative NAO index under solar minimum conditions. To identify whether a similar mechanism may also be operating here, we conduct a further experiment (RCP8.5_uvmin) in which the 200-320 nm irradiance is reduced by 6.4% in isolation of any other changes (e.g., ozone and visible radiation); this enables a separation of a pure top-down influence from a decline in solar activity. This is a highly idealized experiment, in which the imposed UV perturbation is considerably larger than in RCP8.5_solmin, and other effects, such as the solar ozone response, are neglected. Nevertheless, it allows us to make an assessment of at least one pathway that may be contributing to the RCP8.5_solmin results discussed in the previous sections and to elucidate more generally the role of the top-down pathway for solar-climate coupling. The same experimental protocol as described in section 2 is carried out, with a reduction in 200-320 nm radiation (and by definition in TSI) of 1.75 W m−2 over the 2050-2099 period.
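As an arithmetic cross-check, the quoted 6.4% reduction and the 1.75 W m−2 flux change together imply a baseline 200-320 nm band irradiance of

$$F_{200\text{-}320} \approx \frac{1.75\ \mathrm{W\,m^{-2}}}{0.064} \approx 27\ \mathrm{W\,m^{-2}},$$

so the two numbers quoted for RCP8.5_uvmin are mutually consistent.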
Figure 13a shows the December-February mean difference in ū between the RCP8.5_uvmin and the RCP8.5_ref experiments over 2050-2099. There is a weakening of the stratospheric jet by up to 3 m s−1 and an equatorward shift in the tropospheric jet. This is qualitatively similar to the response in the RCP8.5_solmin experiment (Figures 5 and 8), but about 20-30% larger, and is consistent with a top-down pathway which contributes to the amplified regional climate responses.

Figure 13. As in Figure 8 but for the differences between the RCP8.5_uvmin and RCP8.5_ref experiments. Note that data are only shown for the Northern Hemisphere.

Figure 13b shows the DJF mean sea level pressure differences between the RCP8.5_uvmin and RCP8.5_ref experiments. The changes in tropospheric ū are commensurate with a more negative NAO index and a deepening of the Aleutian Low, again with the amplitude of the changes being slightly larger than in RCP8.5_solmin. Conversely, in the SH (not shown), there is no indication of an enhanced tropospheric response during OND, as was found in RCP8.5_solmin in section 3.3.2. These results suggest that changes in UV irradiance and a top-down influence are likely to be contributing to the enhanced NH surface response discussed in section 3.3 but that the enhanced response in the SH in OND may be related to other processes, such as the ozone response or changes in visible irradiance. Future studies should therefore aim to elucidate the roles of these mechanisms in driving the SH response to solar forcing.

Summary and Discussion

A comprehensive coupled atmosphere-ocean global climate model with a well-resolved stratosphere (HadGEM2-CC) has been used to investigate the possible impacts of evolving into a period of very low solar activity over the 21st century. The assumed scenario is akin to what may have occurred during the Maunder Minimum (MM) in the late seventeenth century. The RCP8.5_solmin experiment assumes a mean decrease in TSI of ∼0.12% over the second half of the 21st century and includes a decrease in UV irradiance (200-320 nm) of 0.85%, along with a representation of the solar cycle impact on stratospheric ozone. The key conclusions of the study for projections of global mean climate are as follows:

1. A return to MM-like levels of solar activity would enhance the anticipated stratospheric cooling trend due to increasing atmospheric carbon dioxide concentrations. The maximum cooling at the stratopause is ∼1.2 K, which can be compared to the projected cooling due to climate change in the RCP8.5 scenario of ∼18 K.

2. The change in global mean near-surface temperature over the second half of the 21st century is O(0.1 K), confirming the findings of earlier studies which have shown that a large decrease in solar activity would do little to offset the projected anthropogenic global warming trend
[cf. Feulner and Rahmstorf, 2010; Jones et al., 2011; Meehl et al., 2013; Anet et al., 2013].

In the NH during boreal winter, the main features of the response to the solar minimum consist of the following:

1. A warmer polar lower stratosphere and slight weakening (<4 m s−1) of the polar vortex, with the largest changes occurring in January-February.

2. Dipole changes in NH ū in DJF in the region of the midlatitude jet. The changes in the large-scale circulation suggest a more negative Arctic Oscillation index, but the pattern over the North Atlantic does not strongly resemble the NAO.

3. Changes in the occurrence of NH tropospheric blocking events, with an increase over Northern Europe and the North Pacific and a decrease over Southern Europe. The magnitude of this change is smaller than has been suggested in studies using reanalysis data for the recent past, but the patterns are similar [Woollings et al., 2010a].

4. A further sensitivity experiment which only included changes in 200-320 nm (UV) radiation indicates that the enhanced NH regional responses are at least partly driven by changes in UV irradiance and a top-down pathway.

A separate paper [Ineson et al., 2015] describes the European wintertime surface response and land surface temperature changes in more detail. In the SH during austral winter and spring, we find that the decrease in solar activity leads to the following:

1. A relative warming of the Antarctic stratosphere during June-December. This is coincident with a weakening of the background stratospheric westerly jet of up to 3 m s−1.

2. An equatorward shift in the Southern Hemisphere (SH) tropospheric midlatitude jet by ∼0.5° and a more negative Southern Annular Mode index of ∼1.4 hPa in October-December (OND).

Finally, in contrast to earlier studies [e.g., Meehl et al., 2009], we find no evidence of an enhanced sea surface temperature response over the tropical Pacific that would be suggestive of an impact on ENSO. Our experiment therefore does not lend support to the existence of a solar-ENSO connection. It is projected that over the 21st century there will be significant changes in the tropospheric circulation due to the combined effects of ozone recovery and increasing greenhouse gas concentrations [e.g., Wilcox et al., 2012; Scaife et al., 2012]. Our experiment has shown that although any impact on global mean surface temperature can be expected to be small, uncertainties in future solar forcing should be considered in projections of regional high-latitude climate change. It is also important to note that although some studies have presented arguments for a future decline in solar output [e.g., Barnard et al., 2011; Abreu et al., 2008], the CMIP5 integrations assumed no trend in solar activity in the future. It is therefore important that more scenarios which reflect the range of possible future changes in solar activity should be generated for use in studies of 21st century climate. We further emphasize that the recommended representation of spectral solar irradiance in CMIP5 was based on the Wang et al. [2005] data set, which is at the lower end of the estimated range for UV variability [Ermolli et al., 2013]. We therefore highlight the need for alternate scenarios which better reflect the current understanding of SSI variability for use in future model intercomparisons.
Life Cycle Environmental Impacts of Electricity from Biogas Produced by Anaerobic Digestion

Abstract

The aim of this study was to evaluate life cycle environmental impacts associated with the generation of electricity from biogas produced by the anaerobic digestion (AD) of agricultural products and waste. Five real plants in Italy were considered, using maize silage, slurry, and tomato waste as feedstocks and cogenerating electricity and heat; the latter is not utilized. The results suggest that maize silage and the operation of anaerobic digesters, including open storage of digestate, are the main contributors to the impacts of biogas electricity. The system that uses animal slurry is the best option, except for the marine and terrestrial ecotoxicity. The results also suggest that it is environmentally better to have smaller plants using slurry and waste rather than bigger installations, which require maize silage to operate efficiently. Electricity from biogas is environmentally more sustainable than grid electricity for seven out of 11 impacts considered. However, in comparison with natural gas, biogas electricity is worse for seven out of 11 impacts. It also has mostly higher impacts than other renewables, with a few exceptions, notably solar photovoltaics. Thus, for the AD systems and mesophilic operating conditions considered in this study, biogas electricity can help reduce greenhouse gas (GHG) emissions relative to a fossil-intensive electricity mix; however, some other impacts increase. If mitigation of climate change is the main aim, other renewables have a greater potential to reduce GHG emissions. If, in addition to this, other impacts are considered, then hydro, wind, and geothermal power are better alternatives to biogas electricity. However, utilization of heat would improve significantly its environmental sustainability, particularly global warming potential, summer smog, and the depletion of abiotic resources and the ozone layer. Further improvements can be achieved by banning open digestate storage to prevent methane emissions and regulating digestate spreading onto land to minimize emissions of ammonia and related environmental impacts.
Keywords: agricultural waste, anaerobic digestion, biogas, electricity, life cycle assessment, renewable energy

Introduction

The need to mitigate climate change and improve security of energy supply is driving a growing interest in renewable energy sources, with many world regions and countries setting ambitious targets. For example, the EU directive on the promotion of the use of energy from renewable sources (EC, 2009) sets the target of achieving a 20% share of energy from renewable resources by 2020, including biogas produced by anaerobic digestion (AD) of agricultural feedstocks. Production of biogas is expanding rapidly in Europe. According to EurObserv'ER (2014), about 13.4 million tonnes of oil equivalent (Mtoe) of biogas primary energy was produced in the EU during 2013, a 10% increase on the 2012 levels. Germany is the largest producer of biogas, not only in Europe but also in the world. In 2013, it had 7874 AD plants with a total installed electrical capacity of 3384 MW, which generated 27 TWh/year (EurObserv'ER, 2014; Fuchsz and Kohlheb, 2015). By comparison, the second largest world producer - China - generates just over one-quarter of that (7.6 TWh/year in 2009) (Chen et al., 2012). Italy follows closely in third place at 7.4 TWh of electricity per year produced by 1300 AD plants with a total installed capacity of 1000 MW (Brizzo, 2015). The plants are fed largely with maize grown specifically for this purpose, which in Italy occupies 10% of the total maize cultivation area (1,172,000 ha) (Casati, 2011). However, this is still only half the area in Germany (2,282,000 ha), where it covers one-third of the total maize land (Dressler et al., 2012). The rapid expansion of biogas production in Europe is largely due to the feed-in tariff (FiT) schemes available in 29 countries (Whiting and Azapagic, 2014). For example, electricity generators in Italy using biogas produced in AD plants smaller than 1 MW are paid €280/MWh generated. In the UK, the subsidies are significantly lower, ranging from €130 to 210/MWh, depending on the plant size (Whiting and Azapagic, 2014). This perhaps explains why the deployment of AD in the UK was initially slower than in Italy, with only 180 AD plants installed so far, but with a further 500 projects currently under development (NNFCC, 2015). However, the FiT scheme in Italy has recently been changed, reducing the subsidy for electricity by 15-30% and introducing payments for utilization of heat and other coproducts (Ministero dello Sviluppo Economico, 2012).
In the US, the growth of biogas production has also been slower than elsewhere, with only 244 AD plants currently in operation (Ebner et al., 2015); this is largely due to the absence of adequate subsidies. Biogas produced by AD is considered to have a high saving potential with respect to greenhouse gas (GHG) emissions (EC, 2009). However, beyond that, other environmental implications of biogas production are still unclear, despite quite a few life cycle assessment (LCA) studies having been carried out. This is due to several reasons. First, most previous studies of biogas have either focused on climate change or considered a limited number of impacts; for a summary, see Table 1. As far as the authors are aware, out of 26 studies found in the literature, only five have considered a full suite of impacts normally included in LCA studies, two of which are based in the UK (Mezzullo et al., 2013; Whiting and Azapagic, 2014), one in Argentina (Morero et al., 2015), one in Italy (Pacetti et al., 2015), and one in China (Xu et al., 2015). It is also apparent from Table 1 that the goal, scope, life cycle impact assessment (LCIA) methodology, feedstocks, and geographical regions covered by the studies vary widely. Most studies are based in Europe, with several in China and one each in Argentina, Canada, and the US. All plants have a capacity below 1 MW, with the majority being around 500 kW (where reported); some are electricity-only and others combined heat and power (CHP) installations. Most studies have excluded the impacts of constructing and decommissioning the AD and power plants. Maize is the most commonly considered feedstock, followed by animal slurry. The functional unit is largely based either on a unit of feedstock used to generate biogas or a unit of energy (biogas, heat, or electricity). Most studies have relied on secondary foreground data to estimate the impacts or used only limited primary data. However, the greatest variation among the studies is found in the number of impacts considered and the methodologies used to estimate them. The former range from 1 to 18 and the latter cover almost all known LCIA methods, including EcoIndicator 99 (Goedkoop and Spriensma, 2001), CML 2001 (Guinée et al., 2002), Impact 2002+ (Olivier et al., 2003), and ReCiPe (Goedkoop et al., 2009). These and the other differences, including the credits for coproducts, have led to very different results among the studies, making it difficult to compare them and draw any generic conclusions on the environmental sustainability of biogas. This study aims to make further contributions to the discussion on the environmental sustainability of biogas. The paper considers life cycle environmental impacts of electricity generation in five real AD-CHP systems using biogas produced from differing mixes of four types of feedstock. The plants are situated in Italy.
The novel aspects of the work compared to previous studies include:

• estimation of impacts associated with electricity generated from biogas using different feedstocks, including dedicated maize crops, their mixture with animal slurry, and agricultural waste as well as a mixture of slurry and waste;
• use of primary data for both the feedstock production and operation of the AD-CHP systems;
• consideration of the influence of different scales of the AD-CHP systems on the environmental impacts;
• inclusion of construction and decommissioning of AD and CHP plants;
• estimation of the avoided emissions from using the digestate instead of slurry as fertilizer; and
• comparison of impacts with grid electricity, natural gas, and renewable sources of electricity.

Materials and Methods

The environmental impacts of biogas electricity were estimated using LCA as a tool. The study was carried out in accordance with the ISO 14040/44 methodology for LCA (ISO, 2006a,b). The systems were modeled using GaBi LCA software V6.11 (Thinkstep, 2015). The CML 2001 method (Guinée et al., 2002), April 2013 update, was followed to estimate the following 11 impacts considered in this method: abiotic depletion potential of elements (ADP elements), abiotic depletion potential of fossil fuels (ADP fossil), acidification potential (AP), eutrophication potential (EP), freshwater aquatic ecotoxicity potential (FAETP), global warming potential (GWP), human toxicity potential (HTP), marine aquatic ecotoxicity potential (MAETP), ozone layer depletion potential (ODP), photochemical oxidants creation potential (POCP), also known as summer smog, and terrestrial ecotoxicity potential (TETP). For further details on the estimation of the impacts, see Supplementary Material. The next sections detail the goal of the study, the assumptions, and the data used in the study.

Goal and Scope of the Study

The main goal of the study was to estimate the environmental impacts of electricity generated by different AD-CHP systems utilizing maize silage and agricultural waste. The results were compared with electricity from the grid, natural gas, and different renewables to help evaluate the environmental sustainability of biogas electricity relative to other available options. Five real AD-CHP systems were considered using differing combinations of the following feedstocks: maize and maize ear silage; pig and cow slurry; and tomato peel and seeds (Table 2). The volume of the AD digesters ranged from 1650 to 2750 m³ and the installed electrical capacity of the CHP plants from 100 to 999 kW. The plants are located at farms producing the feedstocks in Lombardy in Northern Italy, where the majority of the country's biogas plants are situated (Negri et al., 2014). As indicated in Figure 1, the scope of the study was from "cradle to grave", including:

• production of maize silage (where used), comprising cultivation, transport from fields to the farm (1 km), and the ensiling;
• collection of slurry and tomato waste and delivery to the AD plants;
• construction and decommissioning of AD and CHP plants;
• production of biogas in the AD plants and its treatment (filtration, dehumidification, and desulfurization);
• cogeneration of electricity and heat in the CHP plants; the heat, except that used for heating the digesters, is considered as waste as it is not used;
• storage and subsequent use of digestate as fertilizer; note that all plants but no. 2 use open storage of digestate.
Electricity distribution and consumption were excluded from the system boundary. The functional unit was defined as "generation of 1 MWh of electricity to be fed into the grid". Although heat is cogenerated with electricity, all the impacts were allocated to the latter as the excess heat not utilized in the system is discharged as waste.

Inventory Data

Feedstock Production

The inventory data for the production of maize silage are detailed in Tables S1 and S2 in Supplementary Material. As indicated in the tables, data for field operations were collected directly from the farms. The background data were sourced from Ecoinvent (Nemecek and Kägi, 2007) and modified to match the characteristics of the machinery used for maize cultivation in Lombardy, based on information in Bodria et al. (2006). No environmental impacts were considered for tomato waste and slurry as they are waste. Ammonia and nitrous oxide emissions as well as nitrate leachates from the application of the digestate and urea as fertilizers were estimated according to Brentrup et al. (2000). Phosphate leachates and run-offs were calculated based on Nemecek and Kägi (2007).

Figure 1 | System boundaries considered in the study. No environmental impacts are considered for the tomato waste, pig and cow slurry as they are waste. All the impacts are allocated to electricity as heat is not exported from the system.

To estimate pesticide emissions to the environment, several factors need to be considered, such as the way in which a pesticide is applied, the soil type, and the meteorological conditions during application (EMEP/EEA, 2013). However, consideration of these parameters is often impractical in LCA studies due to a lack of detailed data (Milà i Canals, 2007). Thus, pesticide emissions to air, water, and soil were determined in accordance with Margni et al. (2002) and Audsley (1997), assuming the following partitioning of the active pesticide components: 85% of the total amount applied remains in the soil, 5% in the plant, and 10% is emitted into the atmosphere; furthermore, 10% of the applied dose is lost as a run-off from the soil into the water. This method is also recommended for use by Curran (2012) and was applied in some other LCA studies [e.g., Boschiero et al. (2014), Falcone et al. (2015), and Fantin et al. (2015)]. Land use change was not considered as the maize feedstock is grown on land previously used to cultivate cereals. The transport and packaging of pesticides and fertilizers were not included in the system boundaries because of a lack of data. This is not deemed a limitation as some other studies found that their contribution was insignificant [e.g., Cellura et al. (2012)].
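As a minimal illustration of the partitioning rule above, the sketch below applies the stated fractions to an applied dose; the function name is hypothetical, and treating the run-off as leaving the soil compartment (so that soil retains a net 75%) is our reading of the rule.

```python
# Sketch of the pesticide partitioning after Margni et al. (2002) and
# Audsley (1997): 85% of the applied active ingredient stays in soil, 5%
# in the plant, 10% goes to air; 10% of the dose then runs off from soil
# to water, leaving a net 75% in soil (our interpretation of the rule).
def partition_pesticide(applied_kg):
    return {
        "air": 0.10 * applied_kg,
        "plant": 0.05 * applied_kg,
        "water": 0.10 * applied_kg,          # run-off from the soil
        "soil": (0.85 - 0.10) * applied_kg,  # net after run-off
    }

# Example: 1 kg of active ingredient applied per hectare
print(partition_pesticide(1.0))
```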
AD and CHP Plants

In all the AD plants evaluated in this study, the digestion takes place in continuously stirred reactors under mesophilic conditions at a temperature of 40°C (±0.2°C), which is controlled and monitored continuously. Therefore, the digesters are operated at the top end of the temperature scale, which for mesophilic digestion ranges from 30 to 40°C (Weiland, 2010). The digesters are made from iron-reinforced concrete and have an expanded polyurethane external insulation. The biomass is fed into the digesters every 90 min in small amounts and heated using the heat generated by the adjacent CHP. As indicated in Table 2, the dry matter content in the digester varies from 8.5 to 10.6%, and the organic loading rate from 0.58 to 3.4 kg/(m³ day). The biogas composition is similar across the plants, with the methane content ranging from 52 to 56% of the biogas volume. The biogas is stored on top of the digesters in a gasometer dome with a spherical cap. Before being fed into the CHP plant, the biogas is filtered through a sand filter, dehumidified in a chiller, and then desulfurized using sodium hydroxide (NaOH). NOx emissions are controlled by a catalytic converter. The digestate is pumped from the bottom of the digesters and stored in open tanks in all the plants except for Plant 2, where it is stored in a covered tank. The biogas is fed into the CHP plant to generate electricity and heat. Electricity is sold to the national grid while the heat is used for heating the digesters and the excess is dissipated by fan-coolers. The electricity consumption for operating the AD plants is sourced from the national grid to ensure continuous operation during the CHP downtimes. The amount of electricity used by the system ranges from 8.5 to 11% of the total electricity generated (Table 2). Detailed inventory data for the AD and CHP plants can be found in Tables 2 and 3. The operational data (feedstock production, consumption of electricity and heat, electricity generation) were obtained from the owners. Chemical characterization of different types of feedstock and their biogas production potentials were determined by laboratory tests (Fiala, 2012; Negri et al., 2014) and used to calculate the biogas production by the AD plants. The emissions from the CHP plants were calculated based on NERI (2010). The useful lifetime of the AD plants was assumed to be 20 years (Nemecek and Kägi, 2007). For the CHP plants, the lifespan is shorter, between 8 and 10 years, because of the high content of hydrogen sulfide (Fiala, 2012). At the end of a plant's useful lifetime, its construction materials were assumed to be landfilled, except for plastic materials, which were incinerated; the influence on the impacts of recycling is explored in a sensitivity analysis later in the paper. The background data on the construction materials, their transport (120 km by rail and 35 km in 20-28 ton trucks) and landfilling were sourced from the Ecoinvent database v2.2 (Ecoinvent, 2010). Since the data for construction materials for the AD and CHP plants in Ecoinvent correspond to a different plant size (300 m³ for the AD and 160 kWel for the CHP plants), the environmental impacts from their manufacture were estimated by scaling up or down their capacity to match the sizes of the AD and CHP plants considered in this study. This was carried out following the approach used for cost estimation in scaling up process plants (Coulson et al., 1993) but, instead of costs, estimating environmental impacts as follows (Whiting and Azapagic, 2014):

$$E_2 = E_1 \left( \frac{C_2}{C_1} \right)^{0.6} \qquad (1)$$

where E2 is the environmental impact of the larger plant (AD or CHP), E1 the environmental impact of the smaller plant (AD or CHP), C2 the capacity of the larger plant (volume for the AD plant and installed power for the CHP plant), C1 the capacity of the smaller plant (volume for the AD plant and installed power for the CHP plant), and 0.6 the scaling factor.
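To show how equation (1) behaves, a minimal sketch follows; the numbers are illustrative placeholders, not results from the study.

```python
# Six-tenths-rule scaling of equation (1): the impact of a plant of
# capacity c2 is estimated from the impact e1 of a reference plant of
# capacity c1.
def scale_impact(e1, c1, c2, exponent=0.6):
    return e1 * (c2 / c1) ** exponent

# Example: scaling from the 160 kWel Ecoinvent reference CHP plant to a
# 999 kWel plant gives roughly 3 times the reference impact, i.e., much
# less than the ~6.2-fold increase in capacity.
e2 = scale_impact(e1=1.0, c1=160.0, c2=999.0)
print(round(e2, 2))  # ~3.0
```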
However, the methane emissions from digestate storage are lower than those from slurry storage (Amon et al., 2006; Wang et al., 2014), and the AD systems were credited for the avoided emissions. Note that in Plant 2 the digestate is stored in covered tanks, with no emissions of methane (IPCC, 2006); thus, the net emissions from this system are negative (Table 3). At Plant 4, a closed maize cycle is practiced, whereby the digestate is used as fertilizer for the maize which is fed into the same plant (Figure 3). The digestate at this plant is stored in open tanks.

Alternative Electricity Sources

Grid electricity was considered here as the main alternative to electricity from biogas. This is because the latter is fed into the national grid, displacing an equivalent amount of grid electricity. The Italian electricity mix is shown in Figure S1 in Supplementary Material. Given that the electricity mix is dominated by natural gas (53%) (IEA, 2011), biogas electricity was also compared to this option. Furthermore, as biogas is a renewable resource, it was also compared to the other renewables contributing to the Italian mix (see Figure S1 in Supplementary Material). The system boundary for all the alternatives was from "cradle to grave", and all the data were sourced from Ecoinvent (2010). As for the biogas electricity, the distribution and consumption of electricity were not considered.

Results

The results suggest that biogas electricity generated by Plant 5 is environmentally the best option among the five plants considered (Figure 4), largely because it does not use maize silage as a feedstock. The exceptions to this are the MAETP and TETP, for which Plant 1 is slightly better because these impacts are not affected by maize silage (as discussed further below). Plant 1 is also the second best option for all other impacts apart from GWP and POCP, for which Plant 2 is better because of the lower methane emissions from digestate. The differences in the impacts for Plants 2 and 4, which are fed with approximately the same amount of maize silage, are due to the differences in the digestate emissions and the capacities of the AD and CHP plants. Despite having the highest biogas production, Plant 3 is the worst option across all the impact categories because of the maize ear silage, which has impacts twice as high as maize silage owing to its lower yield (Table S2 in Supplementary Material). The exceptions to this are GWP and POCP, for which Plant 4 is worst because of the higher net methane emissions (Table 3).

Figure 4 | The environmental impacts associated with the generation of biogas electricity. All impacts are expressed per megawatt hour of electricity generated. Impacts nomenclature: ADP elements, abiotic depletion potential for elements; ADP fossil, abiotic depletion potential for fossil fuels; AP, acidification potential; EP, eutrophication potential; FAETP, freshwater aquatic ecotoxicity potential; GWP, global warming potential; HTP, human toxicity potential; MAETP, marine aquatic ecotoxicity potential; ODP, ozone depletion potential; POCP, photochemical oxidants creation potential; TETP, terrestrial ecotoxicity potential; DCB, dichlorobenzene.

The following sections discuss in more detail the impacts from the different plants (Figure 4) and the contributions of the different life cycle stages (Figures 5A-E).
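Because the digestate credits recur throughout the impact results (notably for GWP and POCP), the net methane balance behind them can be sketched as follows. This is a minimal sketch under the assumptions stated in the text (covered tanks emit no methane; open slurry storage would have emitted more than digestate storage); the numbers are illustrative, not plant data from Table 3.

```python
# Minimal sketch of the digestate methane credit: the AD system is
# credited with the methane that traditional open slurry storage would
# have emitted, so net CH4 = digestate-storage emissions - avoided
# slurry emissions. A negative result is a net credit.

def net_methane(digestate_ch4_kg: float, avoided_slurry_ch4_kg: float,
                covered_storage: bool = False) -> float:
    """Net CH4 per functional unit; covered tanks emit no CH4 (IPCC, 2006)."""
    emitted = 0.0 if covered_storage else digestate_ch4_kg
    return emitted - avoided_slurry_ch4_kg

print(net_methane(6.0, 9.0))                        # open storage: -3.0 (net credit)
print(net_methane(6.0, 9.0, covered_storage=True))  # covered tank: -9.0
```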
Abiotic Depletion Potential (ADP Elements and ADP Fossil)

The abiotic depletion of elements and of fossil resources ranges from 142 to 243 mg Sb eq./MWh and from 1,010 to 1,570 MJ/MWh, respectively, with Plant 5 being the best and Plant 3 the worst option for both impacts. As indicated in Figures 5A-D, the depletion of elements for Plants 1-4 is mainly due to the cultivation of maize and is associated with the materials used for agricultural machinery. For Plant 5, on the other hand, the major contributors are the construction materials for the AD and CHP plants (Figure 5E); the latter are also a hotspot for Plant 1. This is due to economies of scale: these plants have smaller CHP units and thus a higher consumption of resources per megawatt hour of electricity generated. As also shown in Figures 5A-D, the major contributors to fossil depletion for Plants 1-4 are the fuel used in the agricultural machinery for maize cultivation and the electricity for the AD plants. For Plant 5, the grid electricity used to operate the AD plant accounts for the majority of this impact (Figure 5E).

Acidification and Eutrophication Potentials

The estimated AP varies from 2.6 to 5.5 kg SO2 eq./MWh and EP from 0.2 to 1.9 kg PO4 eq./MWh. As for ADP, biogas electricity generated by Plant 5 is the best and by Plant 3 the worst option for these two impacts. For Plants 1-4, maize cultivation is responsible for the large majority of AP and EP (Figures 5A-D), whereas for Plant 5 (Figure 5E) it is the ammonia emitted during digestate storage as well as the emissions of acid gases and nutrients in the life cycle of the grid electricity used for AD.

Global Warming Potential (GWP, 100 Years)

The values for GWP range from −395 to 408 kg CO2 eq./MWh, with electricity from Plant 5 being the best option and from Plant 4 the worst. The vast majority of GWP (64%) is due to methane emissions from the digestate during its storage. For Plant 2, GWP is mainly from the maize silage (Figure 5B). The negative contributions shown in the figure are due to the methane credits for the avoidance of traditional slurry management, as described in Section "Digestate Use and Methane Emissions Credits". For Plant 5, the methane credits are higher than the methane emissions from the digestate, leading to a negative impact of −395 kg CO2 eq./MWh (Figure 5E). Note that carbon dioxide emissions from biogas combustion in the CHP plant are not considered, as they are biogenic in nature.

Human Toxicity Potential

This impact is lowest for electricity generated by Plants 1 and 5 [79 kg dichlorobenzene (DCB) eq./MWh] and highest for Plant 3 (114 kg DCB eq./MWh). For Plants 1-4, the main contributors are the production of maize silage and the emissions from biogas combustion, in particular chromium and thallium (see Table 3). For Plant 5, HTP is mainly affected by the CHP operation, followed by the AD operation and plant construction (Figure 5E).

Ecotoxicity Potentials (FAETP, MAETP, and TETP)

The lowest FAETP is estimated for Plant 1 (198 kg DCB eq./MWh) and the highest for Plant 3 (413 kg DCB eq./MWh).

Table 2 | Note: negative values represent the credits for the avoidance of methane emissions by using digestate as fertilizer instead of animal slurry.

The production of maize silage and the plant operation are the main contributors to this impact for Plants 1-4. This is mainly due to the emissions of the pesticides used for maize cultivation (Table 3) and the metals (nickel, beryllium, cobalt, and vanadium) emitted in the life cycle of the grid electricity.
It can be noted that Plant 1 has lower MAETP and TETP, which is due to the efficiency associated with economies of scale, as these impacts are mainly influenced by the plant operation (Figures 5A,E). Unlike for HTP, the best option for MAETP is Plant 1 at 55 ton DCB eq./MWh but, as for HTP, Plant 3 has the highest impact (77 ton DCB eq./MWh). The main hotspot is the grid electricity used for AD, because of the emissions of beryllium and hydrogen fluoride in the life cycle of electricity generation. The same trend is found for TETP, with Plant 1 being the best option (2 kg DCB eq./MWh) and Plant 3 the worst (2.5 kg DCB eq./MWh). Maize silage and CHP operation are the main contributors to TETP for Plants 1-4. As for HTP, the latter is mainly due to the emissions of chromium and thallium from biogas combustion. For Plant 5, the CHP operation is the main hotspot (biogas combustion), followed by the AD operation and plant construction.

Ozone Layer Depletion Potential

At 7 mg R11 eq./MWh, Plant 5 has the lowest ODP and, as for most other impacts, Plant 3 the highest (11.3 mg R11 eq./MWh). The main contributors are halons emitted in the life cycle of the grid electricity used in AD (related to natural gas transportation), followed by the emissions from the diesel used in the machinery during maize cultivation (Plants 1-4).

Photochemical Oxidants Creation Potential

The POCP ranges from −73 g C2H4 eq./MWh for Plant 5 to 70 g C2H4 eq./MWh for Plant 3. For Plants 1, 3, and 4, the impact is mainly due to methane emissions from the open storage of digestate.

Figure 7 | Heat map of environmental impacts from biogas electricity and the alternatives considered in this study. The worst option is set at 100% and the others are expressed as a percentage of impact relative to the worst option. Waste, municipal solid waste (MSW); wood, wood chips in a CHP plant; solar PV, solar photovoltaics. For impacts nomenclature, see Figure 5.

Comparison with Alternative Electricity Sources

The biogas electricity is compared to electricity from the grid, natural gas, and renewables in Figure 6, and the ranking of the different options with respect to each impact is summarized in the heat map in Figure 7. As can be seen in Figure 6, grid electricity has higher impacts than electricity from biogas for seven out of the 11 categories: ADP fossil, FAETP, GWP, HTP, MAETP, ODP, and POCP. This is mainly due to the high contribution of fossil fuels in the Italian electricity mix. An exception to this is Plant 3, which has a higher HTP than the grid because of the toxic emissions in the life cycle of maize ear silage. Electricity from the grid also has lower AP (by 10-57%) and EP (by 32-72%) than biogas electricity; this is due to maize cultivation, which contributes significantly to these two impacts (see Figure 5). The exception to this is Plant 5, which has lower impacts than grid electricity (by ~60%) because it does not use maize silage. Two further impacts are lower for grid electricity: depletion of elements and TETP. This could be explained by the greater economies of scale of the plants on the grid, which require a lower amount of resources and thus have lower toxic emissions on a life cycle basis per unit of electricity generated than the agricultural machinery and the AD-CHP plants. Unlike grid electricity, electricity from natural gas is environmentally more sustainable than biogas for most categories, except ADP fossil, GWP, ODP, and POCP (Figure 6). In comparison with the renewables, biogas electricity has mostly higher impacts, with a few exceptions.
For example, biogas has a lower AP than geothermal power across all the AD-CHP plants considered. Furthermore, Plant 5 has a lower GWP and Plant 2 a lower POCP than any other renewable option. Biogas is also better than solar PV in terms of ADP elements, HTP, FAETP, MAETP, ODP, and POCP. It also has a lower MAETP than electricity from municipal solid waste, and it outperforms wood for HTP, POCP, and TETP. With specific reference to GWP, the main driver for biogas production, Plant 5 is the best option overall, sequestering 395 kg CO2 eq./MWh. All other plants generate higher GHG emissions than any of the renewable options considered here. The only other impact for which biogas electricity is a better option than any other is POCP, but again only for Plant 5; however, this plant has a higher TETP than any other alternative. These results are summarized in Figure 7, which shows the percentage difference between the worst option and the rest of the alternatives for each impact. Overall, assuming equal importance of all the impacts, hydropower could be considered the best option and grid electricity the worst, with biogas being on average a middle-ranking option.

Comparison with Other Studies

As discussed in the Introduction, comparison of the results from different studies is not easy for the reasons outlined there. The only studies for which comparison is possible are those by Blengini et al. (2011), Dressler et al. (2012), Meyer-Aurich et al. (2012), Bacenetti et al. (2013), Whiting and Azapagic (2014), and Ingrao et al. (2015); for a summary of these studies, see Table 1. As can be inferred from Figure 8, the results from the current study compare favorably in terms of AP, EP, GWP, and POCP, given the different assumptions, system credits, and geographical locations across the studies. However, the average GWP estimated in this work appears to be lower than in the other studies, mainly because of Plant 5, which has a negative value for this impact. Nevertheless, the impact for the AD-CHP system using pig slurry reported by Bacenetti et al. (2013) compares well with Plant 5, which uses cow slurry (−368 and −395 kg CO2 eq./MWh, respectively). The GWP in Blengini et al. (2011) is consistent with that estimated for Plant 4, while the values found by Dressler et al. (2012), Meyer-Aurich et al. (2012), Bacenetti et al. (2013), and Ingrao et al. (2015) agree well with the results for Plants 1 and 3. It should be noted that, unlike the other studies, Meyer-Aurich et al. (2012) considered land-use change (associated with maize cultivation), finding that it increases GWP by 20%; however, differences in other assumptions cancel out this effect and, consequently, their results still agree with those in the current study. The comparison of the other impacts is only possible with the study by Whiting and Azapagic (2014), since the other authors did not consider them. As can be seen in Figure 8, the results agree for HTP but differ for ADP, FAETP, MAETP, ODP, and TETP. The reason for these differences could be the different updates of the CML method and the GaBi software, as well as the different assumptions, credits for fertilizers, and geographical locations. On the other hand, both studies agree that the contribution of the construction of the AD and CHP plants is significant for ADP elements and the toxicity-related impacts.
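As a side note on Figure 7, the normalization it uses (the worst option in each category set to 100%, all others expressed relative to it) can be sketched in a few lines. This is a minimal sketch; the data values are illustrative placeholders, not results from the study.

```python
# Minimal sketch of the heat-map normalization behind Figure 7: for each
# impact category, express every option's impact as a percentage of the
# worst (highest-impact) option.

def normalize_to_worst(impacts: dict[str, float]) -> dict[str, float]:
    """Return each option's impact as % of the worst option in the category."""
    worst = max(impacts.values())
    return {option: 100.0 * value / worst for option, value in impacts.items()}

gwp = {"Plant 3": 350.0, "grid": 540.0, "hydro": 10.0}  # kg CO2 eq./MWh (made up)
print(normalize_to_worst(gwp))
# ≈ {'Plant 3': 64.8, 'grid': 100.0, 'hydro': 1.9}
```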
Sensitivity Analysis

Because of their significant contribution to the impacts, the following parameters are considered in the sensitivity analysis: (i) maize yield; (ii) heat utilization; (iii) recycling of the AD and CHP construction materials; and (iv) covered storage of digestate in Plant 4. The results are discussed in the following sections.

Maize Yield

To explore the effect of this parameter on the impacts, the maize yield was varied by ±15% against the baseline shown in Table S2 in Supplementary Material. The results in Figure 9 suggest that the overall effect of maize yield on the environmental impacts is small for most impacts, except for AP and EP, which change by up to 14%. This is to be expected, given the high contribution of maize cultivation to these categories. The ADP elements and FAETP results are also affected for Plant 4, varying by up to 12%, because of the change in the resource requirements for the agricultural machinery and the related toxicity of the construction materials. Despite these changes, the variation in the maize yield considered here does not affect the comparison of biogas with the alternative electricity sources discussed in Section "Comparison with Alternative Electricity Sources".

Heat Utilization

This part of the sensitivity analysis considers a scenario in which the net heat produced by the CHP plants is used instead of being wasted. This is motivated by the introduction of subsidies for heat (see Introduction), which aim to stimulate its utilization. It was assumed that the heat generated by the CHP substitutes a gas boiler, for which the AD-CHP systems were credited. The LCA data for the boiler were sourced from Ecoinvent (2010). As indicated in Figure 10, if the heat were utilized, all of the impacts would be reduced, some of them significantly, across the different plants: ADP fossil would be lower by four to six times, GWP by up to nine times, ODP by five to eight times, and POCP by two to four times. This means that biogas electricity from all five plants would have lower impacts for these categories than any other renewable option considered here. However, there would be no change in the ranking with respect to grid electricity, because ADP elements, AP, EP, and TETP remain higher for biogas electricity.

Recycling of Construction Materials

As mentioned earlier, it was assumed that all the construction materials apart from plastics are landfilled after the decommissioning of the plants. Since the construction of the plants contributes significantly to some impacts, particularly for Plants 1 and 5 (Figures 5A,E), the sensitivity analysis considers whether and how these would change if the concrete, steel, iron, and platinum (in the CHP catalytic converter) were recycled. For these purposes, the recycling rates for the former three materials were assumed equal to the current recycling rates in Italy: 60% for concrete (UNI, 2005) and 74% for steel and iron (Fondazione per lo sviluppo sostenibile, 2012). As there are no data for platinum recycling, a recovery rate of 90% was assumed. Plastic materials were not considered for recycling as their quantity is small. The results are presented in Figure 11 for the impacts that are affected by the recycling. The greatest reduction would be achieved for ADP elements (up to 39%) and POCP (up to 13.5%), followed by AP and FAETP (~8%); MAETP would also go down (~5%). The effect on the other impacts is small (<2%).
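A minimal sketch of the recycling scenario just described is given below, under the simplifying assumption of a linear avoided-burden credit (the construction impact of each material is reduced in proportion to its recycled fraction). Only the recycling rates come from the text; the per-material impact figures, function name, and the linear credit itself are illustrative assumptions, not the study's actual calculation.

```python
# Minimal sketch of applying the assumed recycling rates to the
# construction-material impacts: 60% concrete, 74% steel/iron, 90% Pt.

RECYCLING_RATES = {"concrete": 0.60, "steel": 0.74, "iron": 0.74, "platinum": 0.90}

def impact_with_recycling(material_impacts: dict[str, float]) -> float:
    """Total construction impact after crediting the recycled fractions."""
    total = 0.0
    for material, impact in material_impacts.items():
        rate = RECYCLING_RATES.get(material, 0.0)  # unlisted materials: landfilled
        total += impact * (1.0 - rate)
    return total

construction = {"concrete": 40.0, "steel": 25.0, "platinum": 5.0}  # arbitrary units
print(impact_with_recycling(construction))  # 16.0 + 6.5 + 0.5 = 23.0
```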
Covered Storage of Digestate

As discussed in the Results, biogas electricity from Plant 4, which uses maize silage as the AD feedstock, has a higher GWP and POCP than any other plant. Given that much of this is due to methane emissions from the open storage of digestate (Figure 5D), it is important to consider by how much the impacts would change if the digestate were stored in covered tanks, as in Plant 2. The results in Figure 12 suggest that both impacts would decrease significantly: GWP twofold and POCP threefold. In that case, Plant 4 would have lower impacts than Plants 1 and 3 but still higher than Plant 2. The AP and EP results would also be reduced, by 7 and 5%, respectively, because of the avoided ammonia emissions. This would make Plant 4 a better option than Plant 2 for these two impacts. With respect to grid electricity, Plant 4 would have half the GWP. It would also be a better option than solar PV and waste power plants for POCP.

Conclusion

The aim of this study was to evaluate the life cycle environmental impacts associated with the generation of electricity from biogas produced by AD of agricultural products and waste. Five real AD-CHP plants situated in Italy were considered and compared to electricity from the national grid, natural gas, and different renewable technologies. The results suggest that the main contributors to the impacts from biogas electricity are the production of the maize silage and the operation of the anaerobic digester, including the open storage of digestate. Therefore, the system using animal slurry (Plant 5) is the best option among the five plants considered, except for the marine and terrestrial ecotoxicity potentials, for which the best system is the one utilizing slurry, agricultural waste, and a small amount of maize silage (Plant 1). The plant fed with maize ear silage (Plant 3) is the worst option because of the high impacts of the feedstock, which are almost double those of maize silage.

With reference to the size of the AD-CHP plants, larger capacity does not appear to have a positive effect on the environmental impacts, despite the higher efficiencies typically associated with economies of scale. This is because the larger plants require a high organic load to make them viable, which can only be achieved with cereal feedstocks, as these have a much higher biogas yield than slurry or agricultural waste. For example, a 1 MW CHP plant requires around 50 ton of maize silage per day but 400-800 ton of slurry. As this amount of slurry cannot be supplied by a single farm, it would have to be collected from different farms and transported to the plant, which would not be economically or environmentally viable. Furthermore, the digester would be impractically large (20,000-40,000 m³, assuming a hydraulic retention time of 50 days) and thus expensive. Therefore, as the results of this work suggest, it is better to have smaller plants using slurry and waste rather than bigger installations: the latter may be more efficient but require cereal silage, which in turn leads to higher environmental impacts. On the other hand, smaller plants require more resources for construction per unit of electricity generated, so there are some trade-offs.
The results also suggest that utilizing the heat generated by the CHP plant would reduce all the impacts, some of them significantly (specifically the depletion of fossil fuels and the ozone layer, global warming, and summer smog), making biogas electricity a better option for these categories than any of the other renewable alternatives considered here. Recycling the AD and CHP construction materials would reduce the depletion of elements, acidification, freshwater and marine toxicity, as well as summer smog. The latter would also improve, in addition to global warming, if the digestate were stored in covered tanks.

Biogas electricity is environmentally more sustainable than electricity from the grid for seven out of the 11 impacts considered. This is due to the high contribution of fossil fuels in the Italian electricity mix. The remaining four impacts, for which grid electricity is a better option, are the depletion of elements, acidification, eutrophication, and terrestrial ecotoxicity. Thus, biogas electricity reduces GHG emissions compared to the grid, as intended by the government and the European Commission, but aggravates some other impacts. In comparison with natural gas, however, seven out of the 11 impacts are higher for electricity from biogas. It also has mostly higher impacts than the renewables, except for solar PV, for which six out of the 11 impacts are higher than for biogas. Furthermore, biogas is a better option than geothermal power for acidification across all the feedstocks considered. If only slurry is used (Plant 5), it also has lower global warming and summer smog potentials than geothermal. Moreover, marine ecotoxicity is greater for electricity from municipal solid waste than for that from biogas. Focusing on the global warming potential, which drives biogas production, using slurry as a feedstock (Plant 5) is the best option across all the electricity options considered here, sequestering 395 kg CO2 eq./MWh. All the other biogas systems generate higher greenhouse gas emissions than any of the renewable options considered here. The only other impact for which biogas electricity is a better option than any other is summer smog, but only for the slurry feedstock; however, it also has a higher terrestrial ecotoxicity than any other electricity alternative.

In summary, biogas electricity can help reduce GHG emissions relative to fossil-intensive grid electricity such as that of Italy; however, some other impacts are increased. On the other hand, if mitigation of climate change is the main aim, then other renewables have a greater potential to reduce GHG emissions. If, in addition to this, other impacts are considered, then hydro, wind, and geothermal power are better alternatives to biogas. However, if the subsidies for heat utilization are successful, the environmental sustainability of biogas electricity would improve significantly, particularly for global warming, summer smog, and the depletion of the ozone layer and abiotic resources. Further policy changes should include a ban on open digestate storage to prevent methane emissions, and regulation of digestate spreading on land to minimize emissions of ammonia and the related environmental impacts. Finally, it should be noted that the results obtained in this study correspond to mesophilic digestion at 40°C and may differ from the results for other operating conditions. Furthermore, the analysis did not consider other environmental aspects, such as habitat destruction and biodiversity loss, as they are outside the scope of LCA.
These and other impacts could be evaluated in future research, alongside economic costs and social impacts, as part of a broader sustainability assessment.

Author Contributions

AA and MF conceived and supervised the work; JB collected the data; AF carried out the LCA study; AA, AF, and JB wrote the paper.

Acknowledgments

This work was funded by the UK Engineering and Physical Sciences Research Council (EPSRC), grant no. EP/K011820/1. This funding is gratefully acknowledged. The authors are also grateful to the editor and the reviewers for their comments that helped to improve the paper. We would also like to thank Dr. Laurence Stamford and Ellen Gleeson at the University of Manchester for proofreading the manuscript and Dr. Martyn Jones, also at Manchester, for his assistance with the figures.

Supplementary Material

The Supplementary Material for this article can be found online at http://journal.frontiersin.org/article/10.3389/fbioe.2016.00026
Potential of the Dietary Antioxidants Resveratrol and Curcumin in Prevention and Treatment of Hematologic Malignancies

Despite considerable improvements in the tolerance and efficacy of novel chemotherapeutic agents, the mortality of hematological malignancies is still high due to therapy relapse, which is associated with a bad prognosis. Dietary polyphenolic compounds are of growing interest as an alternative approach, especially in cancer treatment, as they have been proven to be safe and display strong antioxidant properties. Here, we provide evidence that both resveratrol and curcumin possess huge potential for application as both chemopreventive agents and anticancer drugs and might represent promising candidates for the future treatment of leukemia. Both polyphenols are currently being tested in clinical trials. We describe the underlying mechanisms, but also focus on possible limitations and how they might be overcome in future clinical use, either by chemically synthesized derivatives or by special formulations that improve bioavailability and pharmacokinetics.

Introduction

With more than 3 million new cases and 1.7 million deaths each year, cancer is the most important cause of death and morbidity in Europe after cardiovascular diseases. According to the WHO, it accounts for 20% of all deaths in Europe, a rate which is even believed to increase in the future. The most recent cancer statistics resource, GLOBOCAN 2008, counted 47,500 new cases of leukemia and more than 32,000 deaths in European men. Leukemia thereby reached seventh place on the list of the ten most frequent cancer types in men. In women, leukemia is less frequent, currently ranking 11th. Leukemia develops from hematopoietic stem cells that escape the normal control mechanisms, thereby interrupting their capacity to differentiate into mature blood cells [1-3]. As a result of the uncontrolled proliferation of hematological progenitor cells, an excessive number of malignant cells accumulates in the bone marrow, where they replace normal marrow tissue and affect the physiological production of blood cells. In this review we will focus on the role of oxidative stress in leukemogenesis and on how the natural antioxidants resveratrol and curcumin interfere with and prevent this process. We provide evidence for their impressive chemopreventive and chemotherapeutic potential by delivering insight into the detailed action of both compounds, their additional cellular targets beside free radicals, and the signaling pathways affected. We also highlight the role of their pro-oxidant effects and present an overview of the efforts that have been undertaken to improve the bioavailability and pharmacokinetics of resveratrol and curcumin.

Role of Oxidative Stress and Cellular Antioxidant Defense Mechanisms

Reactive oxygen species (ROS), including the superoxide (O2•−), hydroxyl (•OH) and peroxyl (ROO•) radicals or hydrogen peroxide (H2O2), exert deleterious effects when present at high concentrations, even though they have some physiological functions. They cause oxidative damage to cellular DNA, proteins and lipids [4]. Beside ROS, reactive nitrogen species (RNS), including nitric oxide (NO•), play a role in the oxidative damage of proteins via nitrosylation reactions. More importantly, NO• can act in combination with superoxide to produce the highly reactive peroxynitrite anion (ONOO−), which subsequently triggers DNA fragmentation and lipid peroxidation [5].
Such permanent modifications of cellular macromolecules might ultimately result in carcinogenesis [6-12]. In order to prevent the harmful accumulation of damaged DNA, lipids and proteins, and the subsequent initiation of carcinogenesis, the cell possesses a complex and highly effective system of antioxidant defense that allows an immediate response to oxidative stress. Various enzymatic antioxidants like superoxide dismutase (SOD), catalase (CAT) and glutathione peroxidase (GPx), as well as non-enzymatic antioxidants, act together to render ROS/RNS and H2O2 harmless. Cytosolic or mitochondrial forms of SOD catalyze the conversion of the superoxide anion (O2•−) to H2O2 and O2. The resulting H2O2 is subsequently removed by the enzymatic activity of GPx or CAT. The latter is localized in peroxisomes and converts H2O2 with an impressive turnover rate to water and molecular oxygen [13]. GPx, on the other hand, acts in both cytosol and mitochondria, where it counteracts oxidative stress by reducing peroxides to water with the simultaneous oxidation of glutathione (GSH) to glutathione disulfide (GSSG) [14]. GSH is a non-enzymatic antioxidant present in the cytosol, nucleus and mitochondria that consists of three amino acids and represents the major thiol-disulfide redox buffer, responsible for the maintenance of the overall redox balance in the cell [15]. It plays an important role in the regulation of redox-sensitive cysteine-containing enzymes [16,17] and serves as a cofactor for detoxifying enzymes like GPx, which in turn prevent oxidative lipid damage by reducing lipid peroxides [18]. Moreover, GSH serves as a direct ROS scavenger and functions in the regeneration of the oxidized forms of the antioxidant vitamins C and E [13]. Because GSH modulates the activation and binding of transcription factors [17], and as the cellular concentration of reduced GSH is up to 100-fold higher than that of GSSG, a minor increase in GSH oxidation can significantly affect the GSH:GSSG ratio and consequently influence signal transduction and cell cycle progression [19]. The balance can be restored by the NADPH-dependent glutathione reductase or by the thioredoxin/glutaredoxin systems, which catalyze the inverse reaction by reducing GSSG to GSH, or by the elimination of the oxidized GSSG from the cell [20]. Beside glutathione, the thioredoxin system significantly contributes to the intracellular redox environment. Two cysteine residues within the active site of thioredoxin (TRX) are responsible for its ability to reduce disulfide bonds within GSH or multiple oxidized proteins, including several transcription factors [21]. Thioredoxin reductase (TR) afterwards catalyzes the NADPH-dependent reduction of oxidized TRX into its active form. Another essential part of the non-enzymatic antioxidant defense is the ascorbate system. Vitamin C (ascorbic acid), a diacid with two ionizable hydroxyl groups, exists at physiological pH mainly in its AscH− form. Interaction with ROS leads to ascorbate-derived products that are less reactive, and the resulting Asc•− radical represents the terminal small-molecule antioxidant [13]. Vitamin C is a hydrophilic antioxidant that acts together with the membrane-localized vitamin E in protecting membrane lipids from peroxidation, as it regenerates oxidized vitamin E [22]. Figure 1b gives an overview of the cellular antioxidant defense and simultaneously indicates the molecular targets of resveratrol and curcumin.
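The sensitivity of the GSH:GSSG ratio noted above can be illustrated with a minimal numeric sketch. The concentrations are illustrative assumptions (not measurements from the review); only the 2 GSH → 1 GSSG stoichiometry of glutathione oxidation is standard chemistry.

```python
# Minimal numeric sketch: because reduced GSH is in large excess over
# GSSG, oxidizing even a small fraction of the GSH pool shifts the
# GSH:GSSG ratio markedly (2 GSH are consumed per GSSG formed).

def gsh_gssg_ratio(gsh_mM: float, gssg_mM: float, fraction_oxidized: float) -> float:
    """Oxidize a fraction of GSH (2 GSH -> 1 GSSG) and return the new ratio."""
    oxidized = gsh_mM * fraction_oxidized
    return (gsh_mM - oxidized) / (gssg_mM + oxidized / 2.0)

print(gsh_gssg_ratio(10.0, 0.1, 0.00))  # baseline ratio: 100
print(gsh_gssg_ratio(10.0, 0.1, 0.05))  # oxidize just 5% of GSH: ratio ≈ 27
```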
Role of Oxidative Stress in the Development and Evolution of Leukemia

Over the last years, evidence has accumulated that oxidative stress might be an important player in hematological malignancies. In patients suffering from different kinds of leukemia, the balance between free radicals (ROS/RNS) and the cellular antioxidant defense mechanisms is disturbed. A study performed on 20 patients with chronic leukemia indicated that leukemic cells from these patients produce more ROS than non-leukemic cells. Moreover, the total antioxidant activity in these cells was not sufficient to antagonize the harmful effects of the free radicals [23]. In lymphocytes of chronic lymphocytic leukemia (CLL) patients, for example, Oltra et al. [24] measured lower SOD and CAT activities, which progressively decreased within four years of CLL disease without chemotherapeutic treatment. Since oxidative stress causes stable chromosome modifications, and mutagenesis represents an important trigger of cancer development, such redox imbalance is tightly associated with oncogenic stimulation. Mutagenesis is induced in response to a moderate level of oxidative stress. Weak oxidative conditions, on the other hand, play a role in tumor promotion, whereas high levels of free radicals are involved in apoptosis [25]. Accordingly, oxidative damage to DNA and lipids accumulated in CLL patients over time, as shown by raised levels of the oxidation products 8-oxo-dG and malondialdehyde (MDA) [24]. These results are in agreement with earlier findings that revealed higher levels of DNA base lesions and lower levels of the antioxidant enzymes SOD and CAT in patients with childhood acute lymphoblastic leukemia (ALL) compared to the levels in healthy children [26]. Oxidative DNA damage might afterwards influence transcriptional regulation and introduce replication errors, cause modulation of signaling pathways, or lead to genomic instability, thereby promoting cancer development [27]. GPx activity was also reduced in lymphocytes of ALL patients compared to control lymphocytes [26]. This observation seems to be specific for the acute forms of leukemia, as GPx activity was actually elevated in CLL [24,28], possibly as part of an adaptation mechanism against hydrogen peroxide. A clinical study by Zhou et al. indicated that there is also a strong correlation between oxidative stress and the development of acute myeloid leukemia (AML), as well as the incidence of disease relapse [29]. In 102 leukemic patients in the primary condition or at relapse, oxidative stress levels were significantly increased compared to the 102 healthy volunteers tested. Many AML patients successfully treated with chemotherapeutics undergo a subsequent relapse, which is responsible for poor survival rates. The mean plasma level of the lipid peroxidation product and most popular marker of oxidative stress, MDA, was found to be significantly higher at the relapse stage, whereas the total antioxidant capacity was decreased to half its initial level [29]. These findings allow the conclusion that oxidative stress persists in relapsing AML patients. A further study on pediatric patients with ALL provided evidence that the increase in oxidative damage to proteins and lipids in these patients, measured by protein carbonylation and MDA levels, respectively, was not caused by chemotherapy. The results indicated that the accumulation of oxidative lesions in non-treated patients and in patients in the first phase of treatment was higher than in those treated for longer times or out of treatment.
The antioxidant activities of CAT and SOD, on the other hand, were simultaneously decreased [30]. In a previous study, Zhou et al. tested 92 AML patients, of whom 48% were suffering from clinical depression. Compared to the control group, significantly elevated serum concentrations of ROS, NO and MDA could be detected in the depressive patients. This increase in free radicals and oxidative damage was accompanied by decreased total antioxidant capacity and SOD levels [31]. All these observations argue for a link between decreased levels of the cellular antioxidant defense and the accumulation of oxidative damage in this type of cancer. As the biggest differences in the activity of antioxidant enzymes have been detected in early stages of the disease, it is conceivable that persistent oxidative stress plays a role in the development of leukemia [30]. This theory is corroborated by the results of an in vivo study performed on high-leukemic mice. During the development of natural lymphatic leukemia in these mice, the activities of SOD, GPx and CAT decreased significantly [32]. Moreover, persistent oxidative stress could be observed in depressive and relapsing AML patients, indicating its importance also at these stages [29,31]. It is worth mentioning that oxidative stress is associated with inflammation. A few years ago, a connection between the long-term intake of anti-inflammatory drugs, such as aspirin as well as some natural compounds, and a reduced incidence of cancer was demonstrated [33-35]. These data confirm results from epidemiological and experimental studies indicating that chronic inflammation is linked to carcinogenesis [36,37]. An inflammatory stimulus causes oxidative stress via the accumulation of ROS and induces lipid peroxidation as well as miscoding DNA adducts, which are then directly implicated in the initiation of carcinogenesis [38,39]. Simultaneously, ROS lead to the modulation of redox-sensitive transcription factors and thereby activate NF-κB and other signaling pathways. As a consequence, the aberrant expression of their target genes creates a dangerous feedback loop. Proinflammatory cytokines and chemokines, for example, are known to play a role in tumorigenesis [40,41]. They not only lead to an increased production of ROS in phagocytic and non-phagocytic cells but also recruit further inflammatory cells [39,42].

Natural Antioxidants in the Treatment of Hematologic Malignancies

As the level of the total cellular antioxidant capacity, including antioxidant enzymes as well as non-enzymatic antioxidants, is decreased during the etiology of leukemia, as explained above, the administration of antioxidants might represent a successful way to overcome hematologic malignancies. In nature, manifold antioxidants are produced in dietary plants, and many of them have already been evaluated in vitro or in vivo as potent anti-carcinogenic agents in different kinds of tumors. They are tested alone or in combination with other antioxidants or classical chemotherapeutics, and some of them have even shown promising results in clinical trials. As classical chemotherapy is often associated with severe side effects and is hardly affordable due to its high costs, researchers are now focusing their attention on the search for alternative medicines. In this respect, natural compounds are again of growing interest and have become intensively investigated within the last years.
For many plant-derived compounds, epidemiological studies have indicated that there is a correlation between dietary intake and a reduced incidence of inflammation and cancer. Interestingly, some of them have been used for centuries in traditional medicine all over the world. Compared to classical treatment, they possess several important advantages. Beside the lower costs and good availability of natural compounds, it is noteworthy that they do not exert serious side effects on normal tissues and, most strikingly, these compounds can, in contrast to classical chemotherapeutics, even be used for chemoprevention. In this review we present the two most promising naturally derived antioxidants, which belong to the group of polyphenols and have been intensively studied over the last years [Figures 1(a) and (b)]: resveratrol and curcumin. We deliver detailed insight into their antioxidant but also pro-oxidant effects and provide evidence for the potent antileukemic action of these compounds and some of their derivatives.

Resveratrol

The non-flavonoid polyphenolic compound resveratrol (1) is well known from the so-called "French paradox", describing the epidemiological observation of an inverse correlation between red wine consumption and the incidence of cardiovascular disease [43]. It has been known since 1976 that resveratrol is a phytoalexin produced in high amounts in the skin of grape berries in response to infection or mechanical injury [44], subsequently accounting for concentrations of up to 7.7 mg/L resveratrol in red wine [45]. Besides grapes, this trans-3,5,4'-trihydroxystilbene is abundant in more than 70 plant species, including berries, peanut and several herbs [46,47], and confers many health-beneficial activities, acting also against inflammation and carcinogenesis [48-50]. It is well known that carcinogenesis is a multi-step process subdivided into three stages: tumor initiation, promotion and progression. The chemopreventive potential of resveratrol is based on its adverse effects on processes involved in every single step of tumor development [46,50,51].

Antioxidant Effects of Resveratrol Prevent Initiation of Carcinogenesis

ROS and other free radicals are able to interact with DNA to induce mutations and DNA base modifications. This oxidative damage represents the initial step of carcinogenesis when cellular repair mechanisms fail to fix these lesions [13,52,53] and results either in the modulation of gene expression through epigenetic effects or in permanent somatic mutations and chromosomal rearrangements. The antioxidant activity of resveratrol is mainly responsible for the inhibition of tumor initiation, as it prevents free radicals from interacting with cellular DNA. In leukemic HL-60 cells, for example, resveratrol has been shown to prevent free radical formation induced by 12-O-tetradecanoylphorbol-13-acetate (TPA) [50]. Jang et al. provided evidence that resveratrol strongly inhibits ROS production in human monocytes and neutrophils [54]. Recently, Burkitt and Duncan described the powerful antioxidant action of resveratrol in the presence of the ascorbate or glutathione system. Their results revealed that resveratrol protects cells from DNA damage via classical hydroxyl-radical scavenging activity in the ascorbate system and by a novel mechanism including the inhibition of glutathione disulfide bond formation [55]. Shamon et al. further reported that resveratrol prevents mutagenesis in 7,12-dimethylbenz(a)anthracene (DMBA)-treated Salmonella typhimurium [56].
An in vivo study performed on a mouse skin cancer model indicated that tumorigenesis is significantly reduced in DMBA-treated mice when 1 to 25 μmol resveratrol is administered together with the phorbol ester TPA; in addition, no toxicity due to resveratrol could be observed [50]. Another mentionable feature linked to tumor initiation is the ability of resveratrol to induce phase II detoxification enzymes like NAD(P)H:quinone oxidoreductase [57], which has been shown to protect cells from toxicity and neoplasia [58,59]. Some years ago, Chen et al. found that heme oxygenase-1 (HO-1), another component of the cellular antioxidant defense, is likewise upregulated by resveratrol. Deeper investigation of the underlying mechanism revealed that the Akt and ERK1/2 kinases are activated in response to resveratrol treatment and that these signaling pathways finally increase the levels of NF-E2-related factor 2 (Nrf2) [60]. This redox-sensitive transcription factor represents a potent activator of antioxidant response element (ARE)-dependent genes [61], including various phase II detoxifying enzymes. The antioxidant activity of HO-1 is due to the catabolism of heme, which is converted into biliverdin. The latter is afterwards metabolized into the antioxidant bilirubin [62]. Evidence was provided that resveratrol specifically inhibited the cyclooxygenase and hydroperoxidase activities of cyclooxygenase (COX)-1 and likewise inhibits COX-2 [63,64]. This finding is particularly important, as COX can convert carcinogens into DNA-damaging forms [50], and suppression of COX function leads to the disruption of prostaglandin biosynthesis [51]. As prostaglandins are important players in the pathogenesis of both inflammation and cancer, the latter effect is an important example of the antitumor-promotion activity of resveratrol. Beside the possibility of direct DNA damage caused by ROS, mutations and genotoxicity can also result indirectly from lipid peroxidation. The phospholipids of cellular membranes are extremely susceptible to oxidation due to their high content of polyunsaturated fatty acids, which participate in free radical chain reactions. Initial products, such as lipid hydroperoxides, afterwards produce reactive aldehydes and epoxides in the presence of metals [13]. The most prominent one is MDA, which is not only mutagenic and carcinogenic in mammals [65,66] but further reacts with DNA bases to form deleterious adducts like M1G, M1A or M1C. Other DNA adducts caused by lipid peroxidation are exocyclic etheno adducts [13]. Among these, especially etheno-dA and etheno-dC are of great importance, since they act mutagenically in monkey kidney cells [67]. Protecting phospholipids from oxidation therefore represents another important strategy to counteract tumor initiation. Like vitamin E, resveratrol is a lipid antioxidant that has the ability to prevent lipid peroxidation by scavenging peroxyl radicals within the membrane [68]. Murcia et al. even found that its ability to prevent lipid peroxidation was higher than that of vitamin E, as was its HOCl-scavenging capacity [69]. This is in agreement with findings by Stojanovic et al., who analyzed the reactions of lipid peroxyl radicals with natural antioxidants. They reported that the radical-scavenging activity of resveratrol is comparable to that of the flavonoids epicatechin and quercetin, yet exceeds that of vitamins E and C [70].
Moreover, it has been shown that trans-resveratrol is able to defend low-density lipoprotein (LDL) from copper-mediated oxidation by scavenging free radicals and, more importantly, through its potential to chelate copper [71]. Compared to other polyphenols, one advantage of resveratrol worth mentioning is that this compound does not chelate iron and hence does not affect iron absorption [72]. Taken together, the potent antioxidant activity of resveratrol is mainly responsible for its important cancer chemopreventive effects, since free radical-induced lipid peroxidation and oxidative DNA damage are causative factors in cancer development [73].

Regulation of Cell Cycle, Proliferation and Apoptosis by Resveratrol Affects Cancer Promotion

Beside its ROS-scavenging activity, resveratrol has been shown to interact with many cellular targets. It interferes with different signaling pathways and even exhibits, under certain conditions, some pro-oxidant activities that are linked to antitumor promotion and progression. By using 2',7'-dichlorofluorescein (DCFH) measurements, De Salvia et al. found a slight ROS accumulation in resveratrol-treated CHO cells, whereas resveratrol did not induce primary DNA damage. Further results indicated a resveratrol-mediated induction of chromosome aberrations in a dose-dependent manner [74]. Gautam et al. demonstrated that resveratrol caused apoptotic DNA fragmentation in three leukemia cell lines (32Dp210, L1210, HL-60) but not in normal bone marrow cells [75]. Oxidized products of resveratrol were generated in leukemia cells following the resveratrol-catalyzed reduction of Cu2+ to Cu+, which increased the generation of DNA strand breaks [76]. Long-term administration of resveratrol to HCT-116 cells at sub-apoptotic concentrations resulted in growth arrest caused by a chronically enhanced ROS level and activated the DNA damage checkpoint [77]. These data revealed a new link between the pro-oxidant activity of resveratrol and the induction of cell cycle arrest. In fact, several studies demonstrate antiproliferative effects of resveratrol on various leukemic cell lines. For instance, Bernhard et al. have shown that resveratrol induces S-phase arrest in the T-cell derived acute lymphocytic leukemia cell line CEM-C7H2, followed by Fas-independent apoptosis [78]. S-phase arrest also occurs in AML cells in response to resveratrol treatment, through the reduction of interleukin (IL)-1β expression and the subsequent suppression of NF-κB activation [79]. One plausible mechanism by which resveratrol might mediate cell cycle arrest in the S-phase involves the inhibition of DNA synthesis. Previous studies reported that resveratrol affects DNA replication through the inhibition of ribonucleotide reductase and DNA polymerase activity [46,80]. In detail, it has been shown that the 4'-hydroxy group of trans-resveratrol is required for the antiproliferative effect and probably interacts with DNA polymerases α and γ [81,82]. The pro-apoptotic action of resveratrol on T-cell lymphotrophic virus-1-infected cell lines further correlates with the suppression of survivin expression [83]. Moreover, trans-resveratrol induced cell death in B-CLL cell lines in vitro as well as in ex vivo models, which was associated with typical apoptotic features like the activation of caspase-3, loss of the mitochondrial membrane potential and downregulation of the two antiapoptotic proteins Bcl-2 and inducible nitric oxide synthase (iNOS) [84,85].
iNOS is spontaneously expressed in leukemic cells, and tumor cells in which the iNOS pathway is blocked are progressively driven into apoptosis. Although the underlying mechanisms responsible for the antiapoptotic role of NO• are not known in detail, they might involve the inhibition of caspase activation and loss of the mitochondrial membrane potential [86]. As mentioned before, resveratrol is also known to affect signaling pathways. Bhardwaj and colleagues, for example, found both the NF-κB and STAT3 pathways inhibited in human multiple myeloma cells treated with resveratrol. Concomitantly, several antiapoptotic proteins like Bcl-2, Bcl-xL, XIAP and survivin were downregulated, consequently sensitizing the cells to caspase-dependent apoptosis [87]. By this mode of action, resveratrol might be able to overcome chemoresistance, which is closely associated with the constitutive activation of these two signaling pathways [87]. These results are in line with observations by Youn et al., who found that resveratrol affected the activation of NF-κB, STAT3 and ERK and, moreover, downregulated iNOS expression in mouse colitis [88]. Beside NF-κB, Kundu et al. identified another transcription factor, AP-1, as a cellular target of resveratrol. The latter inhibited the DNA binding of AP-1 and prevented the expression of some AP-1 components in the nucleus of mouse skin cells upon TPA stimulation [89,90]. Tili et al. found that resveratrol reduced AP-1 activity in human THP-1 monocytic cells and blood monocytes by upregulating the non-coding, tumor-suppressor micro-RNA miR-663. Simultaneously, this upregulation of miR-663 leads to a decrease in miR-155, which is highly expressed in many human cancers. The upregulation of miR-663 might therefore represent a strategy to improve the positive effects of resveratrol as an anti-cancer agent [91,92]. As a result of disturbing these pro-inflammatory signaling pathways, the TPA-induced expression of the tumor promoter COX-2 was suppressed [89,93]. The inhibition of NF-κB is not restricted to leukemias, as this pathway was also impaired in human pancreatic cell lines, in which resveratrol caused apoptosis with inhibition of Bcl-2, Bcl-xL, COX-2 and cyclin D1. In addition, it was even reported to synergize with the antitumor activity of gemcitabine [94]. Several years ago, it was reported that resveratrol represents a strong activator of SIRT1, an NAD+-dependent histone deacetylase with antiapoptotic, anti-inflammatory as well as transcription- and cell cycle-regulating activities [95]. Activating SIRT1 might be another strategy by which resveratrol inhibits the NF-κB and/or AP-1 signaling pathways [96]. Recent data, however, raise doubts about the SIRT1-activating activity of resveratrol. A fluorescence-based in vitro assay was used to demonstrate SIRT1 activation by resveratrol [95,97]. However, it has been revealed that this Fluor de Lys-SIRT1 peptide represents an artificial substrate, since resveratrol failed to increase SIRT1 activity in the absence of the fluorophore [98,99]. Using recombinant SIRT1, the deacetylation of an acetylated p53-derived peptide or of peroxisome proliferator-activated receptor-γ coactivator-1α (PGC-1α) could be observed in vitro. Whereas both reactions could be prevented by incubation with a SIRT1 inhibitor, resveratrol did not change the acetylation level of these substrates [99].
Similarly, by using NMR techniques, a more recent study provided evidence that resveratrol indeed interacted with fluorophore-containing peptide substrates, but that it was unable to activate SIRT1 in the presence of native substrates, such as the full-length protein substrates p53 and acetyl-CoA synthetase 1 [100]. Consequently, resveratrol seems to be no direct activator of SIRT1. On the other hand, Boily and coworkers demonstrated that the antitumor activity of resveratrol at least partly depends on SIRT1. They have shown that resveratrol strongly prevented the development of induced skin papillomas in mice, while this protective effect was almost eliminated in SIRT1-null mice [101]. Recently, it has been reported that SIRT1 is subject to redox regulation. S-nitrosoglutathione (GSNO), for example, modified cysteine residues of SIRT1. Instead of affecting its deacetylase activity, such S-glutathiolation reactions prevented SIRT1 activation by resveratrol [102]. Moreover, reduced SIRT1 levels were observed in aged and atherosclerotic vessels in vivo [103]. Downregulation of SIRT1 was also observed in response to cigarette smoke-induced oxidative stress in bronchial epithelial cells and is caused by lipid peroxidation byproducts rather than directly by ROS [104]. The authors provided evidence that post-translational modifications of cysteine residues induced by reactive aldehydes could lead to the inactivation and proteasome-dependent degradation of SIRT1 [104]. Counteracting oxidative stress might therefore mediate the upregulation of SIRT1 mRNA expression, as observed in human umbilical vein endothelial cells (HUVECs) upon resveratrol treatment [103], and explain the positive effects of resveratrol on SIRT1 activation. Resveratrol also suppresses the growth of myeloid cells. Findings by Lee et al. demonstrated that resveratrol, although inhibiting the proliferation of both promyelocytic leukemia cells (HL-60) and non-malignant B-cell lymphoblastoid (WIL2-NS) cells by blocking cell cycle progression in the G0/G1 phase, induced apoptosis selectively in HL-60 cells. WIL2-NS cells might escape cell death due to their ability to repair DNA damage and restore cell cycle progression [46]. In summary, resveratrol seems to have only marginal cytotoxic effects on non-malignant cells. In contrast to HL-60 cells, which are killed via the CD95-CD95 ligand pathway [105,106], resveratrol has been reported to drive apoptosis also in CD95-signaling-resistant ALL cell lines by activating the intrinsic apoptotic pathway, whereas normal peripheral blood mononuclear cells (PBMCs) are not affected [107]. Equally, leukemic lymphoblasts isolated from pediatric patients with ALL undergo apoptosis when treated with resveratrol [45].

Resveratrol Derivatives and Their Potential as Anti-leukemic Agents

As the chemopreventive potential of resveratrol (1) is of remarkable interest, more and more researchers are focusing their attention also on resveratrol derivatives, which either originate from nature or are chemically synthesized in order to improve its biological activities. The chemical structures of resveratrol and its most potent derivatives are illustrated in Figure 2. Investigations of the structure-activity relationship revealed some structural determinants responsible for the biological activity of resveratrol and its derivatives. Thus, it has been reported that the number and position of hydroxyl groups as well as intramolecular hydrogen bonding are essential features [108-110].
Compared to trans-resveratrol, trans-stilbene compounds containing a 4-hydroxy group, double bonds and ortho- or para-diphenoxyl functionalities exert significantly higher activity [108]. Similarly, Fang et al. found that the position of the hydroxyl groups and the oxidation potential of the molecule determine the antioxidant activity of resveratrol analogues. In this respect, especially derivatives with ortho-dihydroxyl (3,4-dihydroxy-trans-stilbene) and/or para-hydroxyl functionalities showed the highest antioxidative effects against 2,2'-azobis(2-amidinopropane) hydrochloride (AAPH)-initiated peroxidation of linoleic acid in sodium dodecyl sulfate (SDS) and cetyl trimethylammonium bromide (CTAB) micelles [109]. Their findings revealed that the tested resveratrol analogues act by scavenging lipid peroxyl radicals and were also able to regenerate vitamin E from the α-tocopheroxyl radical back to its active form [109]. In a more recent study, the radical-scavenging activity of nine chemically synthesized resveratrol analogues was analyzed by the reaction kinetics with galvinoxyl (GO•) and 2,2-diphenyl-1-picrylhydrazyl (DPPH•) radicals in ethanol and ethyl acetate using UV-vis spectroscopy. The results confirmed previous observations that 3,4-dihydroxy-trans-stilbene is the most active resveratrol derivative, and the authors suggested that the 4'-hydroxyl group is more favorably oxidized compared to 3-OH or 5-OH [110]. In this respect, and in agreement with other studies, the radical-scavenging activity of resveratrol derivatives can be improved by the introduction of methyl, methoxy or hydroxyl groups in the ortho- or para-position of 4-OH [110]. Previous findings indicated that 4,4'-dihydroxy-trans-stilbene (2) as well as 3,4-dihydroxy-trans-stilbene possess a significantly higher pro-apoptotic potential in human promyelocytic leukemia cells (HL-60) than resveratrol itself or the other tested derivatives, and that the latter compound effectively inhibited ROS-induced DNA damage yet enhanced DNA damage in the presence of cupric ions [111,112]. The antiproliferative activity of such hydroxystilbenes carrying ortho-hydroxyl groups on HL-60 leukemic cells was shown to be more than three-fold higher than that of analogues with other groups [113]. This cytotoxic effect could be explained by the observation that ortho-hydroxystilbenes are converted into oxidized intermediates (ortho-semiquinones), which are known to undergo redox cycling and thus generate further oxygen radicals [113]. Similarly, a 6,600-fold higher antioxidant activity and stronger antileukemic effects compared to resveratrol have been demonstrated for hexahydroxystilbene (3). This analogue acts via the inhibition of NF-κB activation and the induction of cell cycle arrest in HL-60 cells [113,114]. Moreover, this compound also successfully blocked the H2O2-mediated formation of DNA single-strand breaks in HL-60 cells [113,115]. Another strategy exerted by hexahydroxystilbene is the modulation of the cellular redox balance by decreasing SOD and GSH levels [116]. A polymethoxylated variant (N-hydroxy-N'-(3,4,5-trimethoxyphenyl)-3,4,5-trimethoxy-benzamidine; KITC (4)) likewise displays significant activity against HL-60 cells [117]. Roberti et al. evaluated the antiproliferative and pro-apoptotic potential of 49 synthesized resveratrol derivatives on HL-60 cells, including multidrug-resistant (MDR) HL-60R cells. In general, they found that cis-isoforms are more active than the corresponding trans-isoforms, with the exception of trans-resveratrol.
Moreover, derivatives with 3'-hydroxy-4'-methoxy groups showed higher activity than 4'-hydroxy-3'-methoxy compounds [118]. Amongst them, two compounds in particular exhibited remarkable pro-apoptotic properties at nanomolar concentrations: the cis-3,5-dimethoxy analogue of rhapontigenin (7) and its 3'-amino derivative. This effect was even stronger than the cytotoxicity of classical chemotherapeutic drugs including etoposide and 5-fluorouracil [118]. Both compounds were even able to cause apoptosis in HL-60 cells that display an MDR phenotype. This finding is of particular interest because the MDR-reversing agents currently tested in clinical studies have some limitations due to toxic side effects or alterations in the pharmacokinetics of cytotoxic agents. These resveratrol analogues might therefore be promising compounds for the treatment of MDR-expressing malignancies [118]. Acetylation of resveratrol, a modification that is likely to improve its absorption, leads to a derivative with a similar ability to arrest the cell cycle in S phase. This resveratrol-triacetate (5) further exhibits synergistic effects with 5-fluorouracil in colon cancer cells, highlighting its possible role as a chemosensitizer [119]. Recently, docking studies of several derivatives revealed that most (Z)-isomers fit the colchicine binding site of tubulin [120]. (Z)-3,5,4'-Trimethoxystilbene (6), the most potent analogue, is already known to act via tubulin depolymerization [121]. Accordingly, this compound and other methylated derivatives cause mitotic arrest instead of the S phase arrest induced by resveratrol itself and increase the level of polyploidy. Methylation is considered to stabilize the compounds and increase their bioavailability [120]. Besides these synthetic compounds, two natural polyphenols structurally related to resveratrol have been reported as potent anti-leukemic agents: dimethylated pterostilbene (9) from blueberries and gloriosaol C (8) isolated from Yucca gloriosa, which both arrest the cell cycle at G1 phase and induce apoptosis in leukemia and lymphoma cell lines [122][123][124]. The former compound, although scavenging peroxyl radicals to a similar extent as resveratrol, is of special interest as it causes apoptosis even in MDR-resistant hematologic malignancies [124].

Possible Negative Effects of Resveratrol

Many studies provided evidence that resveratrol possesses a huge chemopreventive potential in rodent models as well as in human cancers without causing severe side effects. Findings from Lee et al. indicated that resveratrol affected cell cycle progression not only of malignant HL-60 leukemia cells but also of a transformed, non-malignant B-cell lymphoblastoid cell line. However, the observed effect was only marginal, since only leukemic cells ended up in irreversible cell death [46]. It has been shown that resveratrol binds to DNA in the presence of Cu2+ ions and consequently induces DNA strand breaks. In complex with Cu2+, resveratrol reduces Cu2+ to Cu+, while the emerging oxidized resveratrol products further enhance the genotoxicity [76]. Such pro-oxidant activity, inducing apoptotic DNA fragmentation in cancerous cells, represents an important antitumor-promotion mechanism of chemotherapeutic agents. On the other hand, genotoxic DNA cleavage might affect healthy cells. Lee et al., however, have shown that resveratrol is unable to damage chromosomes in malignant or non-malignant cells [46], and De Salvia et al.
demonstrated that resveratrol does not lead to primary DNA damage [74]. Only at the highest concentrations were chromosomal aberrations slightly increased. In fact, incubation with resveratrol before H2O2 application is even able to reduce oxidative DNA damage to control levels [74]. From several studies it is known that resveratrol has remarkable selective growth-inhibitory effects on human tumor cell lines, including hematologic malignancies, in vitro. Observations of Gao et al. confirmed the strong antiproliferative effect of resveratrol in vitro on 32Dp210 leukemia cells [125]. Unexpectedly, however, when mice were inoculated with these leukemia cells and afterwards treated with 8 mg/kg body weight of resveratrol, no antileukemic effect could be detected. Even when resveratrol was administered at much higher doses, only a negligible number of mice could be protected from leukemia [125]. This weak in vivo response might partly be explained by the poor bioavailability or fast metabolism of the compound, problems that may be overcome in the future by the synthesis of optimized resveratrol derivatives.

Curcumin

Another natural product belonging to the group of polyphenols is curcumin (10), a yellow pigment derived from the rhizomes of turmeric (Curcuma longa). This lipid-soluble compound is mainly used in Asian cuisine as a spice and food-coloring agent. In this respect, it is responsible for the typical yellow color of curry. Curcumin, as part of Ayurvedic medicine, has been the subject of a multitude of investigations over the last five decades that demonstrated various health benefits, ranging from anti-inflammatory [126,127] and antioxidant [128][129][130] to anticarcinogenic properties [131,132]. Furthermore, antidiabetic [133] and anti-HIV [134] activities were also described. Numerous in vitro and in vivo studies confirmed its antiproliferative and pro-apoptotic activity in a panel of tumor cells [131,132,[135][136][137][138][139][140][141][142][143]. The potent anticancer property of curcumin is attributed to its antioxidant effects, which prevent free radicals from mediating peroxidation of membrane lipids or oxidative DNA damage, both important initiators of cancer development. However, a rising number of recent studies have revealed that curcumin exerts its anticancer activity also by acting as a pro-oxidant. Comparison of curcumin and its naturally occurring analogues delivered insight into the structure-activity relationships of these compounds. While its high radical-scavenging potential is due to a high number of ortho-methoxy substitutions as well as a high level of hydrogenation of the heptadiene moiety [144,145], its anti-inflammatory and anticancer activity, in contrast, depends on low hydrogenation and a high level of methoxylation [146]. Owing to its high antioxidant and anti-inflammatory activity as well as its negligible toxic side effects in rodents and humans (when administered at doses up to 10 g/day) [147], growing attention has focused on curcumin as a promising anticancer agent [130].

Antioxidant Effects of Curcumin Prevent Initiation of Carcinogenesis

As described before, inhibition of lipid peroxidation is one mechanism by which antioxidants act as chemopreventive agents. It has been reported that curcumin and some of its analogues inhibit free radical-induced LDL peroxidation [148]. Because of its high lipid solubility, curcumin physically interacts with the cellular membrane, where it is converted into a phenoxyl radical in response to quenching of lipid radicals [149].
Since curcumin analogues bearing no phenolic group are unable to inhibit AAPH- and Cu2+-induced LDL oxidation, it has been concluded that this phenolic group, rather than the central methylenic group, represents the proton donor and is necessary for the activity [149]. Wei and coworkers further demonstrated that the phenolic group is of great importance for the antioxidative effects of curcumin, as lipid and protein oxidation of rat liver mitochondria treated with AAPH and Fe2+/ascorbate (VC) could be prevented by curcumin and its analogues [150]. Further evidence came from a more recent in vitro study in which SDS and CTAB micelles were used to analyze the antioxidative effects of curcumin against free-radical-induced peroxidation of linoleic acid. The results verified previous findings that curcumin and its analogues act via proton abstraction from the phenolic group [151]. Recent observations indicated that curcumin at a concentration of 20 mM prevented lipid peroxidation of a linoleic acid emulsion by 97.3%. By performing detailed in vitro antioxidant assays, the authors also demonstrated effective radical-scavenging properties of curcumin, including against the 1,1-diphenyl-2-picryl-hydrazyl (DPPH•) radical, the 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulphonic acid) (ABTS•+) and N,N-dimethyl-p-phenylenediamine dihydrochloride (DMPD•+) radical cations, O2•− and H2O2 [152]. Metal ions play an important role as inducers of ROS formation, and owing to its high reactivity, iron is the major pro-oxidant among the transition metals involved in lipid damage [13]. Ak and Gülçin showed that curcumin has a high binding affinity for ferrous ions (Fe2+) and concluded that this chelating ability might be the main mechanism by which curcumin inhibits lipid peroxidation [152]. The antioxidative properties of curcumin act not only against lipid peroxidation but also prevent DNA damage. Thus, a field trial with patients in West Bengal indicated that three months of curcumin administration prevented ROS generation as well as subsequent DNA damage and lipid peroxidation in people exposed to arsenic contamination of groundwater, compared to untreated persons [153]. Further studies revealed that the antioxidant properties of curcumin positively influenced antioxidant and phase II metabolizing enzyme activity in mice and, moreover, diminished iron-induced oxidative damage of lipids and DNA in vitro and in mice treated with ferric nitrilotriacetate (Fe-NTA) [154,155]. In detail, curcumin was able to prevent Fe-NTA-induced lipid peroxidation, DNA damage and protein carbonylation in the kidneys of these mice [130].

Curcumin Exerts Its Anticancer Properties Also as a Pro-oxidant

Interestingly, although curcumin is considered an antioxidant, there is a growing body of evidence that curcumin can act as a pro-oxidant under certain conditions, exerting its anticancer activity by inducing ROS generation [156-158]. Chen et al. recently investigated the antioxidant and anticancer properties of curcumin on HL-60 human leukemia cells by measuring cell proliferation, viability and ROS generation. Interestingly, they found that the anticarcinogenic mechanisms of curcumin differ depending on its concentration. Whereas low concentrations of curcumin (<20 μM) decrease ROS production, higher concentrations have the opposite effect and favor ROS generation [157].
The authors furthermore investigated the influence of three water-soluble antioxidant compounds on the antioxidant and anticarcinogenic activity of curcumin and found that ascorbic acid, N-acetyl cysteine as well as GSH augmented both activities of low concentrations of curcumin [157]. Hence, a combination of lower levels of curcumin with water-soluble antioxidants might represent an adequate strategy to improve its anticancer property without increasing harmful ROS accumulation. Recently, it has been reported that curcumin exerts cytotoxic activity towards CCRF-CEM human T-cell leukemia cells but hardly affects normal cells. Using gel electrophoresis analyses, Kong et al. found that the pro-oxidant curcumin-Cu(II) complex induced damage in plasmid DNA, whereas curcumin alone failed to do so [159]. Free radicals can induce epigenetic effects at the DNA level by loosening the chromatin structure and consequently enhancing the accessibility for transcription factors that finally regulate expression of genes involved in proliferation [13,160]. Thus, oxidative DNA damage plays an important role in the development of carcinogenesis. Chromatin is known to be opened by histone acetylation, which stimulates transcription of silenced genes and can be modulated by ROS [160,161]. A few years ago, Kang et al. presented a new cellular target of curcumin. They showed that ROS, generated upon treatment of Hep3B cells with higher curcumin concentrations, significantly reduced histone acetylation by inhibiting histone acetyltransferase [160]. Thioredoxin reductase (TR) was discovered to be another main target molecule of curcumin. The ability of rat TR1 to catalyze the reduction of the disulfide at the active site of TRX was irreversibly inhibited in the presence of curcumin, simultaneously affecting the redox functions of TRX. Alkylation of both cysteine and selenocysteine residues at the catalytically active site resulted in a modified curcumin-TR enzyme. This enzyme had lost TRX-reducing activity but gained a strongly increased NADPH oxidase activity, leading to ROS generation. In essence, curcumin was able to convert TR into a pro-oxidant [162]. Oxidized TRX serves as an electron donor for scavenging enzymes, such as thioredoxin peroxidases and methionine sulfoxide reductases. Inhibition of TR function will therefore suppress the cellular antioxidant defense. As a consequence, raised ROS levels induce direct damage to DNA and impair the NF-κB-mediated survival mechanism of cancer cells [162]. Syng-Ai et al. demonstrated that curcumin induces ROS generation in MCF-7, MDAMB and Hep2 cell lines, followed by apoptosis, while normal rat hepatocytes were not affected. They found that GSH levels were increased in response to curcumin treatment and that GSH depletion by buthionine sulfoximine (BSO), an inhibitor of γ-glutamylcysteinyl synthetase, enhanced curcumin sensitivity and the cell death rate [163]. In K562 leukemic cells, Awasthi et al. detected modulation of γ-glutamylcysteinyl synthetase activity by curcumin and the presence of glutathiolated curcumin products that caused GSH efflux and a subsequent increase of GSH synthesis [164]. Thus, endogenous GSH interferes with curcumin and counteracts ROS production. From these observations the authors concluded that ROS is the main trigger of apoptosis [163].
Earlier studies observed GSH efflux simultaneous with the onset of apoptosis and demonstrated that this redox imbalance caused by GSH depletion is both essential and sufficient for activation of cytochrome c release as the key event of a damage-induced apoptotic pathway [165,166]. Armstrong et al. used BSO to investigate the chronological order and importance of mitochondrial events and apoptotic signals. They identified GSH as a key regulator of apoptosis in PW cells, since the early loss of mitochondrial GSH activated apoptosis. NF-κB activation as well as cytochrome c release from mitochondria of PW cells occurred in response to GSH depletion but before increased levels of ROS were detected [167]. Similarly, Franco et al. revealed that induction of apoptosis by GSH depletion is independent of ROS [168]. Various groups suggested that curcumin influences GSH levels via the antiapoptotic protein Bcl-2, since a positive correlation was observed between Bcl-2 and GSH [169,170]. Here, following ROS accumulation, downregulation of antiapoptotic Bcl-2 was observed in MCF-7 and MDAMB cells, leading to sensitization of the cells to apoptosis [163]. Bcl-2 expression is likewise known to depend on NF-κB activation, which for its part is also inhibited by curcumin [171]. On the other hand, Piwocka et al. provided evidence that a curcumin-induced increase of GSH levels is responsible for the induction of a non-typical apoptotic death pathway in lymphoid Jurkat cells. These cells showed internucleosomal DNA fragmentation, whereas neither caspase-3 nor mitochondria were involved. Bcl-2 expression levels did not decrease, even after GSH depletion, indicating that GSH acts downstream of Bcl-2 and upstream of mitochondrial events [172]. It is further worth mentioning that some transcription factors, including NF-κB, are redox-sensitive due to their cysteine thiols. Therefore, changes in the cellular redox state caused by curcumin, with increased GSH efflux, will influence the activation of such redox-sensitive transcription factors [173]. Subsequent expression of their gene products consequently affects cellular signaling pathways. This mode of action might finally contribute to the antitumor-promotion activity of curcumin [174].

Regulation of Cell Cycle, Proliferation and Apoptosis by Curcumin Affects Cancer Promotion

Inhibition of cell proliferation and induction of apoptosis represent the two strategies of chemotherapeutic agents to prevent tumor promotion. Curcumin possesses manifold ways to impair this stage of cancer development, as well as the invasion, metastasis and angiogenesis of tumors, by disturbing different signaling pathways. Recently, Ravindran et al. gave an excellent overview of the different ways utilized by curcumin for killing tumor cells [175]. Among the more than 30 known cellular targets of curcumin are transcription factors like NF-κB, growth factors, cytokines, enzymes and genes with a role in cell growth and programmed cell death. Syng-Ai et al. found a loss of c-myc in MCF-7, MDAMB and Hep2 cancer cell lines that might be a hint of cell cycle arrest at the G1/S transition as a preliminary event before the cells undergo apoptosis [163,176]. Additionally, treatment with curcumin or its derivatives bisdemethoxycurcumin (BDMC) and diacetylcurcumin (DAC) leads to cell cycle arrest in G0/G1 and/or G2/M phase. The latter is caused by disturbance of microtubule dynamics, which finally prevents chromosome segregation and results in cell cycle arrest at early anaphase [175,177].
Sun and coworkers analyzed the effect of curcumin on human B-cell non-Hodgkin's lymphoma. Their findings indicated that curcumin selectively inhibited the proliferation of human Burkitt's lymphoma Raji cells by arresting the cell cycle at both G0/G1 and G2/M phases, with subsequent apoptosis. In contrast, proliferation of normal peripheral blood mononuclear cells (PBMCs) was not inhibited [178]. Recently, it has been demonstrated that curcumin likewise exerts pro-apoptotic activity on the leukemia cell lines K562, a Philadelphia-positive CML, and Jurkat T-cell leukemia, as well as in follicular lymphoma cell lines [179,180]. In ALL cells, curcumin causes programmed cell death by inhibiting the PI3'-kinase/AKT pathway [181]. Findings from Harikumar et al. have further shown that curcumin acts by decreasing the expression of antiapoptotic Bcl-2 and the proto-oncogene Raf-1 while concomitantly activating p53 expression in BALB/c mice suffering from retrovirus-induced erythroleukemia [47]. Curcumin treatment significantly improved the survival time of these mice. Furthermore, the incidence of anemic conditions as well as leukemic cell infiltration of the spleen was decreased, indicating suppression of cancer progression [47]. In the same way, curcumin administration enhanced the survival of mice with acute lymphoblastic leukemia. Cultured BCR-ABL B-cell ALL cells were killed by apoptosis in response to curcumin, and again p53 levels were elevated whereas NF-κB was decreased [182]. The intrinsic apoptotic pathway was implicated in curcumin-induced cell death of HL-60 leukemia cells, with caspase-8 activation, BID cleavage, subsequent caspase-3 activation and cytochrome c release [183]. As various signaling pathways are constitutively activated in most malignant phenotypes, our laboratory performed a Kinexus phosphosite screen on nuclear extracts derived from human chronic myelogenous leukemia K562 cells before and after 48 h of treatment with curcumin in order to investigate its effect on cellular phosphoproteins. We found that curcumin induced protein phosphorylation of six and dephosphorylation of seven phosphoproteins, each of them playing an important role in signal transduction [184]. Of special interest is the regulation of signal transducers and activators of transcription (STATs), proteins with multiple roles in differentiation, cell growth and apoptosis as well as inflammation and immune response, which are constitutively activated in chronic myeloid leukemia but also in other cancers. When we checked for nuclear expression of different STATs in K562 leukemia cells, we found STAT3, -5a and -5b expression significantly decreased in the nucleus of curcumin-treated cells, with a maximal reduction after 48 hours. However, significant changes in their phosphorylation levels could not be detected [185]. Instead, simultaneously with the reduced nuclear expression, the level of truncated isoforms of STAT5 resident in the cytoplasm was elevated. These isoforms serve as negative regulators of native STAT5 because they retain their DNA-binding ability and compete with the native forms for DNA-binding sites. Curcumin might therefore represent a powerful tool to fight STAT5-overexpressing cancers [185]. In accordance with these findings, Rajasingh et al. reported that curcumin exerted its antiproliferative and pro-apoptotic functions on T-cell leukemia similarly by inhibiting the JAK/STAT pathway [186].
Additionally, based on inhibition of the kinase JAK1 and its effect on STAT3 activity, growth arrest and subsequent apoptosis were induced in primary effusion leukemia upon curcumin treatment [187]. Besides STATs, the transcription factors NF-κB and AP-1 are well-known targets inhibited by curcumin [188]. Accordingly, Ghosh et al. demonstrated that STAT3, AKT and NF-κB were inhibited in curcumin-induced apoptosis in CLL B cells, together with the antiapoptotic proteins Mcl-1 and XIAP [189]. Curcumin likewise mediated caspase-dependent apoptosis in cutaneous T-cell lymphoma (CTCL) by downregulating STAT3 and NF-κB signaling [190]. The inhibiting activity of curcumin on NF-κB signal transduction is of special importance, as we previously observed that this pathway is involved in the TNF-α-mediated induction of γ-glutamyltransferase (GGT), an enzyme whose overexpression is implicated in cancer drug resistance and inflammatory leukotriene synthesis [191]. We performed a real-time PCR array study to analyze the effect of curcumin on the transcription of NF-κB-controlled genes in K562 cells. We found that the mRNA expression of 39 of 84 genes involved in different NF-κB signaling pathways was modulated by curcumin [192]. Some genes could be identified for the first time as cellular targets of curcumin, amongst them AGT, CSF3, TICAM2 and TNFRSF7, which were activated, whereas CD40, a member of the TNF receptor superfamily, represented the most inhibited gene. By genome-wide microarray analysis performed under the same conditions, we demonstrated that especially cell cycle genes and genes from the JAK/STAT signaling pathway were downregulated, whereas heat shock proteins were among the 10 most upregulated genes [193]. Consequently, we investigated the induction of the heat shock response by curcumin in K562 cells in detail. We demonstrated that heat shock transcription factor (HSF)-1 was translocated to the nucleus and activated the hsp70 promoter through binding to a heat shock regulatory element (HSE) in response to curcumin treatment [194]. The subsequent upregulation of Hsp70 is associated with inhibition of NF-κB activation [195][196][197] and plays a key role in the anti-inflammatory activity of this compound. Previous work of our laboratory demonstrated that curcumin blocks the DNA interaction of the transcription factors AP-1 and NF-κB with the glutathione S-transferase (GSTP1-1) promoter region in K562 leukemia cells and consequently prevents transcription of the GSTP1-1 gene [198]. This effect seems to depend on the serum level of the culture medium, as 10% FCS in the medium induced an opposite effect compared to medium containing 0.1% FCS, and an increase of GSTP1-1 mRNA expression could be observed. We suggested that binding of curcumin to biological molecules could explain this observation [199]. Such interactions could interfere with the cellular uptake or degradation of curcumin, as reported by Ravindranath and Chandrasekhara in rats [200]. Increased levels of this enzyme are present in chemotherapy-resistant cancer cell lines, and it is known that GSTP1-1 functions in the export of xenobiotic drugs after their conjugation to GSH [201]. As a result of curcumin treatment, K562 cells undergo caspase-dependent apoptosis with activation of both initiator caspases-8 and -9 [198]. Like resveratrol, curcumin also exerts anti-inflammatory activity, which is relevant since inflammation is associated with the development of cancers.
It has been reported in the literature that the molecular targets of curcumin include not only transcription factors but also ROS-generating enzymes such as cyclooxygenase-2, lipoxygenase (LOX) and iNOS [147,174,202,203].

Overcoming Complications/Chemotherapy Resistance of Leukemia by Curcumin Treatment

Due to interactions with their microenvironment, especially with integrin-binding ligands expressed by marrow stromal cells, CLL B cells become resistant to the most common chemotherapeutic treatments [204][205][206]. This effect, also known as stromal protection, prevents apoptosis occurring either spontaneously or induced by drugs. Results of Ghosh et al. indicated that both soluble factors and direct cell contact in a coculture of CLL B cells with stromal cells enhance the activation of STAT3 and the levels of some antiapoptotic proteins. Consequently, these cells are protected from apoptosis. However, evidence has been provided that a higher dose of curcumin is efficient in blocking this stromal protection [189]. The authors expanded their study to a second polyphenolic compound extracted from green tea, epigallocatechin-3-gallate (EGCG), which is also known to induce apoptosis in CLL B cells. A combinational treatment with curcumin and EGCG resulted in more than additive effects if both compounds were administered sequentially [207]. In this respect, better results could be achieved when cells were pretreated with EGCG. This sequential treatment not only potentiated apoptosis but also allowed curcumin to overcome stromal protection, even at lower concentrations [189]. This synergistic effect was confirmed by Somers-Edgar's group in breast cancer cells. Treatment with a combination of curcumin and EGCG significantly enhanced the percentage of cells arrested in G2/M. This finding was verified in vivo, where simultaneous administration of both compounds reduced tumor volume in athymic nude female mice by 50% [208]. One problem of standard chemotherapy is the development of resistance. In this respect, curcumin was reported to act as a chemosensitizer, since combinational treatment efficiently enhanced both the pro-apoptotic and NF-κB-inhibitory potential of capecitabine in human colorectal cancer. Combinational administration in nude mice even drastically decreased tumor volume and metastasis [141]. The efficiency of radiation therapy is likewise restricted by resistance mechanisms, which are underpinned by stimulation of NF-κB activity. Because of its ability to suppress this pathway by preventing the phosphorylation and subsequent degradation of IκB-α, curcumin modulates the radiosensitivity of colorectal cancer cells [209]. Another known complication of classical chemotherapy is the late appearance of secondary cancers. Siddique et al. recently reported that curcumin protects human blood lymphocytes from the genotoxic effects caused by mitomycin C, an antineoplastic agent, thereby improving the outcome of cancer therapy when used in combination with traditional chemotherapeutic drugs [210]. Another severe problem of leukemia is that patients are often immunosuppressed and thus more susceptible to infections. In addition to all these pathways that curcumin interferes with and on which its chemopreventive activity is based, a very recent report suggested that curcumin has great potential to act as an effective modifier in the therapy of leukemia and as an immunopotentiator due to its differentiation-stimulating ability [211].
The authors provided evidence that curcumin significantly activated the O2•−-generating activity in leukocytes during retinoic acid-induced differentiation of U937 leukemia cells by accumulating two cytoplasmic components, p47-phox and p67-phox. This O2•−-generating system, which is necessary for the activation of phagocytosis, is typically absent in human monoblastic leukemia cells. As a consequence, leukemia patients might be protected from life-threatening infectious diseases [211].

Evaluation of Natural and Synthetic Curcumin Derivatives and Other Strategies to Improve Bioavailability of Curcumin

Despite its huge antioxidant and anticancer potential, the use of curcumin as a chemopreventive and chemotherapeutic agent is limited mainly by its weak bioavailability, due to poor absorption and rapid metabolism, as well as by its low water solubility. Currently, several strategies are being tested to overcome these limitations. In recent years, not only have natural analogues from turmeric been tested for their anticarcinogenic activity in comparison with curcumin; a huge number of structurally modified curcumin derivatives have also been chemically synthesized and evaluated in order to create a curcumin-derived molecule with better in vivo bioavailability and improved selectivity. Figure 3 provides an overview of the chemical structures of the curcuminoids and a selection of the most important curcumin derivatives. Besides chemical modifications of the compound, another tactic is the development of formulations that enhance the absorption of curcumin by lowering its hydrophobicity and increasing membrane permeability. In this respect, the most promising applications worth mentioning include nanoparticles [212], liposomes [213], micelles [214] and phospholipid complexes [215]. Anand et al., for example, designed poly(lactide-co-glycolide) (PLGA) nanoparticles with encapsulated curcumin. These nanoparticles not only confer better bioavailability and a longer half-life on curcumin but also accelerate its uptake and enhance its antiproliferative and pro-apoptotic potential in leukemia cells [216]. Recently, Yadav et al. successfully improved the cellular uptake of curcumin by novel cyclodextrin complexes [217]. In addition, the simultaneous application of special adjuvants can prevent the rapid metabolism of curcumin and thereby increase its biological activity [218]. Rhizomes of turmeric contain a mixture of three compounds referred to as curcuminoids: curcumin (10), the most abundant compound (77%), followed by demethoxycurcumin (11) (DMC, 17%) and bisdemethoxycurcumin (12) (BDMC, 3%) [144]. These compounds, together with the curcumin metabolite tetrahydrocurcumin (13) (THC), have been the subject of many structure-activity-relationship studies. Somparn et al. recently demonstrated the key role of the methoxy groups on the phenyl ring for the antioxidant effects of curcuminoids. Decreasing activities have been observed in the order curcumin > DMC > BDMC, since curcumin showed the strongest effects in scavenging DPPH radicals and preventing lipid peroxidation and protein oxidation [145]. Curcumin was likewise more effective in suppressing TNF-induced activation of NF-κB than DMC, BDMC and THC, whereas all curcuminoids inhibited the growth of various cancer cell lines, including T-cell leukemia (Jurkat), histiocytic leukemia (U937) and CML (KBM-5), to a similar extent. Only THC exhibited less activity in this respect. Further, no significant difference was found in GSH production [144].
The lack of an inhibitory effect of THC emphasized the importance of the double bonds in the modulation of NF-κB activation. Further findings indicated that BDMC was most effective in ROS generation [144,219], and the authors therefore concluded that the GSH status, but not the ROS level, is related to the antiproliferative effects of curcuminoids [144]. It has further been reported that BDMC possesses higher antitumor, antipromotion and anticarcinogenic potential than curcumin or DMC [220,221], whereas another group found that curcumin, and not BDMC, was the most effective pro-oxidant causing ROS-induced DNA cleavage [222]. Anuchapreeda et al. focused their research on the effect of curcuminoids on leukemic cells, especially on the gene expression of the oncogene Wilms' tumor 1 (WT1) [223], as the corresponding protein is known to be overexpressed in immature leukemia cells and represents a crucial player in leukemogenesis [224,225]. They provided evidence that among all curcuminoids, curcumin most efficiently decreased WT1 mRNA expression and protein levels in K562 and Molt4 cells [223]. In human hepatocytes, curcumin is metabolized to different hydrogenated derivatives including THC, hexa- and octahydrocurcumin (14, 15) (HHC, OHC) [226]. In contrast to the natural curcuminoids, the antioxidant properties of THC, HHC and OHC were significantly higher than those of curcumin, stressing the fact that hydrogenation of the heptadiene moiety of curcumin is responsible for an improved antioxidant function [145,146], while this modification simultaneously decreases its antitumor and anti-inflammatory properties [220]. These results are in line with previous findings that THC was more potent in inhibiting lipid peroxidation than curcumin [146]. It might therefore be concluded that these metabolites are mainly responsible for the in vivo radical-scavenging activity of curcumin. Besides these naturally occurring compounds, a multitude of chemical modifications have been introduced into the curcumin molecule in order to find a selective anticancer molecule with better pharmacokinetics. In summary, the most important outcomes from these studies are the following: the C-3' and C-4' atoms of both phenyl groups should be substituted with 3',4'-dimethoxy or, better, 3'-methoxy-4'-hydroxy units to reach high antioxidant [149] and antiproliferative [227] activity. Since 3,5-bis(4-hydroxy-3-methoxy-5-methylcinnamyl)-N-ethylpiperidone exhibits stronger radical-scavenging activity in several leukemic cell lines than curcumin, Youssef et al. assumed that stabilization of the generated phenoxy radical is responsible for the high antioxidative potential and that such stabilization can be achieved by either a para-hydroxyphenyl moiety or ortho-substitutions [228]. Fuchs et al. achieved stabilization, and a subsequent increase of the antiproliferative ability, of derivatives carrying a heptadienone moiety through conversion of their 4-OH group into methoxy, acetate or sulfamate groups [227]. Introduction of ortho-alkoxy groups was linked to substantial increases in ROS-scavenging and anticancer properties [229][230][231]. The minimal structural prerequisites of an effective curcumin derivative involve two hydroxyphenyl units connected through an unsaturated linker region, while additional oxy groups can further improve the antioxidant potential [220,232]. From a biological analysis of around 50 curcumin analogues tested on a panel of cancer cell lines, it turned out that the 18 most active compounds contained a 1,5-diarylpentadienone skeleton [229].
Amongst them, the three most promising compounds, GO-Y016 (18), GO-Y030 (17) and GO-Y031 (16), which exceeded the antiproliferative effect of curcumin by far, inhibited the growth of most cancer cell lines even more successfully than classical chemotherapeutic agents like 5-fluorouracil. On the other hand, they neither suppressed primary hepatocytes nor exerted any severe side effects when administered to mice [229]. Fuchs et al. likewise found a pentadienone analogue (19) to be the most potent one, with the highest selectivity against prostate and breast cancer cell lines [227]. Concerning anticancer activity, bisbenzylidenepiperidone, pyrone and cyclohexanone derivatives, and especially 2,6-bis(2-fluorobenzylidene)piperidone (EF24) (20), have been reported to induce cell cycle arrest and apoptosis of cancer cells much more efficiently than curcumin [233]. EF24 was even proven to possess satisfactory oral bioavailability and acceptable pharmacokinetics in mice [234]. Moreover, Lin et al. identified bis(3-pyridyl)-1,4-pentadien-3-one (21), out of 72 curcumin analogues, as the most potent inhibitor of TNF-induced NF-κB activation [232,235]. We recently tested heterocyclic cyclohexanone analogues of curcumin, synthesized by the Larsen and Rosengren groups, for their NF-κB-inhibiting potential in K562 cells. Amongst them, 3,5-bis(3,4,5-trimethoxybenzylidene)-1-methylpiperidin-4-one (22) displayed the strongest effect, with an EC50 value of less than 7.5 μM. This compound and other cyclohexanone derivatives moreover efficiently induced apoptosis in estrogen receptor-negative breast cancer cells [236,237]. A very important finding, and a promising strategy to improve the bioavailability of curcumin, is that the water solubility of curcumin increases greatly upon glycosylation of its aromatic ring structure [238].

Disadvantages and Possible Negative Effects of Curcumin

Despite its proven potential as an anti-inflammatory, chemopreventive and anticarcinogenic agent, there is currently one major disadvantage for the use of curcumin in cancer therapy: its low bioavailability and fast metabolism. Everett et al. assessed the potential of curcumin in the treatment of B-CLL, also in combination with common chemotherapeutics. They found that curcumin efficiently induces apoptosis in B-CLL cells within 24-48 hours and, at a concentration of 1 μM, even enhances the apoptotic effects of vincristine and other agents [239]. It is hardly possible to achieve this concentration by oral administration, even at large curcumin doses; intravenous infusions might be required. Clinical treatment of patients suffering from advanced pancreatic carcinoma with curcumin in combination with gemcitabine resulted in a response in only 10% of the patients [240]. However, this bioavailability problem is not unsolvable, and many efforts have been undertaken to improve the solubility of curcumin and its absorption, whether by drug optimization studies, curcumin formulations or the simultaneous administration of adjuvants like the black pepper ingredient piperine, which inhibits curcumin glucuronidation and thus boosts its bioavailability by 2000% [241]. Most preclinical and clinical studies reported beneficial effects of curcumin against tumors in animals and human beings [136], and clinical trials clearly demonstrated that curcumin is well tolerated and safe at doses of 12 g/day [242].
Nevertheless, one study found that curcumin promotes doxycycline-induced lung tumors in mice, with increased tumor multiplicity and oxidative damage in lung tissue [243]. This might rather be an organ-specific effect of curcumin, and the authors subsequently recommended that (ex-)smokers should be excluded from chemopreventive trials in order to prevent further damage of lung tissue by curcumin-induced ROS generation [243]. A comprehensive toxicity prediction study of turmeric-derived compounds revealed that curcumin, DMC and BDMC are non-mutagenic but can be carcinogenic in rodents and might possibly exhibit hepatotoxicity in a dose-dependent manner when taken for prolonged periods of time [244]. Based on a toxicology study of curcumin from 2003, the Joint FAO/WHO Expert Committee on Food Additives (JECFA) defined an Acceptable Daily Intake (ADI) for curcumin of 0-3 mg/kg body weight [244], corresponding to at most about 210 mg/day for a 70 kg adult and thus far below the gram-range doses tolerated in clinical trials.

Conclusions

Resveratrol and curcumin represent two particularly important polyphenolic antioxidants with respect to the prevention and treatment of human cancers, including hematological malignancies. The incidence of leukemia is high, with approximately 48,000 new cases per year in European men, and even though a panel of various chemotherapeutic agents is currently in use or in clinical trials, they fail to prevent the roughly 32,000 deaths from this disease that occur each year in European men. Because of the adverse side effects of these drugs on normal tissue, and considering that a therapy that successfully inhibits cancer promotion in the beginning often ends in a relapsing stage with enhanced mortality, it becomes clear that we are far from curing leukemia and that the search for alternative treatments is indispensable. It has been reported that oxidative stress and a decreased level of cellular antioxidant defense are linked to the etiology of leukemia. Moreover, leukemic relapse is likely to result from accumulated oxidative damage following chemotherapeutic treatment. Thus, administration of antioxidants represents a promising strategy to overcome hematologic malignancies even before they start to develop. In this review we clearly demonstrated the remarkable potential of both natural antioxidants, resveratrol and curcumin, in the chemoprevention and chemotherapy of leukemia. Even if in both cases the clinical use is currently limited by poor bioavailability, some new derivatives have already been synthesized that promise a good response in patients. As both substances and their derivatives mediate their multiple effects at quite low doses, one important strategy might be a synergistic combination with classical chemotherapy or other natural antioxidants, which has already been demonstrated for human leukemia cells in vitro [189,245,246]. Curcumin is of particular interest for the treatment of leukemia because of its ability to block stromal protection and thus prevent resistance to chemotherapy. On the other hand, it also acts as an immunopotentiator and further protects lymphocytes from genotoxic effects, which are known inducers of secondary cancers. Nevertheless, one should always bear in mind that the effects of antioxidants differ depending on the stage of carcinogenesis and the concentrations used. Lower concentrations of antioxidants strongly prevent the initiation of carcinogenesis, but administration during cancer progression rather prevents apoptosis of tumor cells.
In contrast, higher concentrations of these polyphenols often exhibit pro-oxidant and thus remarkable pro-apoptotic anticancer activity. Figure 4 summarizes the anti- and pro-oxidant effects of both compounds and their role in chemoprevention and chemotherapy.
Performance of Health Care Workers in Doffing of Personal Protective Equipment Using Real-Time Remote Audio-Visual Doffing Surveillance System: Its Implications for Bio-Safety Amid COVID-19 Pandemic

Background: Very little has been reported about health care workers' (HCWs) adherence to the Centers for Disease Control and Prevention (CDC) guidelines for doffing personal protective equipment (PPE) amid the COVID-19 pandemic. A real-time remote audio-visual doffing surveillance (RADS) system for assisting doffing might reduce the risk of self-contamination. We used this system to determine the incidence of breaches in biosafety during doffing of PPE among HCWs involved in the care of COVID-19 patients.

Methods: A total of 100 HCWs were enrolled in this observational study, all of whom performed duties in the COVID intensive care unit (ICU) of our tertiary care centre. With a real-time RADS system, trained observers at remote locations assisted HCWs during doffing of PPE and noted any breach at each step using the CDC doffing checklist. A breach was considered major if committed during removal of gloves/gown/N-95 or if ≥3 errors occurred in any other steps.

Results: Overall, 40% of the HCWs committed a breach in at least one step during doffing. The majority of the errors were observed during hand hygiene (34%), followed by glove removal (12%) and N-95 removal (8%). Nineteen percent of HCWs committed a major breach, of which 37.5% were committed by housekeeping sanitation staff (p = 0.008; RR 2.85, 95% CI 1.313-6.19), followed by technicians (22.5%), nursing staff (16.7%) and resident doctors (6.5%).

Conclusions: Performing doffing using a real-time RADS system is associated with a relatively low incidence of breaches in biosafety compared with earlier studies using an onsite standard observer. The overall adherence of HCWs to the CDC guidelines for doffing PPE was satisfactory. This study highlights the importance of a RADS system during doffing of PPE in a health care setting amid the COVID-19 pandemic.

Introduction

Prompted by the menace of coronavirus disease 2019 (COVID-19) and its implications all over the world, the Centers for Disease Control and Prevention (CDC) augmented efforts to provide safe care for patients with suspected or confirmed COVID-19 [1]. COVID-19 endangers the health of all, but especially that of the health care workers (HCWs) involved in patient care. Personal protective equipment (PPE) protects HCWs from the risk of exposure to COVID-19 and enables them to deliver safe and effective patient care [2]. The PPE suggested by the CDC comprises an N95 mask, eye protection, gloves and a gown [3]. Incorrect doffing of PPE by HCWs could potentially cause a breach in biosafety and lead to self-contamination [4]. At present, the CDC recommends proper sequences for donning and doffing of PPE and safe practices to limit the spread of contamination [5]. Reinforcing these during training in PPE use can hone technical skills and reduce the risk of self-contamination among HCWs while doffing. Observational studies have shown that lapses do happen while doffing [6,7] and lead to self-contamination, even though HCWs presume they are competent in doffing of PPE [8]. The literature suggests high self-contamination rates of 46%-100% among HCWs while doffing PPE [9][10][11][12]. During doffing, HCWs frequently self-contaminate while taking off gloves [12,13], gowns [7,14], and respirators and hoods [15,16].
Other contributing factors include incorrect doffing sequences, difficulty in distinguishing between dirty and clean surfaces, rushed movements [17,18] and suboptimal PPE training. Additional interventions beyond training in PPE use may be necessary to further limit deviations from the standard protocol. Surveillance [19], simulation-based training and assisted doffing that strictly follows checklists can minimize the cognitive load on HCWs and improve performance while doffing PPE amid the current COVID-19 pandemic. The healthcare system should function at the highest standards, utilizing the best available technology and resources for better patient care, staff safety and communication. A health care setting with multiple doffing areas usually requires varying levels of assistance. In such a setting, ensuring HCW adherence to the PPE doffing protocol with the help of onsite standard observers is laborious and may not always be feasible. Leveraging simple technologies to ensure the safety of frontline HCWs during doffing is therefore pivotal. Such technology should comprise a video surveillance system integrated with a communication platform so that the necessary stakeholders can be instantly apprised. For the reasons stated above, we implemented a real-time remote audio-visual doffing surveillance (RADS) system utilizing high-definition closed-circuit television (CCTV) surveillance cameras installed in several doffing areas to remotely monitor and assist HCWs in doffing PPE [20]. In the present study, we aimed to observe breaches in biosafety among different HCWs during doffing of PPE using a real-time RADS system. The types and frequencies of breaches in biosafety observed in the CDC doffing sequence were also determined.

Materials And Methods

This prospective observational study was conducted at the COVID block of a tertiary care institute in the northern part of India. The study was approved by the Institutional Ethics Committee, PGIMER Chandigarh, and registered in the Clinical Trials Registry of India (CTRI), with reference number CTRI/2020/05/025274. The study was conducted between June 1, 2020, and July 3, 2020. A total of 100 HCWs were enrolled in the study. Participation was voluntary. HCWs were informed about the characteristics and scope of the study, and all participants signed an informed consent form. Study participants were all members of the COVID-19 care team of our institute and were involved in the care of confirmed COVID-19 patients in the ICU. As per our hospital policy, all participants had undergone mandatory training in PPE use, including donning and doffing practices based on CDC recommendations, before performing duties in the COVID ICU. The training was given by trained faculty according to the CDC guidelines current at the time, was done under direct supervision for every novice health care worker, and took place over a period of seven days before the health care worker was posted in their respective work area. HCWs were excluded as team members if they were pregnant, immunocompromised or had inflammatory skin conditions. HCWs who refused to participate, as well as the investigators in this study, were excluded. The doffing process was visualized remotely in a console room utilizing CCTV cameras installed in the doffing area, and communication took place verbally over the audio platform (Figure 1).
A doffing checklist was developed based on the CDC recommendations (Appendix 1) and was used by trained observers to ensure HCW adherence to the doffing sequence [21]. Using this audio-visual communication system, a trained observer in the console room guided the HCWs throughout the doffing process while marking the CDC doffing checklist. The trained observers were registered nurses and certified infection preventionists, who monitored and assisted HCW doffing of PPE round the clock. A silent observer (an intensivist) in the console room, who was not a part of this study, monitored the doffing process of each HCW and noted any breach or error in biosafety at any step of the doffing checklist.

Figure 1. RADS: remote audio-visual doffing surveillance.

A major breach in the doffing process was considered to have occurred if the HCW committed any error while removing (1) the outer gown, (2) the gloves (both outer and inner pairs) or (3) the N-95 mask, or (4) if an error occurred at least three times during the remaining steps of the CDC doffing sequence. If there was no breach in biosafety at any step, the doffing process was considered error/breach-free. If any HCW committed a major breach in biosafety while doffing PPE, it was notified to the concerned authorities as per the hospital protocol.

Statistical analysis

As the previously published studies on doffing of PPE in a clinical setting are limited in number, data and types of analyses, no formal sample size calculation was done in this observational study. We intended to include a minimum of 100 volunteers, by convenience sampling, working in the designated COVID ICU during the study period. Univariate analyses were performed, and a p-value of ≤ 0.05 was considered significant. We used the χ2 (chi-square) test for categorical variables. Relative risks were calculated, and descriptive data are presented. Analyses were performed with SPSS version 25 (IBM Corp., Armonk, NY).

Results

In total, 100 HCWs were observed during the doffing of PPE through the real-time RADS system. Of the participants enrolled in the study, 31% (n = 31) were resident doctors, 36% (n = 36) were nursing staff, 24% (n = 24) were housekeeping sanitation staff, and 9% (n = 9) were technicians. The demographic parameters and shift timings are detailed in Table 1 (characteristics of HCWs, n = 100, reported as number (%)). The majority of the HCWs (60%) did not commit a breach in biosafety at any step of the CDC doffing protocol, while 40% deviated from the protocol in at least one step. The major deviations occurred during hand hygiene (multiple steps) (34%), removal of outer and inner gloves (12%) and N-95 removal (8%). Protocol deviation in 1-2 steps was committed by 30% of HCWs, in 3-4 steps by 7%, and only 3% deviated from the protocol in more than four steps. A major breach in biosafety was committed by 19% of HCWs, of which 37.5% (nine of 24) were sanitation staff, followed by 22.5% of technicians (two of nine), 16.7% of nursing staff and 6.5% of resident doctors.

Table 2. Deviations from the doffing protocol, number (%): deviation in 1-2 steps, 30 (30); deviation in 3-4 steps, 7 (7); deviation in more than 4 steps, 3 (3); HCWs who made a major breach/error, 19 (19). *Removal of gown, gloves or N-95 was considered a major step, and an error/deviation in any of these steps was considered significant. a: chi-square test; b: p < 0.05 is considered significant.
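As a cross-check of the reported statistics, the association between HCW category and major breaches can be reproduced from the published percentages. The short Python sketch below is illustrative only and is not part of the original SPSS analysis: the 2x2 counts are inferred from the reported figures (nine of 24 sanitation staff versus the remaining 10 major breaches among the other 76 HCWs), and the step labels in the classification helper are hypothetical names, not taken verbatim from the study checklist.

import math
from scipy.stats import chi2_contingency

def is_major_breach(step_errors):
    # Study rule: any error while removing the gown, the gloves or the
    # N-95, or three or more errors across the remaining steps.
    critical = {"gown", "outer_gloves", "inner_gloves", "n95"}
    if any(step in critical for step in step_errors):
        return True
    return sum(1 for step in step_errors if step not in critical) >= 3

# 2x2 table inferred from the reported percentages (an assumption):
# rows = sanitation staff / other HCWs, columns = major breach / none.
table = [[9, 15], [10, 66]]

chi2, p, _, _ = chi2_contingency(table, correction=False)  # Pearson chi-square
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # about 7.02 and 0.008

a, b = table[0]
c, d = table[1]
rr = (a / (a + b)) / (c / (c + d))                 # relative risk
se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))  # Katz log standard error
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)
print(f"RR = {rr:.2f}, 95% CI {lo:.3f}-{hi:.2f}")  # about 2.85 (1.313-6.19)

Run as-is, this yields values matching those quoted in the abstract, which supports the inferred 2x2 counts.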
Discussion

The ongoing COVID-19 pandemic has had a significant impact on global healthcare services. India has the third-highest number of confirmed cases globally, after the United States and Brazil, with more than 1.2 million total confirmed COVID-19 cases and more than 30,000 total deaths as of July 23, 2020 [22]. Government strategies and population responses have not resulted in flattening the epidemic curve, which implies an expected spread over a longer period, for many months or even years. For the frontline HCWs involved in patient care, the greatest risk factors for getting infected with COVID-19 are (1) exposure to an infected patient during care and (2) self-contamination during the doffing of PPE [4]. The importance of preventing exposure to SARS-CoV-2 during patient care cannot be overstated, as more than 90,000 HCWs had developed COVID-19 worldwide as of May 7, 2020 [23]. There is no central registry of confirmed HCW cases in India; however, New Delhi alone had reported more than 2,000 infected HCWs as of June 20, 2020 [24]. The appropriate use of PPE is essential to reduce the number of infected healthcare workers caring for patients with COVID-19 [4]. The most overlooked aspect of the alarming COVID-19 case numbers is whether HCWs are doffing PPE properly, without self-contamination. Facing the formidable outlook of the COVID-19 pandemic, with a deluge of sick patients admitted to ICUs, the HCWs responsible for their care are often busy and doff PPE frequently. Providing round-the-clock assistance to them while doffing PPE is of utmost importance. Attempts to improve doffing of PPE should include both prudence and safety, utilizing innovative or improved methods, training practices, and organizational policies [28]. An imminent crisis offers opportunities for innovation, wherein a conventionally slow-moving healthcare facility can improvise in response to the pandemic [29]. To stem the risk of self-contamination during doffing of PPE, we utilized the real-time RADS system to assist HCWs while doffing [20]. With this system, a trained observer at an offsite location can easily collaborate through a visual screen, imparting crystal-clear communication while assisting doffing. The immediacy and ease of this system are crucial in guiding HCWs during doffing of PPE. With the growing burden on hospitals of caring for COVID-19 patients in an increasing number of ICUs, hospitals tend to increase the number of designated doffing areas. With these areas generally spanning several buildings or locations, the ability to integrate such simple and unified audio-visual technologies is essential. The real-time RADS system is convenient and can bring great value for HCWs doffing in many doffing areas simultaneously (Figure 2). The flexibility of such a platform allows the observer to communicate multiple messages with ease. For example, with multiple doffing areas requiring varying access levels, this system has the added advantage of ensuring the disposal of used PPE from these areas. With the PPE shortage posing a major challenge to healthcare facilities in the ongoing pandemic, the CDC has recommended conservation strategies for optimizing its use [1]. PPE use by onsite trained observers assisting doffing across different duty shifts can be minimized with this novel surveillance system. Moreover, with this system, a calm offsite observer may be better able to guide the doffing process than an exposure-prone, anxious onsite observer.
Figure 2. RADS: remote audio-visual doffing surveillance.

To our knowledge, this is the first study to use a real-time RADS system to inspect and assist HCWs in doffing PPE and to monitor their adherence to the CDC doffing protocol. Having operated this system so far, our organisation has already begun to perceive its benefits. Overall, 40% of HCWs committed a breach in biosafety at some step in the doffing protocol, and 19% made a major breach in biosafety in our study. Kwon et al. reported a 100% incidence of breaches at any step among HCWs during doffing of PPE with the help of an onsite standard observer [10]. Okamoto et al. reported a 39.2% incidence of multiple doffing errors despite prior training [30]. Errors during hand hygiene and removal of gloves/gown/N-95 were the most common during the doffing of PPE [10,17]. Our study demonstrated reduced incidence rates of breaches during hand hygiene (34%), removal of gloves (12%), N-95 removal (8%) and gown/apron removal (2%). Previous studies have observed violations of the doffing protocol at either one step or multiple steps in simulated environments using surrogate markers like fluorescent materials and/or bacteriophages [9][10][11][12]. The findings of our study extend these observations to the real world of a busy clinical setting, where HCWs may deviate from PPE doffing protocols even though they have received training [18]. However, despite committing errors while doffing, none of our HCWs developed any symptoms of COVID-19 or tested positive on RT-PCR at the end of seven-day post-duty sampling. Our study has the following limitations. First, the data of previous simulation studies using surrogate markers may not be comparable to our observational study performed in a clinical setting. Second, swab samples from the PPE of HCWs during/after doffing were not collected for RT-PCR testing to detect/confirm the presence of the virus; only deviations from the protocol were observed. However, not all tested PPE swab samples would necessarily be positive, suggesting that not every breach in biosafety leads to self-contamination. Further studies are required to confirm our preliminary results. Third, our results may not be reflective of HCW populations at large. Based on our observations, we speculate that performing doffing using a real-time RADS system is associated with a low incidence of breaches in biosafety and decreased protocol deviations compared with doffing assistance using onsite standard observers or posters in the doffing area. Utilizing this surveillance system to assist doffing can replace indistinct or inconsistent practices with trouble-free intercommunication that almost replicates real-life face-to-face meetings. The ability to guide HCWs throughout the process of doffing can reduce their anxiety levels and provide a pleasant experience overall. Systems like real-time RADS should be widely implemented to reduce healthcare-associated COVID-19 in HCWs. Our methods and results lay the foundation for future research in a larger population. The current pandemic is a tremendous opportunity for health care planners to strengthen existing health systems and to search for innovative methods to ensure the safety of HCWs caring for COVID-19 patients.

Conclusions

The ongoing COVID-19 pandemic poses a remarkable burden on the health care system. Appropriate doffing of PPE remains crucial to decrease the infection rate among healthcare workers.
With several HCWs requiring assistance while doffing PPE in multiple doffing areas simultaneously, trained observers can coordinate with them efficiently round the clock using a real-time RADS system. This system reduces the requirement for a donned observer, thus conserving PPE and potentially reducing exposure of the observer while preserving the standard of safety during doffing. This study highlights the benefits of a real-time RADS system in lowering the probability of committing a breach in biosafety during doffing of PPE, thereby translating into a lower illness burden among the HCWs involved in the round-the-clock care of COVID-19 patients.
Additional Information
Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Institutional Ethics Committee PGIMER Chandigarh issued approval INT/IEC/2020/SPL-572. The Institute's ethics committee approved the research. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any
Editorial: Turn, Turn, Turn... Author(s): Stoss, Frederick W. | Abstract: Everything is changing instantly. A tribute to John Denver.
I am overlooking a rather panoramic vista, as I jot down the first notes that will eventually become this issue's guest editorial. The landscape of Western New York is a blaze of colors: bright reds and oranges from the abundance of sugar maples, crimson from the sumacs, warmer hues of browns from the oaks, yellows from the beeches and birches. The scene is like that of stained glass windows in a cathedral of autumnal splendor. The image evokes the existence of a truly great Creator. With such a setting and inspiration my thoughts drifted to the words in the Old Testament, "For everything there is a season, and a time for everything under heaven" (Ecclesiastes 3:1). This essay addresses fourteen pairs of life activities. Each element of each pair can be done only independent of the other. They appear to contradict each other. Together they represent a continuum. So, as the Preacher wrote thousands of years ago, our lives are subject to change. Common wisdom holds that change is the one constant in our lives: things will change. These days we are certainly reminded of the word change whenever we look at our children, read the newspaper, talk to a colleague, or attend a professional meeting. The reminders all point to a single dramatic conclusion: we can no longer rely on what used to work in the past, and all aspects of our lives are becoming more unpredictable. I would venture a bold idea and suggest that change itself is changing at a rate faster than ever before, and that most of us have experienced aspects of our lives in simpler times. Newspaper headlines, news magazine covers, nightly news broadcasts and weather reports have dramatically brought a new Spanish phrase into our collective lexicon, El Niño, to remind us of the impacts that we might expect from changing climates. The Third Conference of the Parties (COP-3) to the United Nations Framework Convention on Climate Change convenes in Kyoto, Japan, December 1-10, 1997. This major gathering of international environmental policymakers may be the most important international conference on the environment, surpassing the importance of the 1992 Earth Summit in Rio de Janeiro. The ramifications of policies discussed and potentially implemented at the Kyoto Conference may require us all to make substantial changes in our lifestyles, changes that will require all citizens of the planet Earth to examine the global impacts of our daily lives. Other areas of environmental change have likewise been in the news of late, as international environmental and conservation organizations have had to reduce their staffs, services, and publications to stave off complete collapse. The causes of these changes are complex and controversial, just as the environmental issues they have struggled to advocate. Recent headlines in too many local papers continue to report on the social phenomenon of the 1990s: corporate downsizing, re-engineering, right-sizing, out-sourcing, strategic realignment. These are the new buzzwords describing the changes that are taking place in the workplace, from small businesses to large corporations, from public to special libraries, from research facilities to national and international environmental groups. Hardly any private or public organization has been immune to these factors.
Shrinking workforces, soaring executive salaries, diminished services and surging stock markets seem as juxtaposed as Ecclesiastes' fourteen pairs of life's activities. These socio-economic and socio-political forces tearing at the fiber of the workplace have taken a particularly hard toll on many of the positions found in the soft underbellies of businesses, organizations, and institutions: the divisions, sections, and groups serving as stewards for information and communication. This element of change provides tremendous challenges and begs us to stimulate our most creative talents to assure that the essence of information and communication is not lost along the paths of change. Today's effective librarian and information professional must embrace creativity as the primary tool for managing change in their lives. Creativity fosters a sense of entrepreneurship (and intra-preneurship!), innovation, and newness as means to control the changes we face. We will find our role as risk-takers will be stimulated and sustained by the changes we encounter and the changes we create. We should look at change as a means to find and develop new means of effective, efficient, and equitable access to information. Technological innovations and changes will be the driving forces behind our future changes. We should look at change as an opportunity to improve our services, products, publications, and our profession.
Tribute to John Denver
This is written with a profound sense of loss at the October 12, 1997, death of singer, songwriter, actor, and environmental activist, John Denver. Denver was killed when an experimental aircraft he was flying crashed into Monterey Bay in California. I actually had the opportunity of meeting him in 1969, when he was launching his career as a solo performer. One of the more popular circuits frequented by many performers was the State University of New York Coffee Houses. I remember watching some of his early performances at the SUNY College at Oneonta, the neighboring campus of my undergraduate alma mater, Hartwick College, in Oneonta, New York. I doubt that he had as vivid a memory of our meeting as I do! Over the next 28 years I had the opportunity of seeing him perform at the Saratoga Performing Arts Center, the Finger Lakes Performing Arts Center, the Rochester War Memorial, and Thompson Bolling Arena at the University of Tennessee. John Denver's songs brought us rays of hope and sunshine, moments of peacefulness and calmness, times of exuberance and reflection. During the turbulent times of the late 60s and early 70s, John Denver brought us a music that was wholesome and positive. Even during his later years of personal problems and the darker days of his life, he continued to inspire and entertain. He simply proved that, like the rest of us, he was a simple human being, capable of making mistakes, subject to living a less-than-perfect life, able to be hurt, and, like the rest of us, also capable of inspiring and mending the human spirit. Next to his musical talents, John Denver was a devoted advocate for a better and more peaceful world. Many of his songs were about the environment and were compassionately written because of his strong feelings about nature and the surroundings identified best by his songs "Rocky Mountain High," "Calypso," and "Take Me Home, Country Roads." Several years after I met John Denver in the SUNY College at Oneonta Coffee House, he established the Windstar Foundation, serving as its President until his tragic death.
The Windstar Foundation was created in 1975 and officially established in 1976. Located in the central Colorado Rocky Mountains on a 1,000-acre tract of breathtaking landscape, the foundation has sought to inform citizens about the need for maintaining an ethic for the environment. After ten years of hosting the educational "Choices for the Future" series of symposia, the Board of Trustees recently decided to focus Windstar's energies in new areas, including the exciting prospects of the Windstar Land Conservancy. Since its inception, the Windstar Foundation has sought to "create opportunities for individuals to acquire the knowledge, skills, experiences, and commitment necessary to build a healthy and sustainable future for humanity." To accomplish this task, the foundation publicizes steps that individuals can take to improve environmental quality. It also conducts environmental and nature education programs in global resource management and food production technologies, and further stimulates and sustains the development of the human spirit by fostering an awareness and appreciation of the beauties and bounties of nature and the environment. For those of you who may want to make a tribute to John Denver's efforts to make us more aware of the environment, please consider a memorial gift to the Windstar Foundation. "Now that John Denver is 'Rocky Mountain High,' he can 'Thank God he is a Country Boy' in person." Contributions can be made to:
Sree Narayana Movement and Social Transformation of Modern Kerala: A Bird's-Eye View
Among the various factors that contributed to the transformation of Kerala into a modern democratic society, the role played by the Sree Narayana movement was the most significant. Realising that political power was the master key to social progress, the leaders of the movement came to a tacit understanding with non-Hindus and made permutations and combinations with them to maintain and strengthen their position in the society. Through their protests, incessant conflicts and assertions, they succeeded in transforming the pyramidal social structure of Kerala into a pillar structure. From the position of caste victims they could elevate themselves to the makers of their own destinies. They also succeeded in politicising social relations. The philosophies and pragmatic approaches propounded by Narayana Guru for the material and spiritual advancement of the backward-caste people of Kerala proved successful and contributed to the social transformation from structural relations to human relations and from a caste hierarchical structure to inter-personal relations.
Introduction
It is interesting to investigate the various factors that contributed to the progressive transformation of modern Kerala into a democratic society from the clutches of caste and class differences, feudalistic patterns and outmoded customs that prompted sages like Swami Vivekananda to characterise it as a land of lunatic asylums. Till the beginning of the 19th century the modern state of Kerala presented a picture of caste taboos, untouchability, unapproachability, complicated inheritance laws, irrational customs and manners and unscientific land ownership. The Hindu society was pyramidal in structure, where the numerically insignificant population enjoyed the privileges whereas the non-caste Hindus, who constituted the rank and file of the society, were denied all civic rights. Kerala was the most caste-ridden part of India, where pollution was observed in its most vulgar form. 1 The Aryanisation brought about the chathurvarnya hierarchy in the society, but unlike the greater part of India, below the Brahmins and Ambalavasis or temple servants, all other caste groups were treated as outside the pale of the chathurvarnya system. In the case of non-Hindus like Christians and Muslims, the caste hierarchy was not numerous. As per the Census report of 1901 there were 192 principal castes and 1070 sub-castes among the Hindus, whereas there were only 14 Christian divisions and 47 Muslim divisions. 2 In the Kerala society the lower-caste people were subjected to slavery, humiliation and exploitation and were forced to live with no voice. They were not permitted to enter certain areas, and were prevented from constructing big houses, carrying umbrellas, wearing shoes, using decent language and studying modern science and arts. Their womenfolk were not permitted to cover the upper parts of their bodies or wear gold ornaments, but had to use stone chains known then in Malayalam as kallumala. Several feudal taxes, including a breast tax and a hair tax, were collected from the lower-caste people by the state, a practice legitimised by the Brahmin law givers. The lower-caste Hindus were neither permitted to enter public offices like the post office and village office nor to walk through the approach roads of the temples, due to the obnoxious practice of pollution.
All public amenities built out of public funds were reserved for the use of the upper castes, and hence the lower-caste people could not use rest houses, public wells and so on. A system of forced labour known as uzhiyam and of free labour known as viruthy was imposed on the lower-caste people. The practice of the brahmadeya, agrahara and devadana land grant systems made the socio-economic condition of the higher castes very comfortable. The rulers of the state appeased the higher castes, especially Brahmins, through the uttupuras or free feeding houses for the Brahmins and ceremonies like murajapam and thulabharam. It was King Marthandavarma, the maker of modern Travancore, who introduced the practice of murajapam in 1749. It was a ceremony of chanting Vedic mantras which consisted of sahasranamajapam, mantrajapam, murajapam and jalajapam; the whole ceremony lasted for 56 days. A huge amount was spent from the treasury on appeasing the Brahmins who engaged in the murajapam ceremonies. In practice the socio-economic system that prevailed in Kerala was nothing but theocratic feudalism. The structure and working of the Kerala society were determined by caste status and not economic status; it was birth, not money, that determined the status of an individual, a typical pattern that prevailed in the rest of the country. The purity-pollution dichotomy was the core philosophy that ruled the hierarchical division of the Hindu society of Kerala. Every aspect of life was determined by caste, whether political, social, religious or economic, and the class division of the society was only a later development. In traditional Hindu society, this 'divine inequality' was held high as the order of the day. 3 The system that prevailed in the state was like the system of slavery practiced in America and the apartheid that prevailed on the African continent. Kumaran Asan, a poet and social revolutionary of Kerala, stated that "the cruelty and ruthlessness shown to the lower castes of Kerala by the higher castes were comparable to the cruelty shown to the aborigines of America by the settlers from Spain. It would have been no wonder if the people of the lower castes decided to leave their villages and go to the forests and live like animals, reversing the process described in Darwin's theory of evolution." 4 The institutionalised oppression that prevailed in the Kerala society was so deep that it was impossible to identify it as oppression.
Winds of Change in Kerala Society
Towards the second half of the 19th century winds of change began to appear in the Kerala society, due to the growth of national consciousness and colonial interventions. The introduction of English education and the intervention of Christian missionaries brought about changes of great magnitude. The emergence of a powerful leadership from among the people, its perception of the situation and the capacity to act accordingly, the formation of a counter-ideology, changes in the material conditions of production and, in certain situations, the interface with external forces (vis-à-vis colonial power, for instance) may accelerate or even inaugurate the process of change. The social change that took place in Kerala during the 19th and 20th centuries was a by-product of all these factors. 5 The colonial rule was instrumental in the introduction of modern education, modern technology and economic reforms.
The British, as part of their colonial motives of dominating the Indian states and spreading the Christian religion, attempted to expand their political supremacy over the princely states of Travancore and Cochin as well as British Malabar. Capitalist inroads were made into the feudal social structure of Kerala by the British from the second half of the 18th century. They introduced drastic infrastructural changes in Kerala suited to the growth of a mercantile economy. 6 It was with this objective in mind that the British appointed Residents in the princely states of Travancore and Cochin as a controlling mechanism over the Kings. They promoted progressive land reforms, changed the agrarian system based on feudalism and advised the Kings to introduce social and economic reforms like the abolition of slavery and land ownership rights through the pandarappattam proclamation and the jenmi-kudiyan proclamation. Another important offshoot of the colonial intervention was the growth of a public sphere which helped the slave castes to develop their public opinion. The emergence of a public sphere can be considered a facilitating agency for the modern nationalising project. 7 The missionary works, especially those of the British and European evangelical missionaries, brought about significant impacts on the society. Through the establishment of educational institutions and the imparting of educational facilities to the poor low-caste people, the starting of printing presses, the publishing of newspapers and magazines and the opening of hospitals in different places, the missionaries succeeded in making drastic changes in the social fabric of Kerala along with ensuring a good number of converts to their faith. It was the work of the L.M.S. among the Shanars in South Travancore that sparked off the famous controversy in 1835 about the right of Shanar women to wear an upper cloth above the waist, which Hindus claimed was the right of high-caste women only. 8 The interventions of the missionaries were instrumental in destabilizing the caste structure in Kerala. The people of Kerala responded differently to the changes brought about by the colonial intervention. For the upper-caste Hindus it had both beneficial and adverse impacts, whereas for the lower castes it produced great opportunities for their social and economic advancement. A new spirit of enquiry and criticism as well as civic consciousness developed among the lower castes. 9 The colonial intervention was capable enough to challenge the hierarchical Hindu social system, changing social conceptions and developing democratic ideals. It contributed to the transformation of family relationships and provided more space for representation in educational and representative institutions and government employment. Vivekodayam, the official mouthpiece of the Sree Narayana Dharma Paripalana Yogam, argued, "The peace and freedom that we enjoy now are not experienced by us under any other dispensation. The education drives away the darkness hidden in every nook and corner of our country and transforms millions who were forced to live like animals into human beings. The administrative policy of the British has granted us the freedom, which was beyond our reach within the framework of Sublime religion. It has abolished the monstrous practices and corruption and extirpated the fangs of the venomous serpent of the inhuman caste system." 10 Another notable impact of the colonial inroads into Kerala society was the political awakening that developed among various castes and communities.
The innumerable studies and census reports released by the British rule helped the identity formation and caste solidarity among them. The desire for social mobility was articulated through caste groups. Associations sprang up for each and every caste, and these associations stood for the social and economic advancement of their members. In those days the socially backward classes had to look to the European masters for the redemption of their lost rights as human beings and as citizens. 11 Further, the social and religious reform leaders were largely inspired by the changes brought about by the colonial interventions. It furthered a competitive spirit among the castes and communities of Kerala, and in the long run the communities that took advantage of modernisation benefited from it while those who lagged behind were necessarily handicapped in various ways.
9 M.N. Srinivas writes, "The lower castes wanted a share in the new opportunities and they were also stirred by new equalitarian winds blowing across India. The movement assumed a particularly vigorous form in Peninsular India where the non-Brahmin castes succeeded in obtaining for themselves concessions and privileges"; M.N. Srinivas, Social Change in Modern India, Orient Longman, 1972, p. 73.
10 Vivekodayam, Vol. V, nos. 7, 8, 1909.
Emergence of Social Movements in Kerala
Like the greater part of India, the state of Kerala gave birth to several social reform movements, both reformative and transformative. Such movements originated and progressed in various parts of the country. But due to the pluralities and diversities of religions and social customs, the possibility of a unified reformist or revivalist movement was rather limited. A country-wide organised transformation of Hinduism or Islam would be as miraculous as agreement on a single spoken language for the entire country. 12 Among the social movements started in India, except the movements of Jyothiba Phule of Maharashtra and of Sree Narayana Guru and Ayyankali of Kerala, all others belonged to the higher castes. 13 Unlike in the greater part of India, the social movements that began in Kerala were different because of the peculiar caste-class structure of the region. The peculiarity was that in this region the movements were first spearheaded by men belonging to the lower strata of Hindu society. The central personages of the movements were not high-caste intellectuals inspired by the European Enlightenment but low-caste Ezhavas and later Pulayas, Parayas and members of some smaller caste groups. 14 The state of Kerala before its linguistic reorganisation witnessed revolutionary social movements that touched the higher level of ideology and contributed to the progressive transformation of society into a modern democratic society. Even though the first reformer who succeeded in making some influence on the society of Kerala was Vaikunda Swamikal, the saint and reformer who was instrumental in making a revolutionary transformation in the Kerala society was Sree Narayana Guru, and the movement that he and his disciples carried on in the fertile soil of Kerala was known as the Sree Narayana movement. Their movement came out victorious in setting the base in the society for deconstructing the caste and religious ideology.
Emergence of Sree Narayana Movement
Sree Narayana Guru, who hailed from the Ezhava caste of Hinduism, was the pioneering figure of the social movement in Kerala.
The movement initiated by him was conceived with the objective of social revolution and social transformation, using religion as an effective channel. Born in 1856 at Chempazhanthy, Thiruvananthapuram, as the youngest son of Madan Asan and Kutty, the first revolutionary act of Narayana Guru was the installation of a Siva idol at Aruvippuram in 1888, breaking a right denied to the non-caste Hindus. When his sanskritisation act was questioned by the caste Hindus, his answer was powerful enough to silence his opponents. 15 It was Dr. P. Palpu of the Ezhava community of Travancore who first realized the value of starting an organization for the transformation of society, annihilating the caste taboos and guaranteeing the basic human rights of the downtrodden castes. It was he who provided the necessary background and inspiration for the works of both Sree Narayana Guru and Kumaran Asan. 16 Being a victim of the caste tyranny that existed in Travancore, Dr. Palpu started preparations for an organization for his community men immediately after receiving an unsatisfactory response from the government to the Ezhava Memorial submitted under his leadership. He formulated bylaws for an organization named by him the Ezhava Maha Jana Sabha and started a campaign among his community along with his close associates. When this initiative failed to achieve its desired goal, he met Swamy Vivekananda at Mysore and sought his advice. Vivekananda advised him to select a saint to spiritualise and industrialise the masses for social transformation, because a social reform movement in the Indian context could go deep among the masses only if it had a religious foundation. Dr. Palpu had no option other than to meet Narayana Guru, who had by this time earned a high reputation as a great sanyasin. Guru, who constructed a temple at Aruvippuram after the famous installation of the Siva deity, constituted an eleven-member committee known as the Aruvippuram Vavoottu Yogam to look after the affairs of the temple administration. The temple and its properties were registered with P. Parameswaran, the brother of Dr. Palpu, as its manager. It was P. Parameswaran who facilitated the meeting between Narayana Guru and Dr. Palpu, and after initial discussions it was decided to start an organization by the name Sree Narayana Dharma Paripalana Yogam, which came into being on 15 May 1903 with Sree Narayana Guru as permanent President, Kumaran Asan as Secretary and Dr. Palpu as Vice President. In order to propagate the ideals of Guru and the Sree Narayana Dharma Paripalana Yogam, an official mouthpiece known as Vivekodayam was started, a name selected in memory of Swami Vivekananda, and it began publication with the Vivekananda suktham as its caption: 'Uthishtatha Jagratha Prapyayan Nibodhitha.' It was for the propagation of the ethics of Narayana Guru and the uplift of all the lower-caste people that the SNDP Yogam was founded. Narayana Guru contributed a lot of ideals and philosophical discourses for humanity irrespective of caste barriers. 17 But the Ezhava community that produced the great Guru largely benefited from the pragmatic and spiritual teachings of the Guru. He took the initiative in constructing a large number of temples for the lower castes to save them from the exploitation of the higher castes, introduced a sanskritised form of worship, advocated financial control in personal and private life and taught the people to abandon outmoded customs and practice reforms.
16 Kerala Kaumudi, 8 January 1972.
Sanskritisation of the Ezhava community was the first and foremost aim of Guru. At the same time it contained an element of defiance against the caste order in the act of constructing parallel temples. 18 The practical principles advocated by him were later emulated by the social reformers of other communities of Kerala. The principles and messages of Narayana Guru were a war cry against all kinds of exploitation and a pragmatic solution for a stagnant society. His teachings produced not only spiritual effects but also material impacts. Even though he laid the foundation of temples and muts, the building that came up was one of social equality. Even though he sowed the seeds of spiritualism, what grew in the field were socio-political rights. 19 Sree Narayana Guru treated temples as rallying points of solidarity and centres of all-round activity. In order to ensure the collective gathering of the people irrespective of caste differences, Narayana Guru exhorted his followers to establish monasteries, schools, lecture halls, banks, dispensaries, libraries, rest houses and gardens in the surroundings of the temples constructed by him. One can witness in him not a mere sanyasin preaching his ideals but a karmayogi propagating a pragmatic philosophy of action. The interventions of Narayana Guru were a move towards reforming the religion and not its demolition or annihilation. In that sense Sree Narayana Guru can be hailed as a Hercules who purified Hinduism. 20 In order to modernize his community men and all those who did not belong within the purview of the chathurvarnya system, Sree Narayana Guru proposed pragmatic changes in the social and economic spheres. He discouraged costly marriage ceremonies and polygamy, suggested inter-dining and inter-marriage, exhorted the Ezhavas of Kerala to abandon the traditional occupation of toddy tapping and propagated the vedantic teachings in simple and lucid language. His interventions benefited not only the Ezhava community but several other communities of the state, and by emulating his preaching, different castes and communities came up with new organizations and programmes of reform. The social movement started by Guru produced revolutionary changes in the social, economic, political and religious life of Kerala as a whole. The S.N.D.P. Yogam and the Ezhava community were fortunate to get the services of a group of dedicated young men spread over the whole of Kerala. Prominent among them were Dr. Palpu, Kumaran Asan, Sahodaran Ayyappan, T.K. Madhavan, C. Krishnan, Murkothu Kumaran, Paravur Kesavan Asan, C.R. Kesavan Vaidyar, C. Kesavan, C.V. Kunjuraman, K.R. Narayanan and N. Kumaran. These leaders effectively used their pen and platform for a transitional change in the society, a transition from a change-resistant sacred outlook to a change-ready secular outlook. Modern Kerala society witnessed their interventions in all social and political protests for transforming the society from its pyramidal structure to a pillar structure and for ensuring political participation and the establishment of democratic institutions, because they realized that political power is the master key to social progress.
20 Kerala Kaumudi, 2 January 1972. See also M.C. Joseph, "Adhunika Yugathile Mahanaya Prayogika Thathwajnani," Deena Bandhu, Onam Sree Narayana Jayanti Special, 1962; Vivekodayam Special Supplement, January 1967, p. 170.
Role of Sree Narayana Movement in the Protest for Civic Rights
The self-confidence created by Sree Narayana Guru, the influence of colonial modernity and the impact of
the freedom and revolutionary movements that progressed at the national and international levels exercised a deep influence on the non-Hindus and non-caste Hindus of Kerala at the dawn of the 20th century. This led to a new political alliance among the principal communities of Kerala: the Ezhavas, Muslims and Christians, who pleaded for the civic rights denied to them by the existing regime. The non-Hindus like Christians and Muslims demanded due representation in the government services, including the revenue department, where they were unrepresented due to the attachment of that department to the devaswoms or temple properties. Christians and Muslims were debarred from appointment in the Revenue Department on religious grounds, while the Ezhavas and other low castes were debarred on caste grounds. In addition to this demand, the non-caste Hindus pleaded for entry into all public amenities in the state and recognition as citizens instead of subject people. In place of the concept of Praja (the subject people), the word Pauran (citizen) got wide currency, and consequently Paura Samathva Vadam (Civic Rights) gained significance in the Travancore context. 21 The Civic Rights movement constituted a major chapter in the political history of modern Kerala. The aggrieved communities of Ezhavas, Christians and Muslims demanded the separation of the Devaswoms from the land revenue department to ensure their entry into the services of that department. Moreover, the Revenue Department was the most important department of administration, and the heads of several non-technical departments were chosen from the higher-grade posts in that department. 22 The demands for the separation were raised in the popular assemblies like the Sree Mulam Popular Assembly, the Sree Chitra State Council and the Travancore Legislative Council by the representatives of the various non-Hindus and non-caste Hindus. The demands raised in the popular assemblies by M.D. Arumanayakam, C. Thomas, Varkey John, G. Idichandy, Thomas and others led the government to appoint Krishna Ayyangar, the Forest Settlement Peishkar, to study and report on the issue of the separation of the Devaswom Department. He recommended the separation of all government and private Devaswoms from the Land Revenue Department, including charitable institutions. On the basis of the report a proclamation was made separating the Devaswoms from the Land Revenue Department on 12 April 1922. Through the separation of the Devaswoms from the Land Revenue Department, the main obstacle to the employment of the aggrieved communities in the Revenue Department was removed. The leaders of the Sree Narayana movement struggled for the opening of public amenities to the non-caste Hindus. They opposed the installation of the pollution boards known as 'theendal palakakal' in various parts of the state, which prohibited the movement of the low-caste Hindus near the temples. These notice boards defended, on the one hand, the ritual integrity of the Hindu temples while, on the other, denying an elementary civic right to some sections of the subjects of the state. 25
21 The modern concepts of citizenship and laissez-faire were purely western and developed after the French Revolution. The educated Indians also became acquainted with these concepts. In Travancore, by the beginning of the 20th century, the marginalized communities realised that there was politics even in 'wordings'.
22 Caste and Citizenship in Travancore, Travancore Civic Rights League, 1919, p. 8.
It was for securing the rights of the low-caste Hindus to move freely through the approach roads of the temples that the famous satyagraha movements at Vaikom, Guruvayur, Suchindram, Thiruvarppu, Ambalappuzha, Kalpathi, Paravur and Kanyakulangara were conducted, in which the leadership was taken by T.K. Madhavan and other stalwarts of the Sree Narayana movement.
Role of Sree Narayana Movement in the Nivarthana Agitation
It was for getting adequate representation in the legislative bodies that the aggrieved communities of Ezhavas, Christians and Muslims started the Nivarthana agitation, otherwise called the Abstention movement, in the 1930s. This movement was a continuation of the Civic Rights movement. It was started against the legislative reform proclaimed on 28 October 1932 as Regulation II by His Highness Sri Chitra Thirunal Maharaja, which provided the least representation to the aggrieved communities, who organised a Samiti. They decided to abstain from the election to the legislative council. Kerala witnessed several meetings, protests, signature campaigns and submissions of memorandums during the period of the Abstention movement. On 13 May 1935 C. Kesavan, the leader of the Sree Narayana movement, made a provocative speech at Kozhenchery against the Diwan C.P. Ramaswamy Iyer, the brain behind the legislative reforms of Travancore. He satirically argued that with the development of the Nivarthana agitation 'The Ninth Celebration began in the Huzur Garbha Griha.' 26 He labelled Sir C.P. Ramaswamy a 'Jantu' (creature) and said, "We have to deport that Jantu - I do not say Jantu, but Hindu - beyond Sahyadri and sprinkle the cowdung." 27 Kesavan was tried and put in jail by the authorities. The situation in the state changed with the vehement protest of the aggrieved communities and the favourable stand taken by the new Dewan of Travancore, Habibulla. The Government appointed E. Subramonia Iyer, Retired Principal of the Government Law College, as Franchise Commissioner on 17 August 1935. The new move was wholeheartedly supported by the aggrieved communities. On the basis of the report of the Franchise Commissioner, reservation of seats was provided in the legislatures for the principal communities of non-Hindus and non-caste Hindus.
26 N. Thankappan, "Kesavanum Nivarthana Prasthanavum," Vivekodayam, vol. 3, 1969, p. 80.
27 Ibid.
Sree Narayana Movement and the Politics of Conversion
In Hinduism temples were the visible symbols of religion, and in Kerala the low-caste Hindus were not only denied entry but also prohibited from passing through the approach roads. The innumerable protests and petitions of the people could not elicit any positive signal from the side of the authorities. The movement for temple entry was started in Kerala by the Ezhava community and the leaders of the Sree Narayana movement. To begin with, Raman Thampi, a Judge of the High Court, while delivering the presidential address at the Sree Narayana Guru Jayanthi meeting held at Kollam in 1918, made a plea that the Ezhava communities of Travancore should start parallel temples in the state and raise their demand for entry into Hindu temples. Following this, C.V. Kunjuraman, a leader of the Sree Narayana movement, wrote an article in the Desabhimani daily in favour of starting a temple entry movement. In the popular assemblies the same demand was raised by SNDP leaders like T.K. Madhavan, Kunju Panikker and Chavarkottu Marthandan Vaidyan. T.K. Madhavan succeeded in enlisting the support of Gandhi for the temple entry demands of the non-caste Hindus of the state.
The leaders of the Sree Narayana movement like C. Krishnan, Sahodaran Ayyappan and C.V. Kunjuraman used the politics of conversion as a threat to make the rulers concede temple entry. There were occasional cases of conversion from the Ezhavas to Buddhism, Sikhism, Islam and Christianity. For conversion, one of the early choices of the Ezhava leaders was Buddhism. Changaranmkulathu Krishnan, who owned and edited the newspaper Mitavadi, presented Buddhism as an antithesis to discriminatory Hinduism and did a lot of homework to trace the heritage of the Ezhavas to the Buddhist religion. Further, he argued that Buddhism was the suitable religion for the Ezhavas to convert to because, unlike Christianity and Islam, it guaranteed equal treatment to all its followers, including converts. 28 When the movement for conversion to Buddhism was in the air, one Sreedharan along with a few others embraced Buddhism in Alapuzha. He set up an organisation called the Buddha Mission in Alapuzha and assumed a new name, R. Sugathan. C. Krishnan and Ayyappan invited a Buddha Bhikshu from Ceylon and arranged all facilities for his stay at Kozhikode. Sikhism was another option for the Ezhavas to convert to. When the conversion propaganda crossed the boundaries of Kerala, the Sikh leaders once again reached Kerala and attempted to convert the Ezhavas to their faith. Later, for propagating conversion, two Sikh leaders, namely Master Thara Singh and Sirdar Lal Singh, visited Kerala. They even made attempts to construct a few Gurudvaras in Malabar. An Ezhava leader, K.C. Kuttan, and a few of his associates embraced Sikhism. Mr. Kuttan assumed the new name Sardar Jaya Singh after his conversion. Islam was another religion opted for by the leaders of the Sree Narayana movement as part of the conversion threat. Mr. K.L. Gauba, an Islamic leader, was entrusted with popularising the Islamic religion among the Ezhavas. But Christianity, with which they had closer links, was the most acceptable religion for the Ezhavas to convert to. They reiterated the enormous services rendered by the Christian missionaries for educational advancement and the annihilation of social abuses and caste rigidity. The strong stand for conversion to Christianity was taken by C.V. Kunjuraman, under whose initiative the Ezhava leaders met Bishop Moor at the Changanassery aramana to study the Christian practices. He participated in the Maramon convention and openly declared that the Ezhava community was going to convert to Christianity. Moreover, at the SNDP Yogam meeting held at Changanassery under the presidentship of Sahodaran Ayyappan on 6 May 1936, the issue of conversion came up for discussion and the decision was taken in favour of conversion to Christianity. It was at this point that the Travancore Diwan C.P. Ramaswamy Iyer advised the King Chithira Thirunal to issue the Temple Entry Proclamation, which became a reality on 12 November 1936. The Sree Narayana movement and its leaders also played a crucial role in the struggle for responsible government in Travancore. The decision to form the Travancore State Congress was taken at Thiruvananthapuram. The SNDP Yogam leaders allied with other principal communities against the autocratic rule of Diwan C.P. Ramaswamy Iyer and popularised the demand for responsible government at the national level. An unsuccessful attempt was made on the life of C.P. on 26 July 1947.
After this incident he relinquished the Diwanship on 19 August 1947 and responsible government was declared in Travancore. On the basis of the recommendations of a reform committee that was constituted, a free election was held in Travancore in February 1948 and the first ministry was formed under Pattom Thanu Pillai.
Conclusion
The Sree Narayana Dharma Paripalana Yogam was a pioneer organization that played a conspicuous role in the transformation of Kerala into a modern democratic society. From the position of caste victims, the Ezhava community and the other lower-caste Hindus could elevate themselves to the makers of their own destinies. They could politicize social relations to their advantage, which resulted in a social change involving transformation in social, political and economic organization. This phenomenal change in Kerala occurred through contradictions. The shift was from structural relations to human relations, or from a caste hierarchical structure to inter-personal relations. For achieving this objective the non-Hindus and non-caste Hindus made various permutations and combinations. The socio-religious reform movements, particularly the Sree Narayana movement, worked for the creation of an honourable identity for the depressed castes, who were mute millions without a voice in the public realm. In the process of identity formation, the reformers did not wish to wean away the untouchable castes from the larger Hindu identity. The identity of caste was used by the non-caste Hindus of Kerala as a powerful weapon against an internal colonialism built on caste principles, which according to them was more dangerous than external colonialism. Thus, before political nationalism, caste nationalism had taken root, especially among the large majority of the people of Kerala who remained outcastes and depressed classes. Political liberty for them was a luxury when compared to the necessary social freedom. Even though the caste organisations were primarily meant for the material and spiritual uplift of their respective social groups, they actively involved themselves in the political affairs of the state, because every social issue of that period had its political undertones. Protest movements were the vehicles through which the backward castes in Kerala attempted social transformation, and in this the Sree Narayana movement played the vital role.
Author Details
Shaji Anirudhan, Professor of History, University of Kerala, Kerala, India, Email ID: shajideepam@gmail.com
Evolutionary history of Leishmania killicki (synonymous Leishmania tropica) and taxonomic implications
The taxonomic status of Leishmania (L.) killicki, a parasite that causes chronic cutaneous leishmaniasis, is not yet well defined. Indeed, some researchers suggested that this taxon could be included in the L. tropica complex, whereas others considered it as a distinct phylogenetic complex. To try to solve this taxonomic issue we carried out a detailed study on the evolutionary history of L. killicki relative to L. tropica. Thirty-five L. killicki and 25 L. tropica strains isolated from humans and originating from several countries were characterized using the MultiLocus Enzyme Electrophoresis (MLEE) and the MultiLocus Sequence Typing (MLST) approaches. The results of the genetic and phylogenetic analyses strongly support the hypothesis that L. killicki belongs to the L. tropica complex. Our data suggest that L. killicki emerged from a single founder event and that it evolved independently from L. tropica. However, they do not validate the hypothesis that L. killicki is a distinct complex. Therefore, we suggest naming this taxon L. killicki (synonymous L. tropica) until further epidemiological and phylogenetic studies justify the L. killicki denomination. This study provides taxonomic and phylogenetic information on L. killicki and improves our knowledge on the evolutionary history of this taxon.
Background
Leishmaniases are neglected tropical diseases caused by Leishmania parasites and transmitted to mammals through bites by infected Phlebotomine sandflies of the genus Phlebotomus [1]. In humans, these diseases can have cutaneous (CL), muco-cutaneous (MCL) or visceral (VL) clinical manifestations. Since the first description of the genus Leishmania Ross, 1903, the classification methods have considerably evolved. Indeed, between 1916 and 1987, Leishmania taxonomy followed the Linnaean classification system, mainly based on extrinsic features, such as clinical manifestations, geographical distribution, epidemiological cycles and behaviour in sandfly vectors. This method led to the subdivision of the Leishmania genus into the two sub-genera Leishmania and Viannia [2,3]. In the eighties, the biochemical classification based on the study of the parasite isoenzymatic patterns started to be developed. This approach has evolved from the classical Adansonian to the numerical cladistic classification method that uses isoenzymes as evolutionary markers [4][5][6][7][8]. The description of several Leishmania complexes in the Old and New World is based on these analyses. Specifically, by using numerical phenetic and phylogenetic approaches, Rioux et al. [9] identified four main Leishmania groups in the Old World, while Thomaz et al. and Cupolillo et al. [10,11] defined eight complexes and two Leishmania groups in the New World. Currently, the numerical taxonomic approach based on isoenzyme analysis is considered the gold standard for the classification of the genus Leishmania and is routinely used for classification updates and for epidemiological studies [12,13]. The drawbacks of this approach are the need for bulk cultures of Leishmania parasites and its relatively poor discriminatory power. It is also time-consuming. Therefore, DNA-based techniques represent valuable alternatives for the identification and the classification of these parasites. MLST is one of the most appropriate approaches for taxonomic studies because it provides data on the genetic variations of housekeeping genes.
This approach has been increasingly used for phylogenetic investigations to understand the epidemiological and transmission features of many Leishmania complexes [20,[29][30][31][32][33]. However, because of the complexity of this genus and the lack of studies, several taxa need to be characterized further [34]. Leishmania killicki is a recently described taxon that causes CL in Tunisia [35], Libya [36] and Algeria [37]. The taxonomic status of L. killicki and its evolutionary history relative to L. tropica are based on very few studies and samples. The numerical taxonomic analysis using the Multilocus Enzyme Electrophoresis (MLEE) approach first included this parasite in the L. tropica complex [9,38]. However, after the revision of the Leishmania genus classification, it was considered a separate phylogenetic complex [39]. Recently, an update study by Pratlong et al. [12] confirmed the inclusion of L. killicki within the L. tropica complex. Phenetic and phylogenetic studies using MLMT [40], PCR-sequencing [41] and MLST [31] also classified L. killicki within the L. tropica complex and suggested a closer genetic link with L. tropica from Morocco. However, these data were obtained using only seven L. killicki strains: two strains were analyzed by Schwenkenbecher et al. [40], two by Chaouch et al. [41] and three by El Baidouri et al. [31]. Therefore, the present study aimed to analyze by MLST a large number of L. killicki and L. tropica strains in order to precisely determine the evolutionary history and the taxonomic status of L. killicki.
Origin of strains
For this study, strains of L. killicki (n = 35), L. tropica (n = 25), L. major (n = 1) and L. infantum (n = 1) from different geographic areas and with various zymodeme patterns were included (total = 62 strains). These strains were from human cutaneous lesions, except the L. infantum strain, which was isolated from a patient with VL. Most strains (n = 53) were selected from the Cryobank of the Centre National de Référence des Leishmanioses (CNRL) (Montpellier, France) and nine L. killicki strains were collected by the team of the Laboratoire de Parasitologie - Mycologie Médicale et Moléculaire (Monastir, Tunisia) during epidemiological investigations. Forty-eight strains, among which 34 L. killicki strains (six from Algeria, one from Libya and 27 from Tunisia) and 14 L. tropica strains from Morocco, were analyzed by MLST for the first time during this study. The eleven remaining L. tropica strains were from several countries (one from Egypt, one from Greece, two from Israel, two from Jordan, three from Kenya and two from Yemen) and had previously been typed by MLST. Their sequences were published in GenBank under the following accession numbers: KC158621, KC158637, KC158643, KC158677, KC158682, KC158683, KC158690, KC158696, KC158711, KC158722 and KC158761 (see [31]). One L. killicki strain (LEM163) MHOM/TN/80/LEM163 had also already been analyzed by MLST (GenBank accession number KC158820; see [31]).
Isoenzymatic identification
All studied strains were identified by MLEE, according to Rioux.
DNA extraction
Genomic DNA from cultured parasites was extracted using the QIAamp DNA Mini Kit (Qiagen, Germany) following the manufacturer's recommendations and eluted in 150 μl.
Analysis by Multilocus sequence typing (MLST)
The L. killicki (n = 34) and L.
tropica (n = 14) strains that had not been previously assessed by MLST were typed using the MLST approach based on the analysis of seven loci coding for single-copy housekeeping genes, which was developed and optimized by El Baidouri et al. [31]. Genomic DNA was amplified by real-time PCR using the SYBR Green method (LightCycler 480 II, Roche). The amplified products were sequenced on both strands (Eurofins MWG Operon, Germany) and the obtained sequences were aligned and checked in both directions using the CodonCode Aligner software, v.4.0.1 (CodonCode Co., USA). For each strain, polymorphic sites (PS) and ambiguous positions corresponding to heterozygous sites (HS) were identified in each locus using the same software. The DnaSP software v.5 [42] was used to calculate the number of haplotypes from the concatenated sequences. Phylogenetic relationships were inferred using a Bayesian approach implemented with the MrBayes software v.3.2.3 [43]. The concatenated duplicated sequence alignments of the seven loci for the 32 Leishmania strains representing all the identified haplotypes and the two outgroup strains (n = 34 in total) were used to run two independent chains for 10,000,000 generations each, with trees sampled every 1000 generations. The burn-in period was set to 2,000,000 generations, corresponding to the first 20% of the analyses. Analyses were conducted using the general time reversible model of substitution with a proportion of invariable sites and a gamma distribution estimated by the program (GTR + G + I). Chain convergence was assessed using the average standard deviation of split frequencies (ASDSF). If two runs converge onto the stationary distribution, the ASDSF is expected to approach zero, reflecting the fact that the two tree samples become increasingly similar. An average standard deviation below 0.01 is thought to be a very good indication of convergence (it was below 0.004 in our analysis). The consensus tree was constructed using 1000 trees sampled from the stationary phase. The MEGA 5.10 software [44] was used to identify amino acid variations between L. killicki and L. tropica.
Isoenzymatic identification of Leishmania strains
Among the 62 strains under study, 53 had been previously characterized by MLEE at the Centre National de Référence des Leishmanioses. The nine strains collected by the team of the Laboratoire de Parasitologie - Mycologie Médicale et Moléculaire (Monastir, Tunisia) were identified for the first time in this study using the same technique [12,[35][36][37][38][45][46][47]. Nevertheless, all the strains were analyzed again by MLEE at the Centre National de Référence des Leishmanioses (Montpellier, France). Seventeen zymodemes were identified: three for L. killicki, 12 for L. tropica and a single zymodeme for each of the L. major and L. infantum strains (Table 1) [35]. On the other hand, the MDH, ME, GOT1, GOT2 and FH profiles were different in the zymodemes MON-317 and MON-301 [37], and the MDH, GOT1, GOT2 and FH profiles allowed discriminating between MON-317 and MON-306 (a zymodeme described in Algeria, but not included in our sample collection) [48] (Table 2). For L. tropica, all the identified zymodemes were already known [12,45] (Table 1).
Sequence analysis
The sequences of the L. killicki (n = 34) and L. tropica (n = 14) strains were submitted to GenBank (accession numbers KM085998 to KM086333). The sizes of the seven loci under study were identical to those reported by El Baidouri et al.
[31], except for locus 12.0010 (only 579 bp instead of 714 bp), leading to a total length of 4542 bp for the concatenated sequences (Table 3). All chromatograms were clearly readable. Polymorphic sites (PS) and heterozygous sites (HS), which corresponded to ambiguous positions with two peaks, were easily identified. No tri-allelic site was detected.
Genetic polymorphisms in L. killicki and in L. tropica (Table 4)
Assessment of the presence of mutations in the seven loci under study in the L. killicki and L. tropica strains (heterozygous mutations were excluded from the analysis) identified 55 mutations, of which 29 were silent substitutions and 26 resulted in altered amino acid residues (Table 5). All L. killicki mutations corresponded to a single amino acid change. Conversely, in the L. tropica strains, mutations could lead to more than one amino acid change.
Phylogenetic analysis of L. killicki
In total, 32 different haplotypes were identified: 10 for the 35 L. killicki strains and 22 for the 25 L. tropica strains. Twenty-six haplotypes were unique (eight for L. killicki and 18 for L. tropica) and the two taxa did not share any haplotype. The L. killicki MON-317 (strain LEM6173) had its own haplotype (Table 6). The Bayesian consensus tree using the 32 strains representing all the identified haplotypes was constructed based on the concatenated sequences, with duplicated nucleotide sites to avoid the loss of genetic information in ambiguous positions (Figure 1). The phylogenetic tree showed that L. killicki formed a separate group, although it belonged to the L. tropica complex. The L. killicki cluster showed low structuring and low polymorphism (see Figure 1). In contrast, L. tropica was highly polymorphic, with strong structuring supported by high bootstrap values and some links with the country of origin, especially for strains from Kenya and Yemen. The larger and main clade was composed of all the Moroccan strains together with strains from other countries.
Discussion
Previous studies using a small number of strains and different molecular tools and analytic methods [9,12,31,38,40] included L. killicki in the L. tropica complex, except the study by Rioux and Lanotte [39], in which it was considered a separate phylogenetic complex. The present study aimed to improve the knowledge on L. killicki phylogeny and its evolutionary history relative to L. tropica by using a larger sample of L. killicki strains from different countries. The phylogenetic analyses performed in this study confirm the position of this taxon within L. tropica, in agreement with previous biochemical and genetic findings. The close phylogenetic relationship between these taxa was also confirmed by the low number of polymorphic sites compared to those found between various Leishmania species [30,31]. The phylogenetic tree shows that L. killicki forms an independent group within L. tropica, with a high bootstrap value and no common haplotypes between them. Nevertheless, this taxon is included in the L. tropica complex, and our data indicate that a species status for L. killicki is not justified. Furthermore, based on the diversity of the L. tropica complex and the multiple monophyletic branches in this complex, if L. killicki were to be considered a species, the L. tropica complex would be composed of many species. Therefore, we suggest calling this taxon L. killicki (synonymous L. tropica), as was previously done for L. chagasi (synonymous L. infantum) [9,11,49,50].
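As a companion to the sequence-analysis workflow described above, the sketch below shows how the reported summary statistics (polymorphic sites, IUPAC-ambiguous heterozygous sites, and haplotype counts) can be derived from a concatenated multi-locus alignment; tree inference itself was performed in MrBayes with the settings given in the methods. This is a minimal illustration rather than the authors' pipeline (which used CodonCode Aligner and DnaSP), and the toy strain names and sequences are hypothetical.

```python
# Minimal sketch: count polymorphic sites (PS), heterozygous sites (HS),
# and haplotypes from a concatenated alignment. Data are hypothetical.

from collections import Counter

IUPAC_HET = set("RYSWKMBDHV")  # two-peak ambiguity codes mark heterozygous sites

def site_stats(aligned):
    """Count polymorphic and heterozygous columns in an alignment."""
    length = len(next(iter(aligned.values())))
    assert all(len(s) == length for s in aligned.values()), "sequences must be aligned"
    ps = hs = 0
    for i in range(length):
        column = {s[i] for s in aligned.values()}
        if column & IUPAC_HET:       # any ambiguity code -> heterozygous site
            hs += 1
        if len(column - {"-"}) > 1:  # more than one observed state -> polymorphic
            ps += 1
    return ps, hs

def haplotype_count(aligned):
    """Distinct concatenated sequences = haplotypes (as DnaSP would report)."""
    return len(Counter(aligned.values()))

strains = {                      # hypothetical concatenated sequences
    "LEM_A": "ATGCRATTGC",
    "LEM_B": "ATGCAATTGC",
    "MOR_1": "ATGCAACTGC",
    "MOR_2": "ATGCAACTGC",
}
ps, hs = site_stats(strains)
print(f"{len(strains)} strains: {ps} polymorphic sites, {hs} heterozygous sites, "
      f"{haplotype_count(strains)} haplotypes")
```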
Further epidemiological and clinical studies in the different countries where this taxon has been reported will say whether the L. killicki denomination should be maintained. From an evolutionary point of view, these data strongly suggest that L. killicki descends from L. tropica following only one founder event. This hypothesis is supported by the structure of the phylogenetic tree and by biochemical and genetic data. Indeed, the isoenzymatic characterization showed a low number of L. killicki zymodemes compared to those of L. tropica. This low polymorphism in L. killicki was confirmed by the low numbers of PS, HS and haplotypes and amino acid variations in the sequence of the different strains. The analysis of the phylogenetic tree suggests that L. killicki could have originated from an L. tropica ancestor from the Middle East. This ancestor would have separated into L. tropica in Morocco and other countries and into L. killicki in several other countries. Finally, the lack of shared haplotypes and the identification of the new zymodeme MON-317 and its own haplotype suggest that L. killicki is now evolving independently from L. tropica, probably due to their different transmission cycles (zoonotic for L. killicki [51,52] and both anthroponotic and zoonotic for L. tropica [45,53,54]). As the L. killicki strains showed low structuring and low polymorphism, we could not determine the precise evolutionary history of this taxon and particularly the country in which it emerged for the first time. Based on the epidemiological data, the higher genetic diversity and especially the relatively high number of described cases in Tunisia compared to the other countries [35,[55][56][57], it is likely that this taxon has emerged for the first time in Tunisia and then has spread in other North-African countries. Nevertheless, this should be further investigated. Conclusion The present work brings new insights into the evolutionary history of L. killicki and its taxonomic classification relative to L. tropica. However, more investigations need to be carried out on this model and particularly a detailed population genetics analysis to better understand the epidemiology and population dynamics of this parasite in comparison to L. tropica. revision of the manuscript; ALB participated in the analysis, interpretation of data and has contributed to the draft and revision of the manuscript; NH has been involved in the revision of the manuscript; PL and LT have participated in the technical experiments; FEB has contributed to data analysis; KJ and ZH have participated in sample collection; JPD was involved in the revision of the manuscript; HB directed the study; FP directed the study, revised and approved the manuscript. All authors read and approved the final manuscript.
Development of A4 antibody for detection of neuraminidase I223R/H275Y-associated antiviral multidrug-resistant influenza virus The emergence and spread of antiviral drug-resistant viruses have been a worldwide challenge and a great concern for patient care. We report A4 antibody specifically recognizing and binding to the mutant I223R/H275Y neuraminidase and prove the applicability of A4 antibody for direct detection of antiviral multidrug-resistant viruses in various sensing platforms, including naked-eye detection, surface-enhanced Raman scattering-based immunoassay, and lateral flow system. The development of the A4 antibody enables fast, simple, and reliable point-of-care assays of antiviral multidrug-resistant influenza viruses. In addition to current influenza virus infection testing methods that do not provide information on the antiviral drug-resistance of the virus, diagnostic tests for antiviral multidrug-resistant viruses will improve clinical judgment in the treatment of influenza virus infections, avoid the unnecessary prescription of ineffective drugs, and improve current therapies. • Affinities measured by ELISA are used for a rough estimation. Since the authors measured the affinity of the antibody by surface plasmon resonance, which is the state-of-the-art technique for such measurements, the ELISA measurements should be kept as supplemental information or omitted, since they do not add anything to the paper. Moreover, the methodology for measuring the affinity by ELISA was not described in the materials and methods section. • There is no methodology for docking described in the paper, and no methodology for the free energy calculation either. Please clarify this. • Different from protein structure modelling, and despite its evolution in the past years, protein-protein docking methods are still only valid after experimental validation. Given two structures, a docking algorithm will always find many docking conformations even if the two proteins don't actually bind. Without experimental validation of the docking structure, it's impossible to assess its quality and validity. Without further evidence there is no way to substantiate the claims made. Moreover, all the features pointed out for increased binding affinity consider hypothetical enthalpic changes, when in fact it could be entropic contributions. HA, which commonly renders the virus resistant to antiviral drugs, including oseltamivir. Furthermore, the authors describe the incorporation of this antibody within a lateral flow immunoassay. Even though the flow of experiments is logical, I do not believe that the manuscript at its current stage justifies the conclusions. Most importantly, all tests are performed with recombinant proteins or virus. In order to lay claim that this platform, in particular in combination with the LFI, would have any impact on clinical management of influenza virus infection, it would be absolutely essential that the authors evaluate performance of their assay directly in patient samples. Many diagnostic platforms have ultimately failed, and a rigorous assessment using nasal swabs or bronchoalveolar lavage samples would be needed to assess the performance of this assay. Furthermore, the authors do not discuss the relevance of other drug resistance-associated variants in influenza virus subtypes. Even though I223R/H275Y is an important resistance-associated variant, there are others playing an important role in other influenza virus subtypes.
As far as I understand from the manuscript, the here-described antibody is specific to the H1N1 strain, even though it is not clear whether the authors tested other influenza virus subtypes. This would need to be included to ascertain the diagnostic validity of the assay. Reviewer #3: None We appreciate the reviewers for their valuable comments to improve our manuscript. The changes in the manuscript and the answers to the reviewers' comments are as follows: Reviewer comments In this work, the authors develop a new monoclonal antibody capable of specifically recognizing an influenza neuraminidase double mutant which confers drug resistance. The goal stated by them is to create a point-of-care fast diagnosis method for drug-resistant strains. This is a real problem identified by the authors and they go in the direction of solving it. This is a very respectable work but, despite the paper's merits, it cannot be framed as a protein engineering paper nor as a diagnostic development paper, since it falls short in both areas. I find that there are major issues that have to be addressed to substantiate their claims before it can be accepted in a publication such as Nature Communications. Question 1 They show the antibody recognizing the double-mutant protein and not recognizing the wildtype. However, they never use single mutants in their study, which also confer drug resistance and are more prevalent than the double mutants. It's paramount to test the single mutants to show the diagnostic value of the antibody. Answer) Following the reviewer's suggestion, we tested the detection of single-mutant influenza virus using A4 antibody. Because the H275Y mutation is the most frequently observed drug-resistant mutation, [1] we examined the diagnostic ability of A4 antibody for pH1N1/H275Y mutant virus. The pH1N1/H275Y mutant virus (H275Y mutation A/Korea2785/2009 pdm: NCCP 42017) was obtained from the National Culture Collection for Pathogens (NCCP) operated by the Korea National Institute of Health (KNH). Figure R1a shows binding activity of purified A4 antibody to H275Y NA by competition ELISA. The A4 antibody bound to H275Y NA in a concentration-dependent manner with K d of 0.12 µM. Figure R1b displays the interaction between A4 antibody and pH1N1/H275Y mutant virus (10 7 PFU/mL) by dot-blot analysis. A4 antibody was applied to pH1N1/H275Y mutant virus and HRP-conjugated anti-human IgG Fc was applied for detection. For the comparison, I223R/H275Y pH1N1 (10 7 PFU/mL) and wt NA were also examined. As shown in Figure R1b, the dot was observable only from the double-mutant virus. This suggests the low affinity of A4 antibody to the single-mutant influenza virus. Figure R1. (A) Binding activity of purified A4 antibody to H275Y NA by competition ELISA. (B) Interaction of A4 antibody with I223R/H275Y pH1N1 (10 7 PFU/mL), H275Y pH1N1 (10 7 PFU/mL), and wt NA (0.5 mg/mL) by dot-blot analysis. We also applied A4 antibody for the detection of pH1N1/H275Y mutant virus by using colorimetry, SERS, and LFA. Figure R2a shows absorption spectra of A4-Au NPs in the presence of pH1N1/H275Y mutant virus. The presence of single-mutant virus in the A4-Au NP solutions caused little shift in the absorption spectra. The SERS-based immunoassay result for H275Y pH1N1 (10 6 PFU) also shows negative signals (Figure R2b). Lastly, the micrograph of the LFA after detection of single-mutant virus (10 7 PFU) exhibits no test line. Taken together, we concluded that the A4 antibody can recognize the I223R/H275Y pH1N1 virus specifically.
Although we reported the detection of multidrug-resistant virus in this manuscript, the identification of single-mutant influenza virus is also important, as mentioned by the reviewer. To achieve this goal, we had previously developed methods for the pH1N1/H275Y mutant virus. [2][3][4] Moreover, we will report other state-of-the-art results for the accurate identification of drug-resistant virus soon. "Additionally, we tested the recognition of single-mutant influenza NA protein (H275Y NA) using A4 antibody. The H275Y mutation is the most frequently observed drug-resistant mutation. 10 The A4 antibody bound to H275Y NA in a concentration-dependent manner with K d of 0.12 µM (Figure S4A). Figure S4B displays the interaction between A4 antibody and pH1N1/H275Y mutant virus (10 7 PFU/mL) by dot-blot analysis. For the comparison, I223R/H275Y pH1N1 (10 7 PFU/mL) and wt NA were also examined. As shown in Figure S4B, the dot was observable only from the double-mutant virus. This suggests the low affinity of A4 antibody to the single-mutant influenza virus." (Line 20, Page 17). "The presence of wt pH1N1 virus and H275Y pH1N1 virus in the A4-Au NP reaction solutions caused little change in color or absorption spectral shift (Figure 4A, S6C, S7A)." "For the detection of influenza viruses, the immune substrates were reacted with I223R/H275Y pH1N1, wt pH1N1, or H275Y pH1N1, and then the immunoprobes were reacted (Figure S8). Figure 5a shows the SERS-based immunoassay results for I223R/H275Y pH1N1 (blue spectrum) and wt pH1N1 (black spectrum). The amount of both viruses is 1,500 PFU. The SERS-based immunoassay result for H275Y pH1N1 is shown in Figure S7B. When the sample includes I223R/H275Y mutant virus, Au NPs-on-a-nanoplate (NPs-on-plate) structures can be constructed through the immunoreaction of A4-I223R/H275Y pH1N1-HA. These NPs-on-plate architectures can provide significantly enhanced SERS signals. In contrast, very weak SERS signals were obtained when the sample contains wt pH1N1 or H275Y pH1N1 because A4 does not bind to the wt influenza virus or single-mutant virus." (Line 1, Page 22). "On the other hand, wt pH1N1 virus and H275Y pH1N1 virus are not able to interact with A4-Au NPs; thus, a signal from the test line is not observed. Figures 6A and S7C are the micrographs of the LFAs after detection of wt, double-mutant, and single-mutant viruses. When the I223R/H275Y pH1N1 virus samples were applied, the red test lines were observed clearly. Importantly, even in the cases of high concentrations of wt pH1N1 virus (10 6 PFU) and H275Y virus (10 6 PFU), only the control line was observed in the absence of the I223R/H275Y pH1N1 virus." (Line 9, Page 23). Question 2 They frame the paper as a diagnostic development work, but all the tests are done against recombinant protein or laboratory-produced viruses. It's paramount that their very interesting tools are tested on real human samples, where they should be compared to traditional diagnostic methods such as DNA sequencing and/or qPCR to prove their ability to accurately differentiate the virus in real samples. Sensitivity and specificity numbers should be calculated after these tests to show the validity of the methods. This verified that the A4 antibody can recognize the I223R/H275Y pH1N1 virus specifically in real samples. The sensitivity and specificity of the A4-based LFA developed for rapid antiviral multidrug-resistant influenza virus diagnostic tests are 100% (14/14) and 100% (14/14), respectively." (Line 11, Page 24).
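For reference, the 100% figures quoted above reduce to the standard confusion-matrix definitions; a minimal sketch using the counts from the 14-sample spiked-swab study:

```python
def sensitivity(tp, fn):
    # True positive rate: correctly flagged resistant samples.
    return tp / (tp + fn)

def specificity(tn, fp):
    # True negative rate: correctly cleared non-resistant samples.
    return tn / (tn + fp)

# 14 spiked-positive and 14 negative swabs, all called correctly:
print(f"{sensitivity(14, 0):.0%}")  # 100%
print(f"{specificity(14, 0):.0%}")  # 100%
```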
Question 3-1 They show computational studies for docking of the antibody-antigen complex without providing any methodological detail. Free energy numbers are discussed, but it is never explained how the numbers were obtained. Different from protein structure modeling, and despite its evolution in the past years, protein-protein docking methods are still only valid after experimental validation. Given two structures, a docking algorithm will always find many docking conformations even if the two proteins don't actually bind. Without experimental validation of the docking structure, it's impossible to assess its quality and validity. Without further evidence, there is no way to substantiate the claims made. Moreover, all the features pointed out for increased binding affinity consider hypothetical enthalpic changes, when in fact it could be entropic contributions. Answer) With respect to the docking simulations of the epitope in the CDR of A4, a total of 20 conformations of the epitope were generated with the genetic algorithm. Among these putative binding conformations, those clustered together had similar binding modes differing by less than 1.5 Å in positional root-mean-square deviation. The lowest-energy configuration in the top-ranked cluster was selected as the final structural model for the antibody-epitope complexes. To explain this, we have added a paragraph in the revised manuscript as follows. "Docking simulations of wt and I223R/H275Y NA in the CDR of A4 The 3D structure of A4 obtained in the preceding homology modeling served as the receptor model in docking simulations with wt and I223R/H275Y mutant NA. The epitope structures were extracted from the X-ray crystal structure (PDB entry: 4B7R). 37 Docking simulations were carried out using the AutoDock program to estimate the binding free energy (ΔG bind ) of the epitope in the complementarity-determining region (CDR) of A4, which can be expressed mathematically as follows. 38 The weighting parameters for van der Waals contacts (W vdW ), hydrogen bonds (W hbond ), electrostatic interactions (W elec ), entropic penalty (W tor ), and ligand dehydration free energy (W sol ) were set to 0.1485, 0.0656, 0.1146, 0.3113, and 0.1711, respectively, as in the original AutoDock program. r ij stands for the interatomic distance, and A ij , B ij , C ij , and D ij are associated with the well depth and the equilibrium distance in the potential energy function. The hydrogen bond term has an additional weighting factor (E(t)) to describe the angle-dependent directionality. To compute the electrostatic interaction energy between A4 antibody and the epitopes, we used the sigmoidal function with respect to r ij proposed by Mehler et al. as the distance-dependent dielectric constant. 39 In the entropic penalty term, N tor indicates the number of rotatable bonds in the epitope. In the hydration free energy term, S i and V i denote the atomic solvation energy per unit volume and the fragmental atomic volume, respectively, while Occ i max represents the maximum occupancy of each atom in the epitope. 40 All the energy parameters in Eq. (1) were extracted from the original AutoDock program to derive the binding modes of wt and I223R/H275Y mutant NA in the CDR of A4. Among the 20 conformations generated with the genetic algorithm, those clustered together had similar binding modes differing by less than 1.5 Å in positional root-mean-square deviation.
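(The display equation referred to as Eq. (1) in the quoted methods text appears to have been dropped during extraction. As a hedged reconstruction, not a quote of the paper: the AutoDock scoring function whose published weights match the five values listed above has the form

$$\Delta G_{\mathrm{bind}} = W_{\mathrm{vdW}} \sum_{i,j}\left(\frac{A_{ij}}{r_{ij}^{12}}-\frac{B_{ij}}{r_{ij}^{6}}\right) + W_{\mathrm{hbond}} \sum_{i,j} E(t)\left(\frac{C_{ij}}{r_{ij}^{12}}-\frac{D_{ij}}{r_{ij}^{10}}\right) + W_{\mathrm{elec}} \sum_{i,j}\frac{q_i q_j}{\varepsilon(r_{ij})\, r_{ij}} + W_{\mathrm{tor}} N_{\mathrm{tor}} + W_{\mathrm{sol}} \Delta G_{\mathrm{desolv}}$$

where the desolvation term $\Delta G_{\mathrm{desolv}}$ is built from the $S_i$, $V_i$, and $Occ_i^{max}$ quantities defined in the text; its exact published form varies between AutoDock versions, so it is left symbolic here.)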
The lowest-energy configuration in the top-ranked cluster was selected as the final structural models for antigen-antibody complexes." (Line 22, Page 11). Question 3-2 They show computational studies for docking of the antibody-antigen complex without providing any methodological detail. Free energy number is discussed but it's never explained how does the numbers were obtained. Different from protein structure modeling and despite its evolution in the past years, protein-protein docking methods are still only valid after experimental validation. Given two structures, a docking algorithm will always find many docking conformations even if the two proteins don't actually bind. Without experimental validation of the docking structure, it's impossible to assess its quality and validity. Without further evidence, there is no way to substantiate the claims made. Moreover, all the features pointed out for increased binding affinity consider hypothetical enthalpic changes, when in fact it could be entropic contributions. Answer) We agreed that the binding modes derived from docking simulations had to be validated with experimental approaches. Therefore, we carried out the mutational analysis at positions His94 in the light chain and Trp33 in the heavy chain in order to assess the importance of the hydrophobic interactions to stabilize the epitopes in the CDR of A4. These mutant A4 antibodies were purified with the same method as A4 antibody. Figures with the aromatic residues in CDR seems to be a determinant for selective binding to A4 antibody." (Line 7, Page 19). Question 3-3 They show computational studies for docking of the antibody-antigen complex without providing any methodological detail. Free energy number is discussed but it's never explained how does the numbers were obtained. Different from protein structure modeling and despite its evolution in the past years, protein-protein docking methods are still only valid after experimental validation. Given two structures, a docking algorithm will always find many docking conformations even if the two proteins don't actually bind. Without experimental validation of the docking structure, it's impossible to assess its quality and validity. Without further evidence, there is no way to substantiate the claims made. Moreover, all the features pointed out for increased binding affinity consider hypothetical enthalpic changes, when in fact it could be entropic contributions. Answer) The binding free energy function used in this work included not only the enthalpic term but also the entropic term that is proportional to the number of rotatable bonds in the epitope. To place an emphasis on this point, we added a sentence in the revised manuscript as "In the entropic penalty term, N tor indicates the number of rotatable bonds in the epitope. In the hydration free energy term, S i and V i denote the atomic solvation energy per unit volume and the fragmental atomic volume, respectively, while Occ i max represents the maximum occupancy of each atom in the epitope. 40 " (Line 13, Page 12). Question 4 There are no considerations or references on phage display library specifications. This is very important for an antibody display paper. Answer) Previously constructed large naïve human Fab phage display library (3 × 10 10 ) in Korea Research Institute of Bioscience and Biotechnology was used for antibody screening. 
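Looking back at the pose-selection rule quoted in the answer to Question 3-1 above, a minimal Python sketch of that greedy RMSD-based clustering; `rmsd` is a placeholder for a real coordinate-RMSD routine, and the pose format is an assumption, not the authors' code:

```python
def select_pose(poses, rmsd, cutoff=1.5):
    """Greedy clustering of docking poses: a pose within `cutoff` Å RMSD
    of a cluster's seed joins that cluster. Because poses are visited in
    order of increasing energy, clusters[0] is the top-ranked cluster and
    its seed is the lowest-energy configuration."""
    clusters = []
    for pose in sorted(poses, key=lambda p: p["energy"]):
        for cluster in clusters:
            if rmsd(pose, cluster[0]) < cutoff:
                cluster.append(pose)
                break
        else:
            clusters.append([pose])
    return clusters[0][0]
```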
We added the description in the revised manuscript as "For antibody screening, a previously constructed large naïve human antigen-binding fragment (Fab) phage display library (3 × 10 10 ) at the Korea Research Institute of Bioscience and Biotechnology was used. 31 " (Line 21, Page 8). Additional reviewer comments Those are the major points that I firmly believe should be addressed to make this paper, which has a lot of potential, a very important piece of work that will push the current state-of-the-art in influenza diagnosis. Other minor points are described below. Question 5 Affinities measured by ELISA are used for a rough estimation. Since the authors measured the affinity of the antibody by surface plasmon resonance, which is the state-of-the-art technique for such measurements, the ELISA measurements should be kept as supplemental information or omitted, since they do not add anything to the paper. Moreover, the methodology for measuring the affinity by ELISA was not described in the materials and methods section. Answer) Reaction mixtures containing the purified antibody and various concentrations of NA as a competing antigen were pre-incubated at 37 °C for 1 - 2 h. The mixture was then added to each well previously coated with 100 ng of NA. Anti-human Fc-HRP (Thermo, 1:10,000 v/v) was added to the wells. All incubations were carried out at 37 °C for 1 h. Color was developed with OptEIA TMB Substrate (BD), and the absorbance was measured at 450 nm in a microtiter plate reader. Affinity was determined as the antigen concentration required to inhibit 50% of binding activity, and the K d value was calculated from a Klotz plot. We modified the manuscript as "Microtiter wells were coated with the purified NA (100 ng) in 50 mM sodium carbonate buffer (pH 9.6) at 4 °C overnight, blocked with BSA (2%) in PBS, and washed with PBST. A reaction mixture containing purified antibody (10 nM) and various concentrations (10 −11 -10 −5 M) of NA as a competing antigen was pre-incubated at 37 °C for 1 - 2 h. The mixture was then added to each well previously coated with 100 ng of NA. HRP-conjugated goat anti-human IgG (Pierce) was used for the detection of bound IgG. Color was developed with the 3,3′,5,5′-Tetramethylbenzidine substrate reagent set (BD Biosciences), and the absorbance at 450 nm was measured using a microtiter plate reader (Emax; Molecular Devices). Affinity was determined as the antigen concentration required to inhibit 50% of binding activity, and the binding affinity (K d ) value was calculated from a Klotz plot." (Line 14, Page 10). Question 6 There is no methodology for docking described in the paper, and no methodology for the free energy calculation either. Please clarify this. Answer) Please refer to the answer to Question 3-1. Question 7 Different from protein structure modelling, and despite its evolution in the past years, protein-protein docking methods are still only valid after experimental validation. Given two structures, a docking algorithm will always find many docking conformations even if the two proteins don't actually bind. Without experimental validation of the docking structure, it's impossible to assess its quality and validity. Without further evidence there is no way to substantiate the claims made. Moreover, all the features pointed out for increased binding affinity consider hypothetical enthalpic changes, when in fact it could be entropic contributions. Question 8 Line 34: It abbreviates neuraminidase to "NA", which has not been used in the text so far. Answer) We changed "NA" to "neuraminidase" in the revised manuscript.
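As an illustration of the Klotz-plot analysis described above, a minimal sketch with simulated data; the concentrations and signals below are hypothetical, generated assuming K d = 1 × 10⁻⁷ M, and are not the paper's measurements:

```python
import numpy as np

# Hypothetical competition ELISA: plate signal drops as soluble antigen
# sequesters the antibody (data simulated with Kd = 1e-7 M).
conc   = np.array([1e-9, 1e-8, 1e-7, 1e-6, 1e-5])  # competing NA, M
signal = np.array([0.99, 0.91, 0.50, 0.09, 0.01])  # normalized A450

occupied = 1.0 - signal  # fraction of antibody bound by the competitor

# Klotz (double-reciprocal) plot: 1/occupied vs. 1/[antigen] is linear;
# for a single site, Kd = slope / intercept.
slope, intercept = np.polyfit(1.0 / conc, 1.0 / occupied, 1)
print(f"Kd ~ {slope / intercept:.1e} M")  # ~1e-7 M; IC50 read off at signal = 0.5
```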
Answer) We changed Delbecco to Dulbecco in the revised manuscript. Question 10 Line 130: why did you use an insect expression system instead of a mammalian expression system? Answer) Previously, soluble NA protein from the 1918 H1N1 (A/Brevig Mission/1/1918) strain was successfully expressed using a baculovirus expression system and crystallized for structural analysis. Question 11 Line 138: was the centrifugation for 1 h at 16,000 g intended to precipitate the protein or the cell debris? I assume the protein was in the supernatant and that was used for affinity chromatography purification, but the text suggests that the "protein was obtained by centrifugation", which implies that the protein was pelleted. Answer) We agree with the reviewer's comment. The corresponding sentence in the manuscript has been changed to "After the cell lysate was sonicated to reduce its viscosity, the cell debris was removed by centrifugation for 1 h at 16,000 g. The soluble protein from the cell supernatant was applied to Ni-nitrilotriacetic acid agarose resin (Qiagen), washed, and eluted with buffer (50 mM Tris-HCl, 0.5 M NaCl, 0.5 M imidazole, pH 8.0)." (Line 13, Page 8). Question 12 Line 146: where does the antibody library come from? Is there a reference for this library? Is it human? Is it in scFv or Fab format? What is the promoter? What is the helper phage used? Answer) A previously constructed large naïve human Fab library (3 × 10 10 ) at the Korea Research Institute of Bioscience and Biotechnology was used for antibody screening. Question 13 Line 147: How many washes were performed in each round? It's very important for the authors to give details of the method used to obtain the antibody so others can successfully follow it. Answer) "Four rounds of panning were conducted, and the stringency of selection was increased with each round by gradually increasing the number of washes from 10 to 40." (Line 6, Page 9). Question 14 Line 156: How was the soluble Fab expressed? Was it fused to g3p? Is there an amber stop codon between the Fab gene and g3p so one can express soluble Fab when using a non-suppressor strain? Answer) Soluble Fab expression was induced in E. coli TG1 cells at 30 °C overnight by adding isopropyl β-D-1-thiogalactopyranoside to a final concentration of 1 mM. We added the description in the revised manuscript as "To screen individual clones for specific binding to I223R/H275Y NA, 500 colonies were randomly selected from the output plate after the third or fourth round of panning, cultured in Superbroth medium containing 100 μg/mL ampicillin until an optical density of 0.5, and induced for Fab expression in Escherichia coli TG1 cells at 30 °C overnight by adding isopropyl β-D-1-thiogalactopyranoside to a final concentration of 1 mM." (Line 9, Page 9). Question 15 Line 157: how was the Fab detected? Is there a tag so one can use a labeled anti-tag antibody? Please give more details. Answer) A microtiter plate was coated with 100 ng of I223R/H275Y NA in coating buffer (0.5 M carbonate buffer, pH 9.6) and incubated at 4 °C overnight. After blocking, Goat F(ab')2 Anti-Human IgG (Fab')2-HRP (Abcam) antibody was used for the colorimetric detection of bound clones using the tetramethylbenzidine substrate. We added the description in the revised manuscript as "In detail, a microtiter plate was coated with 100 ng of I223R/H275Y NA in coating buffer (0.05 M carbonate buffer, pH 9.6) and Question 2 Furthermore, the authors do not discuss the relevance of other drug resistance-associated variants in influenza virus subtypes.
Even though I223R/H275Y is an important resistance-associated variant, there are others playing an important role in other influenza virus subtypes. As far as I understand from the manuscript, the here-described antibody is specific to the H1N1 strain, even though it is not clear whether the authors tested other influenza virus subtypes. This would need to be included to ascertain the diagnostic validity of the assay. Answer) We tested the present mutant virus sensing methods using four kinds of influenza virus subtypes. The reverse genetics system was used to generate different subtypes of the I223R/H275Y influenza virus. The expression plasmids for the eight-plasmid reverse genetics system [the remainder of this answer and the accompanying strain panel are only partially recoverable; the legible strain names include a 1934 (H1N1) strain, A/Brisbane/10/2007 (H3N2), and an A/canine strain] I welcome the possibility of reviewing the paper for the 2nd time. In the first version the authors developed a new antibody aiming to improve influenza diagnosis; however, many things remained to be clarified. In this 2nd revision, the authors have addressed the major concerns I had previously: -They have tested the antibody against the single mutant; -They tested the antibody against donor samples; -They provided some experimental validation of the antibody-antigen binding mechanism; -The methodology section has been significantly improved, providing details regarding the phage display methodology and the docking methodology. I believe the paper has scientific merit and recommend it for publication in Nature Communications. Best wishes, André Reviewer #4: Remarks to the Author: Review of the authors' response to reviewer #2's questions. Reviewer #2 Question #1. Reviewer #2 raised a very valid concern on the application of the LFA in a clinical setting without rigorous validation of the assay performance using real clinical material. The authors responded by including a small study (n=14) using NP/OP swabs spiked with 10 3 PFU pH1N1 viruses with I223R/H275Y markers. A few issues remain: 1). Sensitivity is often an issue for LFA assays. The authors did not demonstrate a dose effect with the detection of the I223R/H275Y virus in NP/OP swabs. 2). Matrix effects from different NP/OP swab/bronchoalveolar lavage samples may also impact the sensitivity of the LFA detection; a larger number of actual clinical samples collected from different patients are needed in order to evaluate the performance of this assay. 3). A more rigorous study using actual clinical specimens tested side-by-side with sequencing is indeed needed to truly demonstrate the sensitivity and specificity of the assay and its true value in the actual clinical setting. Question 2 "Furthermore, the authors do not discuss the relevance of other drug resistance-associated variants in influenza virus subtypes. Even though I223R/H275Y is an important resistance-associated variant, there are others playing an important role in other influenza virus subtypes. As far as I understand from the manuscript, the here-described antibody is specific to the H1N1 strain, even though it is not clear whether the authors tested other influenza virus subtypes. This would need to be included to ascertain the diagnostic validity of the assay." Reviewer #2 raised a good point. The authors did not sufficiently address this question. I223R/H275Y is a recognized marker for A(H1N1) oseltamivir/zanamivir resistance, but not necessarily for other subtypes.
Furthermore, there are several other genetic markers that were identified to be associated with drug resistance, and not all drug resistance is based on neuraminidase. The authors should consider modifying the language in the manuscript to address this limitation, including in the title, to avoid overstatement of the study findings; for example, the title can be modified to: "Development of novel A4 antibody for detection of neuraminidase I223R/H275Y associated antiviral multidrug-resistant influenza virus" We appreciate the reviewers for their valuable comments to improve our manuscript. The changes in the manuscript and the answers to the reviewers' comments are as follows: Reviewer comments I welcome the possibility of reviewing the paper for the 2nd time. In the first version the authors developed a new antibody aiming to improve influenza diagnosis; however, many things remained to be clarified. In this 2nd revision, the authors have addressed the major concerns I had previously: -They have tested the antibody against the single mutant; -They tested the antibody against donor samples; -They provided some experimental validation of the antibody-antigen binding mechanism; -The methodology section has been significantly improved, providing details regarding the phage display methodology and the docking methodology. I believe the paper has scientific merit and recommend it for publication in Nature Communications. Best wishes, André Answer) Thank you for the recommendation of our manuscript in Nature Communications. We appreciate your valuable comments to improve our manuscript. Reviewer comments Reviewer #2 raised a very valid concern on the application of the LFA in a clinical setting without rigorous validation of the assay performance using real clinical material. The authors responded by including a small study (n=14) using NP/OP swabs spiked with 10 3 PFU pH1N1 viruses with I223R/H275Y markers. A few issues remain: Question 1 Sensitivity is often an issue for LFA assays. The authors did not demonstrate a dose effect with the detection of the I223R/H275Y virus in NP/OP swabs. Answer) Following the reviewer's suggestion, the dose effect of I223R/H275Y detection in nasopharyngeal swab samples was demonstrated. Figure R1 shows the results of the LFA-based detection across a range of viral doses. In addition to the previous data, an additional 26 nasopharyngeal swab samples were examined using the developed LFA, as the reviewer suggested. Figure R2 shows the results of a newly tested LFA after detecting the mutant virus in human nasopharyngeal swab samples. The test line was only observed in the presence of the I223R/H275Y pH1N1 viruses. In this study, a total of 40 nasopharyngeal swab samples were tested. According to the guideline of the National Institute of Food and Drug Safety Evaluation of Korea, a minimum of 20 sample results are required for approval of an in vitro diagnostic system. Therefore, the current results are suitable for assessing the performance of the A4 antibody-based LFA assay. The figure was included in Supplementary Information and the manuscript was changed as "In total, we tested 40 human nasopharyngeal swab samples (Figure S12). The sensitivity and specificity of the A4-based LFA developed for rapid antiviral multidrug-resistant influenza virus diagnostic tests are 100% (40/40) and 100% (40/40), respectively. According to the guideline of the National Institute of Food and Drug Safety Evaluation of Korea, a minimum of 20 sample results are required for approval of an in vitro diagnostic system." (Line 21, Page 24).
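A dose-effect analysis of this kind is commonly summarized with a four-parameter logistic fit of test-line signal against viral dose; a minimal sketch with hypothetical intensities (not the study's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    """Four-parameter logistic commonly used for dose-response curves."""
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill)

# Hypothetical test-line intensities over a 10-fold dilution series (PFU).
dose      = np.array([1e2, 1e3, 1e4, 1e5, 1e6, 1e7])
intensity = np.array([0.02, 0.08, 0.35, 0.70, 0.92, 0.98])

params, _ = curve_fit(four_pl, dose, intensity,
                      p0=[0.0, 1.0, 1e4, 1.0], maxfev=10000)
print(f"EC50 ~ {params[2]:.2e} PFU")  # dose giving half-maximal signal
```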
Question 3 A more rigorous study using actual clinical specimens tested side-by-side with sequencing is indeed needed to truly demonstrate the sensitivity and specificity of the assay and its true value in the actual clinical setting. Answer) Following the reviewer's suggestion, sequencing tests were performed and compared to the LFA assay results. The viral sequence was identical in all samples tested, as shown in Figure R3. The sensitivity and specificity of the A4-based LFA assay for antiviral multidrug-resistant influenza virus diagnosis are determined as 100% (26/26) and 100% (26/26), respectively (Figure R2). (Previous Question) Furthermore, the authors do not discuss the relevance of other drug resistance-associated variants in influenza virus subtypes. Even though I223R/H275Y is an important resistance-associated variant, there are others playing an important role in other influenza virus subtypes. As far as I understand from the manuscript, the here-described antibody is specific to the H1N1 strain, even though it is not clear whether the authors tested other influenza virus subtypes. This would need to be included to ascertain the diagnostic validity of the assay. Reviewer #2 raised a good point. The authors did not sufficiently address this question. I223R/H275Y is a recognized marker for A(H1N1) oseltamivir/zanamivir resistance, but not necessarily for other subtypes. Furthermore, there are several other genetic markers that were identified to be associated with drug resistance, and not all drug resistance is based on neuraminidase. The authors should consider modifying the language in the manuscript to address this limitation, including in the title, to avoid overstatement of the study findings; for example, the title can be modified to: "Development of novel A4 antibody for detection of neuraminidase I223R/H275Y associated antiviral multidrug-resistant influenza virus" Answer) The goal of this study is to find an antibody to the I223R/H275Y mutant virus, an antiviral multidrug-resistant virus for both zanamivir and oseltamivir. After careful evaluation of A4 antibody, we were successful in detecting the mutant virus in several ways. However, as the reviewer pointed out, there are certainly other genetic markers of drug resistance, and not all drug resistance is based on NA. In agreement with the reviewer's suggestion, the title of the manuscript has been modified to limit the current approach to the detection of neuraminidase I223R/H275Y associated antiviral multidrug-resistant influenza virus only.
Microglia Phenotypes Converge in Aging and Neurodegenerative Disease Microglia, the primary immune cells of the central nervous system, hold a multitude of tasks in order to ensure brain homeostasis and are one of the best predictors of biological age on a cellular level. We and others have shown that these long-lived cells undergo an aging process that impedes their ability to perform some of the most vital homeostatic functions such as immune surveillance, acute injury response, and clearance of debris. Microglia have been described as gradually transitioning from a homeostatic state to an activated state in response to various insults, as well as aging. However, microglia show diverse responses to presented stimuli in the form of acute injury or chronic disease. This complexity is potentially further compounded by the distinct alterations that globally occur in the aging process. In this review, we discuss factors that may contribute to microglial aging, as well as transcriptional microglia alterations that occur in old age. We then compare these distinct phenotypic changes with microglial phenotype in neurodegenerative disease. INTRODUCTION Microglia originate from hematopoietic progenitor cells found in the yolk sac and, upon entering the brain, gradually adapt a homeostatic microglial phenotype (1). Homeostatic microglia feature a distinct ramified morphology and were first identified by del Rio-Hortega (2). As the primary immune cells of the brain, microglia are mostly associated with acute or chronic responses to injury. In response to these stimuli, microglia display morphological and biochemical changes that have often been grouped under the term "activation." These changes can entail a variety of downstream effects including cytokine and chemokine production, enhanced phagocytosis, proliferation, and migration. Historically, based on in vitro experiments, this rather generalized diversion from the homeostatic cell state has led to a differentiation into two microglial groups: M1 (proinflammatory) and M2 (neuroprotective). This relatively oversimplified classification (3) of microglial reactivity is now being refined by single-cell resolution techniques that show diverse transcriptional states that can be adapted by microglia in either a gradual or acute manner. Intriguingly, we and others have shown that microglia are long-lived cells that undergo an aging process on a cellular level, altering their surveillance capacity and injury response time, and also influence neurodegenerative diseases (4)(5)(6)(7) [reviewed in (8,9)]. These rather slow and gradual alterations are contrasted by rapid changes brought on by acute damage. Local signals in the microglial microenvironment drive acute as well as gradual changes, leading to broad alterations in gene transcription, cell morphology, phagocytotic activity, and proliferation status (10)(11)(12). Over decades of research, a wide variety of terms have been used to describe microglial cell states (13). As research has advanced, new distinctions in microglial phenotypes have been identified based on expression of particular genes and, most recently, their transcriptome signature. In this review, we aim to uncover potential similarities in microglial phenotypes in advanced age and neurodegenerative disease ( Figure 1A) (12,15,16). MICROGLIAL AGING: A SUMMATION OF FACTORS ACCUMULATED THROUGHOUT LIFE? 
It recently was shown that not only cells of the adaptive immune system but also those of the innate immune system can display memory effects (5). The priming step of the immune cells, induced by a primary insult, can result in an enhanced or dampened subsequent injury response (17)(18)(19). In vivo tracking of individual microglial cells showed that microglia can revert to a homeostatic morphology post-injury (20)(21)(22). However, even if the morphological homeostatic phenotype is reestablished, epigenetic modification may render the cells altered from their homeostatic state. As long-lived cells, microglia are very likely to be primed by signals in their microenvironment, hence memory effects might add up during the cell's life span. This poses the question if these rather individualized priming steps might be part of the cellular aging process or if they have the potential of adversely affecting healthy aging. Furthermore, as some acute injuries such as microlesions are likely to only affect surrounding microglial cells, the priming impact might be locally restricted. Individual local events could result in the generation of distinct spatially restricted transcriptional and phenotypic alteration as microglial phenotypes are partially regulated through membrane-bound pattern recognition receptors (PRRs), which depend on molecules released by cells in their microenvironment. In a local ischemic event, disease-associated molecular patterns are passively released from dying cells (23), which may lead to a transient activation with a priming effect, whereas the deposition of amyloid β (Aβ) in the parenchyma [a hallmark of Alzheimer disease (AD), but can also be found in aged brains on a lesser scale] may lead to a different chronic disease-associated microglial phenotype, which not only depends on PRRs, as microglia also possess a wide variety of receptors to detect other types of molecules such as hormones and neurotransmitters (23). Therefore, a multitude of factors exist that could potentially affect local and global microglial behavior both in a short-or long-term fashion. A recent study demonstrated the importance of the local milieu within the brain in this regard by transiently depleting microglia using the colony-stimulating factor 1 receptor antagonist PLX5622 (24) in aged mice (25). The authors hypothesized that withdrawal of the drug would result in the replenishment of "young" unprimed microglia (25). Conversely, they reported that the transcriptomic alterations in old age were only partially reversed, and the replenished microglia responded to lipopolysaccharide (LPS) with an exaggerated proinflammatory response, typical of primed microglia. Further in vitro experiments confirmed that media conditioned by 24-h cultivation of brain slices from aged, but not young adult, mice were sufficient to trigger an exacerbated response to LPS in neonatal microglia, elegantly demonstrating the importance of the milieu in which microglia are resident (24). When focusing on the healthy aging process of microglia, they have been described as dystrophic or senescent [reviewed in (26)]. Historically, senescence is characterized by arrested growth caused by oxidative stress as well as elevated DNA damage. Age-related changes in the secretory profile were described to coin the term senescence-associated secretory phenotype, classifying a particular cell state in the aging brain (27,28). 
The term dystrophic, on the other hand, arose from the observation of changes in microglial morphology in brain sections from elderly humans and potentially includes all visually altered microglia (20). Among other features, this phenotype includes the beading of microglial processes, which are held together by thin channels (29), and was proposed to signify microglial senescence (30). Previous studies addressing the question of whether one of the described phenotypes is purely age-related are controversial; some have found dystrophic microglia in aged humans without any underlying neurodegenerative disorders (30)(31)(32), whereas others gathered evidence suggesting that dystrophic microglia are associated with a variety of diseases including, e.g., AD (29,(31)(32)(33)(34), Huntington disease (14), and multiple sclerosis (35). However, recently some light has been shed on the question of whether these two terms describe the same phenotype or two different ones. To address this issue, Shahidehpour et al. (36) conducted stereological analysis of microglia in human brain tissue spanning the ages of 10-90 years. The analysis revealed an increased number of dystrophic microglia with age, which, however, was much greater when neurodegenerative pathology was present as well (36). They hence conclude that aging itself is only associated with a minor increase in dystrophic microglia (36). It is thus possible that the disease event that generated an activated microglia phenotype, potentially early in life, has a priming effect on cellular aging, leading to an increase of dystrophic microglia in old age (Figure 1A). The opposite assumption is also valid; the overall cellular aging process is likely causative for poor local injury responses, resulting in an ineffective healing process that in turn might again increase the amount of dystrophic/senescent microglia. One example is the finding that population RNAseq of murine microglia has identified a consistent age-dependent increase in genes associated with a low-grade inflammatory response (37), which might be causative for a poor local injury response by microglia to additional acute insults. We have found the injury response time to a local laser lesion of aged microglia (∼2.4-year-old mice) to be reduced by ∼50%. Additionally, while microglial process end-tips in young and adult mice showed an increase in local diameters after a microlaser lesion, the aged animals displayed significantly fewer morphological changes upon lesioning, as the process end-tips were already found to be enlarged prior to the insult (4). Further supporting a gradual overall drift into a low-grade inflammatory state during aging, Minhas et al. recently put forward evidence suggesting that a change in the metabolic state of macrophages (in the brain and periphery), signified by a reduction of the two main metabolic pathways (glycolysis and oxidative phosphorylation), affects brain health, as acute energy demands, e.g., in order to support macrophage activation, can no longer be met (38). More specifically, prostaglandin E 2 (PGE 2 ), a proinflammatory signaling protein, which is known to increase not only during aging but also in AD, was investigated. [Figure 1 caption: Conversely, only nine genes were mutually expressed between DAM and OA3, suggesting that the OA3 population may be distinct from both DAM and OA2 microglia. Venn diagram generated using Venny 2.1 (https://bioinfogp.cnb.csic.es/tools/venny/).]
The inhibition of PGE 2 was shown to lead to brain rejuvenation, reducing inflammatory levels in the aged brain (38). These findings are of particular interest as they support other reports that microglia can be influenced by stimulation of peripheral immune cells (5) and that their responsiveness can be (partially) restored even in the aged brain. With the detailed mechanism leading to the reported reduction in metabolism still being unknown, a future challenge is to unravel this pathway to explore possible therapeutic interventions, aiding in a wide range of diseases, as well as aging itself. TRANSCRIPTOMIC ALTERATIONS IN MICROGLIA IN OLD AGE AND NEURODEGENERATIVE DISEASE In the effort to characterize microglia phenotype in disease, scRNAseq has become a powerful weapon and has facilitated even greater insight into the transcriptomic alterations of microglia in diverse conditions. At present, however, comparing transcriptomic data from different studies comes with inherent challenges. Variability can be introduced at many stages such as dissociation, gating for cell sorting, the scRNAseq procedure, and data analysis. Technical limitations can further result in discrepant data. For example, it was recently revealed that the technical limitations of single-nucleus (sn)RNAseq (a theoretically useful approach for postmortem human tissue as it is compatible with frozen tissue) may have resulted in data that lead inadvertently to the overstating of differences between murine and human microglial transcriptomes (16). Thrupp et al. found that many transcripts associated with microglial activation are concentrated in the cytosol as opposed to the nuclei, resulting in many transcripts remaining undetected with snRNAseq (16). Another pertinent issue is that despite recent technical advances in scRNAseq, correlating transcriptional profiles to mechanistic data remains a persistent bottleneck. Spatially resolved scRNAseq, especially at a single-cell resolution, would provide an unparalleled advantage in correlating morphological observations to distinct transcriptome signatures. At present, however, these limitations greatly hinder correlating microglial phenotypes (as evident by morphology or behavior in vivo) with specific gene-expression profiles. Despite these difficulties, highly valuable data have been collected in several high-impact studies. Hammond et al. (12) identified two distinct microglia clusters using scRNAseq that, in the absence of overt pathology, were expanded in aged (∼1.5 years old) mice. One cluster (entitled OA2) was found to up-regulate the chemokines Ccl3 and Ccl4 along with interleukin 1 beta (Il1b), indicative of a shift to a proinflammatory phenotype during aging (12). The other emerging cluster in aged animals was found to be enriched in several interferonresponse genes [Ifitm3, Rtp4, and Oasl2 (entitled OA3)] (12). This shift toward expression of interferon-response genes is highly interesting, given that recent research has demonstrated that interferon signaling in a mouse model of AD triggers microglial activation, neuroinflammation, and synaptic loss in response to nucleic acid containing Aβ plaques (39). Sala Frigerio et al. (40) also identified two microglial clusters using scRNAseq that were expanded in aged mice. One cluster, entitled activated response microglia (ARM), increased from around ∼3% in 3-month-old mice to ∼12% of total microglia in 21-month-old mice (40). 
ARM microglia were found to upregulate histocompatibility complex class II genes (Cd74, H2-Ab1, and H2-Aa) and proinflammatory genes Cst7, Clec7a, and Itgax (encoding CD11c). Notably, with the exception of Itgax and Clec7a, these genes were also found to be up-regulated in OA2 microglia (12). Another microglia cluster was also identified, dubbed interferon-response microglia (IRM) (40). This cluster was found to up-regulate Ifit3, Ifitm3, Irf7, and Oasl2, consistent with OA3 microglia (12). Furthermore, using semisupervised pseudotime analysis, the authors found that homeostatic microglia in old age can transition into either IRM or ARM. Taken together, these data suggest that in old age, microglia transition into one of two mutually exclusive states, one characterized by up-regulation of interferon-response genes and the other characterized by a shift to a proinflammatory state. However, as previously mentioned, it remains challenging to ascertain the impact that transcriptional alterations have on microglial phenotype in vivo. Microglia play complex roles in neurodegenerative disease (41), often being beneficial in some respects, while pathogenic in others. Taking AD as an example, microglia phagocytose Aβ (42,43) (although this becomes attenuated with aging/Aβ plaque load) and encircle Aβ plaques, prohibiting the spread of [comparably more toxic (44)] soluble amyloid species into the surrounding brain parenchyma (45). Consistent with this, ablation of microglia after amyloid deposition results in increased LAMP1 immunoreactivity surrounding Aβ plaques (46); indicative of dystrophic neurites (47). While these findings suggest that microglia play a beneficial role in AD, microglia themselves contribute to synaptic and neuronal loss in AD (48). It was also shown that abolishing microglia after Aβ plaques are well-established in the brain fails to provide any beneficial effects (49) in the APPPS1 AD mouse model. Furthermore, if microglia are abolished prior to amyloid deposition, Aβ plaques fail to develop in the 5XFAD AD mouse model (46) (although Aβ accumulates on blood vessels resulting in cerebral amyloid angiopathy, a risk factor for hemorrhagic stroke). Taken together, because of the complex matter at hand, it is hard to determine whether microglia provide a net beneficial or detrimental role in AD, and a simple binary answer is not likely. Despite the potential for global or spatially restricted alterations in transcriptome signature in aging and pathology, recent publications seem to suggest a common-or at least highly similar-transcriptional program (50,51) in diseased states [i.e., disease-associated microglia (DAM) (52), microglia neurogenerative phenotype (53), ARM (40), and, most recently, white matter-associated microglia (54), a recently identified microglial phenotype that expands during aging and in neurodegenerative disease]. Massively parallel single-cell analysis with chromatin profiling on immune (CD45+) cells from 5XFAD mouse brains (a well-established AD mouse model) revealed two unique microglial clusters (52). Intriguingly, these clusters expressed genes implicated in lipid metabolism and phagocytosis (52). By analyzing mice from the age of 1-8 months (as Aβ deposition is advancing), the authors uncovered an age-dependent shift from a homeostatic phenotype to a DAM phenotype with one of the two clusters identified as being a transitory stage (defined as stage 1) (52). 
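The ARM/OA2 and IRM/OA3 correspondences noted above can be made concrete by intersecting the marker sets; a toy sketch using only the handful of genes named in the text (the published comparisons used the full >1.5-fold signatures, so these lists are illustrative, not complete):

```python
arm = {"Cd74", "H2-Ab1", "H2-Aa", "Cst7", "Clec7a", "Itgax"}       # ARM markers named above
oa2 = {"Ccl3", "Ccl4", "Il1b", "Cd74", "H2-Ab1", "H2-Aa", "Cst7"}  # OA2 markers (partial)
irm = {"Ifit3", "Ifitm3", "Irf7", "Oasl2"}                         # IRM markers named above
oa3 = {"Ifitm3", "Rtp4", "Oasl2"}                                  # OA3 markers named above

print(sorted(arm & oa2))  # ARM genes also up in OA2: all but Itgax and Clec7a
print(sorted(irm & oa3))  # shared interferon-response genes: Ifitm3, Oasl2
```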
DAM are characterized by the up-regulation of Itgax, Trem2, Axl, Cst7, Ctsl, Lpl, Cd9, Csf1, Ccl6, Clec7a, Lilrb4, and Timp2 (52). Notably, Itgax in particular was found in every cell featuring a DAM transcriptome signature (52). In addition, they downregulate Cx3cr1, P2ry12, and Tmem119 (52), genes typically expressed in homeostatic microglia (12). The authors further identified that DAM are in close association with Aβ plaques and contain phagocytosed Aβ (52). Intriguingly, the authors identified that the transition from stage 1 DAM to stage 2 DAM was dependent on triggering receptor expressed on myeloid cells 2 (Trem2) (52); homozygous loss-of-function mutations in this gene are known to cause autosomal recessive early-onset dementia [Nasu-Hakola disease and frontotemporal dementia (FTD)-like disease] (55)(56)(57). TREM2 acts as a receptor for apolipoprotein E, Aβ, and high- and low-density lipoprotein and has been identified as crucial for triggering microglial phagocytosis, proliferation, and inflammation (58). Importantly, loss-of-function mutations in Trem2 have also been implicated in diverse neurodegenerative diseases including AD (59), amyotrophic lateral sclerosis (60), Parkinson disease (61), and FTD (62). Consistent with this, DAM transcriptomes have now been identified in diverse models of neurodegenerative disease. Based on these elegant findings, it is tempting to speculate that the DAM phenotype represents a pan-neurodegenerative disease response. GRADUAL ACCUMULATION OF DAM DURING AGING Even though we and others have shown morphological and functional changes of microglial cells in aged mice, only a rather small subpopulation shows a transcriptomic profile consistent with that of DAM. By comparing (Figure 1B) the top upregulated genes (greater than 1.5-fold change) between DAM (52) and the small clusters of transcriptionally distinct cells from aged mice (OA2 and OA3) (12), a set of 39 mutually expressed genes can be identified between DAM and OA2 (Figure 1C). Conversely, only nine genes were mutually expressed between DAM and OA3 (Figure 1D). The amount of overlap in genes upregulated in DAM and OA2 appears to support the hypothesis that a common transcriptional program is activated in both DAM and aged microglia, but only in a minority of the cell population in healthy aging. However, in various disease models, the cell population displaying transcriptomic signatures consistent with DAM is much larger, suggesting that pathological insults during the animal's life span will heavily expand the DAM population, which has a potential impact on cellular aging. Consistent with this, the ARM phenotype described by Sala Frigerio et al. (consistent with DAM) eventually becomes the majority population of microglia at 12 months in an AD mouse model (App NL−G−F ) (40). IRM (consistent with OA3) conversely seemed to increase with age more rapidly in App NL−G−F mice than in wild-type mice but ultimately also represented only a minority of cells (< ∼5%), similar to wild-type mice. Ferritin expression has been identified as a marker for dystrophic microglia (36) and senescence (26). Shahidehpour and colleagues (36) found ferritin-expressing dystrophic cells to be present, but again to a very small extent, in healthy aged humans. Conversely, the number of dystrophic microglia was significantly increased in patients suffering from neurodegenerative disorders. Consistent with the data of Shahidehpour et al.
Consistent with this expansion, the ARM phenotype described by Frigerio et al. (consistent with DAM) eventually becomes the majority population of microglia at 12 months in an AD mouse model (App NL-G-F) (40). IRM (consistent with OA3) conversely seemed to increase with age more rapidly in App NL-G-F mice than in wild-type mice but ultimately also represented only a minority of cells (less than ∼5%), similar to wild-type mice. Ferritin expression has been identified as a marker of dystrophic microglia (36) and senescence (26). Shahidehpour and colleagues (36) found ferritin-expressing dystrophic cells to be present, but again to a very small extent, in healthy aged humans. Conversely, the number of dystrophic microglia was significantly increased in patients suffering from neurodegenerative disorders. Consistent with the data of Shahidehpour et al. (36), ferritin is expressed in the OA2 subpopulation (12), which again is minimal in the absence of overt pathology, yet abundant in neurodegenerative disease conditions (DAM) (52). Taken together, the data suggest a disease-induced increase in cellular aging hallmarks.

CONCLUSIONS

Microglia with transcriptomic signatures consistent with those found in neurodegenerative diseases represent only a minority of microglia in healthy aging. It remains unclear what this subset of microglia contributes toward overall microglial dysfunction or, conversely, whether they might have a beneficial impact. With regard to microglia featuring a neurodegenerative disease-associated transcriptome signature, it is possible that neurodegenerative disease causes advanced cellular aging, or conversely, advanced cellular aging may be a contributing factor to neurodegenerative disease (63). Further studies will be required to interrogate the roles of these interesting microglial populations in old age. In addition, it seems reasonable to speculate that data gathered from mice at the end of the mouse life span (∼2.5 years) would be particularly valuable given the advanced ages reached by humans in contemporary society. scRNA-seq studies to date seem to suggest that, despite the many factors that could potentially influence microglial phenotypes in either a global or spatially restricted manner, microglia in aged mice appear to consist of homeostatic microglia, neurodegenerative disease-like microglia, and IRM. The lack of apparent heterogeneity could conceivably be due in part to the artificial conditions in which laboratory rodents reside. This is an important caveat that should be considered when attempting to extrapolate observations from laboratory rodents to humans. With highly individualized lifestyles, disease backgrounds, and environmental factors, microglia in humans are likely to be primed more diversely and extensively given human longevity in comparison to laboratory mice. Hence, much remains to be discovered that could potentially bring valuable mechanistic insights into both aging and neurodegenerative disease.
Eos Negatively Regulates Human γ-globin Gene Transcription during Erythroid Differentiation

Background
Human globin gene expression is precisely regulated by a complicated network of transcription factors and chromatin modifying activities during development and erythropoiesis. Eos (Ikaros family zinc finger 4, IKZF4), a member of the zinc finger transcription factor Ikaros family, plays a pivotal role as a repressor of gene expression. The aim of this study was to examine the role of Eos in globin gene regulation.

Methodology/Principal Findings
Western blot and quantitative real-time PCR detected a gradual decrease in Eos expression during erythroid differentiation of hemin-induced K562 cells and Epo-induced CD34+ hematopoietic stem/progenitor cells (HPCs). DNA transfection and lentivirus-mediated gene transfer demonstrated that the enforced expression of Eos significantly represses the expression of γ-globin, but not other globin genes, in K562 cells and CD34+ HPCs. Consistent with a direct role of Eos in globin gene regulation, chromatin immunoprecipitation and dual-luciferase reporter assays identified three discrete sites, located in the DNase I hypersensitivity site 3 (HS3) of the β-globin locus control region (LCR) and the promoter regions of the Gγ- and Aγ-globin genes, as functional binding sites of the Eos protein. A chromosome conformation capture (3C) assay indicated that Eos may repress the interaction between the LCR and the γ-globin gene promoter. In addition, erythroid differentiation was inhibited by enforced expression of Eos in K562 cells and CD34+ HPCs.

Conclusions/Significance
Our results demonstrate that Eos plays an important role in the transcriptional regulation of the γ-globin gene during erythroid differentiation.

Introduction
The human β-globin locus consists of five functional globin genes (ε, Gγ, Aγ, δ, and β) within a 70 kb domain. During development, expression of these genes displays two switches: the embryonic (ε-) to fetal (Gγ- and Aγ-) globin switching, coinciding with the transition from yolk sac to fetal liver, and the fetal to adult (β-) globin switching, occurring near the parturient period with the establishment of bone marrow as the main site of hematopoiesis [1,2]. During erythroid differentiation the γ- to β-globin gene switching is also displayed, and it is called "compressed switching" [3]. The precise developmental program of human β-like globin gene expression is governed by a diverse array of regulatory mechanisms. Sequences within or immediately flanking globin genes control expression in tissue-specific or temporal patterns. High-level globin expression is directed by the locus control region (LCR), a set of key regulatory sequences 6-20 kb upstream of the ε-globin gene that are characterized by the presence of five 5′ DNase I hypersensitivity sites (HSs) [4]. Preferential interactions between the LCR and individual globin promoters during distinct developmental stages are pivotal to the strict regulation of globin gene expression. These interactions are mediated by erythroid tissue-restricted and ubiquitous transcription factors. Because fetal γ-globin gene reactivation in adults has potential as an effective therapy for sickle cell anemia and β-thalassemia [5], the detailed characterization of γ-globin gene regulation mechanisms is particularly significant. Several studies have reported transcriptional activation of the γ-globin gene by FKLF [6], FKLF2 [7], NF-E4 [8] and NF-Y [9].
However, repressors also play a critical role during γ- to β-globin switching. The repressors BCL11A [10], Ikaros [11], GATA-1 [12], the orphan nuclear receptors TR2 and TR4 [13], and NF-E3/COUP-TFII [14] have been associated with human γ-globin gene silencing. Despite avid research regarding γ-globin gene regulation, the mechanisms responsible for γ-globin gene silencing are not fully understood. Eos, also known as IKZF4, is a member of the zinc finger transcription factor Ikaros family, characterized by the presence of four DNA-binding N-terminal zinc fingers and two C-terminal zinc fingers required for homo- and heterodimerization with other Ikaros family members [15]. The Ikaros family of genes consists of several members: Ikaros (IKZF1), Aiolos (IKZF3), Helios (IKZF2), Eos (IKZF4) and Pegasus (IKZF5). The Ikaros family of transcription factors acts as key repressors of transcription during the development and function of lymphocytes [16-19]. Ikaros is involved in regulation of human β-like globin gene switching by binding to critical cis elements implicated in the gene switching and facilitating long-distance DNA looping between the LCR and a region upstream of δ-globin. When the DNA-binding region of Ikaros is disrupted by a point mutation in plastic mice, concomitant marked downregulation of β-globin expression and upregulation of γ-globin expression are observed [20]. Eos is a 585 amino acid, highly conserved zinc finger transcription factor that binds typical WGGGAAT Ikaros recognition sites in DNA and functions as a transcriptional repressor (Figure S1) [21]. Eos may also play an important role in the development of the central and peripheral nervous systems [22,23]. Eos can self-associate, form heterodimers with other Ikaros family members, or interact with C-terminal binding protein (CtBP2), PU.1, or microphthalmia-associated transcription factor (MITF) to repress transcription of the cathepsin K and tartrate-resistant acid phosphatase (TRAP) promoters [21,24]. Eos is expressed at low levels in kidney, thymus, liver and heart and at high levels in skeletal muscle [23]. Eos mediates Foxp3-dependent gene silencing in CD4+ regulatory T cells by interacting directly with Foxp3 and inducing chromatin modifications that result in gene silencing [19]. Although it is known that the Eos protein is expressed in lymphocytes and is implicated in the control of lymphoid cell development, the function of Eos in the regulation of other haemopoietic lineages has not been addressed. In this study, we examined the effects of Eos on human globin gene regulation and demonstrated its important role in γ-globin gene regulation during erythroid differentiation.

Results

Eos represses γ-globin gene expression in K562 cells during erythroid differentiation
Hemin-induced erythroid differentiation of K562 cells was evaluated using the benzidine cytochemical test. Western blot and quantitative real-time PCR indicated that the protein (Figure 1A) and mRNA (Figure 1B) expression of Eos gradually decreased during hemin-induced erythroid differentiation. Conversely, a substantial increase in γ-globin expression was observed during erythroid differentiation of K562 cells (Figure 1A and 1C). The reciprocal association of Eos and γ-globin gene expression following hemin induction in K562 cells supports the hypothesis that the Eos protein might repress γ-globin expression.
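The paper states only that qPCR levels were normalized to GAPDH or β-actin; one common way to compute such relative levels is the 2^-ΔΔCt method, sketched below in Python under that assumption. The Ct values are invented for illustration and do not come from the study.

def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change of a target gene versus a calibrator sample, normalized to a reference gene."""
    delta_sample = ct_target - ct_ref               # delta-Ct in the treated sample
    delta_calibrator = ct_target_cal - ct_ref_cal   # delta-Ct in the calibrator (e.g. day 0)
    return 2 ** -(delta_sample - delta_calibrator)  # 2^-(delta-delta-Ct)

# Hypothetical Ct values: gamma-globin mRNA rising as Eos falls after hemin induction
fold = relative_expression(ct_target=21.0, ct_ref=16.0, ct_target_cal=24.0, ct_ref_cal=16.0)
print(f"gamma-globin relative expression: {fold:.1f}-fold")  # about 8-fold up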
To study the effect of Eos on γ-globin gene expression, K562 cells were transfected with an Eos expression plasmid (pcDNA3.1-Eos), and overexpression of Eos was confirmed by Western blotting (Figure 1D). Northern blotting revealed that overexpression of Eos significantly downregulated transcription of the γ-globin gene but had little effect on transcription of the α- and ε-globin genes in K562 cells before and after erythroid differentiation (Figure 1E). Quantitative real-time PCR results were consistent with the Northern blotting (Figure 1F and 1G). These results support a specific repressive function of Eos on the human γ-globin gene.

Eos represses γ-globin gene expression in stable µLCRAγψβδβ/GM979 transformants
Because the β-globin gene is not expressed in K562 cells, the effect of Eos on human β-globin cannot be examined in K562 cells. Thus, we used stable MEL GM979 transformants carrying an integrated human β-globin gene cluster. GM979, a MEL cell line which expresses both murine embryonic and adult globins, is an appropriate model system to study human globin gene expression [25]. The human Eos protein is not detected by a human Eos antibody in GM979 cells (Figure S2). A linearized cosmid construct, µLCRAγψβδβ (Figure 2A), which contained a 3.1 kb µLCR cassette (a subset consisting of the core sequences of four of the DNase I hypersensitive sites) linked to a 29 kb fragment spanning the human Aγ- to β-globin genes in their natural chromosomal arrangement, has been demonstrated to direct correct developmental expression of the human globin genes in transgenic mice [12]. GM979 cells were cotransfected with the linearized cosmid construct µLCRAγψβδβ and the pTKneo plasmid by electroporation. Stably transformed cells were selected in medium containing G418. Stable µLCRAγψβδβ/GM979 transformants were subsequently transfected with pcDNA3.1-Eos or the control vector (pcDNA3.1), respectively. Eos expression was then analyzed by Western blot (Figure 2B). The levels of human Aγ- and β-globin transcripts, as well as endogenous murine α-globin transcripts, were measured by Northern blot (Figure 2C) and quantitative real-time PCR (Figure 2D). A significant decrease in transcription of human γ-globin was observed in µLCRAγψβδβ/GM979 cells overexpressing Eos, whereas human β-globin and murine α-globin transcripts were not significantly affected. These results further supported a specific repressive function of Eos on the expression of the human γ-globin gene.

Identification and validation of functional Eos binding sites within the human β-globin gene cluster
To investigate whether Eos represses γ-globin expression by direct association with the human β-globin locus, we searched the human β-globin locus for matches to the Eos binding motif (WGGGAAT). Thirty-two putative Eos binding sites were identified in the β-globin locus (Figure 3A). Chromatin immunoprecipitation (ChIP) was performed using an anti-Eos antibody. Using DNA fragments precipitated with anti-Eos as templates, twenty-eight pairs of primers were designed to amplify the regions containing each of the putative Eos binding sites (Table S1). Of these thirty-two putative Eos binding sites in the human β-globin cluster, only three discrete regions, located in HS3 of the LCR and the promoter regions of the Gγ- and Aγ-globin genes, were confirmed to be occupied by the Eos protein (Figure 3B).
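The motif search just described can be reproduced in a few lines: scan both strands for WGGGAAT, where W is the IUPAC code for A or T. The sketch below is a minimal illustration on a made-up fragment; the actual search would run over the ~70 kb β-globin locus sequence loaded from a FASTA file, and the paper does not specify the exact tool used.

import re

MOTIF = re.compile(r"[AT]GGGAAT")  # WGGGAAT, with W = A or T (IUPAC)
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def scan_both_strands(seq):
    """Yield (forward-strand start, strand, matched sequence) for every motif hit."""
    for m in MOTIF.finditer(seq):
        yield m.start(), "+", m.group()
    rc = seq.translate(COMPLEMENT)[::-1]  # reverse complement
    for m in MOTIF.finditer(rc):
        yield len(seq) - m.end(), "-", m.group()

toy_locus = "CCATGGGAATTCGATATTCCCATGCAAGGGAATT"  # invented fragment, not the real locus
for pos, strand, hit in scan_both_strands(toy_locus):
    print(pos, strand, hit)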
A significant reduction of Eos binding at these three sites was observed in K562 cells following 48 h of hemin induction, compared with uninduced cells, when the immunoprecipitated DNA was quantified by real-time PCR and compared with the relevant input DNA (Figure 3C). This is consistent with the observed decrease in Eos protein and mRNA expression following hemin induction of K562 cells (Figure 1A and Figure 1B). To investigate the effect of Eos on the expression of the γ-globin gene, a series of dual-luciferase reporter assays were performed in K562 cells. First, a recombinant plasmid including the 1.4-kb γ-globin promoter (pGL3-basic −1383/+49 Gγ/Luc) was constructed and cotransfected with various concentrations of the Eos expression vector (pcDNA3.1-Eos). Dual-luciferase reporter assays indicated that Eos repressed γ-globin promoter activity in a dose-dependent manner (Figure 4A). When 1 µg of the pcDNA3.1-Eos plasmid was used in luciferase reporter assays, the luciferase activity of pGL3-basic −1383/+49 Gγ/Luc was reduced to about 50% of the activity observed in the absence of the pcDNA3.1-Eos vector. To localize the precise silencing elements bound by Eos in the γ-globin promoter, a series of truncated γ-globin promoters (−1383/+49, −998/+49, −864/+49, and −562/+49), each linked to a luciferase reporter gene, were cotransfected with pcDNA3.1-Eos or the pcDNA3.1 empty vector, respectively. Deletion analyses revealed that the region between −998 and −864 of the promoter was responsible for the negative effect of Eos overexpression on γ-globin promoter activity (Figure 4B). Mutants containing −1383 to +49 or −998 to +49 of the γ-globin promoter with a mutation in the Eos binding motif (at approximately position −930) were analyzed by luciferase reporter assay in K562 cells, with cotransfected pcDNA3.1-Eos or pcDNA3.1. When the Eos binding motif was mutated, overexpression of Eos did not repress γ-globin promoter activity compared with the control (Figure 4C). We generated a pGL3-basic µLCR-γ-globin promoter luciferase reporter construct by fusing the µLCR to the 1.4-kb γ-globin promoter upstream of the luciferase reporter gene in the pGL3-basic vector (Figure 4D). We also generated a series of mutants, including single mutations in the Eos binding site of the LCR or the γ-globin promoter and dual mutations in both of these Eos binding sites. The relative luciferase activity in K562 cells transfected with the reporter construct containing the µLCR was 6.5 times greater than that in K562 cells transfected with the reporter construct without the µLCR, and cotransfection with pcDNA3.1-Eos reduced luciferase activity significantly. The single mutation of the Eos binding site in the LCR partially restored luciferase activity of the LCR-γ-globin promoter construct. The single mutation of the Eos binding site in the γ-globin promoter significantly rescued luciferase activity. Mutations in both of these sites resulted in a near complete restoration of the luciferase activity. These results suggest that these sites are functional Eos binding sites required for the repressive effect of Eos on the γ-globin gene.
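Dual-luciferase data of the kind reported above are conventionally reduced to a ratio of firefly (experimental) to Renilla (pRL-TK transfection control) signal, then expressed relative to the empty-vector control. The sketch below assumes that standard normalization; the raw readings are invented for illustration and are not the study's measurements.

def normalized_activity(firefly, renilla):
    """Correct the experimental firefly signal for transfection efficiency."""
    return firefly / renilla

samples = {
    "Ggamma promoter + pcDNA3.1":     normalized_activity(52000, 1300),
    "Ggamma promoter + pcDNA3.1-Eos": normalized_activity(26500, 1320),
}
control = samples["Ggamma promoter + pcDNA3.1"]
for name, value in samples.items():
    print(f"{name}: {value / control:.2f} of control")  # Eos roughly 0.50, cf. Figure 4A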
The interaction between the LCR and the γ-globin promoter is inhibited by Eos
To ascertain the mechanisms by which Eos inhibits γ-globin gene transcription, we performed a chromosome conformation capture (3C) assay in the presence or absence of enforced Eos expression to test whether Eos affects the interaction between the LCR and the γ-globin promoter in K562 cells. The restriction enzyme XbaI was used, and an XbaI-digested DNA fragment that includes LCR HS2/3/4 was selected as the fixed region, herein referred to as fragment 1. The relative cross-linking efficiency between fragment 1 and the other XbaI fragments was then measured by quantitative real-time PCR. Fragment 1 exhibited significantly higher relative cross-linking efficiencies with fragment 6, which includes the Gγ-globin promoter, and fragment 7, which includes the Aγ-globin promoter, than with the other fragments (Figure 5), consistent with the predominant expression of the γ-globin gene compared with the other globin genes in K562 cells. In the presence of enforced Eos, a significant decrease in the interaction between the LCR and the γ-globin promoter was detected both before and after hemin induction (Figure 5). The repressive effect of Eos on γ-globin transcription may be partially attributed to the inhibitory effect of Eos on the formation of a physical and functional link between the LCR and the γ-globin promoter.
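A minimal sketch of the 3C quantification follows, based on the normalization described later in Materials and Methods: each ligation-junction qPCR signal is divided by the signal for the same junction in the random-ligation control template, yielding a relative cross-linking efficiency. All quantities below are invented for illustration.

# (fixed fragment, test fragment) -> (amount in 3C sample, amount in control template)
junctions = {
    ("frag1", "frag2"): (12.0, 110.0),
    ("frag1", "frag6"): (95.0, 100.0),   # LCR to Ggamma promoter: strong contact
    ("frag1", "frag7"): (88.0, 105.0),   # LCR to Agamma promoter: strong contact
    ("frag1", "frag9"): (9.0, 120.0),
}

for pair, (sample_amt, control_amt) in junctions.items():
    efficiency = sample_amt / control_amt  # normalizes out primer-pair efficiency
    print(pair, f"relative cross-linking efficiency = {efficiency:.2f}")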
Eos overexpression reduces γ-globin gene expression in CD34+ HPCs derived from umbilical cord blood
Human CD34+ HPCs were isolated from human umbilical cord blood (UCB) and induced toward erythroid differentiation using Epo. The Eos mRNA level decreased gradually during Epo-induced erythroid differentiation of CD34+ HPCs, as measured by quantitative real-time PCR (Figure 6A). CD34+ HPCs were infected with the control lentivirus (lenti-control) or the Eos-overexpressing lentivirus (lenti-Eos), respectively. A high lentivirus transduction efficiency of the CD34+ HPCs was observed through GFP expression (data not shown), and overexpression of Eos mRNA in CD34+ HPCs infected with lenti-Eos was confirmed by conventional RT-PCR (Figure 6B) and quantitative real-time PCR (Figure 6C). Globin mRNA levels in lentivirus-infected CD34+ HPCs at days 3, 7, 11, and 15 of erythroid differentiation were also analyzed by quantitative real-time PCR (Figure 6D). Compared with untransfected and lenti-control-infected CD34+ HPCs, a significant reduction of γ-globin, but not β-globin, gene expression was observed in CD34+ HPCs infected with lenti-Eos at each time point of erythroid differentiation (Figure 7A). This suggested that enforced Eos expression specifically and continuously inhibited γ-globin gene expression. Eos appeared to have a minimal effect on γ-globin gene expression prior to Epo induction. This is probably because CD34+ HPCs are a mixture of cells including HSCs and various progenitors, and γ-globin mRNA has minimal expression in CD34+ HPCs. Additionally, after 7 days of erythroid induction culture, when the γ- to β-globin switching appears, a slight increase in β-globin gene expression and an obvious decrease in γ-globin gene expression were detected in lenti-Eos-infected cells compared with controls. Concomitantly, lenti-Eos-infected cells exhibited a slightly more rapid reduction in the ratio of γ- to [γ+β]-globin mRNA compared with controls (Figure 7B).

Enforced expression of Eos inhibits erythroid differentiation
To examine the role of Eos in erythroid differentiation, untransfected K562 cells and K562 cells transfected with pcDNA3.1 or pcDNA3.1-Eos were induced with hemin. Erythroid differentiation was scored by benzidine staining; overexpression of Eos reduced the proportion of benzidine-positive cells (Figure 8B). In accordance with the benzidine staining results, lower expression levels of CD71 (Figure 8C) and CD235a (Figure 8D) were observed in K562 cells transfected with pcDNA3.1-Eos compared with cells transfected with the control vector when the cells were induced with hemin for 48 h. CD235a was also examined by quantitative real-time PCR and flow cytometric analysis during Epo-induced erythroid differentiation of CD34+ HPCs. Cells infected with lenti-Eos exhibited a reduction in CD235a at each time point during Epo induction compared with controls (Figure 8E). Flow cytometric analysis also demonstrated decreased expression of CD235a in the erythroid induction culture of CD34+ HPCs infected with lenti-Eos compared with cells infected with lenti-control at day 15 after Epo induction (Figure 8F). These results suggest that overexpression of Eos inhibits erythroid differentiation.

Discussion
In this study, we identified Eos as a repressor of the γ-globin gene during erythroid differentiation in K562 cells and in CD34+ HPCs. Previous reports have demonstrated that several genes negatively regulate γ-globin gene expression, whereas stem cell factor (SCF) induces γ-globin gene expression by decreasing COUP-TFII expression [14]. Cohen-Barak et al. reported that Sox6 binds to the εy-globin gene promoter and represses gene transcription in mice [26]. BCL11A, a multi-zinc finger transcription factor, was originally linked to γ-globin levels in humans by a genome-wide association strategy [27]. The knockdown of BCL11A with small interfering RNA resulted in an increase in γ-globin without affecting the expression of other erythroid-specific proteins, such as GATA-1, FOG-1, NF-E2, or EKLF. BCL11A was significantly recruited at HS3 of the LCR and at two sites between the Aγ- and δ-globin genes that were previously implicated in developmental silencing of the γ-globin gene [10]. As a member of the Ikaros family, the Ikaros protein is recruited to the human β-globin locus and targets the histone deacetylase HDAC1 and the chromatin remodeling protein Mi-2 to the human γ-globin gene promoters, thereby contributing to γ-globin gene silencing at the time when the γ- to β-globin gene switching occurs [11]. Because a high level of Eos protein expression was detected in K562 cells (Figure S2), and K562 cells have been widely used as a model for the study of globin gene regulation and erythroid differentiation, we examined the effects and mechanisms of Eos on globin gene expression in K562 cells. Since K562 cells do not endogenously express β-globin, we also used GM979 cells stably transformed with the human β-globin gene cluster (µLCRAγψβδβ/GM979) and erythroid induction cultures of CD34+ HPCs to examine the effects of Eos on globin gene expression. A specific, negative regulatory effect of Eos on the γ-globin gene was demonstrated in all three experimental systems. ChIP-PCR indicated a reduction in Eos protein bound to the three positive Eos binding sites in the β-globin cluster in hemin-induced K562 cells compared with uninduced K562 cells (Figure 3B).
This phenomenon could be related to the gradual decrease in Eos protein during erythroid differentiation, and it is consistent with the increase in γ-globin gene expression owing to a decrease in Eos repression following hemin induction. Promoter truncation analyses suggested the presence of a silencing element between −998 and −864 of the γ-globin promoter that could bind the Eos protein. Mutation (TCCC to GAAA) of the Eos binding motif from −929 to −933 in the 1.4-kb γ-globin promoter resulted in a nearly complete restoration of luciferase activity in K562 cells expressing the reporter construct and overexpressing Eos. The LCR's influence on γ-globin gene expression was displayed by dual-luciferase reporter assay using the pGL3-basic µLCR-γ-globin promoter luciferase reporter construct (Figure 4D). The enforced expression of Eos significantly decreased the luciferase activity. Either a mutation in the Eos binding site in the LCR or a mutation in the binding site in the γ-globin promoter region resulted in a partial restoration of the luciferase activity. Mutations in both sites resulted in a near complete restoration of the luciferase activity. From the dual-luciferase reporter assays, we speculated that Eos bound to these sites is involved in the formation of a repression complex with other proteins, and that this repression complex also reduces the interaction between the LCR and the γ-globin promoter. We also examined the expression of several transcription factors (BCL11A, Ikaros, TR2, TR4, NF-E3, GATA-1, EKLF and FKLF) that had been reported to play important roles in γ-globin gene regulation, before and after hemin induction of K562 cells. Real-time PCR did not detect significant changes in the expression of these transcription factors when Eos was overexpressed in K562 cells (Figure S3). These results suggest that γ-globin gene regulation by Eos is not a consequence of Eos-induced changes in the expression of these transcription factors. Our 3C assay revealed that HS4, HS3, and HS2 in the LCR function as a whole to interact with the γ-globin promoter and regulate gene expression. This is consistent with the previous finding that individual HS core elements interact with the trans-acting factors bound to the HSs to form a higher-order structure referred to as the LCR "holocomplex" [28]. The repressive effect of Eos on the interaction between the LCR and the γ-globin promoter in K562 cells is more noticeable after hemin induction than before hemin induction (Figure 5). This might be an indirect consequence of the influence of Eos on K562 cell differentiation. Our results demonstrated that enforced Eos expression reduced the proportion of benzidine-positive cells compared with the control during erythroid differentiation of hemin-induced K562 cells. This was accompanied by decreased expression of CD235a and CD71 (Figure 8), suggesting that Eos may inhibit erythroid differentiation of K562 cells. Increased γ-globin gene expression is an indicator of erythroid differentiation of hemin-induced K562 cells and Epo-induced CD34+ hematopoietic stem/progenitor cells, so the effects of Eos on erythroid differentiation and on γ-globin gene regulation occur concurrently. However, enforced Eos expression did not significantly reduce transcription of the other globin genes during erythroid differentiation in either K562 cells or CD34+ HPCs, which demonstrates a specific negative regulatory effect of Eos on the γ-globin gene.
These results also suggest that the increase in γ-globin expression during erythroid differentiation is not merely a consequence of the effect of Eos on erythroid differentiation. There may be distinct mechanisms underlying the repressive effects of Eos on erythroid differentiation and on γ-globin gene expression. In this study, we mainly examined the mechanisms by which Eos regulates γ-globin gene transcription. The mammalian β-globin gene locus is a very well-characterized model system for studying long-range chromosomal interactions during erythropoiesis. The LCR is the major structural component of the human β-globin locus, and it is required for high-level globin gene transcription [29]. The human β-globin LCR contains binding sites for several transcription factors, including NF-E2, EKLF, GATA-1, and Sp1 [30]. Other reports strongly suggested that contacts between the LCR and the various genes of the β-globin locus are developmentally controlled and are required for the LCR to influence the expression rates of individual globin genes [31,32]. The results of our 3C assay indicated that Eos regulates γ-globin gene expression by inhibiting the interaction between the LCR and the γ-globin promoter (Figure 5). Keys and colleagues examined the role of Ikaros in the assembly of the human β-globin active chromatin hub and subsequent globin gene transcription [20]. Ikaros was involved in human globin gene switching through Ikaros-Eos heterodimers or homodimers [20]. In this study we measured the kinetics of γ- and β-globin gene expression during Epo-induced erythroid maturation of CD34+ HPCs. As shown in Figure 7A, the levels of γ-globin mRNA exhibited marked decreases overall, and the β-globin mRNA levels exhibited slight increases, in the lenti-Eos-infected CD34+ HPCs after day 7 of erythroid induction culture, when the conversion of γ- to β-globin gene expression occurs. At the same time, the decline in the ratio of γ- to [γ+β]-globin mRNAs also appeared to be slightly more rapid in the Eos-virus-infected cells compared with controls (Figure 7B). These results suggest that enforced Eos expression had a minor effect on γ- to β-globin switching during erythroid differentiation of HPCs. In conclusion, the present study suggests that Eos contributes significantly to the transcriptional regulation of γ-globin during erythroid differentiation of K562 cells and UCB-derived CD34+ HPCs.

Materials and Methods

Cell lines, cell culture, and erythroid induction of K562 cells

RNA isolation and quantitative real-time PCR analysis
Total RNA was isolated from harvested cells using TRIzol reagent (Invitrogen) according to the manufacturer's instructions. RNA was reverse-transcribed to cDNA using the M-MLV reverse transcription system (Invitrogen). Quantitative real-time PCR was performed using an iQ5 Real-Time PCR Detection System (Bio-Rad, California, USA) and the SYBR Premix Ex Taq kit (Takara, Dalian, P. R. China). Primers used in quantitative real-time PCR are listed in Table S2.

Plasmid construction
The promoter region of the γ-globin gene (−1383 to +49 relative to the transcription start site) was amplified from human genomic DNA and cloned into the luciferase reporter vector pGL3-basic (Promega, Madison, WI, USA). A series of truncated γ-globin promoter regions, including −562 to +49, −864 to +49, and −998 to +49, were cloned into the pGL3-basic vector as described previously [33].
The 3.1-kb micro-LCR (µLCR) sequence was amplified by PCR from the cosmid construct µLCRAγψβδβ [34] and inserted upstream of the γ-globin promoter in the pGL3-basic plasmid. Mutations in the putative Eos binding sequences of the reporter constructs were introduced using PCR-based site-directed mutagenesis. The bases TTTC replaced GGGA in the LCR region, and GAAA replaced TCCC at approximately position −930 of the γ-globin gene promoter (Figure 3A). All plasmids were prepared using the Plasmid Maxi Kit (Qiagen, CA, USA). All constructs were sequence-verified.

Northern blot and Western blot
Northern blot analysis of globin mRNAs was performed as described previously [35]. Briefly, T4 polynucleotide kinase and γ-32P ATP were used to 5′ end-label ssDNA probes (NEB). Probe sequences are listed in Table S3. Western blot analysis was performed as described previously [24]. The following primary antibodies were used: anti-Eos (Santa Cruz Biotechnology, Inc., CA, USA), anti-γ-globin (Santa Cruz), anti-β-actin (Proteintech Group Inc., Chicago, IL), and anti-GAPDH (Proteintech). HRP-conjugated secondary antibodies were used. Immunoblots were quantified using AlphaEaseFC software.

Cell transfection and luciferase reporter assay
GM979 cells were stably transformed with µLCRAγψβδβ as described previously [36]. Briefly, 2 × 10^7 cells were cotransfected with the linearized cosmid µLCRAγψβδβ and the linearized plasmid pTKneo in HEPES-buffered saline by electroporation at 250 V and 960 µF. Stable GM979 transfectants were selected in medium containing 130 µg/mL G418 for 2 weeks. For the dual-luciferase reporter assay, K562 cells were seeded in 24-well plates and cotransfected with plasmid pcDNA3.1-Eos or pcDNA3.1 and the luciferase reporter plasmids (the pGL3-basic-based construct and the pRL-TK plasmid), respectively, using Lipofectamine LTX reagent (Invitrogen) according to the manufacturer's instructions. The transfection medium was replaced with complete medium after 6 h, and cells were cultured for 48 h. Cells were then lysed using Passive Lysis Buffer (Promega), and luciferase activities were measured with a Modulus Microplate Luminometer (Turner Biosystems, CA, USA) using the Dual-Luciferase Reporter Assay System (Promega) according to the manufacturer's instructions.

Chromatin immunoprecipitation-PCR (ChIP-PCR)
Chromatin immunoprecipitation (ChIP) assays were performed essentially as previously reported [37]. Briefly, uninduced and hemin-induced K562 cells were harvested, fixed in 1% formaldehyde (Sigma-Aldrich, Deisenhofen, Germany) at room temperature for 10 min, and quenched for 5 min with glycine. Cells were lysed and sonicated to obtain chromatin fragments of approximately 500-1000 bp in length. ChIP was performed using the EZ-ChIP Chromatin Immunoprecipitation Kit (Millipore, MA, USA) with minor modifications to the manufacturer's instructions. A rabbit polyclonal anti-Eos antibody (Santa Cruz) was used as the immunoprecipitating antibody, and rabbit IgG (Santa Cruz) was used as the control. Input and immunoprecipitated DNA were amplified by PCR using the primers listed in Table S1. Quantitative real-time PCR was performed as described previously [38,39]. Immunoprecipitated DNA was amplified using SYBR green dye on a Bio-Rad iQ5 Real-Time PCR Detection System, and experimental PCR products were quantified by comparison with the PCR products of a dilution series of the relevant input DNA.
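The final step above, quantifying ChIP products against a dilution series of input DNA, amounts to fitting a standard curve of Ct against log10(input amount) and reading samples off it. The sketch below illustrates this under the assumption of a simple linear fit; all Ct values are invented and do not come from the study.

import math

standards = [(1.0, 22.0), (0.1, 25.3), (0.01, 28.7)]  # (relative input amount, measured Ct)

xs = [math.log10(amount) for amount, _ in standards]
ys = [ct for _, ct in standards]
n = len(xs)
sxy = sum(x * y for x, y in zip(xs, ys))
sx, sy, sxx = sum(xs), sum(ys), sum(x * x for x in xs)
slope = (n * sxy - sx * sy) / (n * sxx - sx ** 2)   # least-squares slope of the curve
intercept = (sy - slope * sx) / n

def amount_from_ct(ct):
    """Convert a Ct value back to a relative DNA amount via the standard curve."""
    return 10 ** ((ct - intercept) / slope)

ip_ct, igg_ct = 26.1, 30.4  # hypothetical anti-Eos ChIP vs IgG control Ct values
print("anti-Eos enrichment over IgG:", round(amount_from_ct(ip_ct) / amount_from_ct(igg_ct), 1))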
Chromosome conformation capture (3C) assay
The 3C assay was performed as described previously [40,41] with minor modifications. K562 cells (1 × 10^8) were harvested, crosslinked with 1% formaldehyde at room temperature for 10 min, and quenched with glycine at a final concentration of 0.125 M. After the cells were lysed, SDS was added to a final concentration of 0.1%, and the reaction was incubated at 37°C for 10 min. Triton X-100 was then added to 1%. DNA was digested with XbaI (NEB) overnight at 37°C. The restriction enzyme was inactivated by the addition of 1.6% SDS, and the digested DNA was incubated at 65°C for 20 min. The reaction was diluted to 2.5 ng/µL DNA, and Triton X-100 was added to 1%. DNA was ligated for 4-5 h at 16°C using T4 ligase (NEB). Cross-links were reversed by overnight incubation with 5 µg/mL proteinase K at 65°C. DNA was purified by phenol-chloroform extraction and ethanol precipitation. DNA concentrations were measured using a NanoDrop 2000 Spectrophotometer (Thermo Fisher Scientific Inc., Bremen, Germany) and diluted for quantitative real-time PCR. To generate a control template with detectable amounts of randomly ligated DNA fragments, PCR fragments from the ligation products, spanning each restriction site, were purified from agarose gel to enrich for the ligation products of interest. Equimolar concentrations of the PCR fragments were mixed, digested with XbaI, and ligated. The ligated fragments were purified by phenol extraction and ethanol precipitation and diluted to an appropriate concentration. Purified fragments (300 ng) were mixed with the same amount of digested and randomly ligated genomic DNA. This mixture was used as the control sample for quantitative real-time PCR. The experimental PCR products were quantified by comparison with the PCR products of the relevant control. The 3C primers used in this study are listed in Table S4.

Assay of erythroid differentiation: benzidine staining and flow cytometry
Erythroid differentiation of K562 cells was scored by benzidine staining as reported previously [42]. Flow cytometry was carried out according to standard protocols, and samples were analyzed using a C6 flow cytometer (Accuri Cytometers, MI, USA).

Lentivirus production and infection
The expression plasmid pWPXL-Eos was cotransfected with packaging plasmids into 293TN cells using Lipofectamine with Plus Reagent (Invitrogen). The packaging kit was purchased from System Biosciences and used according to the manufacturer's instructions. Recombinant lentivirus particles (i.e., lenti-Eos and lenti-control) were harvested and added to the medium of CD34+ HPCs in culture. Lentivirus-infected CD34+ HPCs were washed with PBS and induced toward erythroid differentiation with Epo for 2 weeks.

Statistics
Each set of experiments was performed at least in triplicate, and standard error values were calculated. Data were analyzed using Student's two-tailed t-test, and P-values less than 0.05 were considered significant.
Pulmonary Sarcoidosis following Etanercept Treatment

Tumour necrosis factor (TNF) is an important cytokine involved in the pathology of a number of inflammatory conditions, and thus its blockade with anti-TNF therapies is becoming a cornerstone of managing such diseases. With increasing use, evidence is accumulating for an association between the initiation of anti-TNF-α therapy and the development of sarcoid-like granulomatous disease, with disease reversal after discontinuation.

Introduction
We report the case of a 37-year-old married Pakistani-born woman, resident in the UK since the age of 19, who developed pulmonary sarcoidosis on treatment with etanercept for psoriatic arthritis. She first presented in 2009 with dactylitis affecting her left foot during the postpartum period. She later went on to develop psoriatic plaques over her limbs, scalp, and trunk with worsening small- and large-joint arthritis. Her psoriatic arthritis was treated first with sulphasalazine. Due to side effects, her treatment was changed to methotrexate, with the later addition of leflunomide. Despite full-dose combination DMARD therapy and maintenance low-dose oral prednisone (5-10 mg/day), her disease remained active, requiring intramuscular and, on occasion, intravenous pulsed methylprednisolone to achieve even indifferent control and allow her to cope with looking after her 3 young children (PsARC score: tender joints (TJ) 5, swollen joints (SJ) 6, physician global (PhG) 4/5, patient global (PtG) 4/5; DAS28 = 6.12; CRP 40 mg/L). Following screening, including a full history of potential exposure to tuberculosis (TB) and a normal baseline chest X-ray, she was commenced on etanercept 50 mg weekly subcutaneously in April 2010, and leflunomide was discontinued. She made a prompt response, achieving near joint and skin remission (PsARC at three months: TJ 0, SJ 2, PhG 1, PtG 2; CRP 1 mg/L; DAS28 = 2.03), and her maintenance prednisolone was phased out over the subsequent 3 months. She discontinued methotrexate on her own initiative 4 months after starting etanercept. She remained well until seven months into her etanercept treatment, when she presented to clinic with a 3-week history of persistent dry cough and mild exertional breathlessness, having just returned from a 5-week stay in Pakistan. High-resolution computed tomography showed multiple small nodules throughout the lungs, a few peribronchovascular nodules, and multiple enlarged mediastinal and hilar lymph nodes (Figures 2(a)-2(b)). Bronchoscopy revealed the presence of endobronchial nodes bilaterally; two transbronchial biopsies were taken from the right lower lobe with the aid of screening. Histological analysis showed the presence of noncaseating granulomata with a small number of surrounding lymphocytes. Special stains showed no identifiable acid-fast bacilli on Ziehl-Neelsen staining and no fungal elements on PAS or Grocott staining (Figures 3(a)-3(c)). These investigations therefore make a diagnosis of miliary TB highly unlikely and are thus suggestive of sarcoidosis given her clinical picture. Etanercept was discontinued and prednisolone commenced.

Discussion
Tumour necrosis factor-α (TNF-α) is produced by a number of inflammatory cells, such as macrophages, and is implicated in the pathogenesis of granulomatous inflammation; blockade of TNF-α thus offers a potential avenue for targeted therapy.
However, a series of case reports worldwide has described the development of sarcoid-like granulomatous disease after the initiation of anti-TNF-α therapy, with disease reversal after discontinuation. A possible mechanism for this association is that anti-TNF-α therapies modulate the CD4+ Th1 cytokine response that is key to the immunopathogenesis of sarcoidosis. CD4+ T cells interact with antigen-presenting cells, which initiate the formation and maintenance of granulomas, resulting in differentiation of selective Th1 cells secreting IFN-γ and IL-2 [1,2]. In the chronic state, TNF-α, IL-12, and IL-18 are the main cytokines produced. These cytokines are key in driving the Th1 commitment in the granulomatous process. Therefore, blockade of TNF-α should have a therapeutic effect on sarcoidosis [1]. The immunopathogenesis remains poorly understood; a possible explanation is that, under TNF-α blockade, there is overproduction of other cytokines that play a crucial role in granuloma formation. Etanercept, a soluble TNF-α receptor fusion protein, is thought to enhance T-cell production of IFN-γ, a key cytokine in the formation of granulomas in the acute stages of sarcoidosis [1,3]. A number of studies have assessed anti-TNF-α agents in the treatment of sarcoidosis, but their role remains questionable. A study using etanercept for stage II or III pulmonary sarcoidosis in seventeen patients was terminated early due to treatment failure when compared with conventional corticosteroid therapy [4]. Likewise, a double-blind randomised controlled study in eighteen patients with methotrexate-resistant, corticosteroid-dependent ongoing ocular sarcoidosis showed a lack of steroid-sparing effect and a failure of global ophthalmological improvement [5]. These two studies using etanercept have failed to show treatment benefit in patients with progressive or methotrexate-resistant sarcoidosis. Our patient continues to improve on a reducing regimen of oral corticosteroid therapy, with disappearance of symptoms and resolution of pulmonary nodulosis. However, the question remains of what to do next should the inflammatory arthritis become active again. There are limited data available regarding treatment options in such patients; with what we know, is rechallenging with another anti-TNF-α agent the right thing to do? In contrast, two retrospective series reported symptomatic improvement with infliximab in patients with chronic extrapulmonary disease (lupus pernio, uveitis, neurosarcoidosis) refractory to oral corticosteroid therapy, or in patients who had not responded to etanercept [6]. However, a smaller study showed no difference in primary endpoints when infliximab was used in patients with biopsy-proven stage II-IV pulmonary sarcoidosis who had a suboptimal response or intolerance to oral corticosteroid therapy (minimum of 3 months' treatment) [4]. In spite of this, a number of case reports to date highlight the unexpected development of sarcoidosis following treatment with anti-TNF-α agents; thus, their role in granulomatous conditions is controversial. Consequently, corticosteroids remain the cornerstone of treatment in patients requiring systemic therapy. Again, although the evidence is limited, leflunomide has been reported as an efficacious alternative therapy in the treatment of sarcoidosis, offering a possible option for patients in whom sarcoidosis has developed following anti-TNF-α therapy.
Unfortunately, our patient's psoriatic arthropathy had already failed treatment with leflunomide.

Conclusion
The available evidence remains too limited to draw firm conclusions. Further studies are required to clarify the role of a second anti-TNF-α agent; until then, corticosteroids remain the preferred option.
THE DOMESDAY BOOK: VISUALIZATION TOOLS TO EXPLORE IDENTITY AT THE START OF THE SECOND MILLENNIUM

Exploring patterns of settlement and land ownership has always been of interest to the historian, but could these patterns be used, as it were, 'in reverse': not simply to display what is already known but to resolve gaps, uncertainties and ambiguities in the historical record? It was the nature of life in the eleventh century that individuals were rarely named uniquely and were frequently known by different names in different circumstances. Could visualization tools be used to resolve these frequently obscure and ambiguous identities recorded in the Domesday Book?

INTRODUCTION
The confirmation of our identity has become an almost daily requirement of modern life, whether through the use of bank cards in ATM cash machines or the possession of a passport carrying encoded biometric data. These have obvious value, but others, such as the requirement to carry a photo identity card when one wishes to commute by train regularly, are more invasive. The State gathers information that becomes associated with identity and is stored in rigid formats, such as those of modern census returns, recording who we are and how we live. Commercial organisations monitor our purchases and build up a profile of our socio-economic identity. In these cases our identity is rigidly defined by our surname, forename, date of birth, address, mother's maiden name and so forth, in a way that has now extended beyond our national borders to become a global entity. This simple defining information is considered so unique that unauthorised possession of it allows others to access our savings as well as load us with debt for loans we did not take out; the theft of material goods can be achieved through the misappropriation of the information that defines our formal identity. In the complexity of life in modern society the individuality of our identity has become more important than ever before, with the information that defines this uniqueness being paramount. However, identity has not always been defined so decisively, even for the elites of the social order. A current project at the Centre for Computing in the Humanities at King's College London, using the Domesday Book as source data, is providing an opportunity to explore the problems of identifying individuals living nearly a thousand years ago at the beginning of the second millennium; a time when there was no standard way of referring to an individual, even by name, and only the bare beginnings of the organisation of a state at a national level.
The information recorded within the Domesday Book is largely numerical and should therefore lend itself to computational analysis. The economic and social historian can glean valuable data on arable exploitation and income from landholding as well as quantifying the population in terms of social classes. But there is far more to this data than can be revealed by quantitative analysis. The data also contains evidence of the procedures of administration and patterns of shifts in power and influence. Much of this requires an understanding of the status of particular individuals before and after the conquest. The issues of uncertain or ambiguous identities present in the document impede the investigations at this level, and missed or mis-identified groupings of names associated with a particular individual may skew the analysis of the data more generally. However, there are ways in which visualizations of the data in spatial terms can be used to resolve, clarify, or reveal the true identities and power of the men and women at the centre of English society over a thousand years ago.

DOMESDAY BOOK
The Norman invasion of 1066 plunged England into a period of intense social upheaval. During his later years King William came under threat from a number of sources. Chief among these were King Canute IV of Denmark and King Olaf III of Norway. The policy of the time was to buy off these two aggressors with a fund called the Danegeld. The most probable reason for the compilation of the Domesday Book was to determine how much tax William was receiving and therefore the level of Danegeld that could be paid. The book records, for each settlement in England, its monetary value and any dues owed to the King. The fiscal information is shown at the time of the survey, before Domesday, and from before 1066. It is a complete record of lands held by the king and by his tenants and of the resources that went with those lands. Its compilation formalised a process of transition by recording which manors rightfully belonged to which estates. It ended years of confusion resulting from the gradual dispossession of the Anglo-Saxons by the Normans. It is also a snapshot of the feudal hierarchy, showing the identities of the tenants-in-chief who held their lands directly from the King, and of their tenants and under-tenants. Many of the details of the changes that occurred are recorded in the Domesday Book, but in a way that would be quite alien to a modern social scientist or geographer. The information recorded within it is resistant to the techniques that would be applied to modern social surveys and census data. The inconsistencies and ambiguity present defy modern quantitative methods and standard digital tools. However, the data is typical of that used by historians and other researchers within the humanities and so forms an important case study. What is required is a more fluid qualitative method that is closer to arts and humanities methodology than to that of the social scientist. Domesday records the nature and structure of society in 1086 but also tells us of the sweeping changes that had occurred in the intervening twenty years since the conquest. The fundamental problem is one of establishing the identity of individuals across the whole country. A single person frequently held land in more than one county but might be referred to in each case by different identities. These are not just variations in spelling but also in
title and the way an individual is known, by a byname such as 'the Wolf', or by a number of other names and titles of differing purposes. What one is faced with is a very detailed, colossal puzzle which requires ingenuity and creative thought to unravel. Textual methods are of only limited value, but spatial analysis and visualization offer significant advantages. Existing Geographical Information System (GIS) tools might be applied, but the purpose of this project is to develop digital visualization tools that will allow anyone, whether professional historian or interested amateur, to explore the content of the Domesday Book via a visualization tool that is sufficiently versatile (and free!) yet easy to use, to tackle information that resists the 'traditional' scientific methods of GIS. The project has scope for fresh thinking about visualization tools that cross disciplinary boundaries and open access to digital resources for analysis and study by new audiences. The experiences have value that extends far beyond the current application and have informed our general views on a more fluid, versatile approach to the visualization of qualitative and quantitative data in the arts and humanities.

RESOLVING IDENTITY THROUGH VISUALIZATION
How can visualization tools be used to reveal new interpretations of data that has already been studied for many hundreds of years? There are three characteristics that can be utilized by visualization techniques to help resolve uncertain or ambiguous identities: proximity, patterns, and succession. These are three useful starting points, but research with the visualization tool is still at an early stage. The essence of visualization is that there is an element of playful experimentation which may reveal further characteristics that can also be used as the work progresses.

Proximity
The most obvious spatial relationship found in land holdings is that a particular individual will tend to hold parcels of land that are clustered close together. If there is any doubt about the true identity of someone, a map of a known individual's landholdings and those of the ambiguous person may show the latter's holdings to be grouped with the known individual's and thus suggest that they are one and the same person (a minimal computational sketch of this proximity test is given below).

Patterns
There are a variety of possible patterns that can be revealed through visualization. The first is the size of estates; the nature of landholding at the time was that individuals who controlled large parcels of land tended to hold only other large parcels (and vice versa). By searching for and displaying the distribution of estates of a certain size range belonging to particular individuals, it is possible to find relationships (or an absence of them) that lead the viewer to question current interpretations of the data. Experimentation with maps of different aspects of the data, or of data revealed by speculative queries on combinations of particular variables, may also reveal previously unsuspected relationships.
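The sketch below is a hypothetical illustration of the proximity test described above, not the project's actual tool: for each holding of an ambiguously named person it measures the distance to the nearest holding of a securely identified individual. The names and coordinates are invented easting/northing pairs, standing in for geocoded Domesday place names, and the clustering threshold is arbitrary.

import math

def nearest_distance(point, holdings):
    """Distance from one estate to the closest estate in another set of holdings."""
    return min(math.dist(point, h) for h in holdings)

known_lord = [(520, 210), (523, 215), (518, 208), (530, 220)]  # e.g. a securely identified 'Godric'
ambiguous = [(521, 212), (525, 217), (900, 640)]               # holdings of 'Godric the priest'?

for estate in ambiguous:
    d = nearest_distance(estate, known_lord)
    verdict = "clusters with the known holdings" if d < 20 else "outlier"
    print(estate, f"nearest known holding {d:.1f} units away -> {verdict}")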
Succession
One of the many interesting aspects of the society recorded in Domesday is that it displays patterns of continuity and change across the upheavals of the post-conquest period. Although there was a massive change in the 'persons' involved, the social 'structures' appear to have remained the same. Post-conquest lordships often followed in the footsteps of the pre-conquest ones. The new Norman lords were frequently assigned lands in several different counties that were previously assigned to a single Saxon lord. For example, the lands of the Anglo-Saxon lord Asgar the Staller all passed to the Norman Geoffrey de Mandeville after the conquest. This process was not restricted to the upper echelon of society, and the same process applies as one moves down the hierarchy. Thus the lands held by men and women commended to an Anglo-Saxon lord were passed to that lord's Norman successor and subsequently to his subtenants. These close links between the two power structures, and the close similarity of the status of posts between the Anglo-Saxon and Norman hierarchies, can be used to resolve issues of identity. In principle, the patterns of landholding before and after the conquest should be similar, with only the 'persons' involved changing, not the geographical distribution and patterns of holdings. By comparing the patterns for particular individuals before and after the conquest it is possible to find anomalies that may hint at mis-identification of identity in existing interpretations. This is an area where a visualization tool is extremely valuable as it is possible to pull out quite complex sets of possible relationships and display them quickly and easily, thus encouraging a 'let's see if...' style of exploration.

Implications for visualization
The above discussion identifies the items of data from Domesday that are of value in the visualization: names, titles, the location of the estates, and items that identify their size or value. How does this shape the functionality of the visualization tool? When considering the production of maps one tends to think of gathering information on boundaries, rivers, topography and transport networks, but in this case none of these things are of interest (or even of secondary interest). We are dealing purely with the relationships between locations and the comparison of different patterns of location; their relationships to other geographical features are largely irrelevant. What is required is basically a set of dots showing the locations: a symbol map or proportional symbol map. The dots may be differentiated by colour or size, and that differentiation may display differences in holdings by individuals, the size of the holding or its fiscal value (a minimal plotting sketch of such a map follows below).
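The proportional symbol map just described can be prototyped very simply. The sketch below is an illustrative example using matplotlib, not the project's actual web or ArcExplorer-based solution; the coordinates, holders, and fiscal values are all invented.

import matplotlib.pyplot as plt

# (easting, northing, fiscal value in shillings, holder) -- all invented
estates = [
    (520, 210, 40, "Asgar"), (523, 215, 15, "Asgar"), (530, 220, 60, "Asgar"),
    (610, 305, 25, "Wulfric"), (615, 300, 10, "Wulfric"),
]

colours = {"Asgar": "tab:blue", "Wulfric": "tab:orange"}
for x, y, value, holder in estates:
    plt.scatter(x, y, s=value * 5, c=colours[holder], alpha=0.6, label=holder)

# Collapse duplicate legend entries (one per estate) down to one per holder
handles, labels = plt.gca().get_legend_handles_labels()
unique = dict(zip(labels, handles))
plt.legend(unique.values(), unique.keys())
plt.title("Proportional symbol map of landholdings (toy data)")
plt.savefig("holdings_map.png")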
Digital tools may be used to investigate all of the relationships that were described earlier as being important to an exploration of identity, but they can be limited in what they can do with historical data such as that in Domesday. The tools were developed for disciplines where very detailed and highly precise data is collected in the field (for example, geography, social sciences, planning, land-management). The source data that records the past is rarely so detailed and is frequently sparse, imprecise and ambiguous. A researcher working on social phenomena today would probably find ways around these problems by carefully designed data collection methods, collecting more data or using geo-statistical techniques that would 'process out' the problems. However, it is often the ambiguities and anomalies that are of greatest interest to the historian; in this case they are not a 'problem' but rather a key aspect of the study of the data, so ways must be found of accommodating them in the digital tools used by historians or other humanities scholars.

Fortunately, the pattern of administration and settlement at the end of the Anglo-Saxon period has persisted until the present day (for example, the county boundaries of 1086 were in use almost without change until the review of administrative boundaries in 1974). Anglo-Saxon place names can be linked to modern place names and precise geographical locations obtained. The quantitative data (area, fiscal value, number of cattle etc.) is clear. The problems therefore arise with the identification of individuals and, as far as I am aware, this project is the first to attempt to use visualizations of the spatial data to resolve them.

THE CHOICE OF VISUALIZATION TOOLS

The project started off by experimenting with Google maps. This allowed us to display point locations and gave visually appealing background maps (satellite images or modern road maps etc.), but the modern mapping was felt to be inappropriate, and the database searching and mapping styles that we required would have demanded substantial additional programming effort. The capabilities that are required are simple but still approach those expected of a GIS: the ability to search a database via queries that allow some sophistication of interrogation, and the ability to plot the results of these queries on proportional symbol maps with some form of boundary data. These characteristics are likely to be found only in dedicated mapping applications. However, the ability to access and display the data via the web is of considerable value, and so other sources of web-mapping were explored. The final solution was to provide two alternative but complementary visualization tools: an online mapping resource for simple web-based mapping, and a desktop GIS file viewer that can be downloaded with a set of data files for more sophisticated exploration.

There are a number of difficulties facing historians in their use of GIS, many of which are discussed in a paper by the author [2]. These all impinged upon this project, but I will focus on a more pragmatic subset of issues here:

- The high cost of software
- The complexity of the software and the steep learning curve associated with it
- The level of cartographic knowledge required
- Cost and difficulty of obtaining the underlying digital maps
- Cost and difficulty of digitising source data
The software costs of a full mapping or GIS application are high, and the sophistication of the software is far in excess of what a historian requires in order to explore the Domesday data. The first hurdle is therefore the software itself, both in terms of cost and the amount of learning time required to overcome the complexity of the software and of the cartographic process. Furthermore, when using a mapping tool or GIS, how does one disseminate research results which are embedded in GIS databases or other digital objects? Will everyone with whom one wishes to discuss the work have to have gone through the same time-consuming and costly process? The desktop solution provided with the Domesday data overcomes all these problems by being free (for non-commercial use) and by providing only a simple (but still very powerful) set of functions. It is therefore accessible to all. The software, ArcExplorer™, is produced by ESRI™, the company who produce the market-leading GIS software.

For the Anglo-Saxonist, few of the features depicted on modern maps are relevant to the period being studied. However, what level of detail is actually required in the underlying base map when what one is really interested in are the relationships 'between' locations in the data rather than their relationship to modern-day features? The project provides only county boundaries, rivers, and simple topography. In practice the latter two are rarely used, and the county boundaries are only there as a psychological 'prop', something familiar to tie the visualization to. The visual material that matters is the patterns of locations.

When contemplating the use of GIS software and data there is a danger of falling into a mindset that seeks a level of complexity or sophistication that is completely unnecessary, simply because this is how the methodology is presented by researchers in other fields. Although care is still required over the basic cartography, the approach advocated here greatly simplifies the acquisition, management, and visualization of the underlying map data. Many researchers are introduced to GIS as a digital tool rather than as an approach to scholarship. Gregory and Ell comment that the researcher using GIS should be asking 'what are the geographical aspects of my research question?' rather than 'what can I do with my dataset using this software?' [3]. The relative ease with which ArcExplorer can be used and integrated with other commonly used tools such as Microsoft Word, Access or Excel helps the scholar to regard it as just another research tool rather than placing it on a pedestal. The excellent level of support provided for ArcExplorer by ESRI helps this process greatly. There is a very good tutorial, materials to support its use in teaching, and support materials for allied areas of study such as cartography. Another key factor for humanities users is that the software is available on both Mac and Windows platforms with the same interface.
CONCLUSIONS

The project is focused upon an iconic historical document, widely known to the general public and scholar alike. Data sets have been derived from it before, for example in the excellent Domesday Explorer project by John Palmer [4], but the focus has not been on visualization. Digital scholarship performed on Domesday data sets has the potential to explore how the quantitative methods of the social sciences can be applied to 11th-century England. However, this project goes further than that, exploring issues of data interoperability and, most significantly, digital visualization. The digital tools presented here allow the results and supporting data sets to be disseminated without encountering the barriers normally experienced by the 'casual' user of GIS.

In this first phase of the project the visualization tools and the associated data sets are currently being trialled with a group of selected users. The style of the visualizations has moved away from 'traditional' maps, used simply for the presentation of the final results of scholarship, to more dynamic tools for exploring it interactively. The visualizations are stripped of extraneous geographical information and the researcher focuses entirely upon the visual relationships within the image: patterns of dots of varying size and colour. The next step will occur as the tools are used to explore the data on a larger scale; are there other ways to visualize this information that could be developed to gain fresh insights into the content, despite it being subject to many years of scholarly study already?
Landscapes of Urbanization and De-Urbanization: A Large-Scale Approach to Investigating the Indus Civilization's Settlement Distributions in Northwest India

ABSTRACT Survey data play a fundamental role in studies of social complexity. Integrating the results from multiple projects into large-scale analyses encourages the reconsideration of existing interpretations. This approach is essential to understanding changes in the Indus Civilization's settlement distributions (ca. 2600–1600 B.C.), which shift from numerous small-scale settlements and a small number of larger urban centers to a de-nucleated pattern of settlement. This paper examines the interpretation that northwest India's settlement density increased as Indus cities declined by developing an integrated site location database and using this pilot database to conduct large-scale geographical information systems (GIS) analyses. It finds that settlement density in northwestern India may have increased in particular areas after ca. 1900 B.C., and that the resulting landscape of de-urbanization may have emerged at the expense of other processes. Investigating the Indus Civilization's landscapes has the potential to reveal broader dynamics of social complexity across extensive and varied environments.

Introduction

Investigating transformations in the distribution and density of past settlements is crucial to the identification of "signature landscapes," which are those generated by specific social, cultural, and economic processes within specific physical environments (Wilkinson 2003: 4–9). Comparative research has revealed an array of signature landscapes that have been associated with the emergence, transformation, and dissolution of social complexity across the globe (Algaze 2005; McIntosh 2005; Ur 2010; Wilkinson et al. 2014; Lawrence and Wilkinson 2015; Chase and Chase 2016; Lawrence et al. 2016, 2017). The identification and analysis of such landscapes contribute a large-scale dimension to models of social change, revealing interactions between societies and their dynamic and transforming environments. These investigations have the potential to transform these models, casting into high relief social processes that are dispersed across a broader landscape and may be hidden or obscured at the level of an archaeological excavation at a single site. Patterns in settlement distribution, especially the frequency with which sites appear within a given area or environment, play a useful role in these studies by revealing settings that people favored as prevailing social conditions changed through time. However, archaeological surveys are also often constrained to specific areas by the logistics of fieldwork, limiting the scale of their interpretation and analyses. To investigate large-scale changes in settlement distribution, it is necessary to assemble and analyze large synthetic datasets built over many years by multiple teams (Lawrence and Bradbury 2012). Successfully integrating datasets requires recognizing the limitations and errors incumbent to the production of each constituent survey project.

Northwestern India was a key setting for the emergence of South Asia's earliest complex society, the Indus Civilization. Indus cities arose around 2600 B.C. across extensive and ecologically diverse areas of western South Asia (FIGURE 1), and concentrations of archaeological sites have been reported in the modern states of Rajasthan, Haryana, and Punjab in India (Stein 1942; Suraj Bhan 1975; Joshi et al.
1984; Possehl 1999; Shinde et al. 2008; Singh et al. 2008, 2011; Chakrabarti and Saini 2009; Dangi 2009, 2011; Kumar 2009; Pawar 2012). It has frequently been noted that the density of settlements across the alluvial plains of northwestern India appears to increase after ca. 1900 B.C. (Madella and Fuller 2006; Kumar 2009; Wright 2010: 317–318, 2012). Climate change appears to have played a role in this shift, as changes in settlement density seem to have favored the variability of local environmental conditions in northwestern India in the face of a weakening in the Indian Summer Monsoon around 2200–2100 B.C. (Madella and Fuller 2006; Giosan et al. 2012). The increase in settlement density in northwestern India may have been due to this region receiving more reliable rainfall from a weakened monsoon. As people left Indus cities, they appear to have populated particular areas, establishing new small-scale settlements and re-occupying mounds that had been abandoned in earlier periods. This apparent shift both resulted from and contributed to a process of de-urbanization, wherein smaller and more dispersed settlements replaced larger population aggregations. Much attention has been given to the process of urbanization that brought together multiple groups of specialized artisans and agro-pastoralists (Kenoyer 1997; Possehl 2002; Wright 2010). However, it is unclear how de-urbanization transformed these social relations, as it was a dispersed process that unfolded at a large number of sites across an extensive area, thus necessitating a large-scale approach that incorporates the results of multiple projects.

To utilize multiple datasets in aggregate studies, it is necessary to compare the approaches, questions, and methods that contributed to each researcher's agenda (following Cooper and Green [2015]). It has been noted that site location reports from northwestern India vary in their intensity of survey coverage, adherence to modern administrative boundaries, and assumptions about the locations of past watercourses (Singh et al. 2010, 2011). To address these challenges, this paper describes the assembly of a pilot database that integrates all site location data from a sample region that encompasses two major surveys carried out by the "Land, Water and Settlement" project (Singh et al. 2010, 2011). The data were then analyzed using geographic information systems (GIS) analyses; this was the first stage of a larger effort to integrate site locations from northwestern India into a single relational database, which is being carried out for the "TwoRains" project. This approach is informed by Kintigh (2006: 573), who has advocated increasing the scale of archaeological investigations without compromising the detail recorded in specific reports. It allows the analysis of site location data at different levels of certainty (following Lawrence and Bradbury [2012]). The pilot database facilitated a test of the following hypotheses: first, that in northwestern India, the Mature Harappan period saw the nucleation of settled population; second, that the Late Harappan period saw an increase in settlement density. Our results support these hypotheses and enhance the interpretation that site density increased in particular locations with the decline of Indus cities. It follows that the landscapes of urbanization and de-urbanization created by Indus populations integrated a range of varied environments to produce and sustain social complexity.
Landscape Archaeology and the Indus Civilization

Landscape archaeology provides the approaches necessary to frame research on past social processes. It has been foundational to modeling social complexity in ancient Mesopotamia (Adams 1966, 1981; Adams and Nissen 1972; Wilkinson 2003; Ur 2010; Wilkinson et al. 2014; Lawrence and Bradbury 2012; Lawrence et al. 2016, 2017), and has also been critical to the study of complex societies across the globe (Kantner 2008; Chase et al. 2011; Glover 2012; Kosiba and Bauer 2012; Luo et al. 2014). Large-scale analyses are necessary for outlining the interaction between emerging complex societies and their varied local settings, revealing patterns that are difficult to explain in reference to their local settings alone and thus must result from processes of greater regional integration (Lawrence et al. 2017). By incorporating data from locations across broad and varied environments, landscape approaches have the potential to challenge traditional models of complexity and urbanism. Such approaches have revealed processes such as the heterarchical clustering of settlements (for example, McIntosh [2005]) and alternative political trajectories (for example, Fargher and colleagues [2011]).

[FIGURE 1 caption: Sites that have been identified as cities (red dots) are shown, as well as the sample area considered in this paper (blue square). Extent was derived from secondary sources. Basemap source: http://earthobservatory.nasa.gov/Features/BlueMarble]

Wilkinson (2003: 4–9) argued that relationships between archaeological remains and their environmental contexts result in "signature landscapes" that exemplify the prevailing configurations of social, cultural, and economic processes within specific environmental settings and chronological periods. Signature landscapes can be compared to one another to investigate social change (Wilkinson 2003: 215). Site locations are key to this approach, but addressing large-scale processes that take place throughout a landscape typically requires aggregating data built up by many projects. A framework for integrating heterogeneous survey datasets has been set out by Lawrence and Bradbury (2012), who characterize site locations using factors such as boundary certainty, geographical precision, and archaeological significance, ascertaining different levels of certainty in archaeological datasets. Boundary certainty addresses the size of archaeological sites and lies beyond the scope of this paper, but site location reports from northwestern India can be used to establish a basic level of certainty based on geographical precision (locations) and archaeological significance (approximate chronology). Linking multiple datasets has become essential to investigating shifts in settlement density that illustrate how populations engage with and retreat from local ecologies as social relations transform (Lawrence et al. 2017). This approach is particularly applicable to northwestern India, where integrating a wide range of site location reports has the potential to cast the Indus Civilization's signature landscapes, and interrelationships between varied local environments and social complexity, into high relief.

The Indus Civilization in northwestern India

After a protracted period of village-based occupation, the first cities in South Asia appeared during the Mature Harappan period of the Indus Civilization (ca.
2600–1900 B.C.), which were the largest of thousands of settlements across areas that today lie in western India and Pakistan (Marshall 1931; Wheeler 1953, 1966, 1968; Sankalia 1962; Fairservis 1967, 1971; Lal 1993, 1997; Kenoyer 1998; Chakrabarti 1999; Possehl 1999, 2002; Agrawal 2007; Wright 2010; Coningham and Young 2015; Ratnagar 2016). Five Indus sites are typically considered cities, and their locations in contrasting environments support the interpretation that they were to some degree politically discrete (Kenoyer 1997, 2006; Wright 2010; Petrie 2013; Sinopoli 2015) (FIGURE 1). At the same time, the aspects of Indus material culture that were shared across such a vast and varied extent suggest that the Indus Civilization's political organization resulted in signature landscapes that were distinct from those materialized by other early complex societies. Excavations at Indus sites have produced evidence of a broad range of sophisticated technologies (K. K. Bhan et al. 1994; Vidale 2000; Miller 2007; Agrawal 2009), including copper metallurgy (Hoffman and Miller 2009), standardized weights and measures (Ratnagar 2003; Kenoyer 2010; Miller 2013), and engraved stamp seals (Joshi and Parpola 1987; Shah and Parpola 1991; Parpola et al. 2010; Green 2016). Indus settlements also present examples of civic coordination and planning, though they lack direct evidence for the extreme forms of social differentiation and political hierarchy reported in other complex societies (Wright 2010, 2016; Green 2018).

Landscape approaches and archaeological surveys have been essential to challenging past narratives that suggest that the Indus Civilization was socio-culturally uniform and homogeneous (Piggott 1950; Wheeler 1966). Initial surveys highlighted its great extent (Stein 1942; Sankalia 1962), and subsequent studies identified local variation in material culture (Suraj Bhan 1969, 1975; Mughal 1971; Possehl 1980; Possehl and Raval 1989; Possehl and Herman 1990). The increase in fieldwork in India between 1960 and 1980, predominantly recorded in Indian Archaeology: A Review, has been used by multiple researchers to generate site location lists. One such study by Joshi and colleagues (1984: 513) suggested that the distribution of site locations revealed "economic pockets" during the Mature Harappan period, which were apparent concentrations of settlements that were closely knit and perhaps economically self-sufficient. As features of the Urban Phase, economic pockets were thought to support one or more large settlements (Joshi et al. 1984: 514). Smaller settlements, which have many of the same characteristics as the cities themselves, comprise the majority of Indus sites (Chakrabarti 1999; Wright 2010; Petrie 2013; Sinopoli 2015). Surveys of the settlement distribution along the Beas River in Pakistan's Punjab revealed that the economic diversification and intensification apparent in assemblages from the city of Harappa is also apparent in the material assemblages of nearby smaller settlements (Wright et al. 2001, 2003). Other studies have used survey data to clarify site distribution patterns in other Indus regions, including Sindh in Pakistan (Flam 1993, 2013; Jansen 2002; Shaikh et al. 2003; Mallah 2008), and Gujarat in India (Possehl and Raval 1989; Possehl and Herman 1990; Shinde 1992; Sonawane and Ajitprasad 1994; Possehl 1999).
The plains of northwestern India are characterized by a range of alluvial environments, an absence of mineral resources, extensive irrigation farming, and numerous archaeological sites from all periods. Some site locations were initially reported as early as 1832, and relatively informal excavations at Indus sites in this region began in the early twentieth century (Possehl 1999; Lahiri 2006). Field methods and recording improved with the reinvigoration of the Archaeological Survey of India under Sir John Marshall, but remained rudimentary by modern standards (Lahiri 2006). Parts of what is now northwestern India were later explored by Stein (1942) and Ghosh (1952), who assumed that settlement densities in the region resulted from proximity to now-dry watercourses. Further surveys through the 1970s and 1980s brought to light many important Indus sites, including Mitathal and Rakhigarhi (Suraj Bhan 1975; Suraj Bhan and Shaffer 1978; Francfort 1985), and there were several attempts to collate these data (Joshi et al. 1984; Possehl 1999). Unfortunately, the majority of these studies predate the use of global positioning systems (GPS), so there is a degree of imprecision in the reported site location coordinates (Petrie and Singh 2008; Singh et al. 2008). During the same period, excavations were also undertaken at the sites of Kalibangan (Thapar 1975; Lal 1979, 2003), Banawali (Bisht 1978, 1987, 2005; Asthana 1979), and Mitathal (Suraj Bhan 1975). These excavations were essential to developing ceramic typologies for northwestern India, which typically include pottery vessel types and styles like those found at the cities of Harappa and Mohenjo-daro along with other types and styles with local characteristics. Subsequently, excavations were carried out at Rakhigarhi, which appears to have been urban in scale and complexity (Nath 1998, 1999, 2001; Shinde 2016), and at the smaller sites of Bhirrana (Rao et al. 2004) and Kunal (Khatri and Acharya 1995). More recent excavations at Farmana have unearthed large mud-brick houses, a coordinated street plan, and an extensive cemetery, highlighting additional associations between elements of material culture found at other major Indus cities and local artifact styles (Shinde et al. 2011).

Material culture assemblages from these sites are believed to correspond to the periods nested within the overarching chronology of the Indus Civilization, such as those employed by Kenoyer (1997, 2003), Possehl (2002), and Wright (2010, 2012). These periods include the Early Harappan, Mature Harappan, and Late Harappan periods. Following the Indus Civilization comes a sequence of phases marked by distinctive pottery types, such as Painted Gray Ware. This framework is widely utilized in South Asian archaeology, though the attribution of many types and styles to specific periods is not straightforward (Parikh and Petrie 2017, in press). Since 2000 there have been many surveys conducted in several states in northwestern India, including Haryana (Shinde et al. 2008; Dangi 2009, 2011; Parmar et al. 2013), Rajasthan (Pawar 2012; Pawar et al. 2013), and Punjab (Sharan 2018). Most archaeological surveys in northwestern India have employed a "village-to-village" methodology, wherein a survey team visits the contemporary villages within an administrative unit and asks local informants where archaeological materials can be found (see discussion of these methods in Singh and colleagues [2010, 2011]).
The number of villages and the intensity of agricultural land use therefore impact the results of these surveys. Many site locations are only readily accessible through secondary studies, which combine the primary results of published and unpublished survey projects, and which reinforce the notion that the region was home to several dynamic settlement concentrations, though they differ on specific interpretations. For example, Kumar (2009: 17) argued that settlement density in northwestern India increased markedly during the Late Harappan period, while Chakrabarti and Saini (2009: 77) suggested that the change in population between the Mature and Late Harappan periods was less dramatic, indicating that migration from the declining cities may be unlikely. It has been clear for some time that a high-resolution evaluation of these site location data will improve scholarly understanding of the processes of urbanization and de-urbanization that created and transformed the Indus Civilization's signature landscapes.

The "Land, Water and Settlement" (hereafter LWS) project produced two complementary site location datasets that can anchor data assembly projects: the Rakhigarhi Hinterland Survey and the Ghaggar Hinterland Survey. LWS focused on rural life in northwestern India, and expanded and refined a subset of site location datasets from this region (Singh et al. 2010, 2011). The LWS surveys demonstrated that during the Mature Harappan period there was an overall reduction in settlement density that sustained the emergence of larger urban settlements like Rakhigarhi (Singh et al. 2010, 2011). During the Late Harappan period, the number of sites in northwestern India appears to increase, but these settlements are typically small in size (Madella and Fuller 2006; Kumar 2009). This transformation is likely associated with climate change, and it has been suggested that a weakening summer monsoon prompted communities in northwestern India to diversify their agricultural practices (Madella and Fuller 2006). However, it is clear that this diversity emerged well before cities and may have provided the risk buffering and mitigation necessary to maintain food surpluses in the face of climate change (Petrie et al. 2016; Petrie 2017).

New landscape approaches to the Indus Civilization have the potential to reveal how social complexity integrates vast and varied environments in the face of dramatic changes in social scale. However, the environmental and socio-cultural diversity and variation across the extensive region occupied by Indus populations inhibit the understanding of Indus landscapes if site location reports remain confined to the spatial silos of individual studies. Assembling Indus site location reports into larger integrated databases creates an opportunity to critically assess settlement densities and identify research strategies that will increase certainty by revealing areas where data need to be reviewed and re-examined and locations that will benefit from additional survey. More research on the diverse range of social processes that unfolded in early complex societies is needed. It is particularly critical to determine when transformations in past landscapes reinforce current models of social complexity, and when they demand the revision of traditional models, and the Indus Civilization is particularly important in this regard.
Investigating the Indus Civilization's signature landscapes may reveal how particular environments, and variation within them at smaller scales, interact with heterarchical social processes, such as those outlined by Crumley (1995) and McIntosh (2005). Moreover, most classic studies of site location data tend to emphasize the relationship between an early complex society and a particular environment, such as Wilkinson (2003). The Indus offers a fundamentally different challenge: an example of an extensive early complex society that encompassed a great range of different environments.

Methods

Assembling archaeological survey data from northwestern India into a single relational database facilitates the comparison, quantification, and spatial analysis of heterogeneous datasets. Though there have been several attempts to synthesize northwestern India's settlement distributions (Joshi et al. 1984; Possehl 1999; Chakrabarti and Saini 2009; Kumar 2009), the inherent limitations of and discrepancies between datasets are rarely considered. Singh and colleagues (2008, 2010, 2011) noted that some reports omit precise coordinates, utilize inconsistent naming protocols, and only implicitly define their survey boundaries. Moreover, many of the primary surveys that underpin these datasets used modern administrative boundaries to delimit study areas (such as districts or blocks), and survey coverage is often strongly influenced by assumptions about the locations of past watercourses. Combining "other people's data" into larger datasets requires identifying comparable attributes across datasets and assembling them into formats that can be cross-referenced (Atici et al. 2012). Integrating site location data within a single relational database is the first step toward developing a cyber-structure that preserves the character of particular datasets (Cooper and Green 2015). Toward this end, this paper aggregates site location reports to generate a novel tabulation that integrates all previously reported site locations within a sample area.

Sources

The site locations from four secondary studies (Joshi et al. 1984; Possehl 1999; Chakrabarti and Saini 2009; Kumar 2009) were digitized to provide initial tables for the pilot database. These studies analyzed overlapping geographical regions using multiple primary site location reports. The two earlier studies examine settlement patterns across the entire extent of the Indus Civilization (Joshi et al. 1984; Possehl 1999), and the two later studies selected areas that were assumed to be in proximity to past watercourses in northwestern India (Chakrabarti and Saini 2009; Kumar 2009). Some primary site locations have been reported by multiple sources. A series of unpublished tables based on previous efforts to combine Indus site locations into an integrated database was also included in the pilot database. These started with Possehl's (1999) tabulations, and incorporated an additional table of site locations developed as a Google Earth .kmz file by Randall Law. This .kmz file presented Possehl's tabulation in a format that could be read by Google Earth and projected onto satellite imagery. Law enhanced this dataset by visiting many locations, adding to or adjusting their coordinates. Although it was not formally published, Law's .kmz file was made available to the scholarly community, and contains important supplementary notes for many locations mentioned in the secondary studies.
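Because Law's tabulation was distributed as a .kmz file, converting it into a table is a routine step. The fragment below is a minimal sketch of such a conversion in Python, not the pipeline used for the pilot database: the file name law_sites.kmz is hypothetical, and the assumption that the archive's main document is named doc.kml follows the common KMZ convention rather than anything stated in the paper.

import zipfile
import xml.etree.ElementTree as ET

KML_NS = "{http://www.opengis.net/kml/2.2}"

# A .kmz file is a zip archive; its main document is conventionally doc.kml.
with zipfile.ZipFile("law_sites.kmz") as kmz:  # hypothetical file name
    kml = kmz.read("doc.kml")

root = ET.fromstring(kml)
rows = []
for placemark in root.iter(f"{KML_NS}Placemark"):
    name = placemark.findtext(f"{KML_NS}name", default="")
    coords = placemark.findtext(f".//{KML_NS}coordinates", default="").strip()
    if coords:
        # KML stores coordinates as "lon,lat[,alt]".
        lon, lat = map(float, coords.split(",")[:2])
        rows.append((name, lat, lon))

print(rows[:5])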
This table has undergone some cleaning and revision via a comparison between the Possehl and Law datasets (Cameron Petrie and Edward Cork, personal communication 2008). Additional tables derived from recent primary site location reports were drawn from the LWS surveys (Singh et al. 2010, 2011), a survey of the Mansa district of India's Punjab, and a report of site locations in the districts of Fatehabad in India's Haryana and Mansa and Sangrur in India's Punjab (Dangi 2011). The LWS surveys employed GPS and aimed for complete coverage within their bounded study regions. The Rakhigarhi Hinterland Survey (RHS) investigated a circular area roughly within a 15 km radius surrounding the major Indus city of Rakhigarhi, while the Ghaggar Hinterland Survey (GHS) targeted a previously un-surveyed area around the middle course of an important watercourse that is largely known from remote sensing imagery (Singh et al. 2011). These LWS surveys prioritized questions about site and water catchments over administrative districts.

Pilot database development

To assemble the pilot database, tables derived from the above sources were imported into a relational database using FileMaker Pro (v15), which facilitated the speedy examination of attributes from non-corresponding tables prior to developing related fields through comparison. After importing the selected tables, each site location was given a unique identifying value: the Pilot TwoRains Identification Number (ptr_id). The resulting ptr_id list was initially extensive, including over 10,000 entries. Moreover, overlap between the original tables resulted in significant duplication of entries. To reduce the ptr_id list, entries that shared a common location were reclassified, which reduced the number of ptr_ids. As records based on the same site location were linked to the same key ptr_id, it became possible to query information about the same location derived from multiple sources. Duplicates were then assigned the same ptr_ids by projecting the site table in a GIS (ArcGIS v10.4.1) and examining each location against ESRI's World Imagery.

While the resulting ptr_id table allowed the querying of related fields across multiple tables, standardizing the information available for each site location and reconstructing its history and characteristics required the review of each record. To evaluate settlement density in northwest India, ptr_ids from a sample area were selected for more detailed assessment. The sample area consists of an automatically generated projected rectangle that encloses both LWS survey areas (FIGURE 2). In addition to the LWS site locations, the entire sample region was included within the research areas of all the major secondary studies of Indus Civilization site distribution mentioned above. The sample area encloses a projected area of 10,476.77 km² and includes 695 reported site locations. Bibliographic information was assembled for each site location and cross-referenced with the original publications to the extent that primary sources were available, and assessments of site location accuracy and precision were included in the resulting table. Outright errors (reported locations that lacked complete geographical information, were located outside of South Asia, or were unlikely to be related to a specific location in the landscape) were flagged with the assistance of GIS analyses undertaken using ArcGIS 10.4.1 and QGIS v2.18.2.
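The linking of duplicate reports to a shared key can be sketched in Python with pandas. This is an illustrative analogy only: the column names, toy records and snap-to-grid tolerance are assumptions, and the actual pilot database relied on FileMaker Pro and manual inspection against satellite imagery rather than an automated rule.

import pandas as pd

# Toy site-location tables standing in for two digitized secondary sources.
joshi = pd.DataFrame({"name": ["Mitathal", "Banawali"],
                      "lat": [28.8833, 29.5920], "lon": [76.1710, 75.3940],
                      "source": "Joshi et al. 1984"})
possehl = pd.DataFrame({"name": ["Mitathal", "Rakhigarhi"],
                        "lat": [28.8834, 29.2920], "lon": [76.1712, 76.1130],
                        "source": "Possehl 1999"})

sites = pd.concat([joshi, possehl], ignore_index=True)

# Assign one key per shared location by snapping coordinates to a grid of
# ~0.001 degrees (roughly 100 m at this latitude); records falling in the
# same cell are treated as duplicate reports of one site. Note that a pair
# of reports straddling a cell boundary would escape this simple rule.
sites["cell"] = list(zip(sites["lat"].round(3), sites["lon"].round(3)))
sites["ptr_id"] = sites.groupby("cell").ngroup() + 1

# All reports of a location can now be queried through its shared key.
print(sites.sort_values("ptr_id")[["ptr_id", "name", "source"]])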
The apparent precision of site location reports was noted (also indicated by whether full geographical coordinates were included). The reported periodization for each site location was also compiled and included in the resulting table. The pilot database compiled the history of study for each site location, along with its earliest likely discovery date, and the tabulated results of this compilation are presented in the supplement accompanying this paper (SUPPLEMENTAL MATERIAL 1).

Results

The aggregate site location data assembled in the pilot database facilitated the development and testing of interpretations about Indus settlement density in northwestern India (FIGURE 3). Most site locations were reported between 1981 and 1990, and there was a resurgence in archaeological survey that appears to have dramatically increased the number of reported site locations in the sample region following the year 2000 (FIGURE 4). Unstandardized reporting conventions raise the need to examine the relationship between contemporary villages and archaeological sites in detail, as many coordinates in the database, especially in earlier reports, are known to reflect the location of nearby villages rather than the location of specific settlement mounds. The sample area included 695 previously reported site locations, 80% of which were reported with geographical coordinates that include degrees, minutes, and seconds (n = 554). However, there are also site locations that include seconds but are likely to be imprecise, with reported values of 00, 15, 30, or 45. Reassessment of these locations will be carried out in future stages of data consolidation, and a sample of these locations will be updated after future fieldwork. Those reported without full geographical coordinates were typically documented in 2002 or earlier (n = 64), prior to the regular use of GPS. A negligible number (n = 14) of site locations appear to have been reported erroneously, either in the recording of the site location in the field or in later re-publishing. Erroneous site locations have coordinates that appear to be incomplete or refer to locations that are unlikely to correspond to archaeological sites (as indicated in ESRI's World Imagery Basemap). Though the great majority of site locations were reported with precise geographical coordinates, only 386 were likely collected with the aid of GPS (FIGURE 5). It is clear that many of the reports in the northeastern quadrant of the study area were recorded without the assistance of GPS and may warrant re-investigation. As survey coverage is not uniform, many sites likely remain to be discovered in areas that were ostensibly covered by secondary studies, but which may not actually have been surveyed extensively (FIGURE 3).

Around half (n = 372) of the site locations in the pilot database have only been reported once. Of those, 43% (n = 161) are site locations that pre-date the LWS surveys and do not appear to have been revisited or reconfirmed, while the remaining site locations (57%, n = 211) consist of new reports by the LWS or later surveys. This pattern of reporting has important implications for the identification of site concentrations: areas that have particularly high site densities may correspond to what Joshi and colleagues (1984) described as the Mature Harappan period's economic pockets. Similar concentrations may remain unreported in areas that have not been recently surveyed, which is a possibility that warrants further testing.
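The precision screen described above (seconds of exactly 00, 15, 30 or 45, and incomplete coordinates) is simple to express in code. A minimal sketch follows, with assumed field names and invented records; it is not the pilot database's actual schema.

import pandas as pd

# Illustrative latitude components in degrees/minutes/seconds.
reports = pd.DataFrame({
    "ptr_id": [1, 2, 3, 4],
    "lat_deg": [29, 29, 28, None],
    "lat_min": [17, 35, 52, 4],
    "lat_sec": [30, 22, 0, 11],
})

# Records missing any coordinate component are flagged as erroneous.
reports["error"] = reports[["lat_deg", "lat_min", "lat_sec"]].isna().any(axis=1)

# Seconds of exactly 00, 15, 30, or 45 suggest rounding rather than a GPS
# fix, so such records are flagged for re-investigation.
reports["suspect_precision"] = (reports["lat_sec"].isin([0, 15, 30, 45])
                                & ~reports["error"])

print(reports)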
Recent efforts to improve survey coverage in northwestern India have transformed projections of site density in the study area, reinforcing previously identified patterns and revealing new ones. Figure 6 presents contrasting heat maps of location density for sites identified before and after 2009, for all periods. These were created using the Heatmap Plugin v0.2 for QGIS v2.18.2. The plugin was used to rasterize vector data derived from the pilot site location table (sorted by earliest year reported) using a radius value of 5 mm and a maximum automatic value. The best rendering quality setting was used, and the resulting raster layers were exported through a print composer that presented both side by side. These raster images assign each pixel a value according to the number of nearby site locations.

The results of surveys prior to 2009 reveal several site location concentrations apparent in the dataset (FIGURE 6B), including concentrations to the northwest and southeast of the modern city of Ratia in the northwestern quadrant of the study area and a slight concentration around the site of Banawali southwest of Ratia. A clear concentration was found around the site of Rakhigarhi, which appears to be aligned with linear concentrations of settlements extending toward the southwest. In line with this concentration near Rakhigarhi are concentrations near Jind and northeast of the modern town of Hansi. In the northeastern quadrant, a further concentration appears northeast of the town of Narwana, not unlike those found in association with Rakhigarhi. Three concentrations in the northeastern quadrant are largely based on the findings of older surveys (Suraj Bhan 1975; Suraj Bhan and Shaffer 1978). Recent surveys have enhanced the clarity of these findings (FIGURE 6B). Given that increased survey efforts confirmed previously identified patterns, it will be critical for future surveys to reassess the concentrations identified in the northeastern quadrant, which have not yet been revisited. It is unclear whether areas with few reported site locations, such as between the LWS survey areas, were in fact thinly occupied, or whether they simply require additional study. There is a gap in survey coverage within the southwestern quadrant of the sample area, extending around today's city of Hisar and the village of Barwala. Site density in the northeastern corner of the study area, however, is similar to that seen in the areas covered by the LWS surveys. While reported sites in the northeastern quadrant of the survey area are numerous, none of the locations were collected with the assistance of GPS (FIGURE 5).

The site locations reported in the northeastern quadrant of the sample area are nonetheless characterized by a clear pattern. Figure 7A depicts each site according to the number of times it has been reported (as increasing size) and the earliest year of its report (darker blue is more recent). Those in the northeastern quadrant have been re-reported often, and although their original reports are quite early (Suraj Bhan and Shaffer 1978), they have not been revisited. While some concentrations of sites in the northwestern and southeastern quadrants have a similar pattern in reporting, they have been surveyed more intensively in recent years. The northeastern quadrant exhibits patterns in site proximity that are similar to those in the LWS survey areas (FIGURE 7B).
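A kernel density estimate is one way to reproduce the kind of heat map the QGIS plugin generates. The sketch below, with invented coordinates, is an analogy to the plugin's output rather than a reimplementation of it; the cluster centers and spreads are arbitrary stand-ins.

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Two invented clusters of site coordinates (longitude, latitude).
lon = np.concatenate([rng.normal(76.11, 0.08, 60), rng.normal(75.39, 0.10, 40)])
lat = np.concatenate([rng.normal(29.29, 0.08, 60), rng.normal(29.59, 0.10, 40)])

# Kernel density estimate over a grid, analogous to the heatmap raster:
# each pixel receives a value reflecting the number of nearby sites.
kde = gaussian_kde(np.vstack([lon, lat]))
gx, gy = np.meshgrid(np.linspace(lon.min(), lon.max(), 200),
                     np.linspace(lat.min(), lat.max(), 200))
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

plt.imshow(density, origin="lower",
           extent=(lon.min(), lon.max(), lat.min(), lat.max()))
plt.scatter(lon, lat, s=4, c="white")
plt.title("Site-location density (illustrative)")
plt.show()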
Assuming a settlement's overall spatial plan was approximately circular, a buffer of 1 km around a site location would encapsulate the entire area of even the largest Indus cities (Mohenjo-daro's largest reported area exceeds 200 hectares [Jansen 1993]). Calculating the number of site locations that fall within 1 km of one another reveals that each site is proximal to a mean of two others. Twenty-eight site locations are within 1 km of five other site locations, and four are within a kilometer of more than six other sites. In the more intensively surveyed northwestern and southeastern quadrants, high-proximity sites are often associated with major settlements, such as Rakhigarhi and Banawali. The northeastern quadrant, in contrast, has not benefited from recent survey efforts, and yet high-proximity site locations exist within this quadrant. (A minimal sketch of this proximity calculation is given below.)

Reported chronological data reveal diachronic changes in the locations that were favored for settlement as people left Indus cities (FIGURE 8). Just over half of the site locations in the sample (n = 343) have been characterized as Early (n = 207), Mature (n = 122), and/or Late Harappan (n = 278) (FIGURE 9). Many site locations have components that post-date the Indus Civilization, with materials that belong to the Painted Gray Ware (n = 84), Early Historic (n = 245), and/or Medieval (n = 221) periods. These figures support the hypothesis that the overall number of settlements decreased during the Mature Harappan period and increased as the major cities were depopulated after ca. 1900 B.C. (FIGURE 9). The spatial dimensions of these trends support previous research on settlement density in northwestern India, and can be used to develop new research questions.

Discussion

This paper supports the interpretation that the number of settlements in northwestern India decreased during the Indus Civilization's Mature Harappan period. Notably, the LWS surveys did not document increases in post-urban occupation in either of the areas of the primary surveys, which suggests that any increases occurred elsewhere. Settlement increases may have occurred in the northeastern quadrant of the sample area, contributing to the increase in the settlement density of northwestern India in the Late Harappan and Painted Gray Ware periods.

It is reasonable to state that sites that have been characterized as Early Harappan were evenly distributed within surveyed regions, which is the view proposed by Chakrabarti and Saini (2009) and supported by subsequent projects (Dangi 2011). Gaps in the distribution of Early Harappan sites around the future urban center of Rakhigarhi, and concentrations in the distribution of GHS sites in the northwestern corner of the sample area, have, however, been detected (Singh et al. 2010: 41, 2011). Early Harappan settlements thus appear to have been numerous, but tended to be some distance apart from one another. This apparent pattern may be the result of data quality, as the most widely distributed site locations appear to correspond to older surveys (FIGURE 7A), but the patterns are not mutually exclusive, and their co-occurrence suggests that the people who established these early settlements did not adopt a single approach to obtaining or accessing water. Petrie and colleagues (2017) have suggested that this distribution likely set the stage for the Indus Civilization's later emergence, positioning settlements to take advantage of a wide variety of water sources.
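As a concrete illustration of the 1 km proximity calculation reported in the Results, the following sketch counts, for each site, the other sites within 1 km using a k-d tree. The coordinates are simulated in a projected (metre-based) system, since a 1 km buffer is most simply expressed in projected coordinates, so the counts will not match the published figures.

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
# 695 invented site coordinates in metres over a 40 km x 40 km extent.
xy = rng.uniform(0, 40_000, size=(695, 2))

tree = cKDTree(xy)
# For each site, collect all sites within a 1 km radius.
neighbours = tree.query_ball_point(xy, r=1_000)
counts = np.array([len(n) - 1 for n in neighbours])  # exclude the site itself

print("mean neighbours within 1 km:", counts.mean())
print("sites with >= 5 neighbours:", int((counts >= 5).sum()))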
The Mature Harappan period saw an overall reduction in the absolute number of site locations (FIGURE 9). There is no consensus as to whether the emergence of Indus cities required dramatic changes in water use. Chakrabarti (1988, 1999) has long argued that canal-based irrigation may have been important, and there is evidence for major water storage facilities at sites like Dholavira (Bisht 2005; Wright 2010). Others have proposed that Indus settlements had a wide variety of low-cost irrigation techniques at their disposal (Miller 2006, 2015; Wright 2010: 33–34; Petrie 2017), but our understanding of water supply in Indus-period northwestern India remains nascent. That there are fewer site locations in the Mature Harappan period than in the Early Harappan period indicates a general concentration of settlement in specific areas (FIGURE 8B). The pattern appears to have been variable, however, and the reduction of settlement in the northwestern corner of the sample area (Singh et al. 2011: 101) was more pronounced than the reduction in the number of Mature Harappan sites near Rakhigarhi (Singh et al. 2010: 46). Given the apparent diversity in cropping practices that is evident in northwestern India's Mature Harappan period (Petrie 2017; Petrie et al. 2016; Bates et al. 2017a, 2017b), and the problematic linkage between site locations and watercourses that has often been assumed (see reviews by Petrie and colleagues [2017] and Singh and colleagues [2010: 44, 2011: 102]), it is essential to further investigate the socioeconomic and environmental dynamics that contributed to this concentration of settlement during the height of the Indus Civilization.

The Late Harappan period marked a return to the widespread distribution of site locations observed during the Early Harappan period (FIGURE 8A, C). This reassessment has confirmed that around Rakhigarhi, Late Harappan settlement site locations are more numerous than, but generally proximal to, their Mature Harappan predecessors, which is a pattern previously identified by Singh and colleagues (2010: 42). The results presented here, however, confirm that site locations in the northwestern corner of the sample area are dramatically reduced overall in the Late Harappan period (Singh et al. 2011). The northeastern quadrant of the sample area appears to have been densely occupied in the Early Harappan period and re-occupied later. There thus appears to have been a shift in settlement locus from the northwest to the northeast of the sample area during the closing years of the Mature Harappan period (FIGURE 8B), and potentially also movement of populations into the northeast from outside of the study area. It has been argued that this particular area of the plain may have had more reliable monsoon rainfall (Petrie 2017). A shift toward this part of the plain may have been a key strategy for building resilience in the changing climatic conditions that characterize the end of the Mature Harappan period. However, it remains unclear to what extent this Late Harappan shift toward the northeastern quadrant of the study area may be an artifact of early methods and assumptions. Determining the veracity of the Late Harappan shift is critical, considering that in the subsequent periods (FIGURE 8C–D) no site locations have yet been reported in the northeastern quadrant of the sample area. This, again, may reflect survey methods, the chronological breadth of surveys, and/or the research interests of surveyors, rather than an actual absence of sites.
There are, however, numerous reports of Painted Gray Ware sites in the northwestern quadrant, and a further increase in settlement there in the Early Historic period (Singh et al. 2011). It is notable that many of these later sites contribute to the growing concentration of sites stretching from immediately east of Ratia to just north of Fatehabad, which is shown to striking effect in Figure 6A. The distribution of Painted Gray Ware sites also breaks with the concentration of Late Harappan sites near Rakhigarhi (Singh et al. 2010: 46).

Prior to 2009, a total of 455 sites had been reported within the sample area. This number has increased substantially since then, even though the additional survey coverage extends across less than half of the sampled area. If similar quantities of new site locations are reported throughout the entire sample extent, the number of total site locations could well increase another twofold. Future data integration work will address these issues, as will iterative phases of fieldwork to ground-truth and update site location data. Moreover, the category of "site" needs to be expanded to specify different kinds of archaeological phenomena in northwestern India, and it is essential to conduct complementary intensive surveys at individual sites, systematically assessing surface materials to identify and delineate the specific spatial distribution of different classes of artifacts and features, an approach which has yielded considerable insights into social relations between the Indus city of Harappa and its surrounding settlements in Pakistan's Punjab (Wright et al. 2001, 2003). Adopting these techniques could contribute new regional perspectives on patterns in material culture that are unbound by the site concept (Kantner 2008; Howey and Burg 2017).

The ptr_id table has provided a means of tentatively assessing certainty in site location datasets from northwestern India. At this stage, the pilot database speaks primarily to the archaeological significance and geographical precision of site location reports, though continued database development will allow the assessment of variables such as site boundary certainty and, thus, site size. There remain many unpublished and, at present, inaccessible site location datasets that must be digitized and added to the database. As this database grows and the findings presented here are confirmed (or refuted) through further fieldwork, it will be possible to identify further gradations of certainty in site location data, and to test hypotheses at larger scales.

This study reveals the necessity of examining the silos in which archaeological survey data are generated and analyzed. Projecting site locations merely as dots on a map can lure researchers into thinking they understand previous settlement patterns better than they do, while site locations that remain more or less unmoved after multiple on-the-ground surveys are of particular value. The Indus Civilization in northwestern India is distinctive in this regard, as it takes many different survey datasets to understand the Indus Civilization's settlement distribution, incorporating some areas that have been surveyed again and again. This very fact means that certain trends in settlement are surer than others.
Further investigation of the Indus Civilization's signature landscapes also has the potential to enhance alternative models of social complexity, revealing how heterarchical social relations may have materialized and supported social relations across vast and varied environments.

Conclusions

Archaeological survey data are essential for understanding the dynamics of social complexity. Identifying the signature landscapes that materialized the prevailing social processes underpinning these dynamics requires large-scale analyses that exceed the boundaries of most individual field survey projects. By integrating site location data from multiple projects, this paper offers new support for the interpretation that northwestern India comprised one or more of the Indus Civilization's signature landscapes, where settlement densities chart trajectories of urbanization and de-urbanization, involving agglomeration and dispersal into areas with suitably favorable environmental conditions. Site location concentrations appear to generally correspond to previous survey coverage, and there has been an overall underestimation of northwestern India's settlement density across both time and space. There remain many areas where systematic surveys are needed, such as the broad area between the LWS surveys, and many areas would benefit from re-visitation and re-evaluation, such as the site locations reported in the northeastern quadrant of the study area. An extensively occupied landscape appears to have emerged during the Early Harappan period and was largely re-occupied during the Late Harappan period, as there appears to have been a displacement of settlement into specific parts of the plain. It remains necessary to test the veracity of this re-occupation by reassessing sites located in the northeastern corner of the surveyed area and closing gaps in survey coverage. Engaging in such reassessment will contribute to research on the signature landscapes that inform scholarly understanding of urbanization and de-urbanization and the impact of variable and changing environments on settlement distributions in the past.

Acknowledgments

This study has been carried out as part of the TwoRains project, a multidisciplinary study of climate change and the Indus Civilization in northwestern India. The TwoRains project is funded by the European Research Council. It is based at the McDonald Institute for Archaeological Research at the University of Cambridge, and is being carried out in collaboration with Prof. Ravindra Nath Singh and the Department of AIHC and Archaeology, Banaras Hindu University. The authors would like to thank their collaborators, especially Prof. Singh and Dr Vikas Pawar, but also the members of the Land, Water and Settlement project that were involved in the initial surveys and data processing, including Carla Lancelotti, Sayantani Neogi, Arun Kumar Pandey, Danika Parikh, and David Redhouse. They would also like to acknowledge Hector Orengo, who provided invaluable feedback throughout the development of the manuscript.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Funding

This work was supported by the H2020 European Research Council (BE) [2020,648609].

Notes on Contributors

Adam S. Green (Ph.D. 2015, New York University) is an anthropological archaeologist who is interested in the comparative study of complex societies through the lenses of technology, landscapes, and political economy.
He specializes in the archaeology of South Asia and of the Indus Civilization. As a member of the TwoRains project, he is combining systematic archaeological fieldwork with emerging digital and computational tools to refine, enhance, and expand settlement distribution data from northwestern India. Cameron A. Petrie (Ph.D. 2002, University of Sydney) is the Principal Investigator of the TwoRains project, a multidisciplinary investigation of climate change and the Indus Civilization in northwestern India. He has conducted research on the archaeology of India, Pakistan, and Iran, focusing on the investigation of complex societies and the relationships between humans and their environments. In collaboration with Prof. R. N. Singh at Banaras Hindu University, he is leading the field component of the TwoRains project, which builds on the results of the previous collaborative Land, Water and Settlement project.
Correlating predicted epigenetic marks with expression data to find interactions between SNPs and genes

Despite all the work done, mapping GWAS SNPs in non-coding regions to their target genes remains a challenge. SNPs can be associated with target genes by eQTL analysis. Here we introduce a method to make these eQTLs more robust. Instead of correlating gene expression with the SNP genotype, as in eQTL analysis, we correlate it with epigenomic data. Because measured epigenomic data are very expensive and noisy, we predict the epigenomic data from the DNA sequence using the deep learning framework DeepSEA (Zhou and Troyanskaya, 2015).

INTRODUCTION

Genome-wide association studies (GWAS) are a powerful tool to study common, complex diseases. They help find genetic variants associated with a disease. Most efforts to analyze GWAS data are confined to single-nucleotide polymorphisms (SNPs) occurring in coding regions of the genome. However, most SNPs found in GWAS are in non-coding regions (Deplancke et al., 2016; Gandal et al., 2016; Ward and Kellis, 2012). This has led to the hypothesis that variant SNPs in non-coding regions cause a change in gene expression rather than in protein function (Tak and Farnham, 2015). The first step in identifying the phenotypic consequences of these variants is to identify their respective gene targets. Identification of cis-regulatory elements and their gene targets has been found to be a difficult task given the size of the non-coding genome (ENCODE Project Consortium, 2012). The effects of these regulatory elements can be exerted over considerable distances (Chris Cotsapas, 2018), making proximity-based assignments incorrect. Several approaches have emerged as solutions to the problem: identifying genes with an eQTL driven by a disease risk variant in a locus, and identifying genes affected by regulatory elements driving disease risk (Chris Cotsapas, 2018). These approaches have limitations. eQTLs are common and therefore do not show causality; they are also very noisy, and nearby eQTLs are not independent because of linkage disequilibrium (LD). Epigenomic marks influence gene expression, so another approach is to correlate the epigenomic data with the gene expression in order to discover important regulatory regions for each gene. Unfortunately, this epigenomic data is very expensive and noisy. To tackle these two issues, we predict the epigenomic data from the DNA sequence using DeepSEA (Zhou and Troyanskaya, 2015). DeepSEA is a deep learning-based algorithmic framework for predicting the chromatin effects of sequence alterations with single-nucleotide sensitivity. DeepSEA can accurately predict the epigenetic state of a sequence, including transcription factor binding, DNase I sensitivities and histone marks in multiple cell types. The predicted epigenetic state's only cost is the computation cost; it is more relevant to our use than the measured data because it is more robust and not sensitive to perturbations in individual measurements. The model is made available through Kipoi's API (Avsec et al., 2018). Our methodology aims to correlate the predicted epigenetic information with expression measurements. Using this, we create a gene network of interactions for transcription factors (TF).
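To make the model access concrete, a minimal sketch of querying DeepSEA through Kipoi is shown below. This is an illustration rather than the authors' code: the model name follows the public Kipoi registry, the dataloader arguments are assumed to be the standard bed+fasta pair, and the file names are placeholders.

```python
# Minimal sketch (not the authors' code): querying DeepSEA through Kipoi.
# Assumes the public Kipoi model "DeepSEA/predict" and its standard
# bed+fasta dataloader arguments; file names are placeholders.
import kipoi

# Load the DeepSEA model once; rebuilding it per call is the slow path
# described for the CLI in the Methods section.
model = kipoi.get_model("DeepSEA/predict")

# Predict chromatin features (919 marks per 1 kb window) for a set of
# genomic intervals. "intervals.bed" would hold the 1 kb windows and
# "genome.fa" the per-individual consensus sequence.
preds = model.pipeline.predict(
    dl_kwargs={"intervals_file": "intervals.bed", "fasta_file": "genome.fa"},
    batch_size=32,
)
print(preds.shape)  # (n_windows, 919) chromatin-feature probabilities
```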
METHODS

All the code for our method can be found on GitHub (https://github.com/adespi/link_epi_to_expr). We use the sequence and expression data from the 1000 Genomes Project (The 1000 Genomes Project Consortium, 2015; GTEx Consortium, 2017). We have both the expression and the sequence for only 445 individuals because we use the expression from the Geuvadis project and take only the European American individuals to reduce confounding factors. The expression can be analyzed by PCA and its first 5 components removed in order to reduce confounding factors (as the biggest components usually contain mostly batch effects); the log of the gene expressions can be taken to match the biological meaning of the correlation; and the gene expressions can be standardized to give them equal weight in the correlation. To fulfill all these goals, we take a version of the expression pre-processed by the Geuvadis team called "GD462.GeneQuantRPKM.50FN.samplename.resk10.txt.gz". Due to the high number of genes in humans, we select only a few TFs (transcription factors) to test. We start from the list in the interaction network between TFs and genes found by Marbach et al. (2016) in their work on tissue-specific regulatory circuits. By using this list, we reduce the amount of data to process and keep only TFs that have a higher probability of being interesting. We have the Geuvadis expression data for only 392 TFs out of the 633 that we found in the paper by Marbach et al. (2016). For each TF, we extract a 500 kb fasta sequence around the gene start from the vcf files, taking into account only the SNPs and not indel, mnp, ref, bnd, or any other alteration from the reference sequence. We output only 1 sequence per individual even though they all have two alleles; we take the sequence of the alternate allele in case of a heterozygous genotype.

DeepSEA predictions

We give this sequence to the DeepSEA model, which runs through the 500 kb around the gene start with a window of 1 kb and a step of 100 bp, making 5000 predictions per gene (see Figure 1).

Figure 1. Window organization around the gene start.

To compute all these predictions, we started with a CPU computer and the command line interface (CLI) version of Kipoi. The CLI only allows 9 predictions at a time and needs to rebuild the model each time. Building the model took more time than the actual prediction, so we moved to the python version of Kipoi, which builds the model once before using it as many times as needed. This enables a significant jump in speed from 2 iterations/sec to 12 iterations/sec (all times are in CPU time). A further speed improvement is gained by using GPUs for the predictions, enabling 47 iterations/sec. However, we notice that this configuration with the GPU spends most of its time on data loading and pre-processing because these actions are performed by the CPU. Our current configuration gives predictions for each individual. However, many individuals have the same DNA sequence because there are usually few SNPs in a 1000 bp window, so the prediction could be calculated only once for different individuals with the same sequence. We tried to implement this unique prediction, but the pre- and post-processing computation time was higher than the time gained on the prediction. We do, however, skip positions where there is no SNP at all, because that condition is easy to verify.
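The windowing scheme just described can be sketched as follows; this is an assumption about the exact bookkeeping, not the authors' code, and with these parameters the scan yields 4991 windows, close to the "5000 predictions" quoted in the text.

```python
# Minimal sketch of the windowing scheme: a 500 kb region centered on the
# gene start is scanned with 1 kb windows at a 100 bp step.
REGION = 500_000   # total span around the gene start (bp)
WINDOW = 1_000     # DeepSEA input length (bp)
STEP = 100         # step between consecutive windows (bp)

def windows(gene_start: int):
    """Yield (start, end) coordinates of each 1 kb prediction window."""
    region_start = gene_start - REGION // 2
    n = (REGION - WINDOW) // STEP + 1  # 4991 windows, ~5000 as in the text
    for i in range(n):
        s = region_start + i * STEP
        yield s, s + WINDOW

print(sum(1 for _ in windows(1_000_000)))  # 4991
```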
We perform all our time tests on small data sets; taking a larger data set gives better results (we suppose because of processor and python optimizations for large vectors). Our final model predicts the 500 kb around the gene in 11 h of CPU time; on a 16-core machine this corresponds to 40 minutes.

Correlations

From the predictions we compute correlations. For this we define $p_{i,j,n}$ as the DeepSEA prediction in window $i$ for epigenetic mark $j$ in individual $n$, and $e_{k,n}$ as the expression of gene $k$ in individual $n$. Finally, we calculate $cm_{i,j,k}$ for each $i$, $j$ and $k$ as in Equation 1:

$$cm_{i,j,k} = \operatorname{corr}_n\left(p_{i,j,n},\, e_{k,n}\right) \quad (1)$$

For each position, we find the correlation between its predicted epigenetic marks and the TF's expression across the individuals (see Equation 1). We tried two languages to calculate the correlation and found that python takes 10 minutes and R 23 minutes to compute all the correlations. We also found that outliers in the data sometimes give a high false-positive correlation. To ensure robustness, the correlations are performed after a quantile normalization using scikit-learn (Pedregosa et al., 2011). As stated in its documentation, this method transforms the features to follow a uniform or a normal distribution. Therefore, for a given feature, this transformation tends to spread out the most frequent values. It also reduces the impact of (marginal) outliers. It may distort linear correlations between variables measured at the same scale but renders variables measured at different scales more directly comparable. In terms of computation time, this increases the correlation CPU time from 10 minutes to 6 hours per 500 kb interval (22 minutes on our 16-core machine). At this point, we try to find which 1000 bp interval shows the strongest correlation between the delta in expression across the individuals and the delta in DeepSEA prediction. DeepSEA bases its prediction on its training data. It may rely on motif recognition where motifs are relevant, but it can also capture more abstract information, unknown to us. DeepSEA's predictions are distributed over 148 different cell types. We keep all cell type predictions, and not only cell types matching the whole blood expression, for several reasons. Firstly, DeepSEA may have missed something in its blood prediction and could have found more motifs or patterns in other cell lines, but its results may still be relevant for blood. Secondly, a change in whole blood expression can originate from a mutation which has an effect in another tissue. Thirdly, if the expression is changed in blood, it may change in other cell types as well. As the output number of correlations is very high (5000 positions × 919 marks × number of genes), we use fdrtool (Strimmer, 2008), an R package, to compute q-values from the correlations using an empirical model to estimate the false discovery rate (FDR). The tail area-based FDR is simply a P-value corrected for multiplicity, whereas the local FDR is a corresponding probability value. We used the tail area-based FDR in our analysis. The local FDR is more robust, but we didn't have time to rerun the whole process with this FDR. We get q-values for each gene, at each position and for each predicted mark. We keep only the smallest q-value per position and only positions with a q-value < 0.05. The correlation for which q = 0.05 is 0.20 on average (std of 0.06).

Second round

After the first round, we have a list of potentially interesting gene positions. For these positions, we compute the correlations between the predictions from DeepSEA and the gene expressions for all the TFs in our list (and not only for the corresponding gene as in the first round).
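A minimal sketch of this correlation step is given below. It is an illustration rather than the authors' code: the quantile normalization uses scikit-learn's QuantileTransformer as described, but the q-value step substitutes the Benjamini-Hochberg procedure from statsmodels for the R package fdrtool used in the paper, and the array sizes are demo placeholders.

```python
# Sketch of the per-position correlation between predicted marks and
# expression, after quantile normalization. BH FDR stands in for fdrtool.
import numpy as np
from scipy import stats
from sklearn.preprocessing import QuantileTransformer
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_ind, n_pos, n_marks = 445, 50, 919          # demo sizes; real run: 5000 positions
preds = rng.random((n_ind, n_pos * n_marks))  # flattened (position, mark) features
expr = rng.random(n_ind)                      # one TF's expression

# Quantile-normalize every feature and the expression, as in the paper.
qt = QuantileTransformer(n_quantiles=100, output_distribution="uniform")
preds_n = qt.fit_transform(preds)
expr_n = QuantileTransformer(n_quantiles=100).fit_transform(expr[:, None]).ravel()

# Pearson correlation of each feature with expression across individuals.
x = (preds_n - preds_n.mean(0)) / (preds_n.std(0) + 1e-12)
y = (expr_n - expr_n.mean()) / expr_n.std()
r = (x * y[:, None]).mean(0)

# Two-sided p-values from r, then FDR control at 0.05.
t = r * np.sqrt((n_ind - 2) / (1 - r**2 + 1e-12))
p = 2 * stats.t.sf(np.abs(t), df=n_ind - 2)
keep = multipletests(p, alpha=0.05, method="fdr_bh")[0].reshape(n_pos, n_marks)
```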
We get a matrix of size (position × TF × DeepSEA prediction). From these big correlation matrices, we build an interaction network. We consider that two genes are linked if at least one correlation between one gene's DeepSEA prediction and the other gene's expression is above the threshold. The threshold for a significant correlation is set at the correlation value 0.2 (the correlation for which BIN1 gets a q-value < 0.05 in the first round). We take the interaction network and run a Louvain community detection algorithm (Aynaud, 2011) on it. We group different TFs together (see Results section).

Correlation for position

We look at the best correlations per position to see which regions around the gene are more important in its regulation. For some genes, we can see some regions standing out in Figure 2a. For BIN1, a region of approximately 50,000 bp is clearly more correlated with the gene expression. This region has more impact on the gene's expression and contains more elements to regulate it. On the other hand, for other genes like ESRRA, there is no specific region standing out as more important in the gene regulation (see Figure 2b). In Figure 2c,d we have an example of the best correlation for BIN1. Our method is able to determine the regions near the gene start which are more likely to be important in the regulation of the gene. We think that these have a high probability of containing an enhancer linked with the promoter of the gene.

Epigenetic marks

We then analyze the significant predictions to see which epigenetic marks are involved and in which cell lines they appear more often. Figure 3a shows how the mean correlation depends on the max correlation at each best position per gene. We can see that usually, when one correlation is high for one mark at a given position, all the correlations at this position tend to be high as well. The two variables are highly correlated (Spearman correlation of 0.88), and this shows that there is not one prediction from DeepSEA that is much more significant than the others; rather, they more or less all vary in the same way. We did not expect this outcome because we wanted only one (or a few) prediction(s) to be positive: the prediction(s) which would tell us which transcription factor's binding site or epigenetic mark is modified at that precise position. However, the result that we obtain can be explained. Indeed, the DeepSEA predictions come from the DNA sequence. If there is only 1 SNP in the 1000 bp window, all the predictions will have only two possible values, and the correlation will be exactly the same for all DeepSEA predictions, independently of the range of the epigenetic prediction (because of the normalization of the data necessary for the correlation). A smaller but similar effect can be observed if there are only a few SNPs in the 1000 bp window.

Cell lines

The Geuvadis gene expression is from whole blood RNA sequencing. We therefore expected the correlations to be mostly significant with prediction markers from DeepSEA that are on whole blood cell lines. 18.1% of the DeepSEA prediction markers come from whole blood cell lines; however, over all our genes, 17.4% of the significant correlations come from whole blood cell lines. Similarly, BIN1 is a TF probably involved in AD (Alzheimer's disease). 4.3% of the DeepSEA prediction markers come from brain cell lines; however, 5.6% of the significant correlations come from brain cell lines.
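The network construction and clustering step can be sketched as below, using networkx together with the python-louvain package (the Aynaud implementation cited in the text). This is not the authors' code, and the gene names and correlation values in the example are illustrative only.

```python
# Sketch of building the TF interaction network and clustering it with
# Louvain community detection (networkx + python-louvain).
import networkx as nx
import community as community_louvain  # package: python-louvain

THRESHOLD = 0.2  # correlation threshold from the first round

def build_network(best_corr: dict) -> nx.Graph:
    """best_corr maps (tf_a, tf_b) -> max correlation between one TF's
    DeepSEA predictions and the other TF's expression."""
    g = nx.Graph()
    for (a, b), r in best_corr.items():
        if abs(r) >= THRESHOLD:
            g.add_edge(a, b, weight=abs(r))
    return g

g = build_network({("BIN1", "ESRRA"): 0.25, ("BIN1", "SPI1"): 0.31,
                   ("ESRRA", "SPI1"): 0.05})
g.remove_nodes_from(list(nx.isolates(g)))        # drop TFs with no interaction
partition = community_louvain.best_partition(g)  # node -> cluster id
print(partition)
```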
Prediction magnitude

By looking at Figure 3b, we can observe that most of the significant correlations (usually found above the 0.18 threshold) have a small range of magnitude in the DeepSEA predictions. We interpret that as being a simple eQTL effect, with no added value from DeepSEA. However, we can identify that some significant correlations come from a high magnitude in the DeepSEA predictions. For these data points, we could look at which TF binding sites or epigenetic marks have a high log magnitude. This could give us some insight into what is happening inside the cells.

TF interaction network and gene ontology enrichment

From the method, we obtain the TF interaction network in Figure 4 (left). We remove the TFs with no interaction and run a Louvain community detection algorithm (Aynaud, 2011). Even though the number of genes in each cluster is small (9-12), we perform a gene ontology (GO) enrichment analysis using Panther (Mi et al., 2013) on each of the clusters. We look at which biological processes are affected by the genes in each cluster. We see that two clusters contain specific GO enrichment. Cluster 1 mainly contains glucose metabolism and cell replication biological processes. Cluster 4 is composed of embryology and development biological processes. Clusters 2, 3 and 5 mostly contain generic biological processes common to every cell.

DISCUSSION

Our results tend to show that our method is sensitive to any eQTL effect and captures these in the analysis. There does not seem to be any epigenetic-dependent effect on the correlation score. Our results therefore do not show any direct improvement of our method over eQTL analysis. However, we can focus our attention on genes where there are epigenetic effects involved, even if they do not influence the correlation. There are two ways to do that: 1. Calculate the correlations solely for genes and positions where there is a high difference in DeepSEA prediction between individuals. 2. Calculate all the correlations and focus our interest only on those that have a high difference in DeepSEA prediction between individuals. Performing this analysis could give us some insight into what is happening inside the cells. This could help us understand which TF binding sites or epigenetic marks are important in the expression of which genes.
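The second filtering strategy proposed above can be sketched as follows; this is an assumption about how such a filter might look, not the authors' implementation, and the cutoff value is a placeholder.

```python
# Sketch of the proposed filter: keep only significant correlations whose
# DeepSEA prediction varies strongly between individuals.
import numpy as np

def epigenetic_candidates(preds, corr, sig_mask, min_range=0.1):
    """preds: (n_individuals, n_positions, n_marks) DeepSEA predictions;
    corr, sig_mask: (n_positions, n_marks) correlations and significance.
    min_range is an illustrative cutoff on between-individual spread."""
    spread = preds.max(axis=0) - preds.min(axis=0)  # (n_positions, n_marks)
    keep = sig_mask & (spread >= min_range)
    return np.argwhere(keep), corr[keep]

rng = np.random.default_rng(1)
preds = rng.random((445, 50, 919))
corr = rng.uniform(-0.3, 0.3, (50, 919))
idx, r = epigenetic_candidates(preds, corr, np.abs(corr) > 0.2)
```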
In vitro capacity and in vivo antioxidant potency of sedimental extract of Tinospora cordifolia in streptozotocin induced type 2 diabetes.

OBJECTIVE: The role of herbs against free radicals in combating many diseases has been put forth recently. The aim of this study was to elucidate the in vitro capacity and in vivo antioxidant properties of the sedimental extract of Tinospora cordifolia (SETc). MATERIALS AND METHODS: SETc was subjected to in vitro chemical analyses, namely 1,1-diphenyl-2-picrylhydrazyl (DPPH), nitric oxide, hydrogen peroxide, and superoxide anion radical scavenging, respectively, and finally the drug's reductive ability, in order to elucidate the antioxidant capacity of the test drug before introducing it into the biological membrane. The resulting capacity was evaluated in vivo by analyzing enzymic (SOD, CAT) and non-enzymic (vitamins C and E) antioxidant levels in homogenized samples of major organs isolated from streptozotocin-induced type 2 diabetic rats after the 30th day of SETc (1000 mg/kg/p.o.) treatment. Finally, histopathological evaluation was done using cut portions of the respective organs prone to free radical-mediated cell destruction by STZ in order to study their microanatomical changes. RESULTS: In vitro chemical analysis of the IC50 values of SETc provided key evidence for a total antioxidant capacity of around 2046 times at the fixed oral dose of 1000 mg/kg used for the in vivo analysis. The lipid peroxide levels and the in vivo enzymic and non-enzymic antioxidant levels showed a highly significant difference (p<0.001) and a moderate difference (p<0.01) from diabetic non-treated animals, which supported the in vitro parameters studied and proved that SETc (1000 mg/kg/p.o.) is a potent drug for elevating antioxidant levels and further healing damaged organs, as compared with the diabetic and standard drug-treated groups. CONCLUSIONS: It was concluded that the antioxidant potential of SETc was about 2046 times as an effective scavenger of free radicals in vitro, and that SETc is a potent healer in ameliorating many signs of tissue damage in vivo in long-term complicated diseases such as diabetes.

Introduction

Oxygen free radicals are natural physiological products, but are also reactive species. Free radicals generated in vivo damage everything found in living cells, including proteins, carbohydrates, DNA, and other molecules in addition to lipids, i.e., oxidizable substrates. Hyperglycemia alone does not cause complications; they result from chronic glucose toxicity (Hideaki et al., 1999), which is mediated and complicated through oxidative stress (Hoeldtke et al., 2005). Hence, antioxidants are any substances that, when present at low concentrations compared to those of an oxidizable substrate, delay or prevent oxidation of that substrate. Lipid peroxides (LPO) and deficiency of enzymes like superoxide dismutase (SOD) and catalase (CAT) and non-enzymes such as vitamins C and E are important factors in the development of diabetic complications in vivo. Many other substances have been proposed to act as antioxidants in vivo. They include β-carotene, other carotenoids, xanthophylls, metallothionein, taurine and its precursors, creatinine, polyamines, retinol, flavonoids, and other phenolic compounds of plant origin (Dhanukar, Kulkarni, Rege, 2000).
Synthetic drug molecules with established modes of action rarely exhibit antioxidant activity, and their high incidence of toxicity in long-term treatment is well documented. Herbal molecules have the distinct advantage of built-in antioxidant activity, which has an important therapeutic impact in preventing the late complications of diabetes, such as nephropathy and cardiovascular diseases (Chopra and Singh, 1994). Indian medicinal plants of different species have been reported to have antidiabetic and antioxidant properties (Dhanukar, Kulkarni, Rege, 2000). These medicinal plants were used in the ancient Indian systems of medicine, such as Ayurveda and Siddha, for the treatment of madhu meha (diabetes mellitus) from time immemorial. In the present study, one such reputed medicinal plant, Tinospora cordifolia (Family: Menispermaceae; Guduchi in Hindi), was selected, for which previous records report most of the beneficial medicinal properties (Singh et al., 2003), including use as a therapeutic supplement for pregnant diabetic rats (Shivananjappa and Muralidhara, 2012), major work done with its alkaloidal fraction (Patel and Mishra, 2011) and the aqueous portion of the stem (Sreenivasa Reddy et al., 2009), and work with various extracts such as hexane, ethyl acetate, and methanol (Rajalakshmi et al., 2009). In the current study, the main aim is to elucidate the antioxidant potential of the stem stalk of T. cordifolia via its antioxidant capacity and potency, and thereby its hypoglycemic activity (Sangeetha et al., 2011), reported to be better than standard glibenclamide. Therefore, we made a cut sediment portion of the aqueous-soaked stem extract of T. cordifolia, named Guduchi satwa (sediment salt), composed of polysaccharide consisting chiefly of (1→4)-linked glucan with occasional branch points (Rao and Rao, 1981), which was used by ancient Indians for treating diabetes (folklore medicine), and investigated its antioxidant activities scientifically in vitro (chemical reactions) and in vivo (experimental design) in diabetic rats.

Plant collection

Preparation of plant extract

Around 2 kg of cut stems were grilled and ground to a coarse powder, soaked in 1000 ml of distilled water, and kept macerated for 24 hr. The next day, the top layer was decanted into a separate vessel (leaving the debris to filter off) and evaporated in a hot water bath at 100 °C, reduced to 70 °C once a thick concentrate formed, to avoid loss of thermolabile constituents. This portion is considered the water-soluble portion, which was admixed with the sedimented portion in a 1:3 ratio after washing the latter 2-3 times with fresh distilled water to remove cell debris. The final sedimental extract of Tinospora cordifolia (SETc) was prepared as the test drug for further screening.

Chemicals and reagents

All chemicals and reagents were purchased from the local markets, Chennai; Sigma-Aldrich Laboratories, Mumbai; SRL Laboratories, Delhi; and S. D. Fine Chemicals, Mumbai.

Instruments

An Ascensia One Touch glucometer and strips (Code no: 3110) were used for glucose estimation. All analytical instruments and surgical equipment were from well-known manufacturers.

Animals

Male Sprague Dawley rats (200-250 g) were purchased from King's Institute, Guindy, Chennai, and experiments (CLBMCP/131/IAEC/41) were conducted under CPCSEA guidelines. All rats were randomly selected, segregated and acclimatized for a period of one week with a 12 hr light and 12 hr dark cycle, and food and water ad libitum.
Phytochemical analysis

A portion of the test drug was subjected to preliminary phytochemical analysis using standard procedures (Harbone, 1974).

Nitric oxide scavenging activity (Sreejayan and Rao, 1997)

Nitric oxide radicals were generated from sodium nitroprusside solution at physiological pH. One ml of sodium nitroprusside (10 mM) was mixed with 1 ml of the test extracts/ascorbic acid (3 µg) in phosphate buffer (pH 7.4). The test extracts were prepared in different concentrations (3, 30, 60, 90, and 120 µg). The mixture was incubated at 25 °C for 150 min. To 1.0 ml of the incubated solution, 1 ml of Griess reagent (1% sulphanilamide, 2% o-phosphoric acid, and 0.1% naphthyl ethylene diamine dihydrochloride) was added. Absorbance was read at 546 nm and percentage inhibition was calculated. The percentage inhibition in all the assays was calculated by comparing the results of the test with those of the control using the formula: percentage inhibition = [(absorbance of control − absorbance of test) / absorbance of control] × 100.

Reductive ability (Jay Prakash et al., 2000)

The reducing power of the test extracts was determined based on the ability of antioxidants to form a coloured complex with potassium ferricyanide, TCA and FeCl3. One ml of the test extracts (100-800 µg)/ascorbic acid (20 µg) in ethanol was mixed with 2.5 ml potassium ferricyanide (1%) and 2.5 ml of phosphate buffer (pH 6.6). The mixture was incubated at 50 °C for 20 min. 2.5 ml TCA (10%) was added to it and the mixture was centrifuged at 3000 rpm for 10 min. Two and a half ml of the supernatant was mixed with 2.5 ml water and 0.5 ml FeCl3 (0.1%). Absorbance was measured at 700 nm.

Induction of experimental diabetes

Sprague Dawley rats (200-250 g) were fasted for 16 hours before the induction of diabetes with streptozotocin (STZ). Animals were injected intraperitoneally with a freshly prepared solution of STZ (45 mg/ml in 0.01 M citrate buffer, pH 4.5). The diabetic state was assessed in STZ-treated rats by measuring the non-fasting serum glucose concentration 48 hours post STZ injection. Only rats with serum glucose levels greater than 200 mg/dl were selected and used in this experiment (Soon and Tan, 2002).

Treatment period

A period of 30 days of treatment with SETc at 1000 mg/kg/p.o., the dose fixed from the incremental dose-finding procedure studied earlier (Kannadhasan and Venkataraman, 2011), was carried out. On the 31st day, animals were sacrificed under anesthesia after 4 hours of fasting, immediately followed by abdominal incision and removal of organs for the in vivo antioxidant study; portions of those organs were subjected to histopathological examination.

Lipid peroxidation

Tissue lipid peroxidation

The TBARS levels, measured as an index of malondialdehyde (MDA) production, were determined (Uchiyma and Mihara, 1978). MDA, an end product of lipid peroxidation, reacts with thiobarbituric acid to form a red coloured complex. The measurement of MDA levels by thiobarbituric acid reactivity is the most widely used method for assessing lipid peroxidation. Briefly, 1 g of the liver and kidney samples was homogenized in 4 ml of 1.15% ice-cold KCl using a homogenizer to form a 25% (w/v) homogenate. To 0.1 ml of 25% homogenate, 0.2 ml of 8.1% sodium dodecyl sulphate (SDS), 1.5 ml of 1% phosphoric acid, 0.2 ml of distilled water, and 1.0 ml of 0.6% 2-thiobarbituric acid (TBA) were added. The mixture was heated in a boiling water bath for 45 minutes.
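The scavenging calculation above, and the IC50 read-off from the linearity curves reported in the Results, can be written compactly as below. The linear-fit form of the IC50 estimate is an assumption about how the "linearity curve" is used, not a formula quoted from the paper:

$$\%\,\text{inhibition} = \frac{A_{\text{control}} - A_{\text{test}}}{A_{\text{control}}} \times 100$$

$$I(c) \approx a + b\,c \;\Rightarrow\; \mathrm{IC}_{50} = \frac{50 - a}{b}$$

where $A$ is absorbance and $I(c)$ is the percentage inhibition at extract concentration $c$, with $a$ and $b$ the intercept and slope of the fitted line over the linear range of the assay.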
Subsequently, the heated mixture was cooled in an ice bath, followed by the addition of 4.0 ml of n-butanol to extract the cold thiobarbituric acid reactants. The optical density of the n-butanol layer was determined at 353 nm after centrifugation at 2500 rpm for five minutes and expressed as nmol MDA/25 mg wet weight.

Plasma lipid peroxidation (Yagi, 1976)

To 0.2 ml of plasma, 4.0 ml of 3 N sulphuric acid was added and mixed well, and 0.5 ml of 10% phosphotungstic acid was added. The contents were centrifuged and the supernatant was discarded. The sediment was mixed with 2.0 ml of N/12 H2SO4 and 0.3 ml of phosphotungstic acid. The mixture was centrifuged and the sediment was dissolved in 4.0 ml of distilled water; to this, 1.0 ml TBA reagent was added and the contents were treated in a boiling water bath for 60 min. After cooling, 5.0 ml of n-butanol was added and the contents were shaken vigorously. The mixture was then centrifuged for 20 min and the supernatant was read at 535 nm. Standards were processed in a similar manner. Plasma lipid peroxide values are expressed as mg/dl.

In vivo antioxidant activity of SETc in normal and diabetic rats

Estimation of enzymic antioxidants

Superoxide dismutase (Marklund and Marklund, 1974)

To 1.0 ml of the sample, 0.25 ml of absolute alcohol and 0.15 ml of chloroform were added. After 15 minutes of shaking in a mechanical shaker, the suspension was centrifuged and the supernatant obtained constituted the extract. The reaction mixture for auto-oxidation consisted of 2 ml of buffer (Tris-HCl, pH 8.2), 0.5 ml of 2 mM pyrogallol, and 1.5 ml of water. Initially, the rate of auto-oxidation of pyrogallol was noted at intervals of 1 min for 3 min. The assay mixture for the enzyme contained 2 ml of 0.1 M Tris-HCl buffer, 0.5 ml of pyrogallol, aliquots of the enzyme preparation, and water to give a final volume of 4 ml. The rate of inhibition of pyrogallol auto-oxidation after the addition of enzyme was noted. The enzyme activity is expressed in units/min/mg protein, in which one unit corresponds to the amount of enzyme required to bring about 50% inhibition of pyrogallol auto-oxidation.

Catalase (Sinha, 1972)

One-tenth of a ml of the homogenate was taken, to which 1 ml of phosphate buffer and 0.5 ml of H2O2 were added. The reaction was arrested by the addition of 2.0 ml dichromate-acetic acid reagent. Standard H2O2 in the range of 10 to 160 µmoles and 4 to 10 µmoles was taken and treated similarly. The tubes were heated in a boiling water bath for 10 min and the green colour developed was read at 570 nm. Catalase activity in tissue homogenate is expressed as nmoles of H2O2 consumed/min/mg protein at 37 °C.

Estimation of non-enzymic antioxidants

Vitamin C (ascorbic acid)

Vitamin C was estimated following the method of Omaye et al. (1979). Aliquots of homogenate were precipitated with 5% ice-cold trichloroacetic acid and centrifuged for 20 min at 6500 rpm. One-tenth of a ml of the supernatant was mixed with 0.2 ml of DTC (2,4-dinitrophenylhydrazine:thiourea:copper sulphate) and incubated for 3 hr at 37 °C. Then 1.5 ml of ice-cold 65% H2SO4 was added and mixed well, and the solution was allowed to stand at room temperature for an additional 30 min. Absorbance was determined at 520 nm. Ascorbic acid values are expressed as µg/mg protein.

Vitamin E (α-tocopherol)

Vitamin E was estimated by the method of Desai et al. (1984).

Saponification and extraction

To 500 mg of the tissue, 5.0 ml of isotonic KCl was added and the mixture was homogenized.
To 1.5 ml of homogenate, 1.0 ml of ethanol and 0.5 ml of 25% ascorbate were added and pre-incubated at 70 °C for 5 min in glass-stoppered tubes. To this, 1.0 ml of saturated KOH was added and mixed again. This mixture was further incubated at 70 °C for 30 min. The tubes were immediately cooled in an ice water bath and 1.0 ml of distilled water and 4.0 ml of purified hexane were added. The tubes were shaken vigorously for 2 min and centrifuged at 1500 rpm for 10 min to separate the phases.

Estimation

Three ml aliquots of the hexane extract were pipetted into suitable reaction tubes and evaporated to dryness under nitrogen. The residue was then carefully dissolved in 1.0 ml of purified ethanol. The tubes containing the α-tocopherol standard were treated in the same way as the test samples. To all the tubes, including a reagent blank, 0.2 ml of 0.2% bathophenanthroline reagent was added and the contents of the tubes were thoroughly mixed. The assay proceeded very rapidly from this point and care was taken to reduce unnecessary exposure to direct sunlight. Two-tenths of a ml of ferric chloride reagent was added and the tubes were mixed by vortexing. After 1 min, 0.2 ml of orthophosphoric acid was added and the tubes were thoroughly mixed again. The absorbance was read at 536 nm. Vitamin E values are expressed as mg/g tissue.

Histopathological studies (Kanai Mukherjee, 1989)

The dissected samples of pancreas, liver and kidney from each group of animals were collected in 10% formalin solution, sectioned using a microtome, and stained with haematoxylin and eosin at Vaishnave Clinic, Chennai-17.

Results

Preparation of plant extract

535 g of sedimental extract and 15 g of the water-soluble portion were dried under vacuum at 37 °C, giving a total quantity of 550 g (SETc), which was used for the whole study.

Preliminary phytochemical screening

From the phytochemical analysis, it was observed that the sedimental extract of Tinospora cordifolia (SETc) showed the presence of active chemical ingredients as reported previously for various parts of the plant (Table 1).

Antioxidant property of SETc using in vitro chemical methods

SETc in different concentrations was tested for antioxidant activity in five different in vitro models (Figure 1).

DPPH radical scavenging activity of SETc

The maximum percentage inhibition of DPPH by SETc was 26.57±0.254 at 120 µg concentration, whereas standard ascorbic acid showed 93.64±0.239 percentage inhibition of DPPH at 20 µg (Figure 1.a). The IC50 value of SETc was found to be 208.10 µg, as obtained from the linearity curve. This is the concentration at which the free radical DPPH is scavenged.

Superoxide radical scavenging activity of SETc

The percentage inhibition of superoxide radical generation at 120 µg was found to be 74.49±0.286. Ascorbic acid and BHA showed percentage inhibitions of 70.66±0.254 and 68.86±0.071, respectively, at 25 µg each, similar to the inhibition produced by SETc at 120 µg and 90 µg, respectively. The IC50 value of SETc was found to be 65.87 µg against 17.80 and 18.16 µg for standard vitamin C and BHA, respectively (Figure 1.b).

Nitric oxide scavenging activity of SETc

In the nitric oxide model, the maximum percentage inhibition of nitric oxide radicals by SETc was 29.25±0.245 at 120 µg concentration. However, ascorbic acid at 20 µg caused only 7.51±0.015 percentage inhibition, which was achievable with a concentration of 30 µg of SETc.
The IC50 value of SETc from the linearity curve was found to be 191.27 µg, whereas vitamin C showed an IC50 of 133.16 µg (Figure 1.c).

Hydroxyl radical scavenging activity of SETc

The hydroxyl radical scavenging activity of SETc at 120 µg was found to be 73.62±0.456 percent, a value very near that of the standard vitamin E at 20 µg (86.03±0.619). The IC50 values of SETc and vitamin E were found to be 23.57 and 11.60 µg, respectively (Figure 1.d).

Reductive ability

The reducing ability of SETc was studied using gradient concentrations of SETc ranging from 100 to 600 µg/ml. The maximum concentration of SETc (i.e., 600 µg/ml) showed a maximum absorbance of 0.799±1.578, a value close to that of 400 µg of BHT (Figure 1.e).

In vivo antioxidant property of SETc

Lipid peroxidation

From Figure 2, it was observed that the plasma and tissue lipid peroxidation levels of liver, kidney, and pancreas in the SETc (1000 mg/kg/p.o.) treated group were reduced as compared with the diabetic control (p<0.001), with a moderate difference (p<0.01) in liver and no significant difference (p=ns) in kidney and pancreas peroxide levels as compared with the normal and standard drug-treated groups.

Figure 2. Lipid peroxide activity of normal, diabetic, and diabetic rats treated with SETc after 30 days. n = 6; p<0.05 is considered statistically significant. Values are expressed as mean±S.E.M. using one-way ANOVA followed by Tukey's multiple comparison method. Units: plasma lipid peroxides, mg/dl; tissue lipid peroxides, nmoles of MDA liberated/min/mg protein. a = normal control vs diabetic control, test drug and standard; b = diabetic control vs test drug and standard; c = test drug vs standard. * = p<0.001; @ = p<0.01; # = p<0.05; ns = non-significant.

Enzymic and non-enzymic antioxidants

The enzymic and non-enzymic antioxidant levels in the liver of diabetic rats treated with SETc are shown in Figures 3 and 4.

Liver

The SOD level of SETc-treated rats was found to differ significantly (p<0.001) from the normal control and showed no significant difference (p=ns) from the diabetic control and standard drug-treated groups (Figure 3a). The CAT level of SETc-treated rats showed a highly significant increase (p<0.001) compared with the diabetic control and no significant difference (p=ns) from the standard group (Figure 3b). There was a significant difference in the vitamin C and vitamin E levels of the SETc-treated group compared with the normal and diabetic controls (p<0.001), and a moderate and less significant increase in the vitamin C (p<0.01) and vitamin E (p<0.05) levels, respectively, compared with the standard drug-treated group (Figures 4a and b).

Kidney

The SOD level of the SETc-treated group showed a significant increase (p<0.001) compared with the normal, diabetic control, and standard drug-treated groups (Figure 3a). The catalase level of the SETc-treated group showed a significant increase (p<0.01) compared with the diabetic control and a significant difference (p<0.01) from the standard drug-treated group (Figure 3b). As shown in Figure 4a, the vitamin C level of the SETc-treated group showed no significant difference from the diabetic control and standard drug-treated groups (p=ns).
The vitamin E level of the SETc-treated group showed no significant difference from the normal control and standard drug-treated groups, and a highly significant increase (p<0.001) compared with the diabetic control (Figure 4b).

Pancreas

The SOD level of SETc-treated rats showed a significant increase (p<0.001) compared with the diabetic control (Figure 3a), and a significant difference compared with the standard drug-treated rats (p<0.001). There was a significant reduction in the catalase level of the SETc-treated group (Figure 3b) compared with the diabetic control (p<0.001), with values reduced to near the normal control (p=ns). From Figure 4a, a significant increase in the vitamin C level of SETc-treated diabetic rats was observed compared with the diabetic control (p<0.001) and standard drug-treated rats (p<0.05). The vitamin E level of SETc-treated diabetic rats showed a significant increase (p<0.01) compared with the diabetic control and no significant difference (p=ns) from the standard drug-treated rats (Figure 4b).

Histopathology study after 30 days of treatment with SETc in diabetic rats

The microanatomical changes in the liver, kidney, and pancreas of diabetic rats treated with SETc for 30 days are depicted in Figure 5.

Liver

A strong divergence of the diabetic rats' hepatocytes from the normal ones was observed. The untreated diabetic rats showed congestion of veins surrounded by hepatocytes and necrosis with inflammation. The liver sections of SETc-treated rats showed a smaller inflamed area along with normal hepatocytes, which might be a sign of repair of cells damaged by STZ induction, as compared with the standard drug-treated group.

Kidney

The kidney of the diabetic control showed congestion in the glomeruli with tubular epithelial damage. Almost normal glomeruli with regenerating tubular epithelium were observed in the SETc-treated group, more comparable with the normal control than with the standard drug-treated group.

Pancreas

The pancreas of the diabetic control showed fatty infiltration of islet cells, whereas the SETc-treated group showed less hyperplastic islet cells compared with the standard drug-treated group.

Antioxidant property of SETc using in vitro chemical methods

The test compound SETc exhibited in vitro antioxidant activity in a concentration-dependent manner up to the concentrations studied. DPPH is a relatively stable nitrogen-centered free radical that easily accepts an electron or hydrogen radical to become a stable diamagnetic molecule. DPPH radicals react with suitable reducing agents, as a result of which the electrons become paired off, forming the corresponding hydrazine. The solution therefore loses colour stoichiometrically depending on the number of electrons taken up (Blois, 2001). The IC50 value of SETc was found to be 208.10 µg, as obtained from the linearity curve. This is the concentration at which the free radical DPPH is scavenged. Superoxide anion radicals are produced endogenously by flavoenzymes such as xanthine oxidase, which converts hypoxanthine to xanthine and subsequently to uric acid in ischemia-reperfusion. Superoxide is generated in vivo by several oxidative enzymes, including xanthine oxidase. In the PMS-NADH-NBT system, superoxide anion derived from dissolved oxygen by the PMS-NADH coupling reaction reduces NBT (Arulmozhi et al., 2007).
The decreased absorbance at 560 nm indicates the consumption of superoxide anions in the reaction mixture by SETc. Nitric oxide is an important chemical mediator generated by endothelial cells, macrophages, and neurons and is involved in the regulation of various physiological processes. Excessive concentrations of nitric oxide produce serious cytotoxic effects observed in various disorders including AIDS, cancer, Alzheimer's disease, and arthritis (Sainanai et al., 1997). Oxygen reacts with NO to generate nitrite and peroxynitrite anions, which act as free radicals. In this study, the nitrite produced by incubation of solutions of sodium nitroprusside in standard phosphate buffer at 25 °C was reduced by SETc, which might be due to antioxidant principles of the extract competing with oxygen to react with nitric oxide, thereby inhibiting the generation of nitrite. Hydroxyl radicals are the major reactive oxygen species causing lipid peroxidation and enormous biological damage (Aurand et al., 1977). The in vitro antioxidant assays using SETc showed IC50 values more or less equal to the standard drug vitamin C in nitric oxide scavenging activity, whereas IC50 values were very high in comparison to the standard drugs in the in vitro DPPH and superoxide radical scavenging assays. The reductive ability of SETc was comparable to that of BHT, the standard chemical used for comparison. The overall in vitro antioxidant capacity of 1000 mg/kg/day of SETc was found to be 2046 times, a therapeutic advantage as an antioxidant to be pursued further for its ameliorating effect on diabetes.

In vivo antioxidant property of SETc

Lipid peroxidation is a free radical-mediated process leading to oxidative deterioration of polyunsaturated lipids. Under normal physiological conditions, low concentrations of lipid peroxides are found in plasma and tissues. The possible sources of oxidative stress in diabetes include shifts in redox balance resulting from altered carbohydrate and lipid metabolism, increased generation of reactive oxygen species, and decreased levels of antioxidant defenses such as GSH and ascorbic acid (Baynes, 1991). The increase in lipid peroxide levels due to streptozotocin was modest in the liver and kidney, probably because catalase activity there is high enough to counteract the oxygen stress. On the other hand, the level of lipid peroxides in the pancreas was significantly elevated by streptozotocin, probably because of a marked reduction of catalase. Because the activities of antioxidant enzymes in the pancreas are relatively lower than those in other organs (Wohaieb and Godin, 1987), radicals derived from streptozotocin or streptozotocin-induced diabetes may selectively attack the pancreas, leading to oxidative stress and thereby evoking the defense mechanism. In nature, catalase activity is high in the liver, medium in the heart, low in the pancreas (Wohaieb and Godin, 1987), and high in the kidney (Suryanarayana et al., 2007). Such marked alterations in the plasma and tissue lipid peroxide levels of liver, kidney, and pancreas in STZ-induced diabetic rats were effectively restored to normal value ranges after 30 days of treatment with SETc (1000 mg/kg/p.o.).
From the results, it was very clear that the test drug SETc showed a potential antioxidative role, compensating the enzyme levels elevated as a defense mechanism against hyperglycemia-induced systemic and tissue-specific oxidative stress, supported by the previous report on the effect of T. cordifolia in restoring antioxidant defence against alloxan-induced tissue damage (Stanley Mainzen Prince et al., 2004). Oxidative stress in diabetes co-exists with altered antioxidant systems, both enzymatic and non-enzymatic. However, the connection between altered antioxidant enzymes and increased oxidative stress is not straightforward, as changes (increase or decrease) in the activities of antioxidant enzymes are not always unidirectional. Thus, while in some studies the activities of SOD, CAT, GPx, and GST in diabetes mellitus showed reductions (Okutan et al., 2005; Ozkaya et al., 2002; Sugiura et al., 2006), other studies reported increases in the activities of these enzymes in STZ-induced diabetes (Okutan et al., 2005; Yilmaz et al., 2004). Furthermore, changes in antioxidant enzymes may be organ specific. This may be due to an organ-specific response to hyperglycemia-induced oxidative stress. Nevertheless, the relative change in activity in diabetic tissues compared with control tissues might indicate an altered antioxidant system. SOD catalyses the conversion of the superoxide anion to hydrogen peroxide and oxygen. In two studies (Matkovica, 1977; 1982), researchers found that rats with STZ-induced diabetes had decreased SOD activity in liver, kidney, spleen, heart, pancreas, skeletal muscle, testis, and erythrocytes. Catalase is a haem-containing, ubiquitous enzyme and, in eukaryotes, it is found in peroxisomes. This enzyme probably serves to degrade hydrogen peroxide produced by peroxisomal oxidases to water and oxygen. The catalase level was found to be increased in diabetic rats, especially in heart and pancreas, which shows the oxidant-antioxidant imbalance (Erika et al., 1999). Changes in the levels of non-enzymatic antioxidants have been observed in the diabetic condition. Vitamin C is a water-soluble antioxidant that primarily scavenges oxygen free radicals. Vitamin C has been reported to contribute up to 24% of the total peroxyl radical-trapping antioxidant activity (Atanaisu et al., 1998). The decreased level of vitamin C observed in the diabetic condition might be due to increased utilization of vitamin C in deactivating the increased levels of reactive oxygen species (Chattejee and Nandi, 1991). Vitamin E is also an important radical-scavenging antioxidant that interrupts the chain reaction of lipid peroxidation by reacting with lipid peroxyl radicals (Takenaka et al., 1991). Increased utilization of vitamin E in the plasma was due to the increased production of lipid peroxides, resulting in decreased non-enzymic levels of vitamin E in the tissues (Senthil kumar and Subramaniam, 2007). It was observed from the present study that the SOD, CAT, vitamin C and vitamin E levels of tissues including liver, kidney, pancreas, and heart of diabetic rats were normalized after SETc treatment for a period of 30 days. In this context, feeding SETc appears to have resulted in considerable reversal, but not complete normalization, of antioxidant enzymes that were altered in diabetic tissues.
Histopathological study

The antioxidant capacity of SETc treatment was evidenced by its positive effect on cells destroyed by streptozotocin, both directly and through metabolic imbalance/modulation of gluconeogenic enzymes (Puranik et al., 2009). The changes observed over the 30-day drug treatment period in the respective animals may be a restorative tool for long-term complications in diabetes. The present study of both the in vitro and in vivo antioxidant activity of SETc in diabetic rats concludes the following: SETc treatment showed significant free radical scavenging activity in the in vitro antioxidant assays. SETc treatment increased the levels of both enzymic and non-enzymic antioxidants in tissues. SETc treatment protected the tissues from lipid peroxidation in diabetic rats. The histopathology study clearly establishes the non-toxic and protective effect of SETc on internal organs such as the liver, kidney, and pancreas. Overall, the studies clearly show an antioxidant capacity of SETc of around 2046 times in crude form; the particular ingredients responsible for this property and potency remain to be established. Furthermore, the impact of SETc at a dose of 1000 mg/kg/p.o. on diabetes therapy is yet to be fully determined.
Practical and Scalable Two-Step Process for 6-(2-Fluoro-4-nitrophenyl)-2-oxa-6-azaspiro[3.3]heptane: A Key Intermediate of the Potent Antibiotic Drug Candidate TBI-223

A low-cost, protecting group-free route to 6-(2-fluoro-4-nitrophenyl)-2-oxa-6-azaspiro[3.3]heptane (1), the starting material for the in-development tuberculosis treatment TBI-223, is described. The key bond-forming step in this route is the creation of the azetidine ring through a hydroxide-facilitated alkylation of 2-fluoro-4-nitroaniline (2) with 3,3-bis(bromomethyl)oxetane (BBMO, 3). After optimization, this ring formation reaction was demonstrated at 100 g scale with an isolated yield of 87% and a final product purity of >99%. The alkylating agent 3 was synthesized using an optimized procedure that starts from tribromoneopentyl alcohol (TBNPA, 4), a commercially available flame retardant. Treatment of 4 with sodium hydroxide under Schotten–Baumann conditions closed the oxetane ring, and after distillation, 3 was recovered in 72% yield and >95% purity. This new approach to compound 1 avoids the previous drawbacks associated with the synthesis of 2-oxa-6-azaspiro[3.3]heptane (5), the major cost driver used in previous routes to TBI-223. The optimization and multigram scale-up results for this new route are reported herein.

Three fractions were collected during the distillation, with their compositions shown in Table S1 based on the GCMS data shown in Figure S1. The major impurity, 3-bromo-2-bromomethyl-1-propene (RT 2.4 min, matched to a commercial sample), is separated from the rest of the mixture in the initial stage of the distillation. The third major fraction was pure product (>96 wt% purity by GC). The yield of the pure fraction based on purity was 72% (99.7% GC A% purity).

Figure S1: GCMS chromatograms of the different fractions collected during the distillation of crude BBMO (for chromatography conditions, see below).

Step 2: Optimization, Impurity, and Additional Reaction Information

DOE Optimization of Alkylation Conditions: Data and Analysis

General reaction conditions: NaOH was added to a solution of aniline 2 (100 mg, 1.0 eq.) and 3 in sulfolane. This mixture was then heated to the specified temperature for 3 h. The tabulated values are HPLC area% data for IPC samples taken of the reaction mixture; the area% are not corrected for the response factors of each compound. These DOE data were first coded and then analyzed using ordinary least squares (OLS) methods as implemented by Statsmodels, a statistics package for the Python programming language. Initially, the data were fit with a full model consisting of all the main effects and their interaction terms. However, the interaction terms were not statistically significant (p value > 0.05), so they were dropped. The OLS results for the main-effect models are shown in Table S3 for 1 and Table S4 for 6.

A sample of crude step 2 product (10 g, 90 area% of product 1 and 10 area% of impurity 6 by HPLC, Figure S5)

Crystal Data and Experimental on 1

ORTEP of 1: only unique and disordered positions shown, with hydrogens omitted. The final R1 was 0.0412 (I≥2σ(I)).

Experimental. Single clear yellow prism-shaped crystals of AJA03A-Qu were used as supplied. A suitable crystal with dimensions 0.38 × 0.24 × 0.12 mm3 was selected and mounted on a XtaLAB Synergy R, DW system, HyPix diffractometer. The crystal was kept at a steady T = 100.01(10) K during data collection.
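A minimal sketch of the main-effects OLS fit described above is given below. It is illustrative only: the text confirms that Statsmodels OLS was used on coded DOE data, but the factor names and values here are placeholders, not the entries of Tables S2-S4.

```python
# Illustrative main-effects OLS fit with statsmodels (not the authors'
# script; the coded factor values and responses are placeholders).
import pandas as pd
import statsmodels.api as sm

# Coded DOE factors (-1/+1): hypothetical temperature, NaOH equivalents,
# and concentration levels for a 2^3 design.
df = pd.DataFrame({
    "temp":   [-1, 1, -1, 1, -1, 1, -1, 1],
    "naoh":   [-1, -1, 1, 1, -1, -1, 1, 1],
    "conc":   [-1, -1, -1, -1, 1, 1, 1, 1],
    "area_1": [62, 75, 70, 87, 60, 78, 72, 85],  # HPLC area% of product 1
})

X = sm.add_constant(df[["temp", "naoh", "conc"]])
fit = sm.OLS(df["area_1"], X).fit()
print(fit.summary())       # coefficients and p-values for each main effect
print(fit.pvalues < 0.05)  # which effects are statistically significant
```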
The structure was solved with the ShelXT 2014/5 (Sheldrick, 2014) solution program using dual methods, with Olex2 1.3-alpha as the graphical interface. The model was refined with ShelXL 2018/3 using full-matrix least-squares minimisation on F2.

Model. The raw data and refinement model were of good quality and excellent fit. The molecule sits on a crystallographic mirror plane that includes the azetidine and phenyl rings, the nitro group, the fluorine atom, and the terminal ether oxygen of the oxetane ring. The orientation of the fluorophenyl moiety was modeled with disorder between two positions. One orientation (36%) placed the fluorine in the vicinity of a nitro oxygen (interatomic distance 2.62 Å), while the second orientation (64%) placed the fluorine into a small void adjacent to the oxetane ring of a neighboring molecule (see Figure 2X). No evidence was found for a super-lattice that might eliminate disorder in the model.

Figure 2X: Planar packing diagram showing F-disorder.

Structure Quality Indicators

Reflections: Data were measured using ω scans with Mo Kα radiation. The diffraction pattern was indexed, and the total number of runs and images was based on the strategy calculation from the data collection program. The structure was solved and the space group P21/m (#11) determined by the ShelXT 2014/5 (Sheldrick, 2014) structure solution program using dual methods and refined by full-matrix least-squares minimisation on F2 using ShelXL 2018/3. All non-hydrogen atoms were refined anisotropically. Hydrogen atom positions were calculated geometrically and refined using the riding model.

Refinement special details: The molecule was found to be disordered by a 180 degree rotation around the C-N bond. It was conveniently modeled as F-atom 2-component positional disorder. No restraints or constraints were applied. The value of Z' is 0.5. This means that only half of the formula unit is present in the asymmetric unit, with the other half consisting of symmetry-equivalent atoms.
What Is the Evidence Base for Climate-Smart Agriculture in East and Southern Africa? A Systematic Map More than 500 million USD will soon be invested in climate-smart agriculture (CSA) programmes in sub-Saharan Africa. Improving smallholder farm management is the core of most of these programmes. However, there has been no comprehensive information available to evaluate how changing agricultural practices increases food production, improves resilience of farming systems and livelihoods, and mitigates climate change—the goals of CSA. Here, we present a systematic map—an overview of the availability of scientific evidence—for CSA in five African countries: Tanzania, Malawi, Mozambique, Zimbabwe and Zambia. We conducted a systematic literature search of the effects of 102 technologies, including farm management practices (e.g., leguminous intercropped agroforestry, increased protein content of livestock diets, etc.), on 57 indicators consistent with CSA goals (e.g., yield, water use efficiency, carbon sequestration, etc.) as part of an effort called the "CSA Compendium". Our search of peer-reviewed articles in Web of Science and Scopus produced 150,567 candidate papers across developing countries in the global tropics. We screened titles, abstracts and full texts against predetermined inclusion criteria, for example that the investigation took place in a tropical developing country and contains primary data on how both a CSA practice and a non-CSA control affect a preselected indicator. More than 1500 papers met these criteria from Africa, of which 153 contained data collected in one of the five countries. Mapping the studies shows geographic and topical clustering in a few locations, around relatively few measures of CSA and for a limited number of commodities, indicating potential for skewed results and highlighting gaps in the evidence. This study sets the baseline for the availability of evidence to support CSA programming in the five countries. Climate Fund). The aim is to help smallholder farmers (1) sustainably increase productivity and incomes, (2) adapt to climate variability and change and (3) mitigate climate change where possible (FAO 2013). With planned investments, political will and implementation capacity, CSA is emerging as a mechanism for coherent and coordinated action on climate change adaptation and mitigation for agriculture. Farm- and field-level management technologies are a core component of most planned CSA investments (Thierfelder et al. 2017; Kimaro et al. 2015). Farm-level technologies represent a broad category of direct activities that farmers can undertake on their fields, in livestock husbandry, or through management of communal lands. Climate-smart actions may include both the adoption of new/improved inputs and new/improved application methods, such as adopting drought-resistant crop varieties, reducing stocking rates of animals, or changing harvesting and postharvest storage techniques (Lipper et al. 2014). The vast number of farm-level options that might meet CSA objectives, coupled with the large number of possible outcomes that fit under the three pillars of CSA, has led many development practitioners, scientists and governments to the question: "What is CSA and what is not CSA?" (Rosenstock et al. 2015a). This question, however, presents a false dichotomy. By definition, CSA is context-specific and subject to the priorities of farmers, communities and governments where it is being implemented. 
Until now, little empirical evidence has been provided to systematically evaluate which CSA practices work where (see Branca et al. 2011 for a first attempt). Instead, CSA is often supported with case studies, anecdotes, or aggregate data, which paint an incomplete picture of both the potential and challenges of CSA (e.g., FAO 2014; Neate 2013). The lack of comprehensive information on CSA is not surprising, given that it includes a wide diversity of solutions at the farm production and rural livelihood levels. Consequently, many interventions that increase productivity are labelled as "CSA" without evidence on the other two objectives of CSA, at least one of which would also need to be documented to qualify any intervention as CSA. Although "triple win" interventions at the field level may be the exception rather than the rule, evidence has to be provided on all objectives to support policies and programmes that may wish to promote CSA (Arslan et al. 2017). There is an urgent need to provide decision-makers, including investors, with information to help them design programmes and policies, as well as to increase the effectiveness of development programming. In response, in this paper we have conducted a quantitative and systematic review to map the evidence published in peer-reviewed literature on the effectiveness of technologies and management practices in achieving the objectives of increased productivity, resilience and mitigation for five countries in East and Southern Africa: Tanzania, Malawi, Mozambique, Zimbabwe and Zambia. Our systematic map sets the benchmark on what data and evidence are available on how farm and field management practices affect indicators of CSA outcomes. A Systematic Approach This systematic map relies on a data set compiled as part of the CSA Compendium ("the Compendium"). The Compendium created search terms for each of 102 technologies, including new inputs and farm management practices (58 agronomic, 15 agroforestry, 19 livestock, 5 energy and 5 postharvest management practices), and for more than 57 outcomes in productivity, resilience or mitigation, such as yields, gender-differentiated labour use, or soil organic carbon, respectively. Studies were included based on four inclusion criteria: (1) conducted in a tropical developing country, (2) included a conventional control practice and a practice being suggested as CSA, (3) contained primary data on the impacts on at least one of the indicators of interest and (4) conducted in the field (i.e., no modelling studies). Lists of the search terms for practices and outcomes and additional details on the inclusion criteria can be found in the systematic review protocol (Rosenstock et al. 2015b). Studies were identified by searching the Web of Science and Scopus databases using search terms indicative of practices and outcomes. Our search found 150,367 candidate studies, 7497 of which were included in the final Compendium library based initially on abstract/title reviews and then full-text reviews. Out of these, 313 studies were conducted in one of the five countries. Data were compiled into an Excel database manually from each study. Data retrieved from the selected studies include information on location, climate, soils, crops, livestock species and outcome values for both conventional (non-CSA control) and treatment practices. Frequency and distribution of components in the data set (i.e., practices, outcomes and products) are analysed by summary statistics. 
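As a rough illustration of the screening logic described above, the sketch below applies the four inclusion criteria to candidate records. The record fields and the country list are hypothetical simplifications, not the Compendium's actual schema.

```python
# Illustrative title/abstract screening filter; field names are invented.
from dataclasses import dataclass

@dataclass
class Candidate:
    country: str
    has_conventional_control: bool    # criterion 2
    has_primary_indicator_data: bool  # criterion 3
    is_field_study: bool              # criterion 4 (False = modelling only)

# Subset of eligible countries shown for brevity (criterion 1).
TROPICAL_DEVELOPING = {"Tanzania", "Malawi", "Mozambique", "Zimbabwe", "Zambia"}

def include(c: Candidate) -> bool:
    """True only when all four inclusion criteria are met."""
    return (c.country in TROPICAL_DEVELOPING
            and c.has_conventional_control
            and c.has_primary_indicator_data
            and c.is_field_study)

papers = [
    Candidate("Malawi", True, True, True),   # included
    Candidate("Zambia", True, True, False),  # modelling study -> excluded
]
print([include(p) for p in papers])  # [True, False]
```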
The Evidence More than 150 studies met our inclusion criteria for this paper and were included in the data set analysed here. The data set contains 12,509 data points that compare a conventional practice with a potential CSA practice in a specific time and place; one example is the comparison of conservation agriculture versus conventional agriculture at Chitedze Agricultural Research Station, Malawi, in 2007 (see Thierfelder et al. 2013). Studies were unevenly distributed across the five countries, with a tenfold difference in the number of studies conducted in the most studied country (Tanzania) versus the least studied country (Mozambique) (Fig. 12.1). The studies were primarily conducted on research stations, where 58% of the data were generated, compared with 42% on farmers' fields or in household surveys. This is significant because research on station under scientist-controlled conditions often outperforms the same practice in farmers' fields due to the higher quality of implementation of the practices and the historical management of the site (Cook et al. 2013). Thus, the evidence will generally reflect the upper bound of what can be achieved by farmers. Studies were clustered in a few locations and agroecologies within each country. This is unsurprising given the investments and infrastructure necessary to conduct field research. However, geographical clustering further indicates the potential for skew in the available evidence. With clustering, it is unlikely that the full range of CSA options is analysed, which limits the utility of the work in helping decision-makers choose among various options. Key gaps in agroecologies include coastal and semiarid zones. Future analyses of these data should examine whether the distribution of practices and agroecologies reflects key criteria such as the percentage of the population that relies on the production of the agricultural output studied for food security. While the data set contains information on 39 agricultural products such as milk, pulses, spices and cotton, the vast majority of the data comprise only a handful of products. For example, data on maize account for 78% of the data set (Fig. 12.2). Pulses were second but made up only 7% of the data set. In contrast, many products (21) make up less than 2% of the data set. Therefore, we know a lot about maize production in the region but much less about other products. This presents a challenge for investments in CSA, because many of the proposed actions intend to diversify smallholder fields and farms, but this data set suggests a lack of information on crops other than maize. It also indicates that there is little evidence on switching to crops that may be more resilient or better suited to future climates, such as sorghum (0.8% of the data set) and millets (no data available in these countries, despite their importance in the drylands of the region). However, it should be noted that crop switching is often studied through modeling efforts and therefore would not have been selected as part of this assessment. Regardless, there is a need for more empirical studies on maize alternatives, particularly given concerns about future maize yields. Existing evidence is also limited on integrated crop and livestock systems, because 93% of the data were on crops while only 3.5% were on livestock. Almost all of the data on livestock were on improved diets, with a little on improved breeds. 
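The frequency and distribution summaries reported in this section (country shares, the on-station versus on-farm split, and commodity shares) amount to simple tabulations; a minimal sketch is shown below, with invented rows in place of the Compendium's actual data points.

```python
# Minimal sketch of the summary statistics behind Figs. 12.1-12.2;
# the rows are invented placeholders, not real Compendium records.
import pandas as pd

points = pd.DataFrame({
    "country":    ["Tanzania", "Tanzania", "Malawi", "Zambia", "Zimbabwe"],
    "product":    ["maize", "maize", "maize", "pulses", "maize"],
    "on_station": [True, False, True, True, False],
})

print(points["country"].value_counts(normalize=True))  # geographic share
print(points["product"].value_counts(normalize=True))  # commodity share
print(f"on-station share: {points['on_station'].mean():.0%}")
```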
Some of the most commonly mentioned regional livestock adaptation strategies, such as pasture management technologies and animal housing, are absent from the data set. This is an important gap to be filled, as these technologies are also relevant for the mitigation pillar of CSA. Data on practices are similarly skewed, with a few of the 63 CSA practices covered accounting for a significant percentage of the data set. For example, studies of inorganic fertilizers are the most common (27.5% of the data), and almost 3500 individual data points involved the addition of nitrogen alone (Fig. 12.3). However, this is due in part to the difference in how research is performed in different fields. Agronomic field trials on fertilizers typically use multiple types of fertilizers at many rates (e.g., 0, 20, 40, 80 kg/ha) over at least 3 years and sometimes decades (e.g., Akinnifesi et al. 2006, 2007; Matthews et al. 1992). On the other hand, studies on livestock feeding practices typically analyse a few alternative diets over just one or two short periods (e.g., Gusha et al. 2014; Mataka et al. 2007; Sarwatt et al. 2002). Despite most data being on a relatively small number of practices, significant data are available for practices of high interest to the development community. For example, 28% of the data are on practices that diversify production systems such as rotations, intercropping and agroforestry (e.g., Myaka et al. 2006; Munisse et al. 2012; Thierfelder et al. 2013; Nyamadzawo et al. 2008; Chamshama et al. 1998). Therefore, some information exists to reduce the uncertainty about implementing such interventions. Other commonly studied practices include mulching, organic fertilizers and reduced tillage. Common recommendations for CSA interventions include packages of technologies, such as conservation agriculture or systems to intensify rice production. When multiple practices are adopted together, they can have synergistic or antagonistic effects on CSA outcomes. A significant majority (72%) of our data is from practices implemented in combination with at least one other CSA practice (e.g., agroforestry + mulching, intercropping + manure). This provides insights into how practices operate alone or in combination, which helps in making decisions and recommendations on best practices under specific conditions. Lastly, we analysed the distributions of outcomes. The first striking pattern is that 82% of the data are related to the productivity pillar: yields, incomes, etc. (Fig. 12.4b). By contrast, resilience outcomes make up only 17.5% of the data, primarily related to soil quality (11.4%) and input-use efficiencies (4.5%). This means that there is scant evidence on many other indicators, especially those that are believed to impart some level of resilience. It is also indicative of the difficulty of defining resilience indicators in the literature. Finally, only 0.5% of the data set is related directly to mitigation outcomes, such as greenhouse gas emissions or total carbon stocks. Thus, there are major gaps in our understanding of how potential CSA practices affect resilience and mitigation outcomes across various contexts in East and Southern Africa. There is almost a complete lack of data on mitigation, which requires urgent action to calibrate low-emission trajectories. One of the fundamental goals of CSA is to produce win-win or win-win-win outcomes across productivity, resilience and mitigation. 
However, our data set suggests that it is only possible to analyse win-win outcomes, given the dearth of information on mitigation. That is because most studies examine only a single pillar: about 32% study two pillars and less than 1% study all three (Fig. 12.4a). This is a critical insight into the evidence base of CSA because it shows the lack of co-located (in the same study) research across pillars. It is often not possible to extrapolate results on the same practice between sites because outcomes can be significantly influenced by local context (e.g., Pittelkow et al. 2015a, b; Bayala et al. 2012). Given the general lack of co-located research across CSA outcomes, aggregation techniques such as the Compendium and meta-analyses can be used to gain insights into multiple outcomes from practices, including looking into potential trade-offs between different objectives. It was not a surprise that most studies on potential CSA practices examine yields and soil health, as they are the basis of agronomic research. Perhaps the biggest surprise in the data set is that there is a significant amount of economic information available. Nearly 20% of the papers presented economic information derived from farm enterprise budgets, including indicators such as net returns, variable costs and net present value. This subset of the data provides key information on the costs and benefits for the farmer of adopting CSA, information often missing in the discussion around programming and policy for interventions. These data will be used in future studies in combination with agronomic information to address this gap to the extent possible. Implications for Practitioners Our systematic map provides a first appraisal of the evidence base to assess the contributions of a wide set of field-level technologies to CSA objectives in East and Southern Africa. Despite more than 50 years of agricultural research, this database shines a light on potential skew in our knowledge base. It also identifies key areas for future investments in research. Although the database may not be as comprehensive as desired due to shortcomings in the number of agroecologies, products or outcomes included, it does provide a wide range of information on many products, practices and outcomes, and therefore reduces the uncertainty of making decisions in the countries reflected in the analysis presented here. Over the next 6 months, the authors will conduct a quantitative meta-analysis, a statistical approach to combining information across studies, to help identify the best interventions (and combinations thereof) during the design phase of programmes and policies. Thierfelder C, Cheesman S, Rusinamhodzi L (2013) Benefits and challenges of crop rotations in maize-based conservation agriculture (CA) cropping systems of southern Africa. Int J Agric Sustain 11(2):108-124. Thierfelder C, Chivenge P, Mupangwa W et al (2017) How climate-smart is conservation agriculture (CA)? Its potential to deliver on adaptation, mitigation and productivity on smallholder farms in southern Africa. Food Secur 9:1-24. Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. 
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
v3-fos-license
2019-03-22T16:08:32.408Z
2014-01-01T00:00:00.000
55668442
{ "extfieldsofstudy": [ "Biology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://thescipub.com/pdf/10.3844/ajabssp.2014.94.100", "pdf_hash": "62293829d7eb3f55397c9a6c45ddab08cfbc1367", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44681", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Environmental Science", "Biology" ], "sha1": "f30507db5efc8cdf7015e516a6705907065ca4bb", "year": 2014 }
pes2o/s2orc
SELECTION OF BEAUVERIA BASSIANA (BALSAMO) VUILLEMIN ISOLATES FOR MANAGEMENT OF MYZUS PERSICAE (SULZER) (HOM.: APHIDAE) BASED ON VIRULENCE AND GROWTH RELATED CHARACTERISTICS Isolates of the entomopathogenic fungus Beauveria bassiana originating from Jordan were evaluated for their efficacy against the green peach aphid, Myzus persicae, under laboratory and greenhouse conditions. Efficacy evaluation involved testing all isolates at a concentration of 1×10⁷ conidia/mL, followed by concentration-dependent and greenhouse bioassays for the most virulent isolates. Growth characteristics related to virulence were evaluated for high-, intermediate- and low-virulence isolates. Results showed that three isolates, namely BAU004, BAU018 and BAU019, were highly virulent to the aphid in the laboratory, causing more than 75% infection. In the greenhouse, the three isolates caused infection rates from 41.3 to 46.5%. For the growth characteristics, isolate BAU019 produced more spores than the other highly virulent ones, including the commercial isolate GHA. Highly virulent isolates also showed faster hyphal growth than low-virulence isolates. These findings indicate that isolates BAU004, BAU018 and BAU019 might be developed as commercial microbial insecticides for safe and effective control of the green peach aphid. INTRODUCTION Entomopathogenic microbes such as fungi, bacteria and viruses can be developed as microbial insecticides to play a major role in Integrated Pest Management (IPM) and organic farming. Unlike chemical insecticides, these natural products are considered safer for humans, less hazardous to the environment, less disruptive to natural controls, and will not eventually be rendered ineffective by resistance development (Hajek and Leger, 1994). The widespread fungus Beauveria bassiana (Balsamo) Vuillemin has been the focus of commercial development for many years (Goettel et al., 1997). Currently, this microbial control agent is registered for commercial use in Europe, the US and other parts of the world. B. bassiana has many characteristics that make it convenient for commercial development. It exhibits a wide host range including many key pest species of economically important crops, can be produced on inexpensive artificial media and has a long shelf life (Hajek and Leger, 1994; Goettel et al., 1997). Moreover, the infection process with B. bassiana involves the adherence of infective propagules to the host insect followed by direct penetration through the cuticle. This route of entry eliminates the need for ingestion of the infective propagule by the host, which is crucial for other entomopathogenic microbes such as bacteria and viruses. Therefore, B. bassiana could be more appropriate for management of insect pests with piercing-sucking mouthparts, which are unlikely to ingest microbes while feeding (Wraight and Carruthers, 2010). Aphids, including the green peach aphid, Myzus persicae (Sulzer), feed by sucking plant sap from the vascular bundles, have soft bodies that are less heavily sclerotized compared to other insects, and are sluggish and slow moving, making them excellent candidates for microbial control by entomopathogenic fungi. The green peach aphid is a widespread aphid species in the temperate and tropical parts of the world. It is highly polyphagous, attacking many vegetable and fruit tree crops. 
Despite the commercial introduction of several predators and parasitoids for green peach aphid management, chemical insecticides continue to play a major role in green peach aphid control, particularly in the tropical areas of the world (Abdel-Wali et al., 2007). Due to intensive insecticidal applications, green peach aphid populations have developed resistance to many insecticides, including relatively new classes such as the neonicotinoids (Foster et al., 2002; Puinean et al., 2010). Therefore, safe and ecofriendly control measures such as microbial control agents are required for effective green peach aphid management. It is well known that isolates of an entomopathogenic fungus exhibit different efficacies against the same insect pest species. Therefore, one of the key factors that might lead to control failure when using microbial control agents is the inability to identify strains active at low doses (Leger and Wang, 2010). Therefore, this study evaluated 32 Jordanian B. bassiana isolates for their efficacy against the green peach aphid in the laboratory and greenhouse. Moreover, virulence-related growth characteristics for isolates with different efficacies were determined and discussed in relation to isolate activity against the green peach aphid. Experimental Material The entomopathogenic fungal isolates evaluated in the current study were isolated using the Galleria baiting method (Zimmermann, 1986) from soil samples collected from different areas in Jordan. After isolation, the isolates were cultured on a B. bassiana selective medium consisting of Sabouraud's dextrose agar (SDA) amended with 1% wt/v yeast extract (SDAY), 0.55% wt/v Dodine and 0.005% wt/v chlortetracycline (Chase et al., 1986), and then stored as dry conidia in a refrigerator at 4 °C. In total, 33 B. bassiana isolates were tested, consisting of 32 Jordanian isolates in addition to the commercially available isolate GHA (BotaniGard®, BioWorks, USA). Before the bioassay was conducted, all the isolates were passed through Galleria mellonella, reisolated from the infected insects and cultured on the selective medium for 2 wk in the dark at 24 ± 2 °C. Conidia were harvested with a spatula and stored at 4 °C until used. Shortly before the bioassay, the number of conidia per unit weight of each isolate was determined by suspending six 0.1 g samples taken at random in 100 mL sterile distilled water with 0.1% v/v Tween 80 and agitating on a rotary shaker at 125 rpm for 3 h. After agitation, the number of conidia was determined using a haemacytometer. The viability of B. bassiana conidia was checked by adding 200 µL of each conidial suspension to 2 mL of Sabouraud's dextrose broth amended with 1% wt/v yeast extract (SDY) and 0.1% v/v Tween 80. After incubation for 20 h at 24 °C in darkness, germination was assessed by counting 100 spores in four different fields of view of a haemacytometer (total of 400 spores). The green peach aphid culture was established from adults and nymphs collected from naturally infested sweet pepper plants, Capsicum annuum L. The aphid was cultured in a controlled greenhouse compartment at 25 ± 5 °C and a 16-h photoperiod on potted pepper plants, C. annuum. The plants were grown in 15 cm diameter pots filled with a 1:1 mixture of sand and peat moss. When adult aphids were required for the experiments, they were collected from plants using a camel hair brush after gentle prodding to ensure that their piercing mouthparts were not damaged during the process. 
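The conidia counts and viability checks above reduce to two small calculations: converting a haemacytometer count to conidia/mL and computing percent germination. The sketch below assumes a standard improved Neubauer chamber (one large square holds 0.1 µL); the paper does not state the chamber type, so that factor is an assumption, and the example counts are invented.

```python
# Back-of-envelope helpers for the haemacytometer work described above.
# Chamber factor (1 large square = 0.1 uL -> x1e4 per mL) is an assumption.

def conidia_per_ml(mean_count_per_square: float, dilution: float = 1.0) -> float:
    """Conidia/mL from the mean count in one large Neubauer square."""
    return mean_count_per_square * 1e4 * dilution

def germination_pct(germinated: int, total: int = 400) -> float:
    """Percent germination from the 400-spore viability count."""
    return 100.0 * germinated / total

print(f"{conidia_per_ml(85, dilution=100):.2e} conidia/mL")  # 8.50e+07
print(f"{germination_pct(372):.1f}% germination")            # 93.0%
```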
Bioassay with Fungal Isolates Evaluation of the virulence of the fungal isolates began with a screening test of all isolates at the same concentration against the aphid adult stage. A suspension of each fungal isolate was prepared by suspending dry conidia in sterile distilled water with 0.1% v/v Tween 80. The concentration of conidia in each suspension was then adjusted to 1×10⁷ conidia/mL. Ten adult aphids were aspirated from infested plants in the culture and immobilized by placing them in a refrigerator for 5 min. The insects were then contained inside a double Petri dish cage with a fresh tomato leaflet, as described by Almazra'awi and Ateyyat (2009). While still immobilized, the insects were consistently sprayed with each tested isolate using a Potter spray tower (Burkard Scientific, UK). To prevent carryover effects among isolates, the Potter tower was cleaned with 70% ethanol and sterile distilled water between spraying sessions. Sterile distilled water plus Tween 80 was used as a control. The sprayed aphids were kept in a growth chamber at a temperature of 24 ± 2 °C, 65 ± 10% RH and a 16:8 h photoperiod for 2 d. After the incubation period, the treated aphids were collected from the cages and surface sterilized in 70% ethanol for 15 s, followed by 0.3% NaOCl with 0.05% Tween 80 for 3 min and, lastly, two rinses of sterile distilled water. The surface-sterilized aphids were then placed in 90 mm diameter water agar plates and incubated at 24 ± 2 °C, 65 ± 10% RH and a 16:8 h photoperiod for 5-6 d. After the incubation period, all treated aphids were examined under the microscope for the presence of B. bassiana. There were five replicates for each tested isolate as well as the control. Based on the efficacy in the screening test with all isolates, the four most virulent isolates and the commercial isolate (GHA) were selected for concentration-dependent evaluation. Four concentrations ranging from 1×10⁶ to 1×10⁸ conidia/mL were prepared for each tested isolate, and infection rates in green peach aphids were evaluated as above. There were 3 replicates for each isolate as well as the control, and the whole experiment was repeated twice. Greenhouse Bioassay The top three most virulent isolates based on the results from the previous bioassay and the reference isolate were selected for evaluation under greenhouse conditions. A suspension of each selected isolate was prepared as above, and the concentration of the suspension was adjusted to 8.0×10⁸ conidia/mL. Potted tomato plants at the 4-5 leaf stage and infested with the aphid were used in the trials. The tomato plants were grown in 15 cm diameter pots filled with a 1:1 mixture of sand and peat moss. The plants were irrigated and fertilized as required. The plants were sprayed with the conidial suspension of the selected isolates until run-off. The control treatment involved spraying infested plants with sterile distilled water with 0.1% Tween 80. The plants were kept in a controlled greenhouse inside meshed cages for 2 d. Temperature and RH were monitored inside the cages using shaded temperature/humidity probes (Hycal, El Monte, CA, USA). After the incubation period, 10 randomly selected aphids were collected from the plants and treated as above to evaluate the infection rates. There were 3 replicate plants for each isolate as well as the control, and the experiment was repeated twice. 
Growth Characteristics Related to Virulence Virulence-related growth characteristics, including spore production, speed of conidial germination and hyphal growth, were studied for selected high-, intermediate- and low-virulence isolates based on the bioassay results. Highly virulent isolates are those that resulted in more than 70% infection rates. Intermediate- and low-virulence isolates are those that resulted in 40-60% and less than 25% infection rates, respectively. Two isolates were selected to represent each virulence category. Therefore, the high-, intermediate- and low-virulence isolates were BAU018 and BAU019, BAU005 and BAU021, and BAU003 and BAU026, respectively. To evaluate spore production, 0.2 mL of conidial suspension of each isolate was inoculated on an SDAY plate. After incubating at 22 °C for 14 d, five discs (4 mm diameter) were randomly removed from the culture using a sterile cork borer and placed in 10 mL sterile distilled water amended with 0.01% Tween 80. The discs were agitated at 110 rpm for 3 h on a rotary shaker to suspend the conidia. The conidial concentration in three 0.1 mL aliquots of 10-fold serial dilutions of the aqueous suspensions was determined using a haemacytometer. The mean conidial yield per square centimeter was calculated for each isolate. Each plate served as a replicate, and there were 5 plates for each isolate. The speed of conidial germination was determined by placing suspended conidia in SDY broth as described previously, but germination was assessed bihourly, starting 12 h after inoculation and ending after 24 h. The time required for 50% germination to occur (TG50) was calculated. There were 5 replicates for each isolate. To evaluate relative hyphal growth, 0.2 mL of conidial suspension of each isolate was inoculated on an SDAY plate. After incubating at 22 °C for 72 h, mycelium disks, 6 mm in diameter, were cut off using a sterile cork borer and placed in the center of freshly prepared SDAY plates. The diameter of the growing colony (exceeding the 6 mm diameter of the discs) was measured daily until sporulation (14 d) on a pre-marked line with a vernier caliper. Each plate served as a replicate, and there were 5 plates for each isolate. Statistical Analysis Infection rate data were arcsine-square-root transformed to meet the assumptions of ANOVA. Screening and greenhouse bioassay, conidia production and hyphal growth data were subjected to one-way ANOVA. If the F-value was significant, means were separated using the Student-Newman-Keuls (SNK) test. The concentration-dependent and speed-of-germination data were analyzed using probit analysis. Type I error was set at 0.05 for all tests. When transformed, data were returned to the original scale for presentation in the tables and figures. Statistical analysis was done using SAS software version 9 (SAS, 2002). RESULTS Results of the screening test, which involved evaluating infection rates of 33 B. bassiana isolates against the green peach aphid, showed significant differences among the isolates (F32,132 = 17.9, p < 0.001). The isolates BAU004, BAU016, BAU018, BAU019 and GHA caused more than 70% infection, which was significantly higher than the rest of the tested isolates. These isolates were considered highly virulent to the adult stage and were selected for further evaluation against the aphid (Table 1). The isolates BAU021, BAU007, BAU025, BAU005, BAU027 and BAU015 caused infection ranging from 40-60% and were considered moderately virulent to the aphid. 
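The transformation and ANOVA steps described in the Statistical Analysis subsection can be sketched as follows. The replicate infection proportions below are invented, and the original analysis used SAS v9 with SNK mean separation and probit analysis rather than Python, so this is only a minimal illustration of the same pipeline.

```python
# Sketch of the stats pipeline above: arcsine-square-root transform of
# infection proportions, then one-way ANOVA. Values are hypothetical.
import numpy as np
from scipy import stats

infection = {  # proportion infected per replicate (5 replicates/isolate)
    "BAU019": [0.80, 0.78, 0.82, 0.75, 0.79],  # high virulence
    "BAU005": [0.52, 0.48, 0.55, 0.50, 0.47],  # intermediate
    "BAU003": [0.20, 0.18, 0.25, 0.22, 0.19],  # low virulence
}

transformed = {k: np.arcsin(np.sqrt(v)) for k, v in infection.items()}
f_stat, p_val = stats.f_oneway(*transformed.values())
print(f"F = {f_stat:.1f}, p = {p_val:.3g}")  # significant -> SNK follow-up
```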
The rest of the tested isolates resulted in less than 40% infection rates and were considered poorly virulent to the aphid (Table 1). Evaluation of the isolates under greenhouse conditions showed a significant effect due to isolate (F4,20 = 3.55, p = 0.024). Treatment of green peach aphid adults with isolates of B. bassiana in the greenhouse resulted in 30.7 to 48.3% infection rates. The highest infection rate was achieved using the isolate GHA, followed by BAU018, BAU019 and BAU004, with no significant differences among them. The least virulent isolate was BAU016, which caused a significantly lower infection rate compared with the BAU018 and GHA isolates (Table 3). Studying virulence-related growth characteristics of highly virulent isolates compared to less virulent ones showed significant differences in conidia production (F5,24 = 45.8, p < 0.001) and daily hyphal growth (F5,294 = 5.5, p < 0.001), but not in the speed of conidial germination (Table 4). The highly virulent isolates BAU019 and BAU018 produced significantly more spores than the rest of the less virulent isolates. However, isolate BAU019 significantly outperformed isolate BAU018 in spore production. DISCUSSION It is well known that entomopathogenic fungal isolates of the same species exhibit different biological and ecological characteristics when challenged against the same insect species. Therefore, one of the first important steps in the development of an effective microbial control agent is careful evaluation and selection of the appropriate isolate based on virulence against the target pest. B. bassiana is no exception in this route of development, as many studies have involved evaluation of this fungal species against insect pests for further use as a biological control agent (Liu et al., 2003; Quesada-Moraga et al., 2006). Although B. bassiana is known to be able to infect the green peach aphid (Alongkorn et al., 2013), very little is known about the virulence of its different isolates against this economically important insect pest. Todorova et al. (2000) evaluated the pathogenicity of 10 B. bassiana isolates against two insect pests, the Colorado potato beetle (Leptinotarsa decemlineata Say) and the green peach aphid (M. persicae), under laboratory conditions. Six of the ten tested isolates were found highly virulent to the two pests. The current study evaluated 33 different B. bassiana isolates under laboratory and greenhouse conditions, where the green peach aphid is considered a major pest of many greenhouse crops. B. bassiana might be an excellent candidate for development as a microbial control agent against the green peach aphid. Aphids possess piercing-sucking mouthparts with which they suck plant sap from the conductive tissues. This feeding behavior might result in avoidance of ingestion of many microbial control agents such as bacteria and viruses, which need to be ingested to infect their hosts. On the contrary, entomopathogenic fungi, including B. bassiana, cause infection by direct penetration through the host cuticle, which makes them excellent candidates as microbial control agents against pests with piercing-sucking feeding behavior (Wraight and Carruthers, 2010). Furthermore, B. bassiana is considered safer and more ecofriendly than chemical insecticides (Goettel et al., 1997), has a wide host range attacking a variety of important pests, can be cultured on relatively inexpensive media and has a long shelf life (Hajek and Leger, 1994). 
Several methods have been used to describe isolate variation within a species of entomopathogenic fungi. These include morphological characteristics of spores and colonies, extracellular protein profiles, pathogenicity, and growth or nutrient requirements. The differences in efficacy are attributed to many factors, such as the production of bioactive compounds and other physiological growth-related characteristics. Evaluation of growth characteristics related to virulence showed that highly virulent isolates such as BAU019 outperformed all other isolates in spore production on artificial media. Moreover, all highly virulent isolates showed faster hyphal growth than low-virulence ones. The speed of hyphal growth might be one of the factors underlying the differences in virulence among the tested isolates. Faster hyphal growth usually results in faster colonization of the infected insects, leading to increased virulence. These findings coincide with a previous report regarding the efficacy of B. bassiana isolates against the tarnished plant bug (Lygus lineolaris): B. bassiana isolates that produced larger conidia, had higher spore production and showed faster spore germination and hyphal growth rates over a wide range of temperatures were generally more virulent to the tarnished plant bug than the less virulent isolates (Liu et al., 2003). It was clear from the obtained results that the highly virulent isolates were more efficacious under laboratory conditions compared to greenhouse conditions. This variation might be due to environmental factors, particularly relative humidity, which greatly influences the efficacy of B. bassiana. Increasing relative humidity to 90% or more improves the efficacy of B. bassiana under greenhouse conditions (Shipp et al., 2003). Temperature is another important factor playing a key role in fungal growth and spread (Orozco-Avitia et al., 2013). Moreover, the conidia used in the greenhouse bioassay were unformulated conidia with no additives that might improve efficacy, such as UV protectants and other inert ingredients used in commercial formulations. CONCLUSION In the current study, a screening bioassay procedure starting with many B. bassiana isolates was carried out to identify the most virulent isolates against the green peach aphid. The study identified three isolates (BAU018, BAU019 and BAU004) as the most promising ones for further development as microbial insecticides against the green peach aphid. When developed as microbial insecticides, these isolates could be used as control tactics in organic farming and Integrated Pest Management programs for environmentally safe control of the green peach aphid. ACKNOWLEDGEMENT The researchers thank The Scientific Research Fund, Ministry of Higher Education and Scientific Research, Jordan, Grant number Z, B/1/06/2008, for funding this research. We also thank Miss Reem Abbasi for technical help.
v3-fos-license
2024-06-07T15:11:40.668Z
2024-06-05T00:00:00.000
270302741
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://jmhg.springeropen.com/counter/pdf/10.1186/s43042-024-00533-2", "pdf_hash": "e984b19b271f81fbaf664097e2da1bd208859b52", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44682", "s2fieldsofstudy": [ "Medicine", "Biology" ], "sha1": "684d17a1cca9f1223ea277c20a252be96891c849", "year": 2024 }
pes2o/s2orc
Evaluation of adropin, fibroblast growth factor-1 (FGF-1), and Toll-like receptor-1 (TLR1) biomarkers in patients with inflammatory bowel disease: gene expression of TNF-α as a marker of disease severity Background Inflammatory bowel disease (IBD) is a chronic relapsing inflammatory disorder of unknown etiology and unpredictable course. The aim of the work was to assess the levels of adropin, fibroblast growth factor-1 (FGF-1), and Toll-like receptor-1 (TLR1) biomarkers in IBD patients compared to controls and to evaluate the gene expression of TNF-α as a marker of disease severity. Methods Adropin, fasting serum FGF-1 levels, TLR1, and TNF-α were measured in 60 IBD patients. They were compared with 58 healthy controls matched for age and gender. Moreover, the blood-cell cDNA copy number of TNF-α was determined as a marker of severity. Results Adropin and TLR1 levels were significantly lower in patients than in controls. FGF-1 was reduced, but not statistically significantly. The expression of the TNF-α gene in the IBD patients was significantly increased (42%) in comparison with control samples (P < 0.001). Conclusions Adropin, FGF-1, and Toll-like receptor-1 biomarkers may have a role in the intricate pathophysiology of IBD and may possibly operate as predictors of disease activity. Thus, they may be therapeutic targets for IBD. Moreover, the expression of the TNF-α gene can be used as a marker of severity. Introduction Inflammatory bowel diseases (IBD) are chronic intermittent inflammatory gastrointestinal disorders of unknown etiology but with a clear genetic predisposition [1, 2]. Inflammatory bowel diseases, including ulcerative colitis and Crohn's disease, are chronic and relapsing conditions characterized by massive damage to the epithelium and the underlying mesenchyme of the intestine that pose a growing burden on healthcare systems worldwide [3]. IBD is hypothesized to develop as a result of interplay between environmental, microbial, and immune-mediated factors [4]. It is associated with significant morbidity in Western countries and is becoming more prevalent in the developing countries, causing an increasing strain on global healthcare systems [5]. The pathophysiology of inflammatory bowel disease (IBD) has been linked to biomarkers such as adropin, FGF-1, and TLR1. They play a role in the control of immunity, metabolism, and inflammation as well as in the prediction of disease activity in IBD. Adropin is a peptide hormone produced by fat cells that controls a variety of processes, such as inflammation, insulin sensitivity, vascular protection, and energy metabolism. Adropin is produced in the liver, brain, heart, and gastrointestinal tract. It assists in the reduction of inflammation by acting on immune cells [6]. Impaired epithelial healing is a crucial aspect of inflammatory bowel disease (IBD). Fibroblast growth factor-1 (FGF-1) plays a significant part in the pathophysiology of IBD. Its downstream consequences are linked to several cellular functions, including epithelial healing in response to injury [7]. Growth factors are considered possible tools for controlling and repairing intestinal inflammation. They have a key role in cellular differentiation, angiogenesis, and proliferation [8]. 
TLR1 belongs to a group of proteins that play a critical role in the pathogenesis of IBD and are involved in the innate immune response. When TLR1 binds to its ligand, it triggers a cascade of events that lead to the production of inflammatory cytokines and the activation of immune cells [9]. In patients with IBD, it is thought that TLR1 levels are lower due to dysregulation of TLR1 signaling pathways, increased production of anti-TLR1 antibodies, and reduced expression of TLR1 on immune cells. The lower levels of TLR1 in patients with IBD may contribute to the development and progression of the disease. Tumor necrosis factor (TNF)-α is a multifunctional cytokine that plays a crucial role in the pathophysiology of inflammatory, autoimmune, and malignant diseases by promoting inflammatory responses [10, 11]. High serum TNF-α levels in IBD blood and tissue samples indicate the critical role of TNF-α in cell-mediated immunity. Several polymorphisms exist in TNF-α, most of which are found in the promoter region, and some of which affect the gene's expression level [12]. The present study aimed to estimate the levels of adropin, fibroblast growth factor-1 (FGF-1) and Toll-like receptor-1 (TLR1) biomarkers in IBD patients compared to controls and to assess the gene expression of TNF-α. Methods The study included 60 patients with IBD. They were compared with 58 healthy controls matched for age and gender. They were enrolled from the IBD Clinic, National Hepatology and Tropical Medicine Research Institute (NHTMRI), Egypt. The NHTMRI Research Ethics Committee gave its approval to the study. Informed consent to contribute to the research was obtained from the patients. For younger patients, informed consent was given by the parents or guardians. The current study conforms to the guidelines established by the NHTMRI Research and Ethics Committees. A pedigree analysis was performed for all cases. Diabetes, cardiovascular conditions, and corticosteroid therapy within three months of the start of the trial were the exclusion criteria. Additionally, each member of the control group underwent a thorough medical examination, and any participants who displayed any indication of inflammation were excluded. Assessment of adropin The serum concentration of adropin was measured using an enzyme-linked immunosorbent assay (ELISA) kit (SinogeneClon Biotech Co.). Estimation of the level of Toll-like receptor-1 (TLR1) The level of Toll-like receptor-1 (TLR1) in serum was estimated using an ELISA kit (catalogue number SL4063Hu; R&D Systems, Minneapolis, MN, USA). Isolation of RNA and reverse transcription (RT) process In accordance with the manufacturer's instructions, total RNA was isolated from the blood cells of control individuals and patients with inflammatory bowel disease (IBD) using the RNeasy Mini Kit (Qiagen, Hilden, Germany) supplemented with a DNase I (Qiagen) digestion step. After digesting DNA residues with one unit of RQ1 RNase-free DNase (Invitrogen, Germany), isolated total RNA was resuspended in DEPC-treated water and quantified by spectrophotometry at 260 nm. The purity of a total RNA sample was determined by measuring its 260/280 nm ratio, which ranged from 1.8 to 2.1. Additionally, formaldehyde-containing agarose gel electrophoresis was used to ensure integrity through analysis of the 28S and 18S bands using ethidium bromide stain [13]. Aliquots were kept at -80 °C unless they were used immediately for reverse transcription (RT). 
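The spectrophotometric quality control described above follows standard rules of thumb: one A260 unit corresponds to roughly 40 µg/mL of RNA, and a 260/280 ratio between 1.8 and 2.1 indicates acceptable purity. A minimal sketch, with invented absorbance readings:

```python
# Sketch of the RNA QC checks above; readings are hypothetical.

def rna_conc_ug_per_ml(a260: float, dilution: float = 1.0) -> float:
    """RNA concentration using the standard 40 ug/mL per A260 unit."""
    return a260 * 40.0 * dilution

def purity_ok(a260: float, a280: float) -> bool:
    """Acceptance window used in the study: 1.8 <= A260/A280 <= 2.1."""
    return 1.8 <= a260 / a280 <= 2.1

print(rna_conc_ug_per_ml(0.25, dilution=10))  # 100.0 ug/mL
print(purity_ok(0.25, 0.13))                  # ratio ~1.92 -> True
```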
The RevertAid™ First Strand cDNA Synthesis Kit (Fermentas, Germany) was used to reverse transcribe the poly(A)+ RNA isolated from blood cells into cDNA in a total volume of 20 µL. A master mix comprising 50 mM MgCl2, 10× RT buffer, 10 mM of each dNTP, 50 µM oligo-dT primer, 20 IU ribonuclease inhibitor (a 50 kDa recombinant enzyme that inhibits RNase activity), and 50 IU MuLV reverse transcriptase was used in conjunction with 5 µg of total RNA. The RT reaction was conducted for 10 min at 25 °C and 1 h at 42 °C [14] and concluded with a 5-min denaturation step at 99 °C. The reaction tubes holding the RT preparations were then flash-cooled in an ice chamber prior to being utilized for quantitative real-time polymerase chain reaction (qRT-PCR) cDNA amplification. Real-time PCR (qPCR) Using the StepOne™ Real-Time PCR System from Applied Biosystems (Thermo Fisher Scientific, Waltham, MA, USA), the cDNA copy number of blood cells was determined. The PCRs were set up in 25 µL reaction mixtures containing 12.5 µL of 1× SYBR® Premix Ex Taq™ (TaKaRa Biotech. Co. Ltd.), 0.5 µL of 0.2 µM sense primer, 0.5 µL of 0.2 µM antisense primer, 5 µL of cDNA template, and 6.5 µL of distilled water [15]. The reaction program comprised three steps. The first step was three minutes at 95.0 °C. The second step comprised 40 cycles, each divided into three stages: (a) 15 s at 95.0 °C, (b) 30 s at 55.0 °C, and (c) 30 s at 72.0 °C. The third step (melting curve) consisted of 71 cycles, starting at 60.0 °C and increasing by about 0.5 °C every 10 s up to 95.0 °C. A distilled water control was included in every experiment. The specific primer sequences for the cytokine gene tumor necrosis factor alpha (TNF-α) were designed and are listed in Table 2. The 2^(-ΔΔCT) method was utilized to ascertain the relative quantification of the target in relation to the reference [13]. The Primer-BLAST results are as follows: as the primer was designed, the annealing temperature was obtained and used in the qRT-PCR program. Given the accuracy and sensitivity of qRT-PCR compared with conventional PCR, melt-curve analysis provides more confidence than gel electrophoresis in the assessment of primer specificity. Accordingly, the melting curve obtained in the current experiment shows a single peak for the amplicon from the TNF-α-specific primers (Fig. 1), which is typically interpreted as representing a pure, single amplicon; this is more reliable than gel electrophoresis. Statistical analysis SPSS software for Windows, version 20.0 (SPSS, USA), was used to analyze the data. Results are presented as means ± standard deviation (SD). Unpaired data were compared using the independent-samples t test. P values less than 0.05 were regarded as statistically significant. Results The age of patients ranged between 20 and 53 years (32.54 ± 10.02), whereas the mean age of controls was 34.00 ± 10.14 years (P = 0.67). The ratio of female to male was 38:22. There was positive consanguinity in 18 patients (30%) and similarly affected family members in 19 (31.7%). Compared to the control group, patients with IBD had significantly lower serum levels of Toll-like receptor-1 (328.77 ± 57.60 vs 587.24 ± 67.42 µg/g, P = 0.002) (Table 1). The expression levels of the TNF-α gene were increased significantly (P < 0.01) in IBD blood samples compared with healthy control samples (Fig. 2).
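The 2^(-ΔΔCT) relative-quantification step cited above can be written out in a few lines. The Ct values below are invented, and the housekeeping reference gene is whichever one the study normalized against; this is a sketch of the formula, not the study's actual computation.

```python
# Minimal implementation of the 2^(-ddCt) method referenced above.
# Ct values are hypothetical placeholders.

def fold_change(ct_target_sample: float, ct_ref_sample: float,
                ct_target_control: float, ct_ref_control: float) -> float:
    """Relative target expression (sample vs control), normalized to a
    reference gene: fold = 2 ** -(dCt_sample - dCt_control)."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_sample - d_ct_control)

# e.g. TNF-a amplifying ~2 cycles earlier (relative to the reference) in IBD:
print(fold_change(24.0, 18.0, 26.0, 18.0))  # 4.0 -> four-fold up-regulation
```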
The clinical features were as follows: about 80% of cases had abdominal pain, diarrhea, vomiting, weight loss and depression. Bleeding was found in 24% of cases, while arthritis was noted in 83% of cases. Elevated levels of TNF-α gene expression were related to the occurrence of clinical manifestations such as arthritis (70%), abdominal pain (80%), diarrhea (70%) and vomiting (60%), as well as to the reduced levels of adropin and Toll-like receptor-1 in IBD cases (Tables 2, 3). Discussion Immune-mediated diseases typically show a female preponderance. In our cohort, females predominated over males (38:22). Studies from different countries vary in their findings regarding the association between gender and IBD [16, 17]. There are numerous reasons why these differences might exist. It is likely that different countries have variable environmental causes of IBD. Another explanation is that certain groups have genetic variations that make specific individuals more prone to IBD than others. Understanding the causes of the disparities in results between studies of gender and IBD requires additional investigation. But it is undeniable that gender plays a key role in the emergence of IBD. Consanguinity is a risk factor for IBD [18]; in our research, 18 patients (30%) reported consanguineous marriage of their parents, and 19 patients (31.7%) had similarly affected family members. Previous studies have shown that adropin levels are reduced in patients with inflammatory bowel disease [6, 19]. This suggests that adropin might be involved in the progression of IBD. Supporting the results of an earlier report, it was found that inflammatory bowel disease (IBD) patients had considerably lower serum adropin levels compared to the control group (73.80 ± 5.99 vs 88.32 ± 6.48 µg/g, P = 0.001). A precise explanation for how the adropin level is decreased in IBD remains unknown, though inflammation is considered a potential factor that damages adropin-producing cells. More studies are needed to elucidate the role of adropin in the pathophysiology and prognosis of IBD. IGF-I is a hormone that plays a role in growth, development, and metabolism [6]. In our study, serum fibroblast growth factor-1 levels were lower in IBD patients in comparison with the control group (5.30 ± 5.44 vs 7.83 ± 7.66 µg/g, P = 0.409). Similarly, a previous study [20] suggested that reduced serum levels of insulin-like growth factor I (IGF-I) are common in patients with IBD, most likely due to gastrointestinal dysfunction, growth hormone resistance, and chronic inflammation. 
Confirming the findings of previous studies, this report found that serum Toll-like receptor-1 levels were significantly lower in IBD patients in comparison with the control group (328.77 ± 57.60 vs 587.24 ± 67.42 µg/g, P = 0.002). TLR1 may be a contributing factor to the development and progression of IBD. Therefore, TLR1 is a potential therapeutic target in IBD and could motivate the development of new therapies. TNF-α is the most-studied proinflammatory cytokine and has been proven to play a role in IBD [21]. The etiology of IBD, a complicated multifactorial illness, is poorly understood. It has been demonstrated that allelic variations in cytokine genes affect gene expression, which in turn affects the severity of and susceptibility to inflammatory disorders. The innate and adaptive immune systems are regulated by a vast family of cytokines. TNF-α causes proinflammatory consequences in chronic intestinal inflammation, angiogenesis, T cell and macrophage activation, and epithelial cell destruction. Conclusion Adropin, FGF-1, and Toll-like receptor-1 (TLR1) biomarkers may play a role in the complex pathophysiology of inflammatory bowel disease (IBD) and potentially serve as predictors of disease activity. As such, they may be therapeutic targets for IBD. Moreover, TNF-α gene expression plays a critical role in the pathogenesis of IBD and can be used as a marker of severity, which may be targeted by existing immunomodulatory therapies. Therefore, anti-TNF-α antibodies likely exert their therapeutic effects in inflamed intestinal mucosal tissues. However, larger-scale studies are needed to assess the significance of these findings. The limitations of the present study are its small sample size and its cross-sectional design, which precludes the establishment of causal relationships. Fig. 2 The alterations of the TNF-α gene in blood samples of inflammatory bowel disease (IBD) patients. Data are presented as mean ± SEM. a,b: Mean values with unlike superscript letters were significantly different. Table 1 The biochemical parameters in the IBD and control groups
v3-fos-license
2019-03-11T17:20:16.688Z
2019-01-01T00:00:00.000
73482459
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1177/2325958218822306", "pdf_hash": "3af336308e30648a5b32dadf1f469ab49892a27d", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44683", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "sha1": "2ab2a1249e0a97afb4d660e29571b1f44df89d2e", "year": 2019 }
pes2o/s2orc
How Much Do They Know? An Analysis of the Accuracy of HIV Knowledge among Youth Affected by HIV in South Africa HIV/AIDS prevalence rates in South Africa are among the highest in the world. The key to reducing transmission is the dissemination of accurate knowledge. Here, we investigate the accuracy of HIV/AIDS knowledge among youth affected by the disease. Data from the Fourth South African National HIV, Behaviour and Health Survey (2012) are used, and a weighted sample of 4 095 447 youth (15-24 years old) who have known or cared for someone with HIV/AIDS is analyzed. Results show that more than one-third (40.37%) of youth in South Africa are affected by the disease. One-quarter of the affected youth have 75% accurate knowledge of the virus, while only 10% have 100% accurate knowledge. Youth in rural areas (odds ratio [OR] = 0.61) and youth looking for work (OR = 0.39) are less likely to have accurate knowledge. Youth without disabilities (OR = 2.46), in cohabiting (OR = 1.69), and in dating (OR = 1.70) relationships are more likely to have accurate knowledge. In conclusion, in order to reduce HIV incidence and combat HIV myths, efforts to improve the accuracy of HIV knowledge among youth affected by the disease are needed. There should be more community-based campaigns to target unemployed youth in the country. Introduction Accurate knowledge of HIV/AIDS is pivotal to preventing further transmission of the disease. HIV knowledge is correct information regarding modes of transmission, high-risk behaviors, and prevention and care strategies. 1 The need for reliable, valid, and accurate tools for testing HIV knowledge has led to the development of many different questionnaire scales over the years. [2][3][4] These tools are conceptualized to evaluate the extent to which knowledge exists by age-group, sex, and other subpopulations in a number of different countries. [5][6][7] These tools have also been adopted and adapted in various surveys, including population-based household and sexual behavior surveys. 8,9 A number of studies have identified the determinants of HIV knowledge. [10][11][12][13] One study found that women with more education and who are wealthy are more likely to know about HIV protective behaviors, including consistent condom use, and are less likely to have misconceptions about modes of transmission. 11 Education, in particular the ability to read, makes written information, including pamphlets, billboards, newspaper articles, and health briefs and policy documents, easier to comprehend. Education also makes it possible to critically evaluate information that is disseminated and to decide what is correct and incorrect knowledge. Another study of young, urban women in Kenya found that being older youth (20-24 years old), having had at least 1 HIV test in their lifetime, and knowing someone with HIV, or knowing someone who died from AIDS, increase the likelihood of comprehensive HIV knowledge. 12 Older youth are more likely to have completed secondary education compared to adolescents (15-19 years old). For this reason, more education could translate into being able to comprehend more information regarding the disease. In addition, HIV-testing facilities offer trained consultation and support. Therefore, persons who have tested have access to a trained consultant or nurse to whom they may pose questions about the disease. 
Finally, knowing someone with the disease allows for dialogue and conversation regarding modes of transmission and protective behaviors between 2 or more individuals, one of whom would be speaking from experience. Among youth in South Africa, the dissemination of accurate knowledge has been hindered by the perpetuation of AIDS myths and misinformation. [14][15][16] Of notable importance is the impact false information has on sexual violence in the country, with 1 study reporting that 9.4% of adolescent boys who believed that rape is a cure for HIV demonstrated sexual violence toward a female partner. 17 With current youth HIV prevalence rates in the population as high as 7.1%, there is a need to reassess the accuracy of HIV knowledge among youth in the country. 18 Further, while youth tend to have some or incomplete knowledge about HIV and how it is transmitted, evidence suggests that youth affected by the virus, that is, those knowing someone with HIV/AIDS in their households and/or communities, have better knowledge than youth who are not affected. 19,20 However, even among youth affected by the disease, knowledge is not 100% accurate; yet these youth can play an important role as peer educators in preventing the spread of the disease within their households and communities, provided that they have the most accurate (100%) knowledge of HIV and AIDS. Therefore, the primary purpose of this study is to identify the factors associated with HIV knowledge accuracy among youth affected by the disease in South Africa. Study Design This is a cross-sectional study using the Fourth South African National HIV, Behaviour and Health Survey, 2012 (http://www.hsrc.ac.za/en/researchdata/). Survey and Sample The 2012 survey is the fourth in the series of national household HIV surveys conducted by a consortium of scientists led by the Human Sciences Research Council. 21 The data pertain to the HIV status, demographic, socioeconomic, and behavioral characteristics of the sample. 21 In the 2012 survey, over 38 000 people of all ages were interviewed. 21 The unweighted number of youth who participated in the survey is 8221. For this study, a weighted sample of youth affected by HIV was determined through a positive response (yes) to the question "What has made you take the problem of HIV/AIDS more seriously?", with any of the response options "knowing or talking to someone with HIV/AIDS"; "caring for a person with HIV/AIDS"; or "knowing someone who has died of AIDS." A weighted total sample of 4 095 447 youth affected by HIV/AIDS was identified. The percentage of youth affected by HIV/AIDS is 40.37%. Variable Definitions Dependent Variable. The dependent variable in this study is "accuracy of HIV knowledge," which is defined as having correct knowledge on all (100%) of the questions from the survey. Correct answers to each of the questions were determined from similar studies that have used the same questions. 12,22 Using 8 questions on HIV knowledge, a variable showing the percentage of correct answers was created. The survey asked a number of questions on HIV knowledge, attitudes, perceptions and behaviors (KAPB). 21 There were only 8 questions pertaining to knowledge specifically; the rest were measurements of attitudes, perceptions, and behaviors and had been adopted from a source recommended by the United Nations Programme on HIV and AIDS. 23 This study specifically focuses on the knowledge aspect of the KAPB regarding HIV and AIDS.
Knowledge is isolated as the outcome of this study because knowledge can shape attitudes, perceptions, and even behaviors, particularly among youth. 24 Knowledge is therefore a key initial component in shaping KAPB among youth affected by the disease. The questions are: (1) "Can AIDS be cured?"; (2) "Can a person reduce their risk of HIV by having fewer sexual partners?"; (3) "Can a healthy-looking person have HIV?"; (4) "Can HIV be transmitted from a mother to her unborn baby?"; (5) "Can the risk of HIV transmission be reduced by having sex with only one uninfected partner who has no other partners?"; (6) "Can a person get HIV by sharing food with someone who is infected?"; (7) "Can a person reduce the risk of getting HIV by using a condom every time he/she has sex?"; and (8) "Can medical male circumcision reduce the risk of HIV infection in males?". Independent Variables. The predictor variables for the study include age (15-19 and 20-24 years old); sex (male or female); race (African, colored, white, Indian or Asian or other); place of residence (urban or rural); self-reported disability status (yes or no); employment status (employed, not looking for work, student, looking for work, unable to work); education status (in school, not in school-completed grade 12, or not in school for another reason); marital status (married, cohabiting, dating and not living together, single or other); and source of HIV knowledge (peers or family; religious institution or community; and AIDS organization, clinic or hospital). For the last variable, 11 categories of sources of information were grouped into the 3 response categories. Statistical Analysis Frequency and percentage distributions are used to show the characteristics of the sample. Cross-tabulations are done to show the distribution of the sample by independent variables, and statistical significance (P values) is shown. An adjusted binary logistic regression model was fitted at a 95% level of significance to identify the likelihood of accurate (100%) HIV knowledge by respondents' characteristics. Accurate knowledge (1) in the model is measured as correct responses to all 8 questions, while 0 to 7 correct answers are considered inaccurate knowledge (0). All tests were done using STATA version 13. Figure 1 shows that 36.01% of male and 44.78% of female youth are affected by HIV. A total of 40.37% of all youth in the country are affected by HIV. Table 1 shows that most youth (83.35%) are aware that a person with HIV can look healthy. A further 80.67% are aware that HIV cannot be transmitted by sharing food with an HIV-positive person. However, less than half of the sample (48.13%) is aware that medical male circumcision reduces the risk of HIV infection among males. Figure 2 shows that only 11% of youth who are affected by HIV have 100% accuracy of HIV knowledge. The largest group, 25% of the sample, has 75% accurate knowledge. This is equivalent to getting 6 of the 8 questions correct. (17.38%), married (13.84%), and source their information from peers and family members (19.64%). By race, the African population has the lowest percentage of complete accuracy at 10.56%, with Indian/Asian youth having the highest at 40.21%. Finally, only 10.49% of youth with 100% accuracy obtain their information from an AIDS organization or clinic. Discussion The aim of this article is to identify the level of, and factors associated with, accurate HIV knowledge among youth affected by the disease.
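The scoring and regression just described can be made concrete with a short sketch. This is not the authors' STATA code: the file name, column names and category labels are hypothetical stand-ins for the survey variables, and the survey's sampling weights are ignored for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per respondent; q1..q8 are the 8 knowledge items coded
# 1 (correct) or 0 (incorrect); the other columns are categorical predictors.
df = pd.read_csv("sabssm_2012_youth.csv")  # hypothetical file name

items = [f"q{i}" for i in range(1, 9)]
df["pct_correct"] = df[items].sum(axis=1) / 8 * 100   # 0, 12.5, ..., 100

# Accurate knowledge = all 8 correct (coded 1); 0-7 correct = inaccurate (0).
df["accurate"] = (df[items].sum(axis=1) == 8).astype(int)

# Adjusted binary logistic regression; exponentiated coefficients give odds
# ratios of the kind reported in the paper (e.g., rural residence OR ~ 0.61).
fit = smf.logit(
    "accurate ~ C(age_group) + C(sex) + C(race) + C(residence) +"
    " C(disability) + C(employment) + C(education) + C(marital) + C(source)",
    data=df,
).fit()
print(np.exp(fit.params))
```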
More than one-third of all youth in the country are affected by HIV/AIDS. This is plausible considering HIV infection rates in the country are high, at 12.7% of the total population. 25 Also, HIV disclosure is becoming less stigmatized in the country, with research showing that among HIV-positive females in the Western Cape province, disclosure of status to some family, friends, and partners was common practice, and tacit exposure, seen through taking medication in front of others, was not uncommon. 26 This makes it more likely that youth in the country would be aware of someone living with or who has died from the disease in their households and communities. A small percentage of youth affected by HIV/AIDS have, by this conceptualization, accurate knowledge of the disease. Another study using the same data, but on all youth, found that HIV knowledge has been decreasing over time, from 31.5% accurate knowledge in 2008 to 26.8% in 2012. 18 AIDS myths and misconceptions create skepticism and make youth less likely to believe in research-based knowledge that gets disseminated. [27][28][29] While more than two-thirds of the sample are aware that an HIV-positive person can appear healthy and know that sharing food with an HIV-positive person does not spread the disease, still less than half know that medical male circumcision can reduce the risk of infection. This could be due to the recency of the finding and the consequent debate that surrounds the generalizability of the results. At present, scholars and practitioners remain divided on the issue; however, the empirical consensus increasingly favors male circumcision as a protective factor. [30][31][32] More positively, youth in sexually active relationships (cohabiting, dating, and married) are more likely to have accurate knowledge than single youth. This is encouraging as it suggests that sexually active youth are aware of HIV/AIDS and that knowledge can be disseminated to their partners. However, research has shown that knowledge of HIV does not always translate into practicing protective behaviors. 33,34 One study found that young females who have knowledge of the disease but are in physically abusive relationships are unable to negotiate condom use with their partners. 35 Another study found that while youth know that condoms prevent the transmission of HIV, they prefer not to use them. 36 This literature and the result from this study suggest the need to further investigate the protective behaviors of couples who have accurate HIV knowledge. This study contributes to the existing literature first by identifying the exact knowledge that youth are lacking. From the results, more needs to be done to promote male circumcision information and practices as a viable preventative strategy. Second, the study found that youth with no employment have less knowledge than employed youth. For this reason, residential community-based programs designed to target youth who do not have places of employment should be increased. With the current youth unemployment rate as high as 32.4%, 37 the goal should be to disseminate HIV knowledge in areas of social interaction instead of at formal places of work and schools. The study has a few limitations. First, accuracy of HIV knowledge is based on a few questions, which do not include injecting drug use and blood transfusion as modes of transmission. A survey including these questions should be designed to capture more comprehensive knowledge.
Second, although sources of knowledge did not prove statistically significant, this study suggests that a more detailed study of these sources should be conducted to identify what and how much information is disseminated by different members of the youth's networks. In conclusion, in order for South Africa to reduce the impact of AIDS myths and misconceptions on incidence rates, efforts to increase HIV knowledge need to be made. Youth affected by the disease in their households and communities do not have 100% accurate HIV knowledge. However, youth affected by HIV could become peer educators and assist in disseminating knowledge in their networks and communities. While there is no guarantee that knowledge dissemination will directly change sexual behaviors, there is the possibility that more knowledge of HIV will work indirectly to change sexual practices by dismissing AIDS myths and encouraging evidence-based safer sex habits. For this reason, youth should receive more information through public campaigns and strategies to properly protect themselves from the disease.
Dataset of standard tests of Nafion 112 membrane and Membrane Electrode Assembly (MEA) activation tests of Proton Exchange Membrane (PEM) fuel cell: The data reported in this paper cover standard tests of the Nafion 112 membrane and MEA activation tests of a PEM fuel cell under various operating conditions. The dataset includes two general electrochemical analysis methods, polarization and impedance curves. The effects of different H2/O2 gas pressures, different voltages and various humidity conditions are considered in several steps. Details of the experimental methods are explained in this paper. The behavior of the PEM fuel cell during the distinct operating-condition tests, the activation procedure, and the operating conditions before and after activation can be inferred from the data. In the polarization curves, voltage and power density change as a function of the H2/O2 flows and the relative humidity. The resistances of the equivalent circuit used for the fuel cell can be calculated from the impedance data. Thus, the experimental response of the cell is evident in the presented data, which is useful for in-depth analysis, simulation and material performance investigation in PEM fuel cell research. 1-Data Introduction The experimental data show the performance of a PEMFC at several percentages of membrane compression, different applied voltages, different H2/O2 gas pressures, and various humidity conditions of the cathode and air, which can be used to study the behavior of a PEMFC, as is necessary for fuel cell research and development. In other words, the dataset helps researchers and specialists who investigate and work on PEM fuel cells [1]. Polarization and impedance curves have been obtained under specific empirical operating conditions. The MEA structure is defined as the composition of the anode, membrane and cathode. The temperatures of the anode, cathode and cell, and the pressure and flow rate of H2/O2 (ml.min-1), have been taken as the operating conditions during the evaluation. In the polarization curves, cell voltage (V) versus current density (mA.cm-2) and cell power density (mW.cm-2) versus current density (mA.cm-2) have been obtained at various relative humidities, gas pressures and membrane compressions. Impedance analyses were done at the end of each activation set and procedure, at different cell voltages, relative humidities and H2/O2 pressures. Also, in each activation procedure, the analysis was accomplished by repeating the activation sets [2]. The obtained data can be useful for simulation of a PEMFC, and simulation has an important role in scientific and applied studies. The report provides the necessary experimental results and parameters, such as the temperatures of the anode, cathode and cell, the pressures and flow rates of the gases, relative humidity, power density, current density, voltage and resistances of the cells, which are required for electrochemical, material, mechanical and electrical simulation of PEMFCs. Hence, the obtained data were used for simulation of the PEM fuel cell in the OPEM [3] simulation software produced by the Electrochemistry Simulation (ECSIM) organization research team, and they are compatible with most of the models used, especially the Amphlett model [4] in the OPEM software. The reported dataset is available on the ECSIM organization GitHub account [5]. This work is licensed under a Creative Commons license; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material [6]. In (1) and (2), the values are power density, current density and resistance.
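As a rough illustration of how such records can be processed, the sketch below computes power density from a polarization curve and reads approximate equivalent-circuit resistances off an impedance spectrum. The numeric arrays are invented placeholders, not values from the dataset.

```python
import numpy as np

# Polarization curve: current density (mA.cm-2) and cell voltage (V).
j = np.array([0.0, 100.0, 300.0, 600.0, 900.0])   # mA.cm-2
v = np.array([0.95, 0.80, 0.72, 0.62, 0.50])      # V

p = v * j  # power density in mW.cm-2, since V x mA.cm-2 = mW.cm-2
print(f"peak power density: {p.max():.0f} mW.cm-2")

# Impedance (Nyquist) data, ordered from high to low frequency: the
# high-frequency real-axis intercept approximates the ohmic resistance of
# the equivalent circuit; the low-frequency intercept adds the charge
# transfer resistance.
z_re = np.array([0.12, 0.15, 0.22, 0.35, 0.48])   # Ohm.cm2 (illustrative)
r_ohmic = z_re[0]
r_ct = z_re[-1] - z_re[0]
print(f"R_ohmic ~ {r_ohmic:.2f} Ohm.cm2, R_ct ~ {r_ct:.2f} Ohm.cm2")
```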
2-2-Activation test of MEA The start-up procedure for a new fuel cell membrane electrode assembly (MEA) may vary somewhat from application to application. What is important in any research or production environment is to use a consistent break-in procedure. Figure (4-2) shows polarization curves at the end of each activation set in 17 sets. 2-2-3 Activation test of MEA at constant voltage 0.6 V 2-2-3-1 Experimental design, method and details The anode composition includes CP (TGP-0120), Pt/C 20% and 30% Nafion. Activation of the MEA at a constant voltage of 0.6 V was repeated with the same MEA structure and operating conditions, but after a treatment procedure. In the first treatment method, the electrodes were ultrasonicated in 10% isopropyl solution for 60 min at 60 °C. In the second treatment method, the electrodes were ultrasonicated in water for 60 minutes at 60 °C; this treatment method has some differences in the MEA structure: the anode composition is A16: CP (TGP-0120), Pt/C 20%, 30% Nafion, and 1.98 mg DL.cm-2 with 0.396 mg.cm-2 catalyst loading, and the cathode composition is A16: CP (TGP-0120), Pt/C 20%, 30% Nafion, and 1.98 mg DL.cm-2 with 0.4 DL.cm-2 catalyst loading. Figures (6-2) and (7-2) show polarization curves at the end of each activation set in 9 sets for ultrasonication in isopropyl solution and in water, respectively. Tables (5-2) and (6-2) present the data extracted from the polarization curves of figures (6-2) and (7-2), in order. In the final step of the activation test at constant voltage 0.6 V, the analysis was done with a different MEA structure, without the treatment procedure. The anode components are C39: CP (TGP-0120), Pt/C 20%, 30% Nafion with 0.42 mg.cm-2 catalyst loading. The cathode composition is A16: CP (TGP-0120), Pt/C 20%, 30% Nafion with 0.396 mg.cm-2 catalyst loading. Table (7-2) and the corresponding figure relate to the polarization curve at the end of each activation set in 9 sets. In the activation procedure, 10 minutes of OCV time and then 60 minutes at a constant voltage of 0.6 V were applied. In the next steps, 14 minutes of potential cycling between 0.7-0.5 V was repeated 10 times, and a constant current of 0.2 A.cm-2 was applied for 18 hours. Table (
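For orientation, the activation sequence described above can be encoded as a simple schedule. The step list below is my own reading of the reported procedure (taking the cycling step as 10 repeats of a 14-minute cycle), not code shipped with the dataset.

```python
# Each entry: (step description, duration in minutes).
activation_set = [
    ("open-circuit voltage (OCV) hold", 10),
    ("constant voltage, 0.6 V", 60),
    ("potential cycling 0.7-0.5 V, 10 repeats x 14 min", 10 * 14),
    ("constant current, 0.2 A.cm-2", 18 * 60),
]

total = sum(minutes for _, minutes in activation_set)
print(f"one activation set: {total} min (~{total / 60:.1f} h)")
for name, minutes in activation_set:
    print(f"  {minutes:5d} min  {name}")
```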
Pulmonary Adenocarcinoma Presenting as a Pineal Gland Mass With Obstructive Hydrocephalus Background: Adenocarcinoma is the most prevalent type of non–small cell carcinoma of the lungs. Patients with lung adenocarcinoma often present with cough, dyspnea, pain, and weight loss. They can also present with signs and symptoms of brain metastasis because the lungs are one of the most common origins of metastatic brain cancer. We describe a rare case of adenocarcinoma of the lungs presenting with pineal region metastasis. Case Report: A 61-year-old male presented to the emergency department with dizzy spells and gait disturbance. Magnetic resonance imaging of the brain demonstrated a solitary mass in the pineal region with marked obstructive hydrocephalus. A stereotactic biopsy was performed, and metastatic adenocarcinoma consistent with pulmonary origin was diagnosed. Computed tomography scan of the chest revealed a spiculated mass. The patient died shortly after the diagnosis was made. Conclusion: The pineal region is an unusual site for brain metastasis. Although such metastasis has rarely been described, it should be considered in the differential diagnosis of pineal region tumors, especially for patients with suggestive clinical or histopathologic features of primary malignancy elsewhere. INTRODUCTION Adenocarcinoma of the lungs accounts for nearly 40% of all lung cancers and is the most prevalent type of non-small cell carcinoma of the lung. 1 These patients commonly present with cough, dyspnea, and weight loss. 2 Pulmonary carcinomas are the most common primary source of brain metastasis 3 ; however, metastasis to the pineal gland is very rare, accounting for only 0.4% of all brain metastases. 1 We describe a unique presentation of pulmonary adenocarcinoma in a patient presenting with ataxia, presyncopal spells, and falls secondary to pineal metastasis discovered on brain imaging. CASE REPORT A 61-year-old male presented to the emergency department (ED) with a 3-week history of early-morning headaches, presyncopal episodes, blurry vision, and gait disturbance. He also described nausea, dizziness, presyncope, and multiple falls during the same period. The patient denied history of trauma or symptoms of palpitations, weakness, or speech difficulties. The patient's medical history was pertinent for coronary artery disease, chronic obstructive pulmonary disease, peripheral vascular disease, hypertension, and nicotine use disorder. He was an active cigarette smoker with an 84 pack-year smoking history but no alcohol use. His home medications included albuterol, atorvastatin, umeclidinium-vilanterol, pantoprazole, aspirin, and clopidogrel. Vital signs were normal. Physical examination was notable for a bilateral sixth nerve palsy with bilateral papilledema, mild ataxia of both lower extremities accompanied by truncal ataxia, and hyperreflexia in both upper and lower extremities with upgoing plantar reflexes bilaterally. The patient was alert and oriented, could follow commands, and had no aphasia or dysarthria. His pupils were equal, round, and reactive to light bilaterally. Head computed tomography (CT) scan performed in the ED showed a soft tissue mass in the pineal gland region with lateral and third ventricular enlargement. Brain magnetic resonance imaging (MRI) showed a 2.1 cm × 2.2 cm pineal mass with marked hydrocephalus from compression of the aqueduct and associated vasogenic brain edema (Figures 1A, 1B, and 1C).
CT imaging of the chest, abdomen, and pelvis with intravenous (IV) contrast showed a 2.0 cm × 2.0 cm left hilar spiculated nodule with adjacent lymphadenopathy concerning for malignancy (Figure 1D). The neurosurgery service evaluated the patient and recommended brain biopsy. The patient requested a second opinion and was transferred to a referral center with neurosurgical expertise. At that center, the patient underwent endoscopic ventriculostomy to relieve the hydrocephalus and biopsy of the pineal region mass. Histologic examination was suggestive of an epithelioid neoplasm with low proliferative activity. The immunophenotype of the tumor was positive for cytokeratin CAM 5.2, cytokeratin OSCAR, and epithelial membrane antigen (EMA), and negative for synaptophysin, glial fibrillary acidic protein (GFAP), neurofilament, S100, and OCT 3/4. The differential diagnosis was a low-grade neoplasm such as papillary tumor of the pineal region (PTPR), based on immunoreactivity for cytokeratins and negativity for GFAP and synaptophysin. Postoperatively, the patient's gait and headaches markedly improved, but he had persistent diplopia. Alternate eye patching was recommended. The patient was discharged with plans for repeat outpatient brain imaging to monitor the size of the tumor, given the negative initial brain biopsy. Four weeks later, the patient presented to the ED after a fall. He was found to have a right hip fracture. The patient reported that shortly after his brain biopsy, his neurologic symptoms had worsened. He required a walker to ambulate and fell more often because of imbalance and ataxia. He continued to have headaches, dizziness, and double vision. MRI of the brain in the ED showed an increase in mass size from 2.1 cm × 2.2 cm to 3.8 cm × 3.3 cm with vasogenic edema around the pineal mass. A decision to proceed with repeat stereotactic biopsy of the pineal tumor was made. Postoperatively, the patient failed to wake up from anesthesia. He had large pupils on neurologic examination. He was transferred to the neurointensive care unit for close monitoring and mechanical ventilation. Repeat CT scan of the head revealed hemorrhage within the mass, as well as a small intraventricular hemorrhage. Serial CT head scans demonstrated progressive enlargement of the lateral ventricles. An emergent external ventricular drain was inserted; however, the patient did not show any clinical improvement. Results of the second biopsy demonstrated fragments of solid tumor intermixed with brain parenchyma and blood. The neoplastic cells had eosinophilic cytoplasm, marked nuclear pleomorphism with irregular nuclear contours, conspicuous nucleoli, and occasional atypical mitoses. The neoplastic cells stained positive for pankeratin, CK7, and thyroid transcription factor-1 (TTF-1). The neoplastic cells were negative for CK20, p40, placental alkaline phosphatase, S100, GFAP, chromogranin A, and synaptophysin. Given the positivity of the CK7 and TTF-1 immunohistochemical stains, the tumor was diagnosed as metastatic adenocarcinoma consistent with pulmonary origin (Figure 2). The patient remained comatose for 48 hours after the procedure. After discussion with the family about treatment options and prognosis, the family opted for comfort care. The patient was extubated and died shortly thereafter. DISCUSSION Pineal metastases from lung cancer are extremely rare.
Patients with pineal metastases are typically asymptomatic, and the cancer is found incidentally on autopsy in most patients. 1 Small cell carcinoma is the most commonly reported lung cancer associated with pineal metastasis. 1 Other histologic types, including squamous cell carcinoma 4 and adenocarcinoma, 1 have also been reported. Our case describes an atypical presentation of lung adenocarcinoma with hydrocephalus caused by the mass effect of metastasis to the pineal gland. In addition, the initial pineal gland biopsy had a broad differential, including a PTPR, secondary to the nonspecific staining pattern of the small tumor fragments, with immunoreactivity for cytokeratins and negativity for GFAP and synaptophysin. 5 PTPR is a rare entity itself, and in 2007, the World Health Organization described PTPR as "a rare neuroepithelial tumor of the pineal region in adults, characterized by papillary architecture and epithelial cytology, immunopositivity for cytokeratin and ultra-structural features suggesting ependymal differentiation." 6 Looking retrospectively, the markers in our patient might have suggested a metastatic etiology initially, considering EMA positivity and GFAP negativity. 5 However, the rarity of both tumor types (primary pineal tumors and metastatic carcinoma) in this region made reaching the correct diagnosis challenging. Our patient had an isolated brain metastasis to the pineal region, and the unremarkable features of the first pineal biopsy might have obscured the clinical picture initially, leading to a delay in diagnosis. However, multiple biopsies from the same mass can clarify the diagnosis in cases of rare tumors, as the second biopsy in our case did yield a diagnosis. Another important consideration is the aggressive nature of the adenocarcinoma in this patient, manifested by the increase in size of the pineal mass during the 4-week interval after the first presentation. The rapid progression of this tumor demonstrates that the time window from presentation to initiation of treatment can be very narrow with such malignancies. Because the incidence of primary pineal tumors is low in older patients, any mass in the pineal gland region should always prompt a detailed investigation to look for a primary malignancy, as the two types of tumors have drastically different treatment modalities. Because the lungs are the most common primary site for brain metastasis, followed by breast cancer and malignant melanoma, 1 the index of suspicion should always be high for a possible pineal metastasis, and a workup should be done to look for a primary source. In cases of metastatic malignancy to the brain, after examination of the histology on hematoxylin and eosin-stained slides, immunohistochemical stains may be used to help elucidate the site of origin. In metastatic nonmucinous pulmonary adenocarcinoma, TTF-1 is typically positive in >85% of tumor cells. Also helpful in the diagnosis of pulmonary adenocarcinoma is positivity for napsin A and CK7, with negative staining for CK20. 7 CONCLUSION The pineal gland is an extremely rare site for both primary brain tumors and metastatic cancers. In patients with high suspicion for a malignant pineal gland tumor, repeat pineal biopsy might be required if the first biopsy is not consistent with the overall clinical picture. As the lungs are the most common primary site for brain metastasis, chest imaging and immunohistochemical staining are useful tools for the diagnosis of metastatic lung cancer to the pineal gland.
Evaluation of Focal Liver Reaction after Proton Beam Therapy for Hepatocellular Carcinoma Examined Using Gd-EOB-DTPA Enhanced Hepatic Magnetic Resonance Imaging Background Proton beam therapy (PBT) achieves good local control for hepatocellular carcinoma (HCC), and toxicity tends to be lower than for photon radiotherapy. Focal liver parenchymal damage in radiotherapy is described as the focal liver reaction (FLR); the threshold doses (TDs) for FLR in the background liver have been analyzed in stereotactic ablative body radiotherapy and brachytherapy. To develop a safer approach for PBT, both the TD and liver volume changes are considered clinically important in predicting the extent of damage before treatment, and subsequently in reducing background liver damage. We investigated the appearance time, TDs and volume changes regarding FLR after PBT for HCC. Material and Methods Patients who were treated using PBT and were followed up using gadolinium ethoxybenzyl diethylenetriamine pentaacetic acid-enhanced magnetic resonance imaging (Gd-EOB-DTPA MRI) after PBT were enrolled. Sixty-eight lesions in 58 patients were eligible for analysis. MRI was acquired at the end of treatment, and at 1, 2, 3 and 6 months after PBT. We defined the FLR as a clearly depicted hypointense area on the hepatobiliary phase of Gd-EOB-DTPA MRI, and we monitored TDs and volume changes in the FLR area and the residual liver outside of the FLR area. Results FLR was depicted in all lesions at 3 months after PBT. With FLR doses expressed as the 2-Gy equivalent dose (α/β = 3 Gy), the TDs did not differ significantly (27.0±6.4 CGE [10 fractions (Fr)] vs. 30.5±7.3 CGE [20 Fr]). There were also no correlations between the TDs and clinical factors, and no significant differences between Child-Pugh A and B scores. The volume of the FLR area decreased and the residual liver volume increased, particularly during the initial 3 months. Conclusion This study established the FLR dose for liver with HCC, which might be useful in the prediction of remnant liver volume for PBT. Introduction Recently, highly conformal radiotherapy, in the form of stereotactic ablative body radiotherapy (SABR), has been delivered safely and effectively for hepatocellular carcinoma (HCC) [1]. Furthermore, particle beam therapies such as proton beam therapy (PBT) and carbon ion therapy have been reported to achieve good local control regarding HCC [2,3]. In a systematic review and meta-analysis, toxicity tended to be lower using such particle beam radiotherapies relative to photon radiotherapy [4]. However, damage to the liver parenchyma in PBT has not been well evaluated. The focal liver parenchymal effect after SABR appears as a low-density area on computed tomography (CT) or a hypointense area during the hepatobiliary phase of gadolinium ethoxybenzyl diethylenetriamine pentaacetic acid-enhanced magnetic resonance imaging (Gd-EOB-DTPA MRI). This effect is described as the focal liver reaction (FLR) [5][6][7], and is a useful marker for predicting liver parenchymal damage in radiotherapy. For this purpose, using the hepatobiliary phase of Gd-EOB-DTPA MRI, the threshold dose (TD) for the background liver has been analyzed in patients with metastatic liver tumors and HCC associated with chronic liver disease in SABR and brachytherapy [7,8]. In PBT, Yuan et al. were the first to report on FLR and MRI-based dosimetric proton end-of-range verification for the liver [9], but they did not examine the TD in their analysis.
Previous reports concerning the TD in photon therapy analyzed using Gd-EOB-DTPA MRI have shown shrinkage in the volume of the irradiated liver [8,10,11]. Thus, the volume of the FLR in the liver would also be expected to decrease after PBT. Consequently, we hypothesized that for analyzing the TD in relation to FLR, the expected volumetric change of the irradiated liver parenchyma should be taken into account. Additionally, Imada et al. have reported that compensatory enlargement of the non-irradiated liver after carbon ion therapy contributes to an improved prognosis [12]. Taken together, to develop a safer approach to PBT, both the FLR TD and the volume change in liver irradiated at doses exceeding the TD or in non-irradiated liver are considered to be clinically important in predicting the extent of the damage before treatment, and subsequently reducing background liver damage. In the present study, we attempted to investigate the appearance time, TDs and volume changes in the FLR using Gd-EOB-DTPA-enhanced MRI after PBT for hepatocellular carcinoma. Patients and clinical examination This retrospective analysis of the data was approved by the institutional review board of our institution, and written informed consent was obtained from each patient. Between March 2011 and August 2015, patients who were treated using PBT for HCC at total doses of 66 cobalt Gy equivalent (CGE)/10 fractions (Fr) or 76 CGE/20 Fr, and followed up using Gd-EOB-DTPA MRI within 3 months after PBT, were enrolled. Patients were not eligible for this study if they had any of the following characteristics: HCC treated using PBT in combination with transcatheter arterial chemoembolization (TACE); HCC <2 cm distant from the digestive tract; no follow-up MRI at our institution; or repeated treatment as a result of HCC recurrence or new HCC lesions within 6 months after the first PBT. Fifty-eight patients were considered eligible for analysis (Table 1; Fig 1). The diagnosis of HCC was made clinically by means of early nodular staining in the arterial dominant phase and "wash out" in the equilibrium phase of dynamic CT and/or MRI [13]. The initial workup for these patients generally included a thorough medical history and physical examination. All patients underwent blood tests, including complete blood cell counts, liver and renal function tests and determination of electrolytes, hepatitis B and C virus titers, AFP and PIVKA-II. Abdominal enhanced CT and MRI were performed. Pretreatment imaging and PBT planning The patients were placed in the supine position and immobilized using a custom-made vacuum-lock bag and a low-temperature thermoplastic body cast (Esform: Engineering System Co., Nagano, Japan). Respiratory-synchronized 4-dimensional CT (4D-CT) (Aquilion LB TSX-201A: Toshiba Medical Systems Co., Tochigi, Japan) was obtained during the expiratory phase under the following conditions: 120 kV; 300 mA; and 2.0-mm-thick consecutive slices for treatment planning of PBT. Respiratory gating was controlled by abdominal wall motion with the laser sensor of a respiratory gating system (AZ-733V: Anzai Medical Co., Tokyo, Japan). MRI examination was performed after the CT examination for planning. The magnetic resonance apparatus used was a 1.5 Tesla system (Signa HDx 1.5T Optima Edition: General Electric Healthcare, Waukesha, USA). The patient received a dose of 0.1 ml/kg of Gd-EOB-DTPA (Primovist: Bayer Schering Pharma, Berlin, Germany) injected at 1.5 ml/sec.
The hepatobiliary phase was acquired at 15 min after post-contrast T1-weighted acquisition, performed with 3-dimensional spoiled gradient-recalled acquisition in the steady state. Liver acquisition was performed with volume acceleration, extended volume with fat saturation (LAVA-XV; repetition time/echo time 4.3/2.0 ms; flip angle = 15 degrees; field of view 32×32-40×40 cm; matrix 320×192×88 or 96; interpolated to 512×512; acquisition time 16-23 sec), and the slice thickness was 4 mm with a slice gap of 2 mm; respiration was suspended during the expiratory phase for approximately 20 sec. For PBT planning, a 3-dimensional treatment planning system (Xio-N: Elekta, Stockholm, Sweden; Mitsubishi Electric Corporation, Kobe, Japan) was used. Diagnostic CT or planning MRIs were fused with planning CT images acquired by 4D-CT at the expiratory phase for target delineation with rigid registration. Gross tumor volume (GTV) was defined by MRI using dynamic contrast-enhanced images and Gd-EOB-DTPA images in the hepatobiliary phase. The clinical target volume included a 5-mm radial expansion of the GTV to target possible microscopic disease extension. To compensate for respiratory movement, ITV margins were calculated using respiratory movement analysis with the planning 4D-CT, and the planning target volume (PTV) was expanded by 5 mm in all directions with an additional 5- to 7-mm margin in the craniocaudal direction. Some patients with daughter lesions were irradiated once using PBT with a combined PTV. The proton beam treatment plan mainly involved two or three ports, so the border of the treatment area resembled a straight line; furthermore, the irradiated area was planned so that it included the tumor blood drainage area to avoid local recurrence. Therefore, the FLR area resembled the defect area after surgical segmental resection (Fig 2). A total dose of 76 CGE in 20 Fr was selected for tumors within 2 cm of the porta hepatis, and 66 CGE in 10 Fr for tumors located in peripheral segments of the liver, using an irradiation schedule of 5 Fr per week. The radiation dose was prescribed in CGE using a relative biological effectiveness value of 1.1, based on our preclinical experiments. The total dose at the isocenter was prescribed to cover 95% of the PTV. The PBT system used a synchrotron and a passive scattering method (Proton Beam System: Mitsubishi Electric Corporation, Kobe, Japan). Daily irradiation was performed via more than two ports (with the exception of the plan using a one-port beam). A respiratory gating system (AZ-733V) was used to synchronize treatment in the expiratory phase. Gd-EOB-DTPA MRI analysis for FLR appearance time and threshold doses Gd-EOB-DTPA MRI analysis was used to determine the appearance time of the FLR and for TD analysis. The analysis was performed by two radiation oncologists with over 10 years of experience, by discussion and consensus (S.T. and K.Y.), using commercially available software (MIM Maestro: MIM Vista Corp, Cleveland, OH, USA). TD analysis was carried out using the visual decision method, and volume change analysis was carried out using the volume data obtained by means of the visual decision method. We defined the FLR as a clearly depicted hypointense area relative to the surrounding liver parenchyma in the hepatobiliary phase of Gd-EOB-DTPA MRI after PBT (Fig 2). To determine the appearance time of the FLR, the rate of visualization of the FLR among the MRI examinations at each time point was analyzed. In this study, the FLR TD was analyzed using MR images obtained at 3 months after PBT.
In defining the TDs, we compared the dose contours on the planning MRI with the FLR contour on the subsequent MRI. The dose contours made from the isodose lines of the prescribed dose were created on the planning CT, and the dose contours were transferred onto each MRI using rigid registration. We estimated the isodose lines for the v-TD using the positional relationship information, while visually comparing the positions of the dose lines with the FLR contours. Anatomically, we referred to the relevant blood vessels, hepatic lobules, ligaments and cysts. In the v-TD definition method, isodose lines are displayed as every 10% line (from the 10% to the 100% dose line). These dose lines were compared with the FLR contour (red) on the planning MRI. Two isodose lines resembling the v-TD were selected from the 10% lines. In Fig 3b, the isodose lines are displayed as the 50% (red), 40% (green), 30% (blue) and 20% (yellow) lines. In this case, the similar lines were the 40% and 50% lines. The medial dose line (45% line; brown) between the two selected doses was added (Fig 3b). Finally, using these three isodose lines, the dose distribution contour that was most similar to the FLR contour was defined. The dose contour most similar to the FLR contour, in 5% steps, was defined as the v-TD for each patient by the radiation oncologists by consensus. If the selection of the v-TD contour using the three isodose lines was difficult, the lowest one was adopted. In this case, the v-TD was defined by the 45% dose line as 29.7 CGE (Fig 3b). Next, the v-TD was calculated from the total dose for each protocol. To determine the intra-observer reproducibility of the v-TD measurement, the analyses were repeated at 1-month intervals, and the mean values of each data set were used. To compare the v-TDs of 66 CGE and 76 CGE, we used equivalent doses in 2 CGE fractions (EQD2), which take into account the total dose and the dose per fraction. EQD2 was calculated using the following equation: EQD2 = D × (d + α/β)/(2 + α/β), as derived from the linear-quadratic (LQ) model, where D is the total dose, d is the fractionation dose, α is the linear component of cell killing, β is the quadratic component of cell killing, and the α/β ratio for the effect on normal tissue was taken as 3 Gy [14]. In the volume analysis, the contours of the whole liver were set to include the hepatic veins, the hepatic portion of the inferior vena cava and the second branch of the portal vein. On the follow-up MRI scan, the contours of the FLR area were made. Two radiation oncologists (S.T. and K.Y.) delineated the whole liver and the FLR on MRI, and the volumes were calculated. Based on the v-TD defined by MRI at 3 months after PBT, we designated the irradiated liver area receiving a dose greater than the v-TD isodose lines on the planning MRI as the destined FLR area (dFLR). The volume of the dFLR was calculated and compared with the FLR volume on follow-up MRI. To determine intra-observer reproducibility when measuring FLR volume, FLR volumes were delineated and measured twice at 1-month intervals, and the mean values of each set of measurements were used. The liver volume outside of the dFLR or FLR area (residual liver) was also calculated. In addition, the change in each volume was compared at each MRI examination interval (Fig 4). We hypothesized that the FLR volume after PBT gradually decreased in a manner resembling exponential decay with a decay constant of −1/T, finally converging after a long time lapse.
Based on this hypothesis, the volume was expressed by the following equation: V(t) = ΔV exp(−t/T) + V_R (1), where t is the time (months) after PBT, T is the mean lifetime of the FLR volume change, and ΔV and V_R are the volume change and the residual volume at infinite time, respectively. The parameters ΔV, V_R and T were evaluated by fitting the relative FLR volume at 1 month and 6 months for all possible lesions by means of the least squares method; the relative FLR volume was obtained each month and divided by the volume at 3 months, and thus the volume at 3 months was equal to 1. Then the V(0) data obtained by extrapolation of the function to month zero were defined as the calculated FLR (cFLR). To evaluate the reasonableness of the equation using exponential decay, V(0) data were also obtained using a linear function for comparison. The cFLR data were compared with the dFLR volume calculated using the v-TD to evaluate the appropriateness of our visual anatomical method. Statistical methods In the TD definition method concerning the isodose lines, the data values were acquired in multiples of 5%; we therefore defined the TD as a categorical variable. To characterize the reproducibility of the TD dose line definition, weighted kappa statistics were used [15]. Correlations among the TDs expressed in EQD2 and clinical factors (gender, age, chronic liver disease with viral infection, alcoholic liver disease, Child-Pugh (CP) score, CP class, prior treatment, prior treatment in field and whole liver volume), planning factors (dose per Fr, tumor size, PTV volume) and volume change factors at follow-up Gd-EOB-DTPA MRI (volume change of the whole liver, the FLR area and outside of the FLR area) were assessed using Pearson correlation coefficients and multiple regression analysis. To determine intra-observer reproducibility when measuring the FLR volume, FLR volume analysis was carried out using Pearson's correlation coefficient and the Bland-Altman method [16]. Comparisons between two independent groups were analyzed using the Mann-Whitney U test, and related data were analyzed using the Wilcoxon signed-rank test to identify all differences. A p-value of <0.05 was considered statistically significant. Statistical analyses were performed using IBM SPSS 20.0 (IBM SPSS, Chicago, IL, USA) software. Results Gd-EOB-DTPA MRI was performed after the completion of PBT at intervals of 1, 2, 3 and 6 months, at the patient's convenience. One hundred twenty-two follow-up MRI images obtained at the end of treatment and at 1, 2, 3 and 6 months after PBT were analyzed, namely 13, 34, 9, 42 and 24 MRI images at each time point, respectively. In the analysis of FLR appearance, the patients examined within 3 months after PBT were enrolled; 58 patients were analyzed using 123 follow-up MRI images. The median time of appearance of the FLR was 3 (range, 0-3) months. The rate of appearance of the FLR at each time point was as follows: 8% (1/13) at the end of treatment; 47% (16/34) at 1 month; 67% (6/9) at 2 months; 100% (42/42) at 3 months; and 100% (24/24) at 6 months after treatment (Figs 1 and 2). At 3 months after treatment, all lesions had developed FLR. Consequently, the following analysis of the TD regarding the FLR used the images obtained at 3 months after PBT. In the TD analysis, the patients examined at 3 months after PBT were enrolled; 42 patients were analyzed (Fig 1). The v-TD for the prescribed dose had a median value of 40% (range, 30-50%; 19.8-38.0 CGE), and the EQD2 had a median value of 27.5±7.0 (range, 18.9-41.6) CGE.
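As a point of reference, the EQD2 conversion defined in the Methods is straightforward to compute. The sketch below is a generic implementation of the LQ-model formula with α/β = 3 Gy, not code from the study, checked against the worked v-TD example of 29.7 CGE in the 10-fraction protocol.

```python
def eqd2(total_dose, n_fractions, alpha_beta=3.0):
    """EQD2 = D * (d + alpha/beta) / (2 + alpha/beta), with d = D / n."""
    d = total_dose / n_fractions  # dose per fraction
    return total_dose * (d + alpha_beta) / (2.0 + alpha_beta)

# Worked v-TD example from the text: the 45% line of 66 CGE/10 Fr = 29.7 CGE.
print(f"{eqd2(29.7, 10):.1f} CGE")  # ~35.5 CGE, within the reported 18.9-41.6 range
print(f"{eqd2(76.0, 20):.1f} CGE")  # the 76 CGE/20 Fr prescription in EQD2
```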
The difference between the median relative TD and absolute TD was the result of the two different levels of prescribed doses (Table 2; Fig 5). There were also no correlations between the TDs and clinical factors on multiple regression analysis, and no statistical differences between CP-A and CP-B (Table 2). In the volume change analysis, the patients examined at both 3 and 6 months after PBT were enrolled; 24 patients were analyzed (Table 3). In the FLR volume analysis, we found good intra-observer reproducibility for the FLR volume (mean difference, 2.2% [range, −6.0-13.8] cm3; Pearson's correlation coefficient, R2 = 0.99). The Bland-Altman plot for the method comparison study is shown in Fig 6. The bias (mean difference) ± twice the precision (95% limits of agreement: 2 SD of the difference) equaled −0.7±6.2 cm3; this demonstrated the good reproducibility of the FLR visual analysis. In the Eq (1) analysis, the patients examined at 1, 3 and 6 months after PBT were enrolled; seven cases that exhibited FLR changes were analyzed (Fig 1). Fig 7 shows the volume change in the dFLR and cFLR as a function of time (months) after PBT; all of the FLR volumes evaluated at the subsequent MRI were calculated relative to the volume at 3 months. The curve was fitted using Eq (1), and the values of T and ΔV were estimated to be 1.7±1.0 months and 2.0±1.3, respectively, where the errors correspond to one standard deviation. The relatively large errors were caused by the large variation in the volume at one month among the seven cases; as a result, the value of V(0) was estimated to be 2.6±1.2. In contrast, the value of V(0) evaluated using a linear function was 1.3±0.2 and had a smaller value than the V(1) data. The relative dFLR volumes among the 7 cases calculated from the v-TD for 30% and 50% of the prescription dose were estimated to be 2.4±0.3 and 1.7±0.4, respectively, and were found to be similar (within the standard deviation) to the V(0) of Eq (1) rather than to the value of a linear function. In addition, to evaluate the appropriateness of the exponential decay function, we found that individual data sets at the 1-, 3-, 6-, 9- and 12-month time points were well fitted by the equation, but not by a linear function. The results are shown in S2 and S3 Figs, where 25 points for five cases are presented and the thick solid curves have been fitted using Eq (1). Thus, we consider that it may be reasonable to use Eq (1) for the evaluation of V(0) rather than a linear function. This might indicate that the volume change differs from that calculated using a linear function. The volume of the FLR area decreased and the residual liver increased over a period of 6 months, especially during the initial 3 months (Table 3). Discussion Recently, the possible utility of Gd-EOB-DTPA MRI for estimating liver functional reserve has been reported [17]. One of the major histopathological changes caused by liver irradiation is sinusoidal obstruction syndrome (SOS) (formerly known as hepatic veno-occlusive disease) [17,18]. A recent study indicated that the primary damage site in SOS is the centrilobular (zone 3 of the liver parenchyma) sinusoidal endothelial cell.
Gd-EOB-DTPA is incorporated into hepatocytes mainly by organic anion transporting polypeptide (OATP). It has been clarified that there is a highly significant correlation between Gd-EOB-DTPA uptake and OATP1B3 expression in HCC cells and/or normal hepatocytes, and also between the grade of OATP1B3 expression and the enhancement ratio (signal intensity) during the hepatobiliary phase of Gd-EOB-DTPA MRI [19,20]. OATP1B3 is known to be predominantly expressed in zone 3 hepatocytes in normal liver parenchyma [21]. We believe that these factors are responsible for the well-defined visualization of the FLR during the hepatobiliary phase of Gd-EOB-DTPA MRI; thus, we were able to evaluate the damage to the liver caused by PBT irradiation. However, Richter et al. reported a hypothesis concerning the potential molecular mechanism responsible for radiation-induced changes in hepatocyte-specific Gd-EOB-DTPA uptake [22]; a number of other factors may exist in relation to this issue [23]. Considering the volume change, the volume of the FLR area decreased over a period of 6 months after therapy; this change was more rapid during the initial 3 months. The volume of the residual liver increased, especially during the initial 3 months. According to a study on adult living donor liver transplantation [24], rapid initial regeneration of the remnant donor liver occurred in a manner similar to that reported in the current study. This finding might be of use as a reference concerning the time interval for repetitive PBT for an additional HCC lesion. TDs have been reported in previous studies using various protocols (Table 4). The median appearance time for FLR in the present study was 3 (range, 0-3) months. This appearance time may be dose-dependent and influenced by the different time periods used for treatment delivery. Single high-dose irradiation might cause damage earlier than fractionated irradiation using the same biologically effective dose. Our findings and those of other reports are consistent with this theory. However, there are differences in the timing of the appearance of FLR in each patient. The timing did not show a uniform trend; for example, cases with lower or higher TDs did not tend to exhibit earlier appearance of FLR. Timing may have been affected by background liver conditions (e.g., fibrosis, hemodynamics and liver function). In EQD2 using the LQ model, although previous analyses have incorporated various α/β ratios [7,8,11,25], the most suitable α/β ratio for normal liver or liver with chronic disease remains unknown. A range of TD values has been reported in these studies, likely attributable to the influence of different TD determination methods with or without consideration of the volume change. The TD expressed in EQD2 is approximately 30 CGE, and these data are not conflicting considering the classical liver tolerance dose; the mean liver dose for radiation-induced liver disease involving whole liver irradiation was 30 Gy [26]. A dose calculation method involving definition of the TD should be adaptable to a volume change in the FLR and residual liver after irradiation.
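The volume model of Eq (1) that underlies this adaptation can be fitted with a few lines of code. The sketch below uses scipy's curve_fit as a stand-in for whatever least squares routine the authors used, and the observed relative volumes are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def flr_volume(t, dV, V_R, T):
    """Eq (1): relative FLR volume decaying exponentially toward V_R."""
    return dV * np.exp(-t / T) + V_R

# Relative FLR volumes (normalized so that the volume at 3 months = 1);
# the time points follow the paper, the volumes are made up.
t_obs = np.array([1.0, 3.0, 6.0, 9.0, 12.0])
v_obs = np.array([1.90, 1.00, 0.75, 0.68, 0.65])

(dV, V_R, T), _ = curve_fit(flr_volume, t_obs, v_obs, p0=(2.0, 0.7, 1.7))
print(f"dV = {dV:.2f}, V_R = {V_R:.2f}, T = {T:.2f} months")
print(f"extrapolated V(0) (cFLR) = {flr_volume(0.0, dV, V_R, T):.2f}")
```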
There is a risk that the TD for FLR can be overestimated in comparisons of the dose lines at planning with the FLR demarcation lines after irradiation in fusion images from planning CT or MRI scans and follow-up MRI images. Because the FLR area shrinks over time and the smaller volume of the shrunken FLR is compared with the planning dose lines or dose-volume histogram, the calculated TD can be higher than the actual TD. Our technique entails simple visual judgments regarding TD definition without the use of rigid or non-rigid fusion techniques; thus, it contains ambiguous factors, but it can flexibly take into consideration anatomical changes in the relevant blood vessels, hepatic lobules and ligaments; consequently, it can adapt to volume changes in the GTV and in liver parenchyma irradiated to a higher dose than the TD. Seidensticker et al. reported a possible time dependence of the TD as a result of volume change in the FLR, resulting in a difference in TD at each time point after irradiation [8]. They did not consider the volume reduction in the irradiated liver at doses higher than the TD. Other studies involving the TD after radiotherapy did not consider the FLR volume change after irradiation in the TD calculation method [6,7,11]. Considering the nature of the FLR volume change after irradiation, there were some uncertainties with respect to the method used to define the TD. However, in our comparison of the dFLR volume from the v-TD and the V(0) data from the equation, there were no inconsistencies. This indicates that our v-TD definition method is appropriate. Furthermore, it was difficult to compare 1) the TD for normal liver with that for liver with chronic disease, and 2) the results of single-fraction high-dose irradiation with our results using the LQ model. Uncertainties also stemmed from the following: differences in the irradiated volume of the target area; the use of combination treatment with TACE; the daily reproducibility of planning conditions; and the differences in the characteristics of photon and proton beams. However, these issues could not be resolved in the current study, which, to our knowledge, is the first report on FLR TDs after PBT for HCC accompanied by chronic liver disease. In our results, the TDs did not correlate with any factors and did not significantly differ between patients with CP-A and CP-B. This may be attributable in part to the smaller number of patients in the CP-B group than in the CP-A group in our study. Alternatively, the liver in both CP-A and CP-B patients might be similarly vulnerable to PBT. The present study had several limitations. The FLR and background liver volumes change with time after PBT. The irradiated area shrinks and the background liver enlarges, so the trend in volume change in each region differs within the same liver. Therefore, in the TD definition method, the application of non-rigid registration is difficult, and we used rigid registration with the estimation method, taking into consideration each volume change. Accordingly, our TD definition method entailed a potential error. In addition, we did not acquire MRI images at all monthly follow-up periods for all patients; therefore, the patient sample size was limited at some time points, especially at the 2-month MRI. However, in relation to PBT for HCC, the TD calculated in the current study and the volume analysis data could promote greater safety and less invasive radiation exposure to the background liver.
Further study, with more frequent (e.g., weekly) MRI follow-up after PBT, is necessary to extend the present analysis.
Conclusions
FLR was detectable in all cases at 3 months after PBT on Gd-EOB-DTPA MRI scans. The volume of the FLR area decreased and the residual liver volume increased over a 6-month period after treatment, especially during the initial 3 months. Using an α/β ratio of 3, FLR doses expressed in EQD2 were nearly 30 CGE in the liver of patients with HCC. These data might be useful in the prediction of the remnant liver volume.
S1 Fig. The isodose lines are displayed as the 60%, 50% and 40% lines. These dose lines were compared with the FLR contour on the planning MRI. (c) The medial dose line (45% line; brown) between the two selected doses (the 40% and 50% lines) was added. Finally, using these three isodose lines, the dose distribution contour most similar to the FLR contour was defined. In this case, the TD was defined by the 45% dose line as 29.7 CGE. FLR contour, red; 60% isodose line, pink; 50% isodose line, red; 45% isodose line, brown; 40% isodose line, green. Abbreviations: TD, threshold dose; FLR, focal liver reaction; Gd-EOB-DTPA MRI, gadolinium ethoxybenzyl diethylenetriamine pentaacetic acid-enhanced magnetic resonance imaging; PBT, proton beam therapy; CGE, Cobalt Gray equivalent. (TIF)
S2 Fig. Volume change in the FLR as a function of time (months) after PBT. All data points at the 1-, 3-, 6-, 9- and 12-month time points in five patients are presented; they were fitted using Eq (1). The square plots (red) show the FLR volume relative to the volume at 3 months (the V(1), V(3), V(6), V(9) and V(12) data), and the circles (black) with error bars denote the means and standard deviations of the volume. The solid and dashed curves show the mean value and one standard deviation, respectively, of the data calculated using Eq (1). The two-direction arrow shows the range of dFLR calculated using the v-TD (refer to Table 2).
v3-fos-license
2021-08-02T00:06:26.411Z
2021-05-01T00:00:00.000
236591901
{ "extfieldsofstudy": [ "Economics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1911-8074/14/5/223/pdf", "pdf_hash": "109bd9ea0aa9c88bb860a6121093c2c4bec4f558", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44689", "s2fieldsofstudy": [ "Economics" ], "sha1": "cc88dca312e5029e1222f622e5ff8040960c999c", "year": 2021 }
pes2o/s2orc
Short-Term Capital Flows, Exchange Rate Expectation and Currency Internationalization: Evidence from China
This paper employs a portfolio approach to assess the effect of exchange rate expectation on Chinese RMB internationalization and empirically tests the interactive effects among short-term capital flows, RMB appreciation expectation and the internationalization process, using a VAR model with monthly data ranging from February 2004 to December 2020. The results suggest that RMB exchange rate appreciation could lead to an increase in the foreign demand for RMB and RMB denominated assets, while RMB internationalization would attract more short-term capital inflow due to reduced transaction costs. The empirical evidence from the VAR model estimation confirms the finding that expected RMB appreciation induces short-term capital inflow and promotes RMB internationalization. The robustness checks confirm the evidence. The results have important policy implications for RMB internationalization and for maintaining a sound and stable financial system.
Introduction
The global financial crisis (GFC) in 2008 revealed the flaws of the existing international monetary system and created greater awareness worldwide of the importance of establishing both a more resilient domestic economic and financial system and a better-functioning global financial system. Since the GFC, the Chinese government has adopted several measures to reform its exchange rate system and promote the RMB internationalization process. It has been documented that there are three major determinants of internationalizing a currency: the economic size of the country in terms of output or trade, the rate of return proxied by domestic inflation or the exchange rate, and the depth of financial markets in terms of foreign exchange market turnover (Krugman 1984; Eichengreen 2011). It remains an interesting question whether China has fulfilled these criteria. China began its economic reforms in the late 1970s, which have successfully transformed the country into an important trading nation and manufacturing center in the world over the past three decades. With nearly double-digit growth rates for more than three decades from 1979 to 2014, China is now the world's second largest economy and the largest in merchandise trade and foreign exchange reserves. Despite China's recent economic slowdown, the IMF forecasts that China will continue to be the largest contributor to global GDP growth.
In addition, the rate of return for holding RMB over the past ten years has been one of the highest, which explains the strong portfolio capital inflows since 2004. Moreover, the domestic inflation rate has been well managed (Prasad and Wei 2005). Probably the only remaining issue concerns China's capital controls, especially over flows of portfolio investment. However, it is still debatable whether full capital account convertibility is a prerequisite for achieving international currency status when one reviews the internationalization process of the US dollar. As a matter of fact, when the IMF announced the inclusion of the RMB into its Special Drawing Rights (SDR) reserve currency basket, effective in October 2016, it officially recognized that the RMB had met the criterion of being "freely usable", reflecting China's expanding role in global trade and the substantial increase in the international use and trading of the renminbi. This lends support to the view that full capital account liberalization is not a necessary condition for an international currency. Nevertheless, given China's increasing importance as an economic power and major trading nation, the Chinese government has recently reaffirmed its determination to gradually liberalize its financial sector and move towards full capital account convertibility. In recent years, the Chinese government has implemented measures encouraging the use of the RMB in cross-border transactions and the creation of RMB offshore markets in Hong Kong, Singapore and other international financial centers to push the internationalization of the Chinese RMB. According to the People's Bank of China, the share of China's cross-border trade settled in Chinese RMB increased from almost zero in 2009 to about 17% by 2013, and offshore RMB deposits amount to RMB 1987 billion. By the end of May 2015, the PBoC had signed bilateral domestic currency swap agreements with 32 foreign central banks or monetary authorities of different countries or areas, with a total amount worth RMB 3.1 trillion (Ho et al. 2017). The recent initiatives to liberalize the capital account and push RMB internationalization include the "Shanghai-Hong Kong Stock Connect" and "Shenzhen-Hong Kong Stock Connect". According to the Bank for International Settlements (BIS), the RMB was the 8th most actively traded currency in the 2016 triennial survey of foreign exchange turnover, and became the second most used currency in traditional trade finance and the fifth most used payment currency in the world (The People's Bank of China 2015). More significantly, in 2004, banks in Hong Kong started to offer RMB retail banking services. The scope of the RMB business in Hong Kong has been expanded twice, in 2005 and in 2007, and Hong Kong now possesses an RMB bond market outside Mainland China. By the end of 2019, cross-border RMB settlement amounted to RMB 19.67 trillion, increasing by 24.1% on a yearly basis. Total receipts reached RMB 10.02 trillion, a notable increase of 25.1%, while total payments were RMB 9.65 trillion, growing by 23% on a yearly basis. In September 2019, the restrictions on investment quotas as well as on pilot countries or regions for RMB Qualified Foreign Institutional Investors (QFII) were removed (PBoC 2020). The QFII program was launched by the Chinese government in 2002 to enable qualified foreign institutional investors to gain direct access to China's capital markets.
The program was administered by the China Securities Regulatory Commission (CSRC). The decision to remove the QFII restrictions implies that the barriers to the use of the RMB abroad have been gradually reduced. The rising significance of the RMB is viewed as a natural response to the growing weight of China's trade and investment flows in the world economy and also a result of its rapid economic and financial integration with the rest of the world. Along with RMB exchange rate system reform and internationalization of the currency, another issue concerns short-term capital flows. Several existing studies have examined the factors that affect international short-term capital flows from different perspectives, such as interest rate spreads between home and abroad, the exchange rate and its expectation, and asset prices (Prasad and Wei 2005; Bouvatier 2010; Fang et al. 2012). Since 2005, when China switched its exchange rate system from the dollar peg to a basket of currencies, the RMB has appreciated in nominal terms by over 34 percent against the US dollar and by 42 percent on a real (inflation-adjusted) basis between 2005 and 2013. According to the Bank for International Settlements (BIS), the RMB real effective exchange rate appreciated by 8.2 percent from July 2008 to May 2010, and by 16.9 percent between June 2010 and May 2013. Due to a strong market expectation of RMB appreciation, there were more short-term capital inflows to China in recent years. However, this expectation changed to RMB depreciation from the beginning of 2014, which has led to short-term capital outflow since then. Although exchange rate expectation is an important factor affecting short-term capital flows, the relationship between exchange rate expectation and short-term capital flow is no longer straightforward when we incorporate RMB internationalization in our analysis. As aforementioned, China has implemented measures to promote the internationalization process of the Chinese currency, including encouraging the use of cross-border trade settlement in RMB since 2009, outward direct investment and the development of the RMB offshore market. These measures have advanced the RMB internationalization process, which would bring more volatility and risks to China's financial system (Ho et al. 2017, 2018; Qin et al. 2018; Zhou et al. 2021). By the end of 2015, the RMB internationalization index, a comprehensive quantitative indicator of international acceptability compiled by the International Monetary Research Institute at Renmin University of China, reached 3.6 in the fourth quarter, rising from 0.02 in the first quarter of 2010. The progress of RMB internationalization helps facilitate short-term capital flows, and is believed to have new interactive effects on the exchange rate expectation and capital flows nexus. The purpose of this study is to assess, from a portfolio approach perspective, the effect of exchange rate expectation on Chinese RMB internationalization, and to empirically test the interactive effects among short-term capital flows, RMB appreciation expectation and the internationalization process. Our results show that RMB exchange rate appreciation could lead to an increase in the foreign demand for RMB and RMB denominated assets, while RMB internationalization would help attract more short-term capital inflow due to reduced transaction costs.
This study and its findings have important policy implications for the process of RMB internationalization and short-term capital flows, especially regarding how to manage the destabilizing effect of short-term capital flows. The remainder of this paper is organized as follows. Section 2 provides a brief literature review. Section 3 discusses the theoretical framework and the models used in this study. In Section 4 we discuss the data and the empirical results from our models. Section 5 concludes with some policy implications.
Literature Review
A large body of literature has discussed the relationship between the exchange rate and short-term capital flows. Reinhart and Calvo (2000) showed that changes in exchange rate expectation are the most important driver of international speculative capital flows compared to other factors. Among the three types of arbitrage motivations behind international speculative capital flows, currency arbitrage is more active than interest rate arbitrage and cross-rate arbitrage (Chen and Yun 2009; Fang et al. 2012; Lv and Xu 2012). Some studies found that the most important factor impacting short-term capital flows is RMB appreciation expectation (Wang and He 2007; Zhang and Tan 2013; Prasad and Wei 2005). Both Bouvatier (2010) and Sun and Zhang (2006) reported that RMB appreciation expectation, economic growth rates, interest rate spreads and stock market discrepancies were the main factors affecting capital flows between Hong Kong and mainland China from 1993 to 2004. On the other hand, short-term capital flows also influence the exchange rate level and its changes. Combes et al. (2012) maintained that capital inflow causes appreciation of the real effective exchange rate (REER) in emerging markets. Golley and Tyers (2007) argued that financial capital flow is the main pushing factor for RMB exchange rate appreciation in the short run. Zhu and Liu (2010) examined the interactive relationship between exchange rate expectation and short-term capital flows, and found evidence of a self-reinforcing cycle among short-term capital inflow, RMB exchange rate appreciation and its expectation, and financial asset prices. Jiang et al. (2021) reported evidence suggesting that external policy uncertainty and political instability can also have spillover effects on international investors and hence on China's foreign exchange reserves hoarding and exchange rate expectation. When a currency becomes an international currency, the relationship between the exchange rate and short-term capital flows may change. First, the exchange rate and currency internationalization will become interactive. Cohen (2012) stated that the key to the success of currency internationalization is market confidence in the currency. Garber (2011) maintained that the strong market expectation of future RMB appreciation is the main reason for the Hong Kong RMB offshore market boom. Jiang et al. (2012) argued that RMB appreciation expectation may increase RMB deposits in Hong Kong by attracting overseas institutional investment in RMB assets or by leading importers to convert foreign exchange into RMB in advance. RMB appreciation expectation thus enhances the RMB acceptance level among foreign investors. Regarding the effect of currency internationalization on the exchange rate, Maziad et al.
(2011) and Frankel (2012) indicated that a currency will appreciate when it becomes internationalized, as investors will increase their demand for the currency and currency-denominated assets. For instance, US dollar internationalization led to the dollar's appreciation (Wang et al. 2012). Lardy and Douglass (2011) found that offshore RMB holdings increase RMB appreciation pressure. Recent empirical studies have documented the interaction effect between currency internationalization and exchange rate expectation, and concluded that the promotion of RMB internationalization plays a predominant role in raising RMB exchange rate expectation (Sha and Liu 2014). Second, several recent studies have examined more closely the interactive relationship between currency internationalization and short-term capital flows. One study discussed the dilemma of cross-border capital flow management and attributed the interest rate and exchange rate arbitrage problem to the discrepancy between home and foreign rates. Xiang and Zhu (2013) reported that cross-border trade RMB settlement aggravates short-term capital flow fluctuation between the offshore market in Hong Kong and the onshore market in mainland China and increases economic uncertainty. Greater fluctuation of short-term capital flows during the process of RMB internationalization encourages currency and interest rate arbitrage, enlarges the risk in the short-term capital market and makes capital flow channels more diversified (Guo and Zhu 2012). It is also believed that cross-border trade settlement in RMB and the development of the RMB offshore market in Hong Kong provide another channel for short-term capital flows and reflect the relaxation of China's de facto capital control policy (Yu 2011; Zhang and Xu 2012). There are some studies examining the effect of short-term capital flows on currency internationalization, but the results are mixed. From the perspective of currency competitiveness, financial transaction convertibility and short-term capital inflow may facilitate the development of domestic financial institutions and financial markets. On the other hand, a mature financial system seems to be a prerequisite for currency internationalization, as it will meet the demand of international investors for currency diversification (Frankel 2012; Genberg 2009). It has also been argued that supplying RMB to the offshore market via the trade account alone will not be sustainable, and that cross-border capital flows in RMB would be another avenue for RMB internationalization; otherwise, the "Triffin Dilemma" will again be inevitable (Ma and Xu 2012). However, Hellmann et al. (1994) maintained that liberalizing the capital account can trigger domestic financial system risk when a country's economic, financial and regulatory conditions are not sufficiently mature. Fluctuations of cross-border short-term capital flows have a destabilizing effect on the domestic economy, which eventually affects the economic base of RMB internationalization (Yu 2012; Zhang 2012). Some studies on Japanese Yen internationalization show an inverted U-shaped relationship between cross-border short-term flows and the international reserve status of the Yen. At the initial stage of Japan's capital account liberalization, capital flows were stable, and the international status of the Japanese Yen was enhanced.
However, at the later stage, the process of Japanese Yen internationalization appeared to be retrogressive, rather than progressive, as rapid cross-border financial deregulation led to large and volatile short-term capital flows that could have a destabilizing impact (Jia 2014). Yet, as far as we are aware, there is little or no evidence on the dynamic interaction effects among currency internationalization, short-term capital flows and exchange rate expectation. Therefore, this study addresses this important issue and explores the interactive effects of these three elements, which allows us to draw some important policy implications for the RMB internationalization process and the regulation of short-term capital flows.
Interaction of RMB Exchange Rate Expectation and RMB Internationalization
By definition, RMB internationalization refers to the process of taking the RMB outside of China as an international currency and allowing nonresidents to hold and use the RMB extensively overseas as a major pricing and settlement currency for trade, investment and reserves. In particular, in this study we simplified the RMB internationalization process by focusing on the analysis of the foreign demand for RMB and the substitution effect of other currencies for RMB. The foreign demand for RMB can be divided into two components: one is for cross-border trade settlement and outward direct investment in RMB, and the other is for speculative purposes, i.e., foreign investors' demand for RMB denominated assets (Jiang et al. 2012). We adopted the assets portfolio balance model to explore the effect of the expected change of the RMB exchange rate on RMB internationalization and how the latter affects short-term capital flows. The assets portfolio balance model was proposed by McKinnon and Oates (1966) and Girton and Henderson (1976), and developed by Girton and Roper (1981), Cuddington (1983) and Zervoyianni (1993) to include currency substitution between foreign and domestic assets. When the rate of asset return changes, domestic investors adjust their portfolios by substituting domestic assets for foreign exchange or foreign bonds, which affects the demand for currencies. Based on the assets portfolio models of Cuddington (1983) and Adebiyi (2005), we set up a foreign demand model for RMB. We assume that (i) the domestic investors in a foreign country seek to maximize their return from holding the RMB at a given level of risk; (ii) out of their total wealth, the domestic investors hold four different types of assets consisting of domestic assets, local bonds, foreign exchange (RMB) and RMB denominated assets; and (iii) investors can freely convert from one asset to another by changing the relative composition of their portfolios. In this assets portfolio balance model, the demand for assets is determined by the relative rate of return of the assets, income and total wealth. For domestic investors, the demand function for RMB is specified as follows:

M_d = a0 + a1 r + a2 er + a3 (er + r*) + a4 Y + a5 W (1)

where M_d denotes the logarithm of nominal demand for RMB, r is the yield on local bonds, er is the rate of expected RMB appreciation, r* is the interest rate on RMB denominated bonds, Y is the logarithm of nominal income and W is the total wealth of the domestic investors. In this model, we set the nominal return of the local currency to 0. For domestic investors, r, er and (er + r*) are the nominal returns on local bonds, RMB and RMB denominated bonds, respectively.
When the local bond yield r rises, domestic investors increase their demand for local bonds and reduce their demand for RMB and RMB denominated assets; hence a1 < 0. When the rate of expected RMB appreciation er increases, the local price of the RMB is expected to rise, which leads to an increase in demand for RMB by local investors, so a2 > 0. When the return on RMB denominated bonds (er + r*) increases, the demand for RMB denominated bonds increases and the demand for RMB decreases; thus, it is expected that a3 < 0. The RMB demand function also depends on foreign income Y and total wealth W, both with expected positive signs, i.e., a4 > 0 and a5 > 0. In a similar fashion, we specify the demand function for RMB denominated bonds as follows:

B_d = b0 + b1 r + b2 er + b3 (er + r*) + b4 Y + b5 W (2)

When both r and er decrease and (er + r*) increases, domestic investors' demand for RMB denominated bonds increases; that is, b1 < 0, b2 < 0 and b3 > 0. Both Equations (1) and (2) suggest that RMB appreciation expectation will lead to an increase in demand for RMB and RMB denominated bonds. However, one may note that the expected change of the RMB exchange rate can affect the demand for RMB and RMB denominated bonds both directly and indirectly, with a2 representing the direct effect of RMB appreciation expectation. When er increases, it raises the domestic demand for the RMB and decreases the demand for RMB denominated bonds. However, the rise in er also affects the demand for RMB denominated bonds indirectly through its impact on (er + r*), which then leads to an increase in demand for RMB bonds and a decrease in demand for the RMB. Taken together, the direct and indirect effects raise the combined foreign demand for RMB and RMB denominated assets. Now we turn to the impact of RMB internationalization on RMB exchange rate expectation. As aforementioned, the process of RMB internationalization implies that the RMB is to be used worldwide by non-residents for payment and investment, suggesting that foreign demand for RMB will increase, which will lead to an increase in RMB appreciation expectation. On the other hand, currency internationalization can be driven either by the market or by the government (Cohen 1971). When RMB internationalization is viewed as government driven, investors may not have full confidence in the RMB, which may lead to RMB depreciation expectation. Therefore, the effect of RMB internationalization on RMB exchange rate expectation is undetermined based on the above analysis.
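To make the comparative statics of Equations (1) and (2) concrete, here is a small numerical sketch; the paper reports no code, and the coefficient magnitudes below are hypothetical, with only their signs following the assumptions above.

```python
# Hypothetical coefficients; only the signs follow the text.
a0, a1, a2, a3, a4, a5 = 1.0, -0.5, 0.8, -0.3, 0.6, 0.4   # Eq (1): RMB demand
b0, b1, b2, b3, b4, b5 = 1.0, -0.4, -0.2, 0.7, 0.5, 0.3   # Eq (2): bond demand

def rmb_demand(r, er, r_star, Y, W):
    return a0 + a1*r + a2*er + a3*(er + r_star) + a4*Y + a5*W

def bond_demand(r, er, r_star, Y, W):
    return b0 + b1*r + b2*er + b3*(er + r_star) + b4*Y + b5*W

# Total effect of a rise in expected appreciation er on bond demand:
# direct effect b2 < 0 plus indirect effect b3 > 0 through (er + r*).
d_er = 0.01
base   = bond_demand(0.03, 0.02,        0.025, 1.0, 1.0)
bumped = bond_demand(0.03, 0.02 + d_er, 0.025, 1.0, 1.0)
print(round((bumped - base) / d_er, 3))   # 0.5 = b2 + b3 > 0: net demand rises
```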
Interaction of RMB Internationalization and Short-Term Capital Flows
We relate our model to the assets portfolio theory, currency crisis theory and the theory of interest rate parity (IRP). Short-term capital inflow to a country can be caused either by cross-border interest rate arbitrage or by currency arbitrage. Moreover, short-term capital inflow is also subject to capital account regulation: the stricter the capital controls, the higher the cost of capital flows. We specify the short-term capital flows function as follows:

lnF = lnA + a1 ln(e) + a2 ln(i − i*) + a3 ln(er) + a4 ln(ar) + a5 ln(c) (5)

where F denotes the short-term capital net inflow, e is the local currency exchange rate, i is the domestic interest rate, i* is the foreign interest rate, er is the expected appreciation rate of the local currency, ar is the asset market return and c is the cost of capital flows. Equation (5) shows that short-term capital inflow is determined by the exchange rate level, the interest rate spread between home and abroad, expected exchange rate changes, asset prices and the cost of capital flows. The partial derivatives of short-term capital flow with respect to each variable are as follows:

∂F/∂e > 0, ∂F/∂(i − i*) > 0, ∂F/∂er > 0, ∂F/∂ar > 0, ∂F/∂c < 0

That is, short-term net capital inflow is positively related to the local currency exchange rate, the interest rate spread between home and abroad, domestic currency appreciation expectation and the asset market return, but negatively related to the cost of capital flows. RMB internationalization will help reduce the transaction costs of capital flows (Yu 2011; Zhang and Xu 2012); that is, writing s for the degree of RMB internationalization,

c = c(s), with ∂c(s)/∂s < 0

There are two channels through which the RMB internationalization process can reduce the transaction costs of capital flows. One is the cross-border RMB payment facility and the other is the RMB offshore market, which facilitates interest rate and currency arbitrage. It is believed that the RMB offshore market is less regulated than the onshore market. Although the financial assets traded on both markets are essentially the same, the price deviations between the two markets can be viewed as different responses of market players either to different market conditions or to the same information interpreted differently. The interest rate and currency arbitrage activities between the RMB onshore and offshore markets are an important cause of short-term capital flows, which provide the major source of offshore RMB liquidity under the effective control of China's capital account. On the other hand, RMB internationalization provides an additional source of liquidity to the offshore RMB market, in addition to the QFII program. The recent establishment of the Qianhai Shenzhen-Hong Kong Modern Service Industry Cooperation Zone aims to create a modern service industry zone, with a particular focus on the development of innovative financial instruments and products, while experimenting with the expansion of offshore RMB fund flowback channels and the internationalization of Qianhai's financial market. Foreign investors may convert their foreign exchange into RMB on the offshore markets, which then flows back into the onshore market. As the process of RMB internationalization reduces the transaction costs of capital flows, short-term capital inflow rises:

∂F/∂s = (∂F/∂c)(∂c/∂s) > 0

With rising short-term capital inflow, China's financial market, and hence the RMB onshore market, will undoubtedly be affected. It is arguable that short-term capital inflow will not only contribute liquidity to the onshore financial markets, but will also help promote the competitiveness and acceptance of the RMB in the global market. Moreover, short-term capital inflow may expedite the development of financial institutions and markets in China, and help promote its currency internationalization (Genberg 2009; Frankel 2012). Finally, short-term capital inflow may help raise market expectation of RMB appreciation and accelerate the process of RMB internationalization. We further investigate the dynamic relationship among these variables by employing a VAR model. We construct a three-variable VAR model including RMB exchange rate expectation, RMB internationalization and short-term capital flow. The empirical analysis is based on Stata 13 and we use a monthly data series spanning February 2004 to December 2020.
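Before turning to the data, a numerical companion to Equation (5) and the transaction-cost channel may be useful. The paper's own estimation was carried out in Stata 13; the Python sketch below uses hypothetical coefficients whose signs follow the text, and simply verifies that inflows rise as internationalization s lowers costs.

```python
import numpy as np

# Hypothetical log-linear coefficients; signs follow the text (a5 < 0 on cost).
a1, a2, a3, a4, a5 = 0.4, 0.6, 0.9, 0.3, -0.7

def ln_F(e, spread, er, ar, c, lnA=0.0):
    """Eq (5): log of short-term net capital inflow."""
    return (lnA + a1*np.log(e) + a2*np.log(spread)
            + a3*np.log(er) + a4*np.log(ar) + a5*np.log(c))

def cost(s, c0=1.0, k=0.5):
    """Transaction cost falls with the degree of internationalization s: dc/ds < 0."""
    return c0 * np.exp(-k * s)

# Chain rule: dlnF/ds = a5 * dln(c)/ds = (-0.7) * (-0.5) = 0.35 > 0
s = np.linspace(0.0, 2.0, 5)
flows = ln_F(e=6.5, spread=0.02, er=0.03, ar=0.05, c=cost(s))
print(np.all(np.diff(flows) > 0))   # True: inflows rise with s
```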
Data Description
There are three methods to calculate short-term capital flows: direct, indirect and mixed methods. In this study, we adopted the indirect method to estimate short-term capital flows, proxied by the increase in foreign exchange reserves minus the trade surplus and foreign direct investment. Exchange rate expectation can be proxied by the forward margin. In this study, we used the RMB non-deliverable forward (NDF) exchange rate to calculate the forward margin as follows:

Expect = (one-year NDF exchange rate of RMB against USD)/(spot exchange rate of RMB against USD) − 1

If Expect is larger than zero, it means that offshore market investors hold an appreciation expectation for the RMB. Daily data for the one-year NDF exchange rate of USD against RMB were obtained from the Wind Database and used to derive monthly one-year NDF data. We also collected the spot exchange rate of USD against RMB from the Wind Database. There are several measures of RMB internationalization, as reported in Table 1. Although RMB settlement of cross-border trade and the Standard Chartered RMB Globalisation Index (RGI) could meet the needs of the empirical study, their time spans are too short, being available only for recent years. We therefore followed the method in Sha and Liu (2014) and used RMB offshore market deposits as the proxy for RMB internationalization. Although there are several major RMB offshore markets, including Hong Kong, London, Seoul, Frankfurt and Singapore, the Hong Kong market plays a dominant role and accounts for about 80% of the global RMB deposit business (SWIFT 2012). It also offers the longest and most consistent time series among the candidate variables in our sample. Thus, it is reasonable to select Hong Kong RMB offshore market deposits as the proxy. We collected the monthly RMB deposits data from the Monthly Statistical Bulletin of the Hong Kong Monetary Authority. In order to avoid possible heteroscedasticity problems, we took the natural logarithm of Hong Kong offshore market RMB deposits. We conducted ADF tests to check the time-series properties of the endogenous variables; the results of the unit-root tests are reported in Table 2. We chose the lag order p based on the LR, FPE, AIC and SC information criteria, and the optimal lag order is 2. As can be seen in Table 2, the results of the unit root tests indicate that all three variables are stationary in levels, though at slightly different significance levels. Given that all of the series are I(0), we proceed to VAR estimation rather than conducting cointegration tests, as by definition stationary variables cannot be cointegrated (see, for instance, Lütkepohl 2006; Ito and Sato 2008; Zhang and Sato 2012). Table 3 reports the basic summary statistics of the three main variables. As can be seen in Table 3, the average value of RMB appreciation expectation (Expect) is 0.041 and the maximum value is 0.465. The average value of RMB internationalization, as proxied by lnCurrency, is 12.169, with a maximum value of 13.819. The average value of short-term capital flow (Shortcapital) is 0.014 and the maximum value equals 0.113.
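The variable construction and pre-tests described above can be reproduced along the following lines. The paper reports using Stata 13; this Python/statsmodels sketch is an illustrative equivalent, and the file name and column names are hypothetical stand-ins.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.api import VAR

df = pd.read_csv("china_monthly.csv", index_col="date", parse_dates=True)

# Indirect measure: short-term flows = change in FX reserves - trade surplus - FDI
df["Shortcapital"] = df["fx_reserves"].diff() - df["trade_surplus"] - df["fdi"]
# Expectation proxy: one-year NDF rate over spot rate, minus one
df["Expect"] = df["ndf_1y"] / df["spot"] - 1.0
# Internationalization proxy: log of Hong Kong offshore RMB deposits
df["lnCurrency"] = np.log(df["hk_rmb_deposits"])

data = df[["Expect", "lnCurrency", "Shortcapital"]].dropna()

for col in data:                                   # ADF unit-root tests (Table 2)
    stat, pval = adfuller(data[col])[:2]
    print(f"{col}: ADF statistic {stat:.3f}, p-value {pval:.3f}")

# Lag-order selection by LR/FPE/AIC/SC, as in the paper (optimal order 2)
print(VAR(data).select_order(maxlags=8).summary())
```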
Impulse Response Effect Analysis
We employed impulse response analysis and the variance decomposition method to examine the dynamic relationships among RMB exchange rate expectation, RMB internationalization and short-term capital flow. Figure 1 reports the impulse responses of short-term capital flow to shocks to RMB exchange rate expectation and RMB internationalization. The left panel of Figure 1 shows that short-term capital flows respond positively to the RMB exchange rate expectation shock. However, the positive response is short-lived, lasting for only one horizon and then falling immediately back to a near-zero level, though it rises briefly again in the third horizon. The results indicate that RMB appreciation expectation can drive short-term capital inflow. The right panel indicates that short-term capital flow responds positively to a one-standard-deviation structural shock to RMB internationalization. The positive shock impact lasts for four horizons before turning negative. The results suggest that the RMB internationalization process could lead to an increase in short-term capital inflow.
Figure 1. Impulse Response Function of Shortcapital to Expect and lnCurrency (left: response of Shortcapital to Expect; right: response of Shortcapital to lnCurrency).
Figure 2 presents the impulse responses of RMB exchange rate expectation to RMB internationalization and short-term capital flow shocks. The left panel shows that RMB exchange rate expectation responds positively to the shock of offshore market RMB deposits. A one-standard-deviation shock to offshore RMB deposits corresponds to a contemporaneous increase in the expectation of around 0.03 percentage points from the first horizon onwards. The shock impact is also quite persistent, with RMB exchange rate expectation on average about 0.04 percentage points higher from the first horizon onwards. This finding suggests that an increase in offshore market RMB deposits could lead to RMB appreciation expectation. The right panel shows that RMB exchange rate expectation decreases in response to short-term capital flow shocks. A one-standard-deviation capital flows shock can cause exchange rate expectation to drop contemporaneously by just over 0.01 percentage points within the first month. The impact reverses to become positive from the third horizon onwards, though the magnitude remains small. The results imply that short-term capital inflow could lead to a short-lived RMB depreciation expectation.
Figure 2. Impulse Response Function of Expect to lnCurrency and Shortcapital (left: response of Expect to lnCurrency; right: response of Expect to Shortcapital).
Figure 3 reports the impulse responses of RMB internationalization to RMB exchange rate expectation and short-term capital flow shocks. As can be seen from the left panel of Figure 3, the response of RMB internationalization to the RMB exchange rate expectation shock is positive, with an increasing trend over the horizons. The shock impact peaks at around 0.012 percentage points in the second horizon and persists at this level throughout the rest of the horizons. The results confirm that RMB appreciation expectation has a significant positive effect on RMB internationalization. The right panel of Figure 3 shows that, given a positive shock to short-term capital flow, offshore market RMB deposits respond positively, rising to 0.04 percentage points in the fourth horizon and continuing to rise thereafter. Thus, short-term capital inflow could promote RMB internationalization, and the positive shock impact is persistent. In sum, our results show that RMB appreciation expectation could lead to short-term capital inflow through increasing RMB internationalization.
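Continuing from the data frame built in the earlier sketch, the impulse-response analysis above and the variance decomposition reported next would look roughly as follows in statsmodels (again, the paper itself used Stata 13).

```python
from statsmodels.tsa.api import VAR

res = VAR(data).fit(2)        # VAR(2), the optimal lag order chosen above

irf = res.irf(12)             # impulse responses over 12 monthly horizons
irf.plot(orth=True)           # orthogonalized one-standard-deviation shocks

fevd = res.fevd(12)           # forecast-error variance decomposition (cf. Table 4)
fevd.summary()                # prints the share of each variable's forecast
                              # variance explained by each shock
```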
Variance Decomposition
The results of the variance decomposition for RMB internationalization, exchange rate expectation and short-term capital flows are reported in Table 4. It can be seen in Table 4 that RMB exchange rate expectation can only explain 1.077% of the variation in short-term capital flows and 1.836% of that in RMB internationalization, while RMB internationalization accounts for 3.605% of the short-term capital flows variance and 0.497% of the change in RMB exchange rate expectation, respectively. On the other hand, short-term capital flows can explain over 21% of the variation in RMB internationalization. The findings confirm that RMB internationalization is strongly associated with short-term capital flow, while RMB internationalization has limited impact on RMB exchange rate expectation.
Robustness Checks
We now turn to robustness checks of the interactive relationships between the variables of interest. Firstly, we used the global market share of RMB international payments as the proxy for RMB internationalization and re-ran the VAR model. Figure 4 reports the results. The left panel of Figure 4 shows that RMB internationalization still responds positively to the RMB appreciation expectation shock. Similarly, as can be seen in the right panel of Figure 4, RMB internationalization affects short-term capital inflow positively, suggesting that RMB internationalization may lead to an increase in short-term capital inflow. This finding is consistent with that from our baseline model.
Figure 4. Responses of lnCurrency1 to Expect (left) and of Shortcapital to lnCurrency1 (right).
Secondly, we assessed the significance and sign of the estimates. Table 5 reports the VAR model estimation results. As can be seen from Table 5, the coefficient of the first-order lag of lnCurrency is significantly positive at the 10% level, indicating that RMB internationalization has a positive effect on RMB appreciation expectation. The coefficient of the first-order lag of Shortcapital is significantly positive at the 1% level, implying that short-term capital flow has a positive effect on RMB internationalization.
Thirdly, the RMB was included in the Special Drawing Rights (SDR) currency basket in October 2016. We re-estimated the model using monthly data from October 2016 to December 2020. The results are reported in Figure 5. The left panel of Figure 5 shows that RMB internationalization still responds positively to the RMB appreciation expectation shock. Similarly, as can be seen in the right panel of Figure 5, RMB internationalization affects short-term capital inflow positively, suggesting that RMB internationalization may lead to an increase in short-term capital inflow. This finding is consistent with the conclusion from our baseline model.
Figure 5. Impulse Response of lnCurrency to Expect and Shortcapital to lnCurrency.
Finally, we considered other factors that may affect RMB internationalization, such as international trade. We constructed a four-variable VAR model, including RMB appreciation expectation, RMB internationalization, short-term capital flow and trade. The proxy for international trade is the growth rate of total imports and exports. The results are reported in Figure 6. The left panel of Figure 6 shows that RMB internationalization responds positively to the RMB appreciation expectation shock. Similarly, as can be seen in the right panel of Figure 6, RMB internationalization affects short-term capital inflow positively, suggesting that RMB internationalization may lead to an increase in short-term capital inflow. This finding is consistent with the basic conclusion of the above analysis.
Figure 6. Impulse Response of lnCurrency to Expect and Shortcapital to lnCurrency.
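The robustness exercises above map onto the same machinery: swap the internationalization proxy, restrict the sample to the post-SDR period, or append trade growth as a fourth variable. A hedged sketch, continuing from the earlier data frame, follows; payments_share and trade_growth are hypothetical column names.

```python
import numpy as np
from statsmodels.tsa.api import VAR

# (1) Alternative proxy: global market share of RMB international payments
alt = data.drop(columns="lnCurrency").join(
    np.log(df["payments_share"]).rename("lnCurrency1"))
VAR(alt.dropna()).fit(2).irf(12).plot(orth=True)

# (2) Post-SDR-inclusion subsample (October 2016 onward)
VAR(data.loc["2016-10":]).fit(2).irf(12).plot(orth=True)

# (3) Four-variable system adding trade growth
four = data.join(df["trade_growth"]).dropna()
VAR(four).fit(2).irf(12).plot(orth=True)
```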
Conclusions
In this paper, we have empirically examined the contemporaneous relationships among RMB exchange rate expectation, currency internationalization and short-term capital flows. Using the assets portfolio balance model, we found strong evidence that RMB appreciation will lead to an increase in foreign demand for RMB and RMB denominated assets. With the short-term capital flow determination model, we also found some evidence that the degree of RMB internationalization may lead to an increase in short-term capital inflow to China due to the reduction in transaction costs. The results from our VAR model show that RMB appreciation expectation promotes the RMB internationalization process, which in turn increases short-term capital inflow. It was also found that RMB internationalization may lead to currency appreciation expectation. This is consistent with the conclusions of Frankel (2012), Lardy and Douglass (2011) and Sha and Liu (2014). Our findings also suggest that, with the process of RMB internationalization, the relationship between exchange rate expectation and short-term capital flows is no longer straightforward and needs to be assessed in the context of RMB internationalization. Our findings have some important policy implications. First, with a higher level of RMB internationalization, it becomes more important for the PBoC to pay close attention to short-term capital flows, as high volatility in short-term capital flows has a destabilizing effect on the economy. This requires the central bank to conduct appropriate prudential financial regulation during the process of currency internationalization to cope with such a destabilizing effect. Second, although the government plays an important role in promoting the internationalization of the RMB, the process essentially has to be market driven and reflect China's rising importance in the global economy and financial system. The challenges for the monetary authority are essentially how to conduct monetary and exchange rate policy in the process of internationalizing the RMB, and how to inform the financial market of policy changes with minimal "shocking" effects on both the RMB onshore and offshore markets.
Data Availability Statement: The data presented in this study are available on request from the corresponding author.
v3-fos-license
2016-05-14T00:39:34.503Z
2014-01-01T00:00:00.000
835895
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plospathogens/article/file?id=10.1371/journal.ppat.1003891&type=printable", "pdf_hash": "a1807320a562cb7348dfc58176eed9288a9ba38b", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44691", "s2fieldsofstudy": [ "Biology" ], "sha1": "a1807320a562cb7348dfc58176eed9288a9ba38b", "year": 2014 }
pes2o/s2orc
Parvovirus-Induced Depletion of Cyclin B1 Prevents Mitotic Entry of Infected Cells
Parvoviruses halt cell cycle progression following initiation of their replication during S-phase and continue to replicate their genomes for extended periods of time in arrested cells. The parvovirus minute virus of mice (MVM) induces a DNA damage response that is required for viral replication and induction of the S/G2 cell cycle block. However, p21 and Chk1, major effectors typically associated with S-phase and G2-phase cell cycle arrest in response to diverse DNA damage stimuli, are either down-regulated or inactivated, respectively, during MVM infection. This suggested that parvoviruses can modulate cell cycle progression by another mechanism. In this work we show that the MVM-induced, p21- and Chk1-independent, cell cycle block proceeds via a two-step process unlike that seen in response to other DNA-damaging agents or virus infections. MVM infection induced Chk2 activation early in infection, which led to a transient S-phase block associated with proteasome-mediated CDC25A degradation. This step was necessary for efficient viral replication; however, Chk2 activation and CDC25A loss were not sufficient to keep infected cells in the sustained G2-arrested state which characterizes this infection. Rather, although the phosphorylation of CDK1 that normally inhibits entry into mitosis was lost, the MVM-induced DDR resulted first in a targeted mis-localization and then significant depletion of cyclin B1, thus directly inhibiting cyclin B1-CDK1 complex function and preventing mitotic entry. MVM infection thus uses a novel strategy to ensure a pseudo S-phase, pre-mitotic, nuclear environment for sustained viral replication.
Author Summary
DNA viruses induce cellular DNA damage responses that can present a block to infection that must be overcome, or alternatively, can be utilized to viral advantage. Parvoviruses, the only known viruses of vertebrates that contain single-stranded linear DNA genomes, induce a robust DNA damage response (DDR) that features a cell cycle arrest that facilitates their replication. We show that the autonomous parvovirus MVM-induced cell cycle arrest is caused by a novel two-step mechanism that ensures a pseudo S-phase, pre-mitotic, nuclear environment for sustained viral replication. A feature of this arrest is virally-induced depletion of the critical cell cycle regulator cyclin B1. Parvoviruses are important infectious agents that infect many vertebrate species including humans, and our study makes an important contribution to understanding how these viruses achieve productive infection in host cells.
Introduction
Parvoviruses are the only known viruses of vertebrates that contain single-stranded linear DNA genomes, and they present novel replicative DNA structures to cells during infection [1,2]. Unlike the DNA tumor viruses, parvoviruses do not drive quiescent cells into S-phase [3]. However, following S-phase entry, cellular DNA polymerase, presumably DNA pol δ, converts the single-stranded viral DNA genome into a double-stranded molecule that serves as a template for transcription of the viral genes. The NS1 protein is the main viral replicator protein of the parvovirus minute virus of mice (MVM), interacting specifically with the viral genome to process its various replication intermediates. Parvoviruses establish replication factories in the nucleus (termed Autonomous Parvovirus-Associated Replication, or APAR, bodies) where active transcription of viral genes and viral replication take place [4][5][6]. Viral replication induces a cellular DNA damage response which serves to prepare the nuclear environment for effective parvovirus takeover [7][8][9][10][11]. Following MVM infection, cellular genome replication soon ceases while viral replication continues for extended periods of time [12]. In order for viral replication to be sustained in infected cells, the cellular environment, including the replication machinery and raw materials for replication, must remain readily available. Thus, normal cell cycle progression must be altered. Parvoviruses employ varied mechanisms to disrupt normal cell cycle progression, sometimes in different ways depending on the type of cell infected [13]. Adeno-associated virus type 2 (AAV2) induces an S-phase block dependent upon Rep78 nicking of cellular DNA and inhibitory stabilization of cell division cycle 25A (CDC25A) [14].
B19 infection in semi-permissive cells causes a cell cycle arrest in G2 associated with accumulation of cyclins A and B1 and phosphorylated cyclin-dependent kinase 1 (CDK1) [15]. In the more permissive CD36 EPO cell line, B19 infection results in a G2 arrest primarily mediated by the viral NS1 protein through a mechanism that involves deregulation of the E2F proteins [16], independent of DNA damage signaling [11]. Minute virus of canines (MVC), a member of the Bocavirus genus of the Parvoviridae, also induces a G2/M arrest that is associated with accumulation of cyclins and maintenance of inhibitory phosphorylation of CDK1 [17]. Interestingly, the MVC G2 arrest is not dependent on the viral NS1 protein or on viral replication, but rather can be mediated by the viral genome per se: inoculation of UV-irradiated viral genomes was sufficient to induce a G2/M arrest. More recently, MVC was shown to induce a Structural Maintenance of Chromosomes protein 1 (SMC1)-mediated S-phase arrest to enhance its replication [18]. MVM NS1 has been shown to inhibit cellular DNA replication, and effects on both cellular DNA integrity [19] and the DNA polymerase-α complex have been reported [20]. MVM infection has also been reported to cause a cell cycle arrest prior to mitosis [21]; however, the mechanism by which this occurs and its role in viral replication have not been fully characterized. Two B-type cyclins exist in mammals, cyclin B1 and B2. Whereas cyclin B1 is an essential gene, cyclin B2-null mice develop normally, suggesting that B1 may compensate for cyclin B2's function in development [22]. Entry into mitosis requires both the accumulation of cyclin B1 and the activation of its associated CDK1 kinase via removal of its inhibitory phosphorylation [23]. This phosphorylation is dependent on Wee1, which inhibits CDK1 by phosphorylating it on tyrosine 15; the CDC25 phosphatases antagonize Wee1 and activate CDK1 by removing this phosphorylation mark [24][25][26][27][28]. Thus, maintenance of CDK1 inhibitory phosphorylation is a major mechanism of G2 arrest in response to various DNA-damaging agents [29,30]. This is achieved mainly via the activated Chk1 kinase, which inhibits the function of the CDC25 phosphatases, although Chk2 has been reported to inhibit these phosphatases as well [31,32]. MVM induces a robust DDR in infected cells, coordinated by ATM and characterized by phosphorylation of H2AX, Nbs1, Chk2 and p53 [7]. The DDR contributed to G2 arrest and was also required for robust viral replication in infected cells [7]. Surprisingly, the Chk1 kinase, which governs G2 arrest in response to a myriad of DNA damage stimuli, was not activated to detectable levels during MVM infection [7]. Furthermore, the G2 cell cycle arrest observed in MVM-infected murine A9 cells was not a consequence of p53-mediated up-regulation of p21 [33]. In this report we further characterize the cell cycle perturbations that take place following infection with MVM. MVM infection presents the cell with sustained DNA damage signaling, evidenced by increasing phosphorylation of H2AX throughout the course of infection [7]. But MVM infection also represents an atypical system in which two of the major players required for sustaining a G2 block in response to persistent DNA damage signaling, p21 and Chk1, are down-regulated or inactivated, respectively. How does viral infection sustain a cell cycle block in the absence of Chk1 activation or p21 up-regulation?
We show here that the Chk2 protein was activated and recruited into MVM APAR bodies during infection. Chk2 activation was important at an early point in parvovirus infection, necessary to induce a transient S-phase block which was associated with CDC25A degradation. This early S-phase arrest was important for viral replication; however, Chk2 activation and CDC25A loss were not sufficient to sustain the marked G2 arrest seen following MVM infection. Rather, we have found that although the phosphorylation of CDK1 that normally inhibits entry into mitosis was lost as infection progressed, the MVM-induced DDR resulted first in a targeted mis-localization and then a significant depletion of cyclin B1, thus directly inhibiting cyclin B1-CDK1 complex function and preventing mitotic entry. In this manner, MVM infection ensured a pseudo S-phase, pre-mitotic, nuclear environment for sustained viral replication.

Results

Chk2 activation mediated an S-phase arrest in MVM infected cells which facilitated viral replication

Chk2 was activated during MVM infection. The autonomous parvovirus MVM induces a DNA-damage response (DDR) in infected cells that results in cell cycle arrest prior to mitosis [7]. During this extended period the CDK inhibitor p21, which typically plays an important role in p53-mediated G2 arrest, was targeted for degradation in a proteasome-dependent manner [33]. This suggested that the MVM-induced cell cycle block was independent of the p53-p21 signaling axis. As described above, Chk1 and Chk2 are additional critical downstream checkpoint proteins known to regulate cell cycle progression. In MVM infected, but not mock-infected, murine cells we found that Chk2 exhibited the altered electrophoretic mobility associated with activation (Figure 1A, panel c, compare lanes 3 and 5 to lanes 2 and 4). Chk2 activation increased as the infection progressed (Figure 1A, panel c, compare lane 5 to lane 3), and could be reversed by treatment with calf intestinal phosphatase (Figure 1A, panel c, lane 6). Infection did not result in a total shift of the Chk2 species into the slower migrating form, as was found in cells treated with the radiomimetic neocarzinostatin (NCS) (Figure 1A, panel c, lane 7), likely because during infection of a para-synchronous population not all cells enter S-phase and support MVM replication in a perfectly uniform manner. MVM infection was confirmed by NS1 expression (Figure 1A, panel a, lanes 3, 5, 6), and an ongoing MVM-induced DDR was also confirmed by robust phosphorylation of H2AX on serine 139 (γH2AX) (Figure 1A, panel d, compare lanes 3 and 5 to lane 6) that was similar to, or higher than (at 24 h pi), that observed following NCS treatment (Figure 1A, panel d, compare lane 5 to 7). Chk2 activation is characterized by its phosphorylation on threonine 68 [34]. Because of species-specific antibody restrictions, this phosphorylation marker was examined in MVM-infected human NB324K cells. (MVM induces a DDR in the permissive NB324K cell line that is indistinguishable from that induced in murine cells [7].) Twenty-four hours after infection, phosphorylation of Chk2 on threonine 68 was clearly apparent (Figure 1B, lane 2; phosphorylated human Chk2 did not exhibit altered electrophoretic mobility under these gel conditions). Phosphorylated Chk2 also localized to MVM APAR bodies, sites of ongoing virus replication. As shown in Figure 1C, MVM infected, but not mock infected, NB324K cells showed reactivity with the anti-Chk2 T68 antibody (Figure 1C, middle panel).
Chk2 T68 staining was increased in the entire nucleus but showed greater staining intensity and localization within nuclear APAR bodies, identified by the presence of MVM NS1 (Figure 1C, top panel). The specificity of this antibody was validated by confirming reactivity with NCS-activated samples and the loss of this reactivity upon further addition of the Chk2 inhibitor (data not shown). We did not observe redistribution of the total Chk2 protein to APAR bodies in infected cells (data not shown), suggesting that only the activated Chk2 protein was re-localized. We could not detect activation of Chk1 during MVM infection of NB324K cells in these experiments (data not shown), as also previously reported for MVM infection of murine A9 cells [7], arguing against a major role for this kinase in MVM-induced cell cycle arrest.

Author Summary

DNA viruses induce cellular DNA damage responses that can present a block to infection that must be overcome, or alternatively, can be utilized to viral advantage. Parvoviruses, the only known viruses of vertebrates that contain single-stranded linear DNA genomes, induce a robust DNA damage response (DDR) that features a cell cycle arrest that facilitates their replication. We show that the autonomous parvovirus MVM-induced cell cycle arrest is caused by a novel two-step mechanism that ensures a pseudo S-phase, pre-mitotic, nuclear environment for sustained viral replication. A feature of this arrest is virally-induced depletion of the critical cell cycle regulator cyclin B1. Parvoviruses are important infectious agents that infect many vertebrate species including humans, and our study makes an important contribution to how these viruses achieve productive infection in host cells.

Similarly, we failed to observe activation, as detected by its phosphorylation, of SMC1, a chromosomal protein that is a component of the cohesin complex which has been implicated in cell cycle arrest, particularly in S-phase. SMC1 was recently shown to be important for S-phase arrest in bocavirus infected cells [18]. However, A9 cell extracts from the same experiment shown in Figure 1A exhibited only a slight increase in SMC1 phosphorylation above mock-infected cell levels at each time point tested (detectable only upon extended exposure; Figure 1D, top panel, compare lanes 2 and 3 or lanes 3 and 4), and these levels never exceeded the background seen as a consequence of the synchronization procedure (Figure 1D, top panel, lane 1). MVM infection did not alter amounts of total SMC1 protein (Figure 1D, bottom panel). As expected, control treatment with NCS resulted in significant phosphorylation of SMC1 (Figure 1B, lane 6). These results suggested that MVM-induced murine cell cycle arrest likely occurs independently of high levels of activated SMC1.

Figure 1 (legend). A9 cells were para-synchronized in G0 as described in Materials and Methods. Cells were then mock-infected or infected with MVMp at an MOI of 10 at the time of release (T0, lane 1). As a positive control for Chk2 activation, A9 cells were treated with 150 ng/ml of the radiomimetic neocarzinostatin (NCS, lane 7) for 1 hour. Treatment with calf intestinal phosphatase (CIP, lane 6) was done for 1 hour at 37°C. Cells were harvested at the indicated time points after release and lysed in modified RIPA buffer. Protein content was measured using the Bradford assay, and equal amounts of protein were loaded in each well for immunoblotting. Western blot analysis was carried out using antibodies directed against NS1 (panel a), Chk2 (panel c) and H2AX phosphorylated on serine 139 (γH2AX, panel d). Equal loading was confirmed by blotting for actin (panel b). (B) Chk2 activation following MVM infection in permissive human NB324K cells. NB324K cells were mock infected or infected with MVM at an MOI of 10. Cells were harvested 24 hours post infection. Western blot analysis using antibodies directed against NS1, actin, Chk2 phosphorylated on threonine 68 (Chk2-P-T68) and total Chk2 protein (Chk2), as indicated, is shown. (C) Activated Chk2 localized within MVM APAR bodies. NB324K cells were infected for 24 hours before fixation and processing for immunofluorescence. APAR bodies were detected with antibodies to NS1. Nuclei were stained with TOPRO-3. Staining with an antibody to phosphorylated Chk2, observed only in infected cells, was prominent in distinct foci which co-localized with APAR bodies and also in a pan-nuclear pattern (merge panel). All images were captured using a 63× objective. (D) SMC1 is not significantly activated following MVM infection of murine A9 cells. Lysates from Figure 1A were blotted with antibodies directed against total SMC1 protein (SMC1) and SMC1 protein phosphorylated on serine 957 (SMC1-P-S957). Short and long exposures of the same blot are shown. doi:10.1371/journal.ppat.1003891.g001

Chk2 activation during MVM infection resulted in a transient S-phase arrest associated with degradation of CDC25A. MVM infection of asynchronous murine A9 cells resulted in substantial increases in the amount of cells in both S and G2 phases compared to non-infected controls (from 13.5% to 24% for S phase and from 24% to 42% for G2 phase, respectively; Figure 2A, compare mock to MVM). In these experiments 70 to 80% of the cells were infected as determined by immunofluorescence for NS1 at this time point (data not shown). Addition of Chk2 inhibitor II affected infected cell cycle progression, but not by significantly reducing accumulation in G2. Rather, inhibitor treatment resulted in a substantial reduction in the amount of cells that accumulated in S-phase (from 24% to 12%, Figure 2A) without altering the cell cycle distribution of uninfected cells. This result suggested that in MVM infected cells activated Chk2 played a role during S-phase transition, rather than mediating cell cycle arrest in G2. The inhibitor was found to induce a modest increase in the percentage of cells resident in G2 (from 34% to 41%, Figure 3A), consistent with transition from an earlier transient block. More cells remained in G1 phase than expected, for reasons that are not yet clear. An S-phase arrest during MVM infection has been noted previously by others [21]. Two parallel pathways have been primarily implicated in cell cycle arrest in S-phase following DNA damage: the Nbs1/SMC1 pathway and the ATM/Chk1-Chk2/CDC25A pathway [35]. Since inhibition of Chk2 kinase activity diminished the percentage of infected cells in S-phase, and because we failed to observe significant increases in SMC1 phosphorylation above background levels, we examined downstream events in the Chk2 signaling pathway. Both Chk1 and Chk2 activation have been implicated in CDC25A degradation following ionizing radiation [35][36][37].

Figure 2 (legend, partial). Additionally, lysates were blotted with antibodies directed against p53 phosphorylated on serine 15 (p53-P-S15, panel f) and RPA32 phosphorylated on serine 4/8 (RPA32-P-S4/8, panel g). doi:10.1371/journal.ppat.1003891.g002
As can be seen in Figure 2B, there was a significant loss of CDC25A in MVM infected A9 cells beginning at 19 h pi, when virus replication (as evidenced by NS1 expression) became prominent, which continued through 25 hours post infection (h pi) (Figure 2B, panels a and d, compare lanes 2 and 4 with lanes 1 and 3). H2AX phosphorylation confirmed an ongoing MVM-induced DDR (Figure 2B, panel f, lanes 4 and 6). The Chk2 mobility shift indicative of Chk2 activation was not clearly visible until 19 h pi in the experiment shown (Figure 2B, panel c, lanes 2 and 4), although we have observed it at earlier time points in other experiments (data not shown). We did not observe a reduction in the levels of the related protein CDC25C (Figure 2B, panel e), suggesting that the loss was specific to CDC25A. The loss of CDC25A could be substantially reversed by treatment with MG132 (Figure 2C, compare lanes 2 and 3), indicating that it had been targeted to the proteasome. Treatment with Chk2 inhibitor II, which prevented the activation of Chk2 (as evidenced by the loss of Chk2 with an altered mobility; Figure 2D, panel c, compare lanes 2 and 3), prevented the reduction in CDC25A levels observed following MVM infection of either para-synchronous (Figure 2D, panel d, compare lanes 2 and 3) or asynchronous (data not shown) A9 cells, without affecting the levels of the related CDC25C protein (Figure 2D, panel e). The Chk2 inhibitor did not substantially affect accumulation of NS1 (Figure 2D, panel a), nor did it inhibit the MVM-induced DDR, as indicated by levels of phosphorylated RPA32 and γH2AX (Figure 2D, panels g and h), suggesting that it performed its function subsequent to the initiation of genome replication. However, we did observe a reduction in activated p53 levels (Figure 2D, panel f, lanes 2 and 3), which may have been due to the previously reported role of Chk2 in phosphorylating p53 [38]. These results suggested that Chk2 activation resulted in reduced levels of CDC25A during MVM infection, as has been observed following treatment with ionizing radiation [36]. The Chk2-mediated S-phase arrest facilitated MVM replication. siRNA knockdown of Chk2 in A9 cells resulted in significant depletion of Chk2 protein in infected cells compared to control siRNA treated cells (Figure 3A, lower panel, compare lanes 1 and 2). This resulted in an approximately two-fold reduction in MVM replication compared to cells treated with control siRNA (Figure 3A, upper panel, compare lanes 1 and 2). Pre-treatment of asynchronously growing cells with the Chk2 inhibitor resulted in at least a four-fold reduction in accumulation of monomer replicative forms (Figure 3B, lane 2) compared to vehicle treated cells (Figure 3B, lane 1). In these experiments, accumulated NS1 levels were not drastically reduced even though there was reduced replication (data not shown; see also Figure 2B, lanes 2 and 3), which was likely due to the very stable nature of the NS1 protein [39]. Taken together, these results suggested that activation of Chk2 during infection induced a transient S-phase block which is important for efficient viral replication.

The MVM-induced G2/M cell cycle block featured initial mis-localization and subsequent loss of cyclin B1

MVM infection induced a sustained pre-mitotic block even though the inhibitory phosphorylation of CDK1 was lost. In addition to a transient block within S-phase, MVM infection results in an essentially complete block to the entry of infected cells into mitosis [21].
Whereas CDK2, which is mainly activated by CDC25A [40], plays important roles in S-phase progression, transit from G2 to mitosis in normal cycling cells is governed by activity of the CDK1 (also called cdc2) kinase in complex with its mitotic cyclin B1 [23]. This kinase can be subject to various regulatory mechanisms in response to DNA damaging drugs and during certain viral infections. In normal cycling cells in the absence of DNA damage, the Wee1 kinase phosphorylates CDK1 on Tyr 15, rendering it inactive. When cells cycle to the G2/M border, it is primarily the CDC25C phosphatase that removes this inhibitory phosphorylation, thus promoting activation of CDK1 and mitotic entry [40]. In order to determine the mechanism of G2 arrest following MVM infection, we first examined the activity and phosphorylation status of CDK1 over the course of infection. CDK1 kinase assays using histone H1 as a substrate demonstrated that at both early (24 h pi) and late (32 h pi) time points following MVM infection, CDK1 activity was reduced to levels seen following control doxorubicin treatment, which blocks cells in G2 (Figure 4, lanes 1 and 3). However, surprisingly, by 32 h pi, even though cells were arrested and CDK1 kinase activity was reduced even further (Figure 4, panels a and i, compare lanes 1 and 2), the inhibitory phosphorylation of CDK1 was significantly reduced (Figure 4, panel e, compare lanes 1 and 2), a state normally associated with cell cycle progression. This suggested that the cell cycle block seen at later times during MVM infection may have been the result of another mechanism. Expression of NS1 confirmed ongoing viral infection (Figure 4, panel c, lanes 1 and 2). Virus-mediated down-regulation of cyclin B1 prevented mitotic entry. These results suggested that the kinase component of the cyclin B1-CDK1 complex was likely not the critical regulatory target for sustaining the MVM-induced G2 cell cycle block at later time points during infection, and so we turned our attention to cyclin B1 itself. Following MVM infection of para-synchronized murine A9 cells, cyclin B1 levels were seen to accumulate at 18 h pi, increasing through the 24 h time point (Figure 5A, panel e, lanes 4 and 5). However, remarkably, by 33 h pi there was a dramatic reduction in the accumulated levels of cyclin B1 (Figure 5A, panel e, compare lanes 5 and 6). This was unexpected, as cyclin B1 expression normally peaks in G2 in cycling cells [23]. Because cyclin B1 is a required cofactor for the G2 to M transition, its depletion would account for the sustained loss of kinase activity we observed (Figure 4, panels a and i, lanes 1 and 2), and the apparent failure of dephosphorylated CDK1 (see Figure 5A, panel b, lane 6) to promote mitotic entry in MVM infected cells. In these experiments total CDK1 expression became detectable around 12 h post release (when cells had begun to cycle into S-phase) (Figure 5A, panel c, lane 3), while the inhibitory tyrosine-15 phosphorylation was again lost by 33 h pi (Figure 5A, panel b, lane 6). NS1 expression was used as a marker for infection (Figure 5A, panel a), and cyclin A levels indicated entry into S-phase and were sustained thereafter (Figure 5A, panel d, lanes 3-6). To confirm that MVM infected cells in which cyclin B1 was lost had not proceeded into mitosis, we performed a nocodazole trap experiment.
Cells that are blocked prior to mitosis can be differentiated from normal cycling cells by immunostaining with an antibody to histone H3 phosphorylated on serine 28, which is a marker for cells present in mitosis [42]. As expected, uninfected cells (which only had a relatively low percentage of cells in mitosis) showed little histone H3 phosphorylation (Figure 5B, lane 1), while nocodazole treatment resulted in an increase in phosphorylated histone H3 (Figure 5B, lane 3), consistent with an accumulation of cells in mitosis. In contrast, MVM infected cells did not exhibit histone H3 phosphorylation either in the absence (Figure 5B, lane 2) or presence (Figure 5B, lane 4) of nocodazole, indicating that MVM infected cells were blocked prior to mitotic entry, presumably due to the absence of cyclin B1. Typically, the highest amounts of cyclin B1 are found in G2/M cells, since cyclin B1 transcription begins in S-phase and peaks at the G2/M border [23]. In contrast to MVM infected cells, arrest of uninfected A9 cells at the G2/M border by doxorubicin resulted in dramatic elevation of cyclin B1 protein compared to mock asynchronous cells (Figure 5C, compare lanes 1 and 2). This result confirmed that the loss of cyclin B1 observed following MVM infection was a specific, virally-induced event, and likely the cause rather than the consequence of the cell cycle block. A reduction in cyclin B1 mRNA was also apparent during MVM infection as early as 24 h pi (Figure 5D, compare lanes 1 and 2). To compare cell cycle alterations with mock infected cells in parallel also at later time points, mock cells were treated with nocodazole at 19 h post release to prevent entry into the next cell cycle phase. Cells were processed for western blotting and RNase protection assays in parallel. The results demonstrated a correlation between the reduction in cyclin B1 protein and RNA.

MVM infection induced premature nuclear entry and recruitment to APAR bodies of cyclin B1

Under normal conditions, both the expressed levels and localization of cyclin B1 are tightly regulated. In normal cycling cells, cyclin B1 levels progressively increase in the cytoplasm of S and G2 phase cells and only begin to accumulate in the nucleus just prior to nuclear envelope breakdown. It has recently been shown that nuclear import of the cyclin B1/CDK1 complex is dependent on its own kinase activity [43,44]. At early times during MVM infection, prior to the loss of cyclin B1, we have shown that the complex was inactive due to the inhibitory phosphorylation of CDK1 (Figure 4, panels a and i, lanes 1 and 2); thus, we expected that at these times cyclin B1 would remain cytoplasmic. However, surprisingly, at 24 h post MVM infection, when 70-80% of cells were infected, we observed that the cyclin B1 that was present displayed a nuclear localization in approximately 90% of infected cells (see Figure 6B), and in many cells showed co-localization with MVM NS1 in APAR replication centers (Figure 6A, panels a and b). At this time point we also found many infected cells that showed reduced cyclin B1 staining (see Figure 6A, panel a). Consistent with the results shown in Figure 5A, by 30 h pi there was very little staining of cyclin B1 apparent, with some cells exhibiting undetectable cyclin B1 in these assays (Figure 6A, panel d).
In contrast, doxorubicin treated cells blocked in G2 exhibited both an increased accumulation and cytoplasmic localization of cyclin B1 (Figure 6A, panel e), which was restricted to the cytoplasm in all cells with detectable cyclin B1 expression (Figure 6B). Together, these results suggest that MVM infection led to an early mis-localization of cyclin B1. It is not known whether mis-localization of cyclin B1 at early times during infection is a necessary prelude to its eventual loss.

Discussion

DNA viruses induce cellular DDRs that can present a block to infection that must be overcome, or alternatively can be utilized to viral advantage [45]. Many viruses, including parvoviruses, have been shown to induce DDR-dependent cell cycle alterations in infected cells, employing varying mechanisms [46,47]. Due to their small genome sizes, parvoviruses do not encode their own polymerases, and must rely on cellular proteins in order to replicate their genomes [3]. A few hours after S-phase entry, viral replication commences and soon thereafter a DNA damage response, characterized by phosphorylation of a number of cellular DDR-related proteins, is initiated [7][8][9]. Viral replication is required for full induction of the MVM-dependent DDR; however, the specific trigger of this response is not yet clear. Coincident with viral replication and an ongoing DDR, cellular replication begins to decline. Infected cells undergo a transient S-phase block and eventually become fully arrested prior to mitosis [1,7,21]. A poorly understood feature of autonomous parvovirus replication is that it is sustained for long periods of time in a pre-mitotic nuclear environment following cessation of cellular DNA replication [12]. MVM genome replication is a source of ongoing DDR induction, yet p21 and Chk1, major players typically associated with S-phase and G2-phase cell cycle arrest in response to diverse DNA damage stimuli, are either down-regulated, or inactivated, respectively, during infection [7,33]. We have shown that MVM infection induced a cell cycle block independently of these two proteins via a two-step mechanism which is unlike that seen following other DNA-damaging agents or virus infection. The Chk2 protein was first activated and recruited to MVM replication centers during infection. Chk2 activation was necessary to induce a transient S-phase block associated with CDC25A degradation, which was necessary for full levels of viral replication. Chk2 activation and CDC25A loss, however, were not sufficient to induce the dramatic G2 arrest seen following MVM infection. While the Y15 phosphorylation of CDK1 that normally inhibits entry into mitosis was lost as infection progressed, the MVM-induced DDR resulted in the near-complete depletion of cyclin B1, thus directly inhibiting cyclin B1-CDK1 complex function. This was likely the main cause of the more permanent G2 arrest that ensured that infected cells did not proceed into mitosis. The intra-S-phase reduction in cellular DNA replication is a general response to DNA damage stimuli that helps affected cells to correct replication defects before proceeding to G2 and M [35]. Our data suggest that parvoviruses may exploit this response in order to prime the cell to support their replication. Upon DDR induction in MVM infected cells, Chk2 was phosphorylated by ATM. There was a pan-nuclear increase in phospho-Chk2 staining and, importantly, accumulation of the phosphorylated protein in APAR bodies where viral replication takes place.
Chk2 activation resulted in proteasome-mediated CDC25A degradation. Since CDC25A phosphatase is necessary for activating CDK2 [40], loss of CDC25A in MVM infected cells likely led to reduced CDK2 activity, which would be predicted to inhibit the cellular DNA replication machinery [48] and cell cycle progression [49]. Additionally, Chk2 activity was recently shown to directly inhibit the replicative helicase complex, providing an alternative basis for Chk2-mediated S-phase arrest [50]. Our inhibition of Chk2 kinase activity thus abrogated the virus-mediated S-phase arrest, resulting in a subsequent reduction in viral replication. Complete inhibition of CDK2 during S-phase would likely prevent viral replication. But, in order to have complete inhibition of CDK2 activity and a total S-phase block, both the ATM/Chk1-Chk2/CDC25A pathway and the ATM/Nbs1/SMC1 pathway must be inactivated [35,36,51,52]. We did not observe significant activation of SMC1 during MVM infection. Indeed, in preliminary experiments we have found reduced, but not complete, inactivation of CDK2 activity following infection (Adeyemi and Pintel, unpublished). Following IR treatment, inactivation of either the Chk2-CDC25A pathway or the SMC1 pathway alone results in a partial radio-resistant DNA synthesis (RDS) phenotype: a scenario in which a reduced level of DNA synthesis still takes place following radiation treatment [51]. We recently found that the potent CDK inhibitor p21, which inhibits repair synthesis, is targeted for degradation during MVM infection [33], and our earlier work has shown that complete abrogation of CDK2 activity via roscovitine treatment reduced virus replication [33]. Thus, it is possible that activation of Chk2 but not SMC1 during MVM infection may have allowed the low levels of CDK2 activity necessary to maintain synthesis of viral DNA, while still limiting cellular DNA replication in these cells.

Figure 6 (legend). Mislocalization of cyclin B1. Panels a to d: para-synchronized A9 cells were infected with MVMp (MOI of 10) for 24 or 30 hours before being fixed and processed for immunofluorescence. APAR bodies were detected with antibodies to NS1. Representative images are shown. G2 cells (identified by prominent cytoplasmic staining of cyclin B1) are shown juxtaposed to infected cells. In infected cells which showed cyclin B1 staining, cyclin B1 was nuclear and observed within distinct foci which co-localized with APAR bodies. Also, cyclin B1 staining intensity was reduced in many infected cells (compared to G2 cells, see Figure 6C) and was completely absent in some (see Figure 6D, compare the infected cell on the far right with the one adjacent). All images were captured using a 63× objective. Panel e: A9 cells were treated with 200 nM doxorubicin (doxo) for 28 hours and processed as above.

In addition to transiently blocking cells in S phase, parvovirus infection results in a marked pre-mitotic cell cycle arrest [7,21]. Activity of the cyclin B1/CDK1 complex, which is required for mitotic entry, is dependent on the levels of cyclin B1, its localization in the cell, and regulatory phosphorylation of CDK1 [23]. Typically, maintenance of the inhibitory phosphorylation of CDK1 serves to halt cells in G2 phase following DNA damage stimuli [30]. The inactive complex is retained in the cytoplasm until the source of the damage stimulus is removed [23].
As expected, at late times during MVM infection kinase assays demonstrated that CDK1 activity was inhibited; surprisingly, however, we found that the inhibitory phosphorylation of CDK1 was lost. Instead, we observed a dramatic reduction in cyclin B1 levels at this time, ensuring inactivity of the cyclin B1/CDK1 complex and a block to mitotic entry. Arrest of uninfected A9 cells at the G2/M border by doxorubicin resulted in dramatic elevation of cyclin B1 protein, suggesting that the loss of cyclin B1 observed following MVM infection was a specific, virally-induced event, and likely the cause rather than the consequence of the cell cycle block. Failure to sustain the inhibitory phosphorylation of CDK1 may have been due to the absence of Chk1 activation. Chk1 exerts its inhibitory effects on CDK1 via inhibitory phosphorylation of CDC25C [29,40]; however, we have been unable to detect Chk1 activation in MVM infected cells. Chk2 has also been shown to phosphorylate CDC25C in vitro [31]; however, our data suggest that Chk2 activity is unable to sustain a permanent G2 arrest during MVM infection. Ectopic expression of the MVM NS1 protein by itself led to an increase rather than a decrease in cyclin B1 levels (data not shown), suggesting that an event mediated by viral replication and induction of the DDR triggered the reduction of cyclin B1 levels in MVM infected cells. Reduction in cyclin B1 appeared to correlate with a prior depletion of cyclin B1 mRNA. The cause of cyclin B1 loss is not yet clear. In preliminary experiments, siRNA depletion of p53 partially restored cyclin B1 levels, suggesting that p53 activation may be involved in cyclin B1 RNA depletion (Adeyemi, Fuller, and Pintel, in preparation). Typically, cyclin B1 levels progressively accumulate in cells during S-phase, peaking in late G2 and early mitosis [23]. MVM infection did not appear to prevent the initial increase in cyclin B1 protein levels; cyclin B1 was detected early and increased through 24 h pi before beginning to decrease. Depletion of cyclin B1 mRNA, however, occurred as early as 24 h after infection. Interestingly, in cells that had not yet lost cyclin B1, we observed nuclear localization of cyclin B1 and co-localization with NS1 in APAR replication bodies. This was unexpected, because cytoplasmic retention of cyclin B1 is a general mechanism governing cell cycle arrest during infection by various viruses and DNA damage stimuli [46,47]. Furthermore, while it has recently been shown that cyclin B1/CDK1 activation is necessary for nuclear localization of the complex [43,44,53], in MVM infected cells cyclin B1 showed nuclear localization even though the kinase remained inactive. It is not yet clear whether the mis-localization of cyclin B1 at early times during infection is a necessary step in its subsequent depletion. There is substantial evidence indicating that the sustained S/G2 cell cycle block seen following MVM infection is essential for its replication. Abrogation of MVM-induced DNA damage signaling by caffeine and ATM inhibitors, which by themselves did not affect the cell cycle distribution of mock infected cells, led to a significant reduction in both viral replication and virus-induced S/G2 arrest [7].
Here we have shown that inhibition of Chk2, whose activity appears to function after the onset of MVM replication (as evidenced by the initial accumulation of NS1) and which blocks subsequent S-phase progression, significantly reduced MVM replication without substantially affecting the proportion of cells blocked in G2 phase. We have not excluded, however, that Chk2 activity affects MVM replication in ways independent of an S-phase block. Similarly, a recent report has also shown that S-phase arrest following MVC infection is critical for its replication [18]. We assume that depletion of cyclin B1 is also a necessary feature of MVM infection; however, how reduced cyclin B1 levels facilitate infection is not yet clear. A number of other DNA viruses have recently been shown to exploit a G2 arrest to promote their replication [16,46,54]. In conclusion, we have shown that during MVM infection Chk2 activation led to a transient S-phase block which was associated with CDC25A degradation and was necessary for viral replication. Chk2 activation and CDC25A loss alone were not sufficient to sustain the G2 arrest seen following MVM infection. Rather, although the phosphorylation of CDK1 that normally inhibits entry into mitosis was lost as infection progressed, the MVM-induced DDR resulted first in a targeted mis-localization and then significant depletion of cyclin B1, thus directly inhibiting cyclin B1-CDK1 complex function and preventing mitotic entry. MVM infection thus uses a novel strategy to ensure a pseudo S-phase, pre-mitotic, nuclear environment for sustained viral replication.

Materials and Methods

Cell lines, viruses and virus infections

Murine A9 and human NB324K cells were propagated as previously described. Wild-type MVMp was propagated following transfection of the viral infectious clone in NB324K cells and titered by plaque assay on A9 cells as previously described [7]. Infections were carried out at an MOI of 10 unless otherwise indicated. Where indicated, reinfection was blocked by addition of viral capsid neutralizing antibodies to the media.

Cell synchronization and drug treatments

A9 cells were para-synchronized in G0 by isoleucine deprivation. Unless otherwise indicated, doxorubicin (Sigma) was used at a final concentration of 200 nM. Chk2 inhibitor II was obtained from Sigma and used at a final concentration of 10 µM. Nocodazole was obtained from Calbiochem and used at a final concentration of 150 ng/ml, and controls were treated with the DMSO vehicle.

siRNA transfections

ON-TARGETplus SMARTpool siRNAs directed against mouse Chk2 (cat # L-0406034-00) were obtained from Dharmacon. A9 cells plated in isoleucine-deprived media in 60 mm dishes were transfected on the day of plating with 40 nM of siRNA using HiPerfect transfection reagent (Invitrogen). Transfections were repeated 24 hours later, and the next day the cells were released into complete media and processed as described in the figure legends.

Immunoblot analyses

Cells grown and infected in 60 mm dishes were harvested and lysed in modified RIPA buffer containing 20 mM Tris-HCl pH 7.5, 150 mM NaCl, 10% glycerol, 1% NP-40, 1% sodium deoxycholate, 0.1% SDS, 1 mM EDTA, 10 mM trisodium pyrophosphate, 20 mM sodium fluoride, 2 mM sodium orthovanadate and 1× protease inhibitor cocktail (Sigma). Alternatively, cells were lysed in 2% SDS lysis buffer directly on cell culture dishes as previously described.
Protein concentrations were quantified by Bradford assay, and equal amounts of lysates were loaded in wells and used for western blot analyses as previously described [7].

Kinase assays

Kinase assays were performed using a previously described protocol [55]. Briefly, CDK1 was immunoprecipitated from equal amounts of infected or drug-treated A9 cell lysates using 4 µg of CDK1 antibody (Cat # Ab18, Abcam). Histone H1 (Millipore) was used as a substrate for kinase assays.

Immunofluorescence

For immunofluorescence, NB324K cells or para-synchronized A9 cells were grown on glass coverslips in 35 mm dishes and infected with MVMp using an MOI of 10. After 24-30 hr, cells were washed with PBS, fixed with 4% paraformaldehyde for 15 min and extracted with 0.5% Triton X-100 in PBS for 10 min. After blocking in PBS containing 3% BSA, cells were stained with the indicated antibodies. Nuclei were visualized by staining with either DAPI or TOPRO-3. The coverslips were mounted in Fluoromount-G (Southern Biotech) and images were acquired using a Zeiss LSM 510 Meta confocal microscope. All images were captured using a 63× objective.

Cell cycle analyses

An hour before infection, cells were pre-treated with 10 µM Chk2 inhibitor II or vehicle (DMSO) control. Cells were then mock infected or infected at an MOI of 10. After 24 hours, cells were harvested and fixed in 4% formaldehyde for 15 min at room temperature. Alternatively, cells were fixed in 70% ethanol for 15 min on ice. Cells were then pelleted, washed in PBS and resuspended in 50 µg/ml propidium iodide solution containing 0.1 mg/ml RNase A as well as 0.05% Triton X-100 for 40 min at 37°C. Cells were resuspended in PBS and flow cytometry was performed using a FACScan (BD Biosciences). Data were analyzed using Summit software (Beckman Coulter).

Analysis of viral DNA

Cell pellets from 60 mm dishes were split in two, with one half used for western blot analysis and the other half for Southern blot analysis. Southern blots were carried out as previously described using whole MVM genome probes. Loading of DNA samples was normalized using a NanoDrop spectrophotometer, and results and quantifications were standardized using probes against mitochondrial DNA as described [56].

RNase protection assay (RPA)

RPAs were performed as previously described [57]. Total RNA was isolated using Trizol reagent (Invitrogen). Murine cyclin B1 cDNA was obtained from Origene. Nucleotides 1 to 180 of the murine cyclin B1 cDNA were cloned into pGEM3Z to make an antisense probe.

Northern blots

Total RNA was extracted using Trizol reagent. Northern blots were performed as previously described [58]. Cyclin B1 was detected by probing with a 32P-labeled full-length murine cyclin B1 cDNA.
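As an aside on the quantification described under "Cell cycle analyses" above: the G1/S/G2 percentages reported in the Results are the kind of readout obtained by gating a propidium iodide (DNA content) histogram around the 2N and 4N peaks. The following Python sketch only illustrates that gating logic; it is not the Summit analysis the authors used, and the peak positions, gate width, and synthetic data are all illustrative assumptions.

import numpy as np

def cell_cycle_fractions(pi_intensity, g1_peak, g2_peak=None, window=0.15):
    # Gate a per-cell propidium iodide (DNA content) distribution around
    # the 2N (G1) and 4N (G2/M) peaks; cells between the gates count as S.
    x = np.asarray(pi_intensity, dtype=float)
    g2_peak = 2.0 * g1_peak if g2_peak is None else g2_peak
    g1 = (x >= g1_peak * (1 - window)) & (x <= g1_peak * (1 + window))
    g2 = (x >= g2_peak * (1 - window)) & (x <= g2_peak * (1 + window))
    s = (x > g1_peak * (1 + window)) & (x < g2_peak * (1 - window))
    n = max(len(x), 1)
    return {"G1": g1.sum() / n, "S": s.sum() / n, "G2/M": g2.sum() / n}

# Synthetic mock-infected population: 2N peak at 100, 4N peak at 200.
rng = np.random.default_rng(0)
mock = np.concatenate([rng.normal(100, 8, 600),    # G1 (2N)
                       rng.uniform(118, 168, 150), # S
                       rng.normal(200, 12, 250)])  # G2/M (4N)
print(cell_cycle_fractions(mock, g1_peak=100))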
Successful Strategies to Engage Research Partners for Translating Evidence into Action in Community Health: A Critical Review

Objectives. To undertake a critical review describing key strategies supporting the development of participatory research (PR) teams to engage partners for creation and translation of action-oriented knowledge. Methods. Sources are four leading PR practitioners identified via bibliometric analysis. The authors' publications from January 1995 to October 2009 were identified in PubMed, Embase, ISI Web of Science and CAB databases, and in books. Works were limited to those containing a process description of a research project on which the practitioner was first, second, third, or last author. Results. Adapting and applying the "Reliability Tested Guidelines for Assessing Participatory Research Projects" to the retained records identified five key strategies: developing advisory committees of researchers and intended research users; developing research agreements; using formal and informal group facilitation techniques; hiring co-researchers/partners from the community; and ensuring frequent communication. Other, less frequently mentioned strategies were also identified. Conclusion. This review is the first in which these guidelines were used to identify key strategies supporting PR projects. They proved effective at identifying and evaluating engagement strategies as reported by completed research projects. Adapting these guidelines identified gaps where the tool was unable to assess the fundamental PR elements of power dynamics, equity of resources, and member turnover. Our resulting template serves as a new tool to measure partnerships.

Introduction

The creation and timely translation of action-oriented knowledge can rest on meaningful engagement with end-users, even before the research begins [1,2]. Participatory research (PR) (following Cargo and Mercer [3] and Green et al. [4], we use PR as an umbrella term to include all partnered research, including community-based participatory research (CBPR), action research, participatory action research, participatory evaluation, community engagement and patient engagement) and community engagement continue to attract increased attention as an approach to research, requiring the formation of teams of researchers in partnership with those affected by the issue under study in the community [3][4][5] and those who will utilize the results to effect change [6,7]. Overall, the literature suggests that the PR partnership approach increases the relevance of research questions [3,5,8], with the potential for effective knowledge translation [9,10], leading to faster uptake of evidence into practice [11]. For these reasons research granting agencies, including the National Institutes of Health (NIH), the Patient Centered Outcomes Research Institute (PCORI), and the Canadian Institutes of Health Research (CIHR), are increasingly requiring that researchers partner with community members, patients, health professionals, health organisations, and policy makers, resulting in many more researchers adopting a participatory approach. In 1995, Green and colleagues developed guidelines intended to allow reviewers of funding agencies to assess stakeholders' engagement in PR projects [4,12].
In 2008, these guidelines were further refined and reliability-tested to develop the Reliability Tested Guidelines for Assessing Participatory Research Projects [13] as a tool to (i) help funding agencies and peer reviewers assess the participatory nature of proposals submitted for funding as participatory research; (ii) aid evaluators in assessing the extent to which projects meet participatory research criteria; and (iii) assist researchers and intended users of the research (i.e., nonacademic partners) in strengthening the participatory nature of their project proposals and applications for funding [12,13]. In 2009, van Olphen et al. [14] applied these guidelines for the first time to a single project, to assess the extent to which their research was participatory as perceived by community, advocacy, and scientific partners. The authors concluded that this had been a very useful undertaking and that "further research should focus on the adaptation of PR principles to assist in evaluating the process and outcomes of PR" [14]. As the principles of the PR approach are used in a wide variety of research and contexts, there is a need to explore the following questions: What are the key processes of PR, and what are the practical ways to achieve equitable partnerships? What processes support the constant negotiation between all team members over research goals and objectives, partner roles and responsibilities, and decision-making procedures, together with balancing knowledge generation with the need for action? Therefore, the purpose of this study is to build on these recommendations [14] and use the 2008 Reliability Tested Guidelines to undertake a critical literature review of PR projects to synthesize key practical strategies that foster a successful PR process, resulting in continuous discussions between partners that will in turn facilitate knowledge translation activities throughout the research [15].

Methods

Data Sources. A critical review goes beyond the description of primary studies and includes an empirical analysis for exploring new ideas [16]. While critical reviews are criticized for their nonsystematic approach, "the 'critical' component of this type of review is key to its value" [16]. To begin, a multidisciplinary bibliographic database (ISI Web of Science) was searched using the phrase "participatory research" for all articles from 1995 (when the initial PR guidelines were published) until October 2009 (the year after the Reliability Tested Guidelines were published). This search yielded 1866 publications. These were then imported into CiteSpace, a bibliometric network analysis tool (http://cluster.cis.drexel.edu/~cchen/citespace/), which generated a map of author-citation frequency. Results contained foundational PR scholars such as Paulo Freire and theoreticians including Peter Reason, as well as those with practical PR experience. Our selection tool eliminated theoretical/foundational authors and retained only authors who have conducted practical PR studies. For this review we needed to limit the size of the study and chose to retain only the top four leading PR practitioners according to their CiteSpace centrality scores: Barbara A. Israel, Meredith Minkler, Nina Wallerstein, and Ann C. Macaulay (Figure 1).
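The author-selection step above ranks cited authors by their centrality in a citation network. As a minimal sketch of that idea (not the CiteSpace implementation itself), the following Python fragment builds an author co-citation graph from hypothetical bibliographic records and ranks authors by betweenness centrality, the centrality measure CiteSpace reports; the record structure and the toy data are illustrative assumptions.

import networkx as nx
from collections import Counter
from itertools import combinations

# Hypothetical stand-ins for the cited-author lists that CiteSpace
# extracts from ISI Web of Science records; names are illustrative.
records = [
    ["Israel BA", "Minkler M", "Freire P"],
    ["Wallerstein N", "Israel BA", "Macaulay AC"],
    ["Minkler M", "Wallerstein N", "Israel BA"],
]

# Author co-citation network: an edge is weighted by how often two
# authors are cited together in the same record.
cocitations = Counter()
for cited in records:
    for a, b in combinations(sorted(set(cited)), 2):
        cocitations[(a, b)] += 1

G = nx.Graph()
for (a, b), weight in cocitations.items():
    G.add_edge(a, b, weight=weight)

# Rank authors by betweenness centrality and keep the top four,
# mirroring the practitioner-selection step described above.
centrality = nx.betweenness_centrality(G)
top_four = sorted(centrality, key=centrality.get, reverse=True)[:4]
print(top_four)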
Next, a librarian-mediated search was conducted for all published materials by these four authors in PubMed, Embase, ISI Web of Science, PsychInfo, and CAB (Ovid database) for abstracts between January 1995 and October 2009. In addition, we also reviewed chapters from books edited by these authors [17][18][19]. Duplicates were removed, for a total of 151 records (title, authors, source, and abstract).

[Figure 2. Flow diagram of the staged record selection: Step I (n = 151), Step II (n = 140), Step III (n = 72), Step IV, and Step V ("Does the text contain any useful excerpts?"; yes = retained texts, no = excluded).]

Study Selection. A staged selection process was then completed to limit the sample using eligibility criteria. First, records were excluded when one of the abovementioned PR leaders was neither one of the first three authors nor the last author (n = 11), to ensure that the leader had substantive input into the work. The second step excluded records that were not PR related (n = 68). The third step excluded records that did not contain any description of the PR process (n = 9) or records that contained only the theory of PR (n = 7). In the final step, records were excluded when they did not contain useful excerpts (n = 2), leaving 54 retained records (Figure 2).
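The staged eligibility criteria above amount to a sequence of boolean filters over the bibliographic records. The sketch below expresses that sequence in Python as an illustration only; the record fields and the example record are hypothetical, not the authors' actual data structures.

LEADERS = {"Israel BA", "Minkler M", "Wallerstein N", "Macaulay AC"}

def passes_selection(record):
    # Step II: a PR leader must be among the first three authors or last.
    authors = record["authors"]
    if not (set(authors[:3]) | {authors[-1]}) & LEADERS:
        return False
    # Step III: the record must be PR related.
    if not record["is_pr_related"]:
        return False
    # Step IV: it must describe the PR process, not only PR theory.
    if not record["describes_pr_process"] or record["theory_only"]:
        return False
    # Step V: it must contain useful excerpts for extraction.
    return record["has_useful_excerpts"]

example = {
    "authors": ["Israel BA", "Schulz AJ", "Parker EA", "Becker AB"],
    "is_pr_related": True,
    "describes_pr_process": True,
    "theory_only": False,
    "has_useful_excerpts": True,
}
print(passes_selection(example))  # True -> this record would be retained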
Data Extraction. We conducted a deductive qualitative thematic analysis to extract useful data from our sample of documents [20]. For each of the 54 retained documents, relevant excerpts were selected, compiled in a Word document, and organized by theme. These themes were derived from the partnership-related dimensions of the Reliability Tested Guidelines for Assessing Participatory Research Projects [13]. These guidelines contain 25 questions, 21 of which target the PR partnership process, making them very suitable to serve as themes for data extraction and analysis. These questions informed our coding scheme to identify PR process strategies. Using a coding grid based upon these questions (Table 1), partnership process-specific excerpts from the retained documents were extracted for analysis. Each retained document was reviewed in its entirety, and all excerpts in those documents that directly answered one of the questions were extracted and compiled in a matrix of "data by theme" for further analysis. Data coding was nonexclusive, and each excerpt could be coded to one or more questions on the coding grid.

Data Analysis. Data abstraction and coding were undertaken by one author (David Parry) using nonspecialized software (MS Word), which is appropriate for a deductive qualitative data analysis using a limited number of themes (codes). Each excerpt extracted from the retained documents was assigned to one or more themes, which was verified by a second author (Pierre Pluye or Jon Salsberg). Disagreements were discussed for possible resolution, and any that could not be resolved were adjudicated by a third party (Jon Salsberg for Pierre Pluye and vice versa). Using a constant comparative technique, themes were collapsed into overarching categories. These categories were generated through initial and focused coding techniques by comparing and contrasting text segments and sorting codes into conceptually meaningful units [21]. For example, subthemes such as "advisory committee," "steering committee," and "planning committee" were all grouped under the main theme "committee."

Results

Table 2 presents the references of the 54 documents that were retained for analysis, organized by the four main authors. From these documents, 186 excerpts were assigned or coded to one or more than one theme. Of those, there was agreement between the reviewers for 180 (97%) of the excerpts. For the six remaining excerpts where there was disagreement, consensus was reached on five, and final judgment was sought from a third author (Jon Salsberg) for one excerpt.
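The double-coding check reported above (agreement on 180 of 186 excerpts, 97%) reduces to a simple percent-agreement computation plus a list of disagreements to route to consensus discussion and, failing that, third-author adjudication. A minimal Python sketch of that bookkeeping follows; the theme codes and example data are hypothetical.

def agreement_report(coder_a, coder_b):
    # Percent agreement across double-coded excerpts, plus the indices of
    # disagreements to take to consensus discussion (and, if unresolved,
    # to third-author adjudication).
    assert len(coder_a) == len(coder_b)
    disagree = [i for i, (a, b) in enumerate(zip(coder_a, coder_b)) if a != b]
    pct = 100.0 * (len(coder_a) - len(disagree)) / len(coder_a)
    return pct, disagree

# Hypothetical theme codes assigned to five excerpts by two reviewers.
coder_a = ["committee", "agreement", "facilitation", "hiring", "meetings"]
coder_b = ["committee", "agreement", "facilitation", "meetings", "meetings"]

pct, to_discuss = agreement_report(coder_a, coder_b)
print(f"{pct:.0f}% agreement; excerpts {to_discuss} go to consensus discussion")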
The five most frequently mentioned strategies for fostering a researcher-community partnership are listed (unranked) and described in Table 3: forming an advisory board, developing a research agreement, using group facilitation techniques, hiring from the community, and having frequent meetings. The remaining, less frequently mentioned strategies are summarized in Table 4; we felt these could not be collapsed into categories without losing individual substance. However, we consider these examples as also being extremely important for researchers to put into practice, including the need for researchers to make active efforts to reach out and learn about their partners and their communities; facilitating engagement by being flexible and working around the schedules of the partners; understanding community priorities and culture; establishing clear lines of communication; speaking frankly and agreeing to disagree; building community capacity; supporting partners' interpretation of data; publishing results in the community; including nonacademic partners as copresenters and coauthors; working with community partners to build resources based on results; using the results to influence policy; and regularly evaluating the partnerships.

Discussion

This is a first step in a larger research agenda to identify variation in PR practices across contexts and partnership stages that could in the future be drawn on to answer the question of the efficacy of PR practices. As this review was exploratory and not systematic, we decided to include a purposeful sample of studies. CiteSpace helped us to elicit a criterion for purposeful sampling. The rationale was that the most cited papers for our review played a role similar to "key informants" in primary research. Given that this study had limited resources, we focused on the top four authors (the most popular "key information resources"). From the four authors identified, committees such as steering committees and advisory committees are the most frequently mentioned strategy, as a way to engage key stakeholders around the table from the beginning, including patients, practitioners, service managers, communities and the public, and policy makers. The second most frequently mentioned strategy is drafting research agreements, which some recommend should be done early in the partnership in order to avoid misunderstandings, and because the process of developing written agreements or partnership principles is in itself a partnership-building process [72,74]. However, the authors of this paper are also aware of teams who have not wanted a written agreement, either for cultural reasons where a verbal agreement is deemed very final, or because it could be construed to imply a lack of trust between the researchers and the partners. Our review results show that group facilitation is often suggested as a way to offer equal opportunity for partners to participate in discussions and to afford more reserved partners the chance to voice their opinion. Facilitation includes informal group discussions and formal techniques, with many techniques borrowed from management. Hiring staff from the community increases the credibility of the research, adds cultural relevance, builds capacity, promotes empowerment, provides work, brings in finances, and integrates knowledge translation throughout the process. Finally, frequent meetings are essential to maintain open communication as research evolves and to manage different expectations. Table 4 shows many other additional practical strategies and supports the importance of meeting the needs of various partnerships in a wide range of contexts. It also emphasises the need for researchers to learn more about community issues and to fully engage community members throughout the research process, including interpretation of data and dissemination of results, both internally within the community where the research was undertaken and externally. To our knowledge, this is the first time the Reliability Tested Guidelines have been used to undertake a critical literature review to document PR partnership processes. The strengths of this review include (i) using a bibliometric methodology to identify leading PR practitioners, (ii) a comprehensive identification of PR studies conducted by these authors, (iii) a transparent selection of relevant documents describing PR partnership processes, and (iv) a reproducible deductive qualitative thematic analysis using the Reliability Tested Guidelines as the basis for a coding scheme to analyze relevant excerpts from the retained documents. This critical review has also identified that the four authors reviewed utilise these processes, and it has reestablished the Reliability Tested Guidelines as reliable criteria by which to measure partnerships.

Table 1 (excerpt): Data abstraction questions and rationale based on the Mercer and colleagues [13] guidelines. Project management: although the question is about an explicit agreement, the main theme is project management. This is a key question for the PR process. N.B. this question was modified halfway into data collection; the decision to modify was mitigated by the fact that we had already captured whether written agreements existed or not. Intended users learning about research methods: reference to "opportunity" is removed because it is irrelevant here, as it is not being applied as an evaluation tool.

Table 3 (excerpt): Key strategies for fostering researcher-community partnerships, with examples.
(2) Development of research agreements: (i) before the research begins, clearly spell out researcher and partner roles and responsibilities, outline how decisions will be made (e.g., by consensus or by voting), and set out what to do if conflict arises; (ii) research agreements may also include plans for data ownership and control, interpretation of data, and procedures for resolving disagreement over research results; (iii) developing agreements is seen as a trust-building exercise.
(3) Use of group facilitation techniques: (i) can be both a formal and an informal process to ensure meaningful involvement and participation of partners; (ii) formal facilitation includes focus groups, workshops, and nominal group techniques; (iii) informal techniques include circulating agendas ahead of time, small group work, and one-on-one informal discussions.
(4) Hiring staff from the community of study: (i) hiring local persons as project staff recognizes community members' abilities to establish good relationships with individual participants for recruitment and ongoing data collection; (ii) projects hire well-respected community members as "community champions," field coordinators, intervention staff, interviewers, and group cofacilitators, and for data collection and analysis.
(5) Frequent communication: (i) communication between partners through regular group meetings to keep all partners updated on progress and changes in procedures, and as a way of discussing concerns and challenges; (ii) other methods include telephone calls to partners who missed meetings to bring them up-to-date, and prompt circulation of meeting minutes and newsletters.

It is noteworthy that four or fewer excerpts were identified for the following Reliability Tested Guidelines dimensions: mutual learning (Q8), conflict resolution over interpretation of results (Q14), and data ownership and sharing (Q15). This is surprising considering that mutual learning is a fundamental PR principle and the latter two are key issues to be resolved for any PR project. More literature on these topics would be very useful; for example, Jagosh and colleagues found that successful conflict resolution led to further strengthening of the teams [75]. This review also highlights gaps that the Reliability Tested Guidelines do not address. These include (i) the issues of power dynamics and recommendations for ways of decentralizing power and decision-making, either through subcommittees or through a high level of local control; (ii) ways to address issues of equity of resources, that is, equitable sharing of resources across community organizations and researchers, or providing grants or other funding to participating community-based organizations; and (iii) the common problem of adding or replacing new members throughout the project, which causes shifting group dynamics. We also recognize that other human aspects of partnerships have not been addressed, including the time needed to consolidate partnerships, issues of power differences, personality clashes, and institutional cultures. There is much diversity in the strategies discussed by the four PR leaders. This is particularly encouraging for three reasons. First, it suggests that PR is highly adaptable to many contexts and settings, reflecting the iterative nature of this research approach. As PR is rapidly expanding beyond its earlier application in health promotion with marginalized communities, this adaptability will become increasingly important for partnerships with new types of communities, including communities of practice and organizations such as practice-based research networks [76], and also for partnering with patients and policy makers. Second, research teams can find many strategies in the results to draw upon when starting out. A given strategy does not always work for a given context, and the whole team can discuss potential alternative strategies. Third, the diversity of results reinforces the notion that the PR process is an active, iterative endeavour, requiring energy and flexibility from all partners. The findings are supported by other authors, including a critical review by Cargo and Mercer [3], and have been incorporated by Wallerstein and Duran [10] in a conceptual logic model of community-based participatory research.
For those embarking on PR, there are recommendations and training curricula from individual teams [76-78] and organisations [74, 79-81] on how to build PR teams and maintain equitable partnerships throughout the research process, including dissemination of the results. There are also an increasing number of publications on the experiences of both academic [82] and community [83,84] team members with participatory research, and on documented common characteristics of successful community-institutional partnerships [85].

(i) The community can be involved in all phases of research. (ii) Ensure active involvement of community members in all study tasks (e.g., reviewing all study documents to ensure they are in an understandable language). (iii) Solicit suggestions from community partners through focus groups or meetings (e.g., on data collection approaches). (iv) Hire and train lay community members or utilize an advisory board as field coordinators, interviewers, data collectors, intervention staff, and analysts (e.g., identification of variables, selection of measures, and questionnaire development).
(viii) Community training in research. (i) Provide training to the community about health issues. (ii) Use training sessions to get the community perspective on these issues. (iii) Have community members critique preexisting research instruments as a way of learning about developing questionnaires and for researchers to learn about the community's perspective. (iv) Teach the community public health and research skills. (v) Conduct community workshops on research methods. (vi) Use focus groups to engage community members in discussions about research in their community.
(ix) Engage in early community interactions while developing the project. (i) Conduct in-depth interviews with community members and other key informants. (ii) Go on "wind-shield" tours, driving around the community. (iii) Involve the community in developing context-specific models. (iv) Make use of qualitative data. (v) Use theoretical, convenience, and open sampling.
(x) Advisory committee. (i) Set up a subcommittee of the advisory committee to review all partnership evaluation results and make recommendations to the overall advisory committee. (ii) The advisory committee can facilitate data analysis and interpret results. (iii) Present and discuss results with community partners to facilitate interpretation. (iv) Researchers and community members can analyze data independently and present their interpretations. (v) Engage in open, interactive analysis with community partners. (vi) Adopt a research agreement at the beginning outlining community involvement in results interpretation.
(xi) Action planning. (i) Establish action groups of community partners to develop intervention strategies and plan policy initiatives. (ii) Work with community members in deciding upon policy initiatives and action plans. (iii) Make instrumental use of research results to lobby government. (iv) Work with community partners to develop community resources based upon study results. (v) Hold meetings with community partners to discuss other nonstudy-related, important issues.
(xii) Interpretation, data ownership, and dissemination. (i) Community partners can communicate their own interpretation of study data along with researcher study publications. (ii) Adopt a no-veto rule, meaning that neither researchers nor community partners can block a publication of results. (iii) Spell out this process in a written research agreement before the issue arises. (iv) Researchers can be guardians of the data during the project but transfer data control to the community after the project ends. (v) The community's obligation is to allow researchers the right to ongoing data analysis. (vi) Develop a dissemination strategy outlining community involvement. (vii) Include nonacademic partners as coauthors/copresenters on manuscripts/abstracts. (viii) Disseminate results through local organizations, newspapers, media, and community-based practitioners. (ix) Jointly publish a community newsletter with results included. (x) Make use of local cultural mechanisms, such as street theatre. (xi) Circulate a summary report to community members and/or hold feedback/discussion sessions. (xii) Organize debriefing sessions with a luncheon or gala celebration. (xiii) Discuss publication drafts with the community before submission.

While this review provided an innovative synthesis of key PR strategies for researchers using a PR approach, a limitation is that it is based on only four authors' publications. Because the review included book chapters, which are not limited to the word count restrictions of journal articles, we may have captured more details than from journal articles alone. There are no standard recommendations for reporting on PR; from this review we recommend that journal editors require the key stages from the Reliability Tested Guidelines to be included, which would facilitate future synthesis. Our results consist of strategies that could be tested and explored in greater detail through a larger systematic literature review, which may include more detailed descriptions of applied strategies for planning and sustaining PR partnerships. Such a systematic review might be able to rank these strategies in terms of their effectiveness in different contexts, which would first require further basic research into the efficacy of particular participatory strategies and their effectiveness in generating and translating new knowledge into action. As PR is becoming more accepted, this new evidence is slowly emerging within the fields of participatory research as well as in implementation and translational science.

Conclusion. This review is the first to adapt the Reliability Tested Guidelines for Assessing Participatory Research Projects to identify leading processes that support PR partnerships. Five key practical strategies to foster a successful PR process are identified, which in turn integrate knowledge translation throughout the research process. Some of these results have already been incorporated into the Canadian Institutes for Health Research (CIHR) Guide to Researcher and Knowledge-User Collaboration in Health Research [81]. One colleague remarked, "I will print these 5 strategies in big color letters and pin them in front of my desk. No one can remember 25 questions, while anybody can handle 5 ideas per day." The guidelines, originally intended to allow funders to assess partnership engagement in grant applications, proved effective at identifying and evaluating the same engagement strategies as reported by completed research projects. Adapting these guidelines for our use identified gaps where the tool was unable to assess the fundamental PR elements of power dynamics, equity of resources, and member turnover. Our resulting template serves as a new tool for research teams to apply to measure their own partnerships.
Provider Adherence to National Guidelines for Managing Hypertension in African Americans Purpose. To evaluate provider adherence to national guidelines for the treatment of hypertension in African Americans. Design. A descriptive, preexperimental, quantitative method. Methods. Electronic medical records were reviewed and data were obtained from 62 charts. Clinical data collected included blood pressure readings, medications prescribed, laboratory studies, lifestyle modification, referral to hypertension specialist, and follow-up care. Findings. Overall provider adherence was 75%. Weight loss, sodium restriction, and physical activity recommendations were documented on 82.3% of patients. DASH diet and alcohol consumption were documented in 6.5% of participants. Follow-up was documented in 96.6% of the patients with controlled blood pressure and 9.1% in patients with uncontrolled blood pressure. Adherence in prescribing ACEIs in patients with a comorbidity of DM was documented in 70% of participants. Microalbumin levels were ordered in 15.2% of participants. Laboratory adherence prior to prescribing medications was documented in 0% of the patients and biannual routine labs were documented in 65% of participants. Conclusion. Provider adherence overall was moderate. Despite moderate provider adherence, BP outcomes and provider adherence were not related. Contributing factors that may explain this lack of correlation include patient barriers such as nonadherence to medication and lifestyle modification recommendations and lack of adequate follow-up. Further research is warranted. Introduction Hypertension (HTN) is a medical condition that is characterized by high or uncontrolled blood pressure. Inadequate control of HTN can lead to more serious vascular conditions affecting the major blood vessels in the heart, brain, and body. Additionally, HTN and diabetes mellitus (DM) frequently coexist, which further increases the risk of developing vascular complications. Vascular complications are a group of disorders that affects the heart and blood vessels. Hypertension is a major risk factor for vascular disease including heart attacks and strokes [1]. In 2008, an estimated 17.3 million people died from vascular complications. Of those 17.3 million vascular-associated deaths, 6.2 million were due to strokes [2]. It is predicted that, by the year 2030, an estimated 23.3 million will die from stroke and heart disease [2]. Addressing risk factors that contribute to HTN may help prevent vascular complications. According to the World Health Organization (WHO) [3], complications of HTN such as strokes account for 9.4 million of the astounding 17 million vascular-associated deaths. Another consideration is the financial burden of HTN; according to the Centers for Disease Control and Prevention (CDC) [4], the annual cost of HTN treatment was 131 billion dollars. The physical and financial burdens of HTN are not unique to any one group of individuals. However, it has been well documented that African Americans (AAs) have a disproportionate burden of morbidity and mortality compared to Caucasians [1]. Data collected from 2008 suggest that non-Hispanic blacks accounted for 31.7% of the 59.4 million people with HTN, whereas non-Hispanic whites accounted for only 26.8% [2]. Despite research and interventions to decrease both the physical and financial burdens of uncontrolled HTN, specifically in the AA population, HTN remains a national problem [5]. 
Numerous interventions have been documented to improve control of HTN in AAs. The aims of such interventions have been to reduce the barriers to better control. Provider-centered barriers are the focus of this study and include limited patient-provider communication regarding lifestyle changes, lack of adherence to established guidelines for HTN management, and resistance to change. In addition, systems barriers were assessed and include access to care, medication costs, and lack of healthcare coverage [6]. Racial disparities related to geographical areas in healthcare lead to disproportionate mortality and morbidity in rural areas. Patients often seek medical attention for chronic conditions from their primary care providers. Geographic location of this population and clinic locations can influence patient outcomes [7]. Rurality adds to the burden of HTN in AAs. Healthcare disparities such as ethnicity, poverty, and access to care are all associated with rurality and contribute to the higher incidence of HTN in AAs. For example, barriers to healthcare in rural communities include transportation, lack of health insurance, and lack of healthcare facilities and providers, all of which contribute to limited access to healthcare. As a result, rural communities have a higher incidence of chronic diseases such as HTN [7] and have poorer outcomes [8]. As previously mentioned, a major problem for rural communities is access to healthcare. Improving access to healthcare for rural America is a priority. The National Rural Health Association [9] has developed a timeline for the Affordable Care Act, which is designed to address the issues pertaining to access to healthcare. Provisions on the timeline include workforce improvement, payment reimbursement, and electronic health record requirements, to name a few. Student loan repayment programs for those working in rural or underserved areas and improving Medicare and Medicaid reimbursement in rural practices are some specific provisions that have been implemented to improve access to healthcare in rural communities [9]. Theoretical Framework. The theoretical framework of Avedis Donabedian was used to guide this research and to assess the quality of care provided in healthcare. The three components that form the foundation of this theory are (1) structure of care, (2) process of care, and (3) outcomes. The concept is grounded in the principle that healthcare outcomes result from the medical care provided by medical professionals [10]. Donabedian (as cited in McDonald et al. [11]) describes structure of care as any process that relates to the organizational and physical aspects of care settings. A few specific examples are facilities, equipment, and the operational and financial processes supporting medical care. The second component of this framework is process of care. Process of care depends upon the structures of care to supply the resources and methods necessary for participants to carry out patient care activities. Patient-provider communication, practice habits, and care management are all examples of process of care. Further, the goal of process of care is to improve patient health by promoting recovery, patient survival, and even patient satisfaction [10]. The final concept of this model, outcomes, is simply the patient outcomes based on medical health after the application of the two previous components [10].
Figure 1 depicts the components of Donabedian's theory and how it applies to this study. Study Design. A retrospective review of the EMR was conducted to identify hypertensive AA patients in a rural clinic who were seen from July 1, 2014, to August 31, 2014. A descriptive, preexperimental, quantitative method was used to evaluate the degree of provider adherence to national HTN guidelines in AAs living in a rural community. Inclusion criteria for the patients included (Figure 2) (a) age 20 to 80 years, (b) AAs with a diagnosis of HTN, and (c) receiving antihypertensive medications. Exclusion criteria included (a) specific end organ damage (i.e., CKD, stroke, cardiomyopathy, or myocardial infarction), (b) age younger than 20 or greater than 80 years, (c) no office visits during research dates or office visits for reasons other than HTN, (d) no established relationship with a single primary care provider (PCP), (e) diagnosis of medical nonadherence, (f) race other than AA, and (g) deceased patients. A sample of 62 participants met the inclusion criteria. The practice serves AAs [12] and accepts Medicare, Medicaid, private insurance, and the indigent. It serves pediatric to geriatric patients. There are four primary care providers, one cardiologist, two pulmonologists, one neurologist, and one podiatrist. Primary care providers were the focus of this study. The study aims to assess healthcare provider adherence to JNC hypertension guidelines in AAs. Data Collection and Procedures. The primary source data were selected from the EMR Centricity developed by General Electric Healthcare. An EMR is a digital or electronic version of a paper chart that contains the patient's medical history. A report was populated using the following criteria: (1) the practice site location, (2) race specified as black or African American, (3) birthdate on or after 01/01/1949 but before 01/01/1995, (4) appointment date on or after 07/01/2014 but before 09/01/2014, and (5) active International Classification of Diseases, Ninth Revision (ICD-9) codes containing 401 for hypertension. The EMR was reviewed to identify onset of HTN if feasible, provider selection of antihypertensive drugs for initial treatment, and additional drug choices. HTN was defined, in accordance with JNC 7, as a blood pressure ≥140/90 mmHg in the general population and >130/80 mmHg in hypertensive individuals with a comorbidity of DM, or as current use of antihypertensive medications. Patients who met the inclusion criteria were selected by convenience sampling. A list of those patients was composed and each was assigned a research identifier. Potential participants were consecutively recruited, and a sample size of 62 participants met the inclusion criteria. Demographic variables assessed included age, gender, marital status, and insurance coverage. Other variables considered include the following antihypertensive drug classes: (a) thiazide diuretics, (b) angiotensin-converting-enzyme inhibitors (ACEIs), (c) angiotensin II receptor blockers (ARBs), and (d) calcium channel blockers (CCBs). Evaluation of monotherapy and combination therapy was also performed. Measures. JNC 7 (as cited in Chobanian et al. [13]) describes HTN as a systolic blood pressure ≥140 mmHg or a diastolic blood pressure of ≥90 mmHg in the general population, including AAs. If the patient has a comorbidity such as DM, a systolic blood pressure >130 mmHg or a diastolic blood pressure of >80 mmHg is considered suboptimal in the treatment of HTN.
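To make these operational definitions concrete, here is a minimal sketch in Python of how a chart abstractor might encode them; the function names, the code itself, and the example reading are illustrative assumptions, not part of the study protocol:

```python
# Illustrative sketch (not from the study): encoding the BP thresholds
# quoted above. Names and the sample reading are hypothetical.

def htn_stage(systolic: int, diastolic: int) -> str:
    """Classify a reading into the JNC 7 stages described in this section."""
    if systolic >= 160 or diastolic >= 100:
        return "stage 2"
    if systolic >= 140 or diastolic >= 90:
        return "stage 1"
    return "below stage 1"

def at_goal(systolic: int, diastolic: int, has_dm: bool) -> bool:
    """BP goal: <140/90 in general; >130/80 is suboptimal with comorbid DM."""
    if has_dm:
        return systolic <= 130 and diastolic <= 80
    return systolic < 140 and diastolic < 90

print(htn_stage(152, 88), at_goal(152, 88, has_dm=False))  # stage 1, False
```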
The coexistence of HTN and DM further increases the risk of vascular complications such as strokes and renal disease, which is why the optimal blood pressure goal is lower [13]. Diagnostic measurements for the classification of HTN were performed based on JNC 7 guidelines (Table 1). According to JNC 7, two consecutive readings taken in contralateral arms at least 5 minutes apart while sitting were used to classify HTN. Blood pressures were manually obtained by nurses via auscultation, using an appropriately sized cuff with a sphygmomanometer. Patients were in a seated position with feet on the floor and the arm positioned at the level of the heart, and had been seated for at least 5 minutes. The providers repeated BPs for suboptimal readings after at least 5 minutes. JNC 7 describes the stages of HTN. A normal blood pressure is a systolic blood pressure of <140 mmHg and a diastolic blood pressure of <90 mmHg. Stage 1 is a systolic blood pressure reading of 140 to 159 mmHg or a diastolic BP reading of 90 to 99 mmHg. Stage 2 is classified as a systolic blood pressure ≥160 mmHg or a diastolic reading of ≥100 mmHg. In the general AA population, initial monotherapy with diuretics, specifically thiazide diuretics (TDs), or with CCBs should be used for stage 1 HTN, or a diuretic in combination with other drug classes should be used for stage 2 HTN. JNC 7 recommends specialty referral if blood pressure is not controlled after maximizing three medication classes, one of them being a TD. Lastly, for those with compelling indications, such as a comorbidity of DM, ACEIs are recommended to reduce strokes and other vascular complications [13]. With regard to follow-up, JNC 7 recommends a monthly follow-up office visit if blood pressure is not at goal and a follow-up office visit every 3 to 6 months if BP is at goal. Laboratory values for potassium and creatinine should be obtained 1 to 2 times annually, and patients with a comorbidity of DM should have their urine microalbumin levels measured at least annually. Patients newly diagnosed with HTN should have a urinalysis, blood glucose, hematocrit, potassium, creatinine, calcium, and lipid profile drawn prior to beginning pharmacological treatment. JNC 7 recommends lifestyle modification education; better outcomes have been found when lifestyle modification is incorporated into the plan of care. The following are the areas recommended for lifestyle modification: (a) weight loss, (b) following the Dietary Approaches to Stop Hypertension (DASH) diet, which consists of a diet rich in fruits and vegetables, low-fat dairy products, and reduced intake of saturated and total fat, (c) adhering to sodium restrictions, (d) regular physical activity, and (e) limiting alcohol consumption. Data Analysis. Statistical analyses were performed on the outcomes of blood pressure control in participants who were prescribed antihypertensive medications based on JNC 7 guidelines compared to those who were not, and also on blood pressures that were at goal versus those that were not. Additionally, provider adherence to the guidelines was measured based on adherence to medication choice recommendations, documented lifestyle modification recommendations, laboratory studies, and follow-up for patients with HTN and with HTN and a comorbidity of DM. Descriptive analysis was conducted using crosstabs, frequencies, and means comparisons, reported as percentages.
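As an illustration of the crosstab and chi-square analyses detailed below, here is a minimal, self-contained sketch assuming Python with NumPy and SciPy; the contingency counts are placeholders, not the study data:

```python
# Sketch of a 2x2 crosstab and chi-square test of independence:
# rows = on a JNC 7-recommended TD/CCB (yes/no), columns = BP at goal
# (yes/no). Counts below are invented placeholders.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[26, 21],
                  [ 3, 12]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
```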
Crosstabs were used to determine the number of times the recommended thiazide diuretic was used in combination with an ACEI or ARB. Frequencies were used to identify the percentage of patients not prescribed a TD or CCB as monotherapy. Means comparisons were used to compare differences in BP outcomes between patients prescribed ACEIs and those prescribed TDs as monotherapy. Additionally, a chi-square analysis was performed to determine whether there is a relationship between provider adherence and blood pressure outcomes. Providers. Physicians accounted for 64.5% (n = 40) of the providers in this study. Nurse practitioners (NPs) accounted for 35.5% (n = 22). Of the 29 patients with blood pressure at goal, a physician was the provider in 75.9% (n = 22) of the office visits, while NPs provided care in 24.1% (n = 7) of the visits. The demographic characteristics of the 62 participants are described in Table 2. Of the 62 patients studied, 41.9% were male and 58.1% were female. Patient age was divided into 2 categories, less than 65 years and 65 years and older, with 50% of patients in each age group. The mean age was 62.8 years. The most frequent stage of uncontrolled HTN was stage 1, accounting for 84.4% of the 32 patients. Stage 2 HTN was detected in 15.6% of the patients. There were 16 nondiabetic patients whose BP was not at goal. Of those 16, 81% had stage 1 HTN and the remaining 19% had stage 2 HTN. Of the eight patients on monotherapy, 37.5% (n = 3) met their blood pressure goal despite not being on a TD or CCB. Of the 54 patients on combination therapy, 87% (n = 47) were on a TD or CCB as recommended by JNC 7. Of those 47 patients, 55.3% (n = 26) achieved goal blood pressures. A chi-square test was used to determine if there is a relationship between being prescribed the JNC 7 medication regimen and blood pressure outcomes. There was no significant relation between taking the recommended TD or CCB and blood pressure control (χ² = 0.0; p = 0.99). Lifestyle Modifications. The categories examined under lifestyle modifications included the DASH diet, weight loss, sodium restrictions, physical activity (PA), and alcohol consumption (see Figure 3). Only 6.5% (n = 4) of the 62 patients had documentation of provider recommendations for the DASH diet and alcohol consumption. The DASH diet includes recommendations for limiting alcohol consumption. Weight loss, sodium restriction, and PA recommendations were documented in 82.3% (n = 51) of the patients. Follow-Up Care. JNC 7 recommends follow-up every 3 to 6 months if BP is controlled. Providers adhered to the follow-up recommendations 96.6% (n = 28) of the time in the 29 patients with controlled blood pressure. In the remaining 33 patients who required monthly follow-up due to uncontrolled blood pressure, providers were only 9.1% (n = 3) adherent to the recommendations. Discussion. Ongoing, organized activities that result in measurable improvement in healthcare services and targeted patient outcomes have been described as QI [14]. The way care is delivered is related to quality. The US Department of Health and Human Services [14] identified the 4 principles of quality improvement (QI) as (1) QI work as systems and processes, (2) focus on patients, (3) focus on being part of the team, and (4) focus on use of data. QI work as systems and processes refers to resources and activities that are carried out and evaluated simultaneously to improve quality of care or outcomes [14].
This is modeled after Donabedian's framework for quality improvement. This study focused on the systems or structural components of systems barriers, such as the rural setting in which the study was conducted, EMR structure, EMR utilization, providers, and policy. Activities assessed included provider barriers, access to JNC 7 guidelines, provider adherence to those guidelines, recommendation of lifestyle changes, laboratory assessment, and follow-up. Evaluation of the first 2 components is necessary to produce or improve patient outcomes. Outcome goals include decreasing the prevalence of uncontrolled HTN in AAs, decreasing costs associated with HTN, increasing quality of life, and equity. This study was conducted to assess what is currently being done in this rural primary care setting to address the increased prevalence and mortality of HTN in AAs. Using the methodical framework of Donabedian, both quantitative (frequencies) and qualitative (descriptive) data were collected and analyzed to assess the current system and to identify areas for improvement. The practice guidelines of JNC 7 were used as performance measures for comparison. The JNC conducts and analyzes evidence-based studies periodically. Subsequently, the JNC formulates recommendations based on those findings. The National Heart, Lung, and Blood Institute (NHLBI) traditionally has endorsed previous versions. Controversy surrounding the Eighth Report of the Joint National Committee (JNC 8) has led to the NHLBI not endorsing JNC 8 [15]. Other federal organizations have also declined to endorse the new recommendations. The controversy surrounds the increased BP goal of <150/90 in patients aged >60 and a goal of <140/90 for those aged 18 to 60, including those having comorbidities of DM and CKD. JNC 8 guidelines were avoided for this study due to this controversy and their relatively recent release. There are several main findings of this study. First, the first-line drug choice as monotherapy in the treatment of HTN in AAs should be TDs or CCBs, as recommended by JNC 7. While there were only eight patients receiving monotherapy, none of them were on TDs or CCBs, which indicates 100% nonadherence to the guidelines regarding monotherapy. In fact, the majority of the patients on monotherapy were on ACEIs, while the remainder were on ARBs. However, studies have found that ACEIs and ARBs are less effective in the AA population. These findings are consistent with the studies reviewed in the literature review for this study. One such study compared the effectiveness of ACEIs as monotherapy between AAs and Whites [16]. Whites had a greater systolic (mean difference of 4.64) and diastolic (mean difference of 2.82) reduction in BP compared to AAs. In contrast, providers were more consistent with the guideline recommendations in AAs on combination drug therapy. Provider adherence was documented in 87% of the patients receiving combination therapy. This finding is consistent with previous studies. One of the goals of a previous study was to determine provider adherence to national guidelines, including a TD in combination therapy, in AAs of Nigerian descent [17]. The majority (88.8%) of the study sample was prescribed combination therapy inclusive of a TD. In addition, combination therapy was more effective than monotherapy in reducing both systolic and diastolic BP (32.64 mmHg compared to 15.43 mmHg, and 18.56 mmHg compared to 6.96 mmHg, respectively).
Less than half of the 62 patients in this study had BP readings at the goal recommended by JNC 7, despite moderate provider adherence. Similar findings were reported in a previous study, in which provider adherence to the guidelines overall was 76%; mean BP values decreased, but not significantly, suggesting no correlation between provider adherence and attaining BP goals [18]. In this study, medication adjustments were not made in 18% of the 33.2% that required adjustments. This could be a contributing factor to patients not meeting their BP goal. However, adjusting medications is not always associated with attaining BP goals [18]. Additionally, provider adherence in prescribing ACEIs to patients with a comorbidity of DM was seen in 70% of the population. Similar results were found in a previous study, in which provider adherence to prescribing an ACEI or ARB in patients with comorbidities such as DM was seen in 88% of the population [18]. Studies have shown that the use of ACEIs in this population decreases mortality and morbidity by decreasing end organ damage and cardiovascular incidents. Lifestyle modification, as an adjunct to pharmacologic therapy, has been associated with better BP control. Provider adherence to alcohol consumption and DASH diet recommendations was poorly documented: detailed recommendations for the DASH diet and alcohol were documented in only 4 of the 62 EMRs. Adherence was high for the recommendations on physical activity, weight loss, and sodium restriction. The smart plan includes these 3 recommendations and was documented in 51 of the 62 patients. Patient adherence with adequate office visit follow-up has been known to yield better BP control. JNC 7 recommends an office visit follow-up every 3 to 6 months in patients with BP at goal and monthly visits for those who are not at goal. Provider adherence was evident in the 3-to-6-month follow-up population (97%). Monthly follow-up for those with uncontrolled BP was poorly represented in this study. Uncontrolled BP can lead to end organ damage such as renal insufficiency or failure, heart attacks, or strokes. Further, the medications used to treat HTN can have adverse effects on other organs. JNC 7 recommendations include a urinalysis, blood glucose, hematocrit, potassium, creatinine, calcium, and lipids in patients newly diagnosed with HTN before initiating therapy. Due to the limitations of the EMR, only 2 patients were identified as newly diagnosed; neither of them had laboratory testing performed prior to starting treatment. Diabetics should have their urine microalbumin level measured annually. Only 15% of diabetics had documented microalbumin levels within the preceding 6 months. There are several limitations to this study. Because of the small sample size, variability and imprecision should be noted as limitations. For example, the finding of no documentation of laboratory testing prior to the initiation of drug therapy might show greater impact and consistency in a larger sample. A larger sample size would allow more detailed, robust, and explanatory assessments. Secondly, the study was conducted only during the summer months and over a short duration. Extending the study period and expanding the study to include fall or winter months may allow comparisons to determine whether seasons impact BP control. Conclusions. Despite evidence-based recommendations by JNC 7, provider adherence in AAs has room for improvement.
Provider pharmacologic choices and lifestyle modification recommendations are major components of blood pressure control in this population. Thiazide diuretics are recommended as initial monotherapy and in combination therapy for African Americans. CCBs are recommended as an acceptable alternative to thiazide diuretics. CCBs are preferred over ACEIs because of the increased risk of stroke, myocardial infarction, and other vascular conditions associated with ACEIs. Conversely, providers have demonstrated a preference for prescribing ACEIs and ARBs as monotherapy. Better adherence in prescribing a TD or CCB is seen in prescribing patterns for patients on combination therapy. Providers are not adherent to the monthly follow-up recommendations required for medication adjustment or specialist referral when BP is not at goal. Lack of lifestyle modification documentation, specifically for the DASH diet and alcohol consumption, is consistent with nonadherence to the JNC 7 guidelines. Although there appears to be no relationship between receiving the recommended medications and BP outcomes, more than half of the population did not meet BP goals. The principal factor assessed in the process of care was provider barriers. Specific components impacting provider barriers include access to JNC 7 guidelines, provider adherence to JNC 7 guidelines, recommendation of lifestyle changes, and follow-up. Provider adherence to the guidelines overall was poor. Lack of documentation, provider prescribing habits, and lack of knowledge of up-to-date, evidence-based guidelines may be contributing factors. While there is a gap between evidence-based national guidelines and clinical practice in controlling HTN, all contributing factors, including physician, patient, and systems barriers [19], require further exploration if successful interventions are to be developed.
Home gardens’ agrobiodiversity and owners’ knowledge of their ecological, economic and socio-cultural multifunctionality: a case study in the lowlands of Tabasco, México. Background. Home gardens (HGs) are hotspots of in situ agrobiodiversity conservation. We conducted a case study in Tabasco, México, on HG owners’ knowledge of HG ecological, economic and socio-cultural multifunctionality and how it relates to agrobiodiversity as measured by species richness and diversity. The term multifunctionality knowledge refers to owners’ knowledge of how HGs contribute to ecological processes and the family economy, as well as to human relations and local culture. We hypothesized a positive correlation between owners’ multifunctionality knowledge and their HGs’ agrobiodiversity. Methods. We inventoried all perennial species in 20 HGs, determined observed species richness, calculated Shannon diversity indexes and analysed species composition using non-metric multidimensional scaling (NMDS). Based on literature, semi-structured interviews and a dialogue of knowledge with HG owners, we catalogued the locally recognized functions in the ecological, economic and socio-cultural dimensions. We determined the score of knowledge of each function in the three dimensions on explicit scales based on the interviews and observed management. We determined Spearman rs correlations of HGs’ observed species richness, Shannon diversity index (H) and scores on the NMDS axes with multifunctionality knowledge scores. We discussed the results and implications for agrobiodiversity conservation at workshops of HG owners, researchers and local organizations. Results. HG agrobiodiversity and owners’ multifunctionality knowledge in the study area showed large variation. Average richness was 59.6 perennial species, varying from 21 to 107 species, and total observed richness was 280 species. A total of 38 functions was distinguished, with 14, 12 and 12 functions in the ecological, economic and socio-cultural dimensions. Total multifunctionality knowledge scores varied from 64.1 to 106.6, with an average of 87.2. Socio-cultural functionality knowledge scores were the highest, followed by scores in the ecological and economic dimensions. Species richness and Shannon H were significantly correlated with ecological functionality knowledge (rs = 0.68 and P < 0.001 in both cases), and species richness was also correlated with economic functionality knowledge (rs = 0.47, P = 0.03). Species composition scores on the first and second axes of the NMDS were significantly correlated with knowledge of ecological multifunctionality, with rs = 0.49 and −0.49, respectively, and P = 0.03 in both cases. Other functionality knowledge scores showed no correlation with NMDS scores. Dialogue in workshops confirmed the interwovenness of multifunctionality knowledge and agrobiodiversity. Conclusion. The rich agrobiodiversity of home gardens cherished by rural families in Tabasco relates to the knowledge of HG functionality in the ecological and economic dimensions. Also, species composition relates to ecological functionality knowledge. The socio-cultural functionality knowledge, which includes many elements beyond the individual HG, is not correlated with agrobiodiversity, but had the highest scores. Our results show that multifunctionality knowledge provides many opportunities for the participative conception and planning of the policies and actions necessary to conserve agrobiodiversity.
Background. Tropical home gardens (HGs) are socio-ecological systems that maintain a high diversity of cultivated, enhanced and tolerated species, usually denominated agrobiodiversity, and contribute to in situ conservation of plant genetic resources and to ongoing processes of domestication [1-7]. HGs are part of cultural landscapes, i.e. areas that give meaning and identity to their inhabitants and are shaped by culture through its inextricable relation with the managed and unmanaged environment [8-10]. Agrobiodiversity goes through selection and management processes by the HG owners as they fingerprint their culture on home gardens in daily practice [11-15] and adapts to the varied microclimate and soil conditions in the complex, forest-like agroecosystem [4,16,17], taking part in manifold interactions at genetic, species, ecosystem and landscape scales [11, 18-21]. HGs' agrobiodiversity depends on the continuous management, experimentation, cultivation, organization, knowledge transmission and motivations of their owners [3,22,23]. Their species composition and vegetation structure respond to the ecological, economic and socio-cultural functions that local people aim at through design and management at different scales [11, 23-26]. In this regard, our use of the term "functions" refers to those operating in socio-ecological systems at scales from sections of agroecosystems to the landscape [19, 27-29]. Functions derive from the ecological, economic and social system dimensions, as well as their interactions, such as learning about agrobiodiversity [23,26,30,31]. Ecological functions refer to functions that also occur in natural ecosystems, such as nutrient cycling, enhancing rainwater infiltration into soils, generating distinct micro-climates and providing habitats for species. Economic functions refer to generating products and services for human consumption, favouring the family economy through income and savings. Socio-cultural functions refer to the enhancement of social relations and to aesthetic, learning, spiritual and emotional functions, among others [2,26,29]. Agrobiodiversity and knowledge of its multifunctionality are the result of continuing changes, whereby each acquires properties that modify the other [25,32,33] (Fig. 1). People's knowledge with regard to home gardens' ecological, economic and socio-cultural functionality, i.e. knowledge of the system's multifunctionality, evolves in a process of continuous transmission and renewal in the regional bioculture [24, 29, 34-37]. It responds dynamically to contextual influences on its production and reproduction, as described in the literature on Traditional Ecological Knowledge (TEK) and Local Ecological Knowledge [38]. Berkes [38] considers that TEK includes associated practices and beliefs. Aldasoro-Maya [39] assigns the connotation of "contemporary" to these localized bodies of knowledge, emphasizing that although they have roots in tradition, they also reflect interactions with other forms of knowledge, cultures and temporality, thus updating their suitability for the actual management of agrobiodiversity conservation. Multifunctionality knowledge is part of these continuously actualized localized bodies of contemporary knowledge and contributes to the maintenance and renewal of biocultural diversity [24,31,40] and to the diversification, updating and adaptation of socio-ecological systems such as home gardens [2,41,42].
In this article, we analyse how owners' knowledge of HG multifunctionality links to HG agrobiodiversity, based on species inventory data from a sample of HGs in the tropical lowlands of México and on the information provided by their owners about the ecological, economic and socio-cultural HG functions they distinguish and how they value them. In our hypotheses, HG multifunctionality knowledge is positively correlated with species richness and the Shannon diversity index, as both enhance each other, and also relates to the species composition of HG flora, as this reflects owners' knowledge. Through our research, we aim to contribute elements and methods for the design of policies and actions for agrobiodiversity conservation at the local level, based on knowledge of HG multifunctionality as defined in their own terms by rural families from their livelihood strategies onwards [35,43]. Study area. We conducted fieldwork in the Comalcalco municipality in the heart of the cacao-producing tropical lowlands of the state of Tabasco, México. Based on the experience of the local NGO "Horizontes Creativos" (México), engaged in grassroots organization for social innovation, we selected the villages Zapotal, Gregorio Méndez, Reyes Hernández and Sargento López (Fig. 2), where the research fitted into ongoing organization processes. The villages are located on the slightly elevated margins of former riverbeds, with fertile vertisols and gleysols [45]. The climate is hot and wet, with a mean annual temperature of 27.1°C and annual rainfall of 1926 mm [46]. Agricultural modernization from the 1950s onwards has resulted in the general deforestation of an area originally covered with tropical rainforests [45,47], with home gardens and cacao plantations providing the remaining tree cover. Dominant land uses are animal husbandry and sugarcane, banana and cacao production. Villagers combine agricultural activities with work in the services sector and the oil industry, Tabasco being Mexico's main oil producer. The advent of the oil industry in the 1970s, its expansion and its crisis have strongly impacted society, culture and environment. Deforestation, floods, contamination and lack of economic opportunities have recently catalysed social reactions such as the establishment of cooperatives and local micro-financing organizations. Home garden selection. We informed HG owners of the goals of the research at local meetings in January 2017 on responses to the productive crisis in cacao. We mentioned methods, research needs and possible benefits for local organization processes and invited the attendees to participate. Other families showed interest as information spread, reaching a total of 20 families: 8 in Zapotal, 3 in Gregorio Méndez, 4 in Reyes Hernández and 5 in Sargento López (Fig. 2). The total area of the 20 HGs was 5.2 hectares, considered sufficient to capture regional species richness to a large extent [48]. Agrobiodiversity census. From February to June 2017, we registered all trees, shrubs, climbers, small woody shrubs (suffrutescents) and perennial herbs in each HG by local and scientific name, based on the knowledge of the research team, home garden owners and local experts. In cases of doubt (occurring with fewer than 10 species), we took photographs and samples of leaves, flowers and/or fruits, which the first author compared with voucher specimens available in the herbarium of the Universidad Juárez Autónoma de Tabasco (UJAT).
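The census records of individuals per species per HG feed the abundance matrix used in the analyses described in the following sections. A minimal bookkeeping sketch, assuming Python with pandas and entirely hypothetical records, could look like this:

```python
# Sketch (hypothetical records): turning per-individual census records
# into the HG x species abundance matrix used downstream.
import pandas as pd

records = pd.DataFrame({
    "home_garden": ["HG01", "HG01", "HG01", "HG02", "HG02"],
    "species": ["Theobroma cacao", "Theobroma cacao",
                "Tabebuia rosea", "Theobroma cacao", "Citrus x sinensis"],
})

# Each cell counts the individuals of a species in a home garden.
abundance = pd.crosstab(records["home_garden"], records["species"])
print(abundance)
```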
We checked the scientific names and biogeographical distribution of species (native, neotropical or introduced) at Trópicos [49], WCS [50] and The Plant List [51], and species' conservation status considering the IUCN Red List [52], CITES [53] and NOM-059-SEMARNAT-2010 [54]. We registered intraspecific variation based on local cultivar names and the colour and shape of leaves, flowers and fruits [2]. Cataloguing home garden functions. We generated a draft catalogue of home garden functions from the literature [7, 25, 26, 30, 32, 55-57], distinguishing ecological, economic and socio-cultural dimensions. From June to September 2018, we conducted semi-structured interviews with the owners in their home gardens to assure a connection, which we recorded with prior consent and transcribed. This allowed us to elaborate the final version of the catalogue, adding the functions that owners described and modifying or removing others. We interviewed household heads, in total 11 women and 19 men (see supplementary material "Agrobiodiversity data", sheet HG), with ages ranging from 38 to 86 years. All interviewed families managed cacao groves; 9 were also dedicated to other agricultural production systems and 11 to work in the industrial and services sectors. Based on the transcribed interviews, we assigned a score to the knowledge of the functions in each dimension (socio-cultural, ecological, economic), following the general method of explicit assessment outlined by Bosshard (54). We assigned a score of 0 if a function was not recognized, or even negated; 1 if a function was not recognized, but the home garden reflected management regarding the function; 2 if a function was recognized, but the management did not reflect it; and 3 if owners recognized the function and the management reflected it. As for the economic dimension, we grouped 33 different uses of HG species into the economic functions of wood (5 uses), food (4), ornamentals (4), medicines (4), agricultural inputs (4), domestic products (4), handicrafts (4) and others (4), and based scores on the number of uses of species for each group mentioned by the home garden owners. We thus obtained aggregated scores for each dimension in all HGs, which we standardized to the same scale for the three dimensions, and then calculated an aggregated functionality knowledge score by summing the scores in the three dimensions. Data analysis. We elaborated a species abundance matrix from the inventory data and evaluated our sampling effort [58] by dividing the number of observed species by the estimated total regional home garden species richness, obtained with the Chao1 algorithm in the EstimateS program [59]. We used non-metric multidimensional scaling (NMDS) in the PAST software [60] on the species abundance matrix to represent variation in species composition among HGs along axes. We determined Spearman rs correlation coefficients between agrobiodiversity (observed species richness and Shannon H) and multifunctionality knowledge scores, using the PAST software. We also analysed whether species composition was related to HG multifunctionality knowledge, by determining correlations of HG scores along the NMDS axes with the knowledge scores for the ecological, economic and socio-cultural dimensions and their aggregate multifunctionality values.
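The study performed these computations in EstimateS and PAST; purely as an illustration, the same standard formulas and the Spearman test can be sketched in Python with NumPy and SciPy, using placeholder data:

```python
# Illustrative re-implementation of the indices described above; the
# formulas are standard, the data below are invented placeholders.
import numpy as np
from scipy.stats import spearmanr

def shannon_h(counts):
    """Shannon diversity index H = -sum(p_i * ln(p_i))."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def chao1(counts):
    """Chao1 richness: S_obs + F1^2 / (2 * F2); corrected form if F2 = 0."""
    counts = np.asarray(counts)
    s_obs = np.sum(counts > 0)
    f1 = np.sum(counts == 1)  # singletons
    f2 = np.sum(counts == 2)  # doubletons
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 ** 2 / (2.0 * f2)

pooled = [6, 1, 1, 2, 9, 1, 4]            # pooled abundances (placeholder)
print(shannon_h(pooled), chao1(pooled))

richness = [21, 48, 59, 73, 107]          # species per HG (placeholder)
eco_scores = [64.1, 80.0, 87.2, 95.5, 106.6]  # knowledge scores (placeholder)
rs, p = spearmanr(richness, eco_scores)
print(f"Spearman rs = {rs:.2f}, P = {p:.3f}")
```

The Chao1 estimator extrapolates total richness from the counts of singletons and doubletons, which is why the sampling-effort evaluation above reduces to dividing observed richness by this estimate.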
Knowledge dialogues at workshops. We shared the results of the agrobiodiversity censuses and interviews with HG owners in workshops conducted in May and July 2018 and established a dialogue between the different ways of knowing of HG owners and academics [61] on the relevance of the relations between HG agrobiodiversity and multifunctionality knowledge for managing the conservation of agrobiodiversity. HG agrobiodiversity. We registered 4349 individuals belonging to 280 botanical species, 229 genera and 84 families (Additional file 1 "Species list.xlsx") [62]. Chao1 total estimated species richness was 348, indicating that our sample included 80.7% of all species in regional home gardens. Average species richness in HGs was 59.6 ± 5.1, of which 49 ± 4 were tree or shrub species (Additional file 2 "Agrobiodiversity data.xlsx") [63]. The average number of perennial individuals per HG was 217.5 ± 23.8. HG size was on average 2584 m² and showed no significant correlation with the number of individuals (rs = 0.14, P = 0.56) or total species richness (rs = −0.08, P = 0.71). Families with smaller HGs compensated with higher numbers of individuals (rs = −0.54, P = 0.01) and of species of perennial herbs (rs = −0.46, P = 0.04). The average Shannon diversity index was 3.35, with a minimum of 2.30 and a maximum of 4.02. Of the 280 inventoried species, 33.2% were native to Mesoamerica, 26.4% of neotropical origin and 40.4% introduced (Additional file "Species list.xlsx"). The most abundant species were cacao, Theobroma cacao L., and macuilis, Tabebuia rosea (Bertol.) Bertero ex A.DC., with 6.3% and 6.2% of the total number of individuals, respectively, followed by Citrus x sinensis (L.) Osbeck. Sixty-eight species were present with only one individual and 32 with two. Of all registered species, 21 had some national or international conservation status [52-54, 62]. HG multifunctionality knowledge. We recorded a total of 38 HG functions (f), of which 14 were in the ecological, 12 in the economic and 12 in the socio-cultural dimension (Table 1). The scales for determining knowledge scores for the socio-cultural, economic and ecological functions are based on the deliberations in the research team and on the interviews, and are detailed in Additional file 3 "Functionality knowledge data.xlsx", sheets "function scores" and "scores by use groups" [64]. The total HG functionality knowledge scores varied from 64.1 to 106.6, resulting from variable combinations of scores in the three dimensions (Fig. 3). The coefficient of variation was highest in the economic dimension (24.5%), followed by the ecological dimension (19.8%) and the socio-cultural dimension (11.7%). Though HG owners considered the socio-cultural, economic and ecological functions all important at the same time, the medians of standardized absolute functionality knowledge scores in the three dimensions differed (Kruskal-Wallis test, P < 0.001; pairwise comparison with the Mann-Whitney test, P < 0.001 in all cases). The medians of the economic, ecological and socio-cultural functionality knowledge scores were respectively 21, 31.1 and 36.9. Absolute scores of functionality knowledge in the economic dimension and in the ecological and socio-cultural dimensions were positively correlated (rs = 0.44 and 0.64, respectively, P < 0.05), and there was no correlation between the ecological and socio-cultural knowledge scores (rs = −0.03, P = 0.91).
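For readers who want to reproduce this kind of comparison of score distributions across the three dimensions, a minimal sketch with SciPy follows; the score vectors are invented for illustration and are not the study data:

```python
# Sketch (placeholder data): Kruskal-Wallis across the three dimensions,
# then pairwise Mann-Whitney comparisons, as in the tests reported above.
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

scores = {
    "economic":       [18, 20, 21, 23, 25, 19, 22],
    "ecological":     [28, 30, 31, 33, 29, 32, 34],
    "socio-cultural": [35, 36, 37, 38, 36, 39, 37],
}

h, p = kruskal(*scores.values())
print(f"Kruskal-Wallis: H = {h:.2f}, P = {p:.4f}")

for a, b in combinations(scores, 2):
    u, p = mannwhitneyu(scores[a], scores[b], alternative="two-sided")
    print(f"{a} vs {b}: U = {u:.1f}, P = {p:.4f}")
```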
In the socio-cultural dimension, all owners consider the maintenance of Traditional Ecological Knowledge and wisdom with regard to the environment an important role of the HGs (f9, see Additional file 3 "functionality knowledge data.xlsx", sheet "function scores") [64]. All interviewees considered that HGs stimulate social learning between generations on agricultural themes and the care of productive spaces (f10), while several interviewees commented on the increasing difficulty of motivating the young generations for these issues. HG activities enhanced family organization in 50% of the home gardens, where families distributed activities according to members' capacities (f7). Most families consider that collective activities in home gardens strengthen family cohesion and organization (f4), and that gifts of HG products strengthen social ties. HGs provide space for recreational, ludic, sports, artistic, religious, relaxing and social-familiar activities (meetings) in almost all HGs (f1). They host family and community traditions and customs (f11), such as the altar at Día de Muertos, religious services, Christmas, Easter and death wakes. At feasts in honour of the villages' patron saints, products from home gardens are brought to the church as offerings or for sale to fund religious activities. Many owners considered HGs to represent a cultural and historical legacy of their ancestors (f8). All HG owners recognized the aesthetic function, as HGs beautify the families' direct environment and the village landscape (f3). HGs stimulate the senses and generate positive emotions and feelings that favour mental and spiritual health (f5). In this context, people mention terms such as wellbeing, peace, tranquillity, satisfaction, confidence, security, nostalgia, yearning, identity, pride, love, happiness, joy, concentration, inspiration, pleasure, harmony and company, among others. The emotional function (f5) received the highest overall score (60) of all functions, and numerous expressions of HG owners during the interviews support this. HGs' contribution to food sovereignty involves the socio-cultural and economic dimensions, as mentioned by all families. In the socio-cultural dimension, families considered that the foods produced in HGs have cultural value and contribute to a diverse diet, positively influencing health and satisfying the local sense of taste (f2). Another socio-cultural function is to help maintain culinary traditions (f12), using cultivated ingredients as part of a process that involves the traditional kitchen structures, organization and transmission of knowledge. In the economic dimension, HGs contribute to regional and local food autonomy, reducing daily expenditures (f28). One of the economic functions is providing income from sales of HG products (f27). Families mention that this contribution to income is small, yet important as it continues all over the year due to HG species richness (f30). The forest similarity of HGs, with trees of different ages and architecture, allows spreading the harvesting of wood in time according to needs (f29), or using the growing wood stock as a form of savings. Wood supply has the highest score among the economic functions (f35) and is reported by all families to meet needs for carpentry (e.g. Cedrela odorata and Swietenia macrophylla), energy (e.g. Diphysa americana), construction (e.g. Colubrina arborescens) and fences (e.g. Bursera simaruba, Gliricidia sepium, Nopalea cochenillifera and Sansevieria zeylanica).
Food provision had the second highest score (f33) and includes fruits (e.g. Citrus spp. and Annona spp.), leaves (e.g. Piper auritum), stems (e.g. Saccharum officinarum), roots (e.g. Manihot esculenta), condiments (e.g. Plectranthus amboinicus and Pimenta dioica), inputs for sweets (e.g. Vasconcellea pubescens and Malpighia glabra), ferments (e.g. Theobroma cacao and Byrsonima crassifolia) and wrapping materials (e.g. Piper auritum, Calathea lutea and Musa spp.). All HGs provide ornamental plants (f37). The category includes ornamentals sensu stricto (e.g. Hibiscus spp., Ixora spp. and Rosa spp.), aromatic (e.g. Cestrum nocturnum) and ritual species (e.g. Bursera graveolens and Cordyline fruticosa), as well as species that people maintain as a "relic", i.e. plants maintained as a living memory, be it for their sentimental value towards the person who planted them, or because the plants have become increasingly rare, making it necessary to teach new generations about them (e.g. Aristolochia pentandra and Smilax domingensis). All owners mention the HGs' provision of plants with medicinal uses (f36), be these curative (e.g. Tradescantia spathacea, Citrus x aurantium and Sambucus canadensis), cosmetic (e.g. Aloe vera), relaxing (e.g. Justicia pectoralis) or energizing (e.g. Theobroma cacao). However, several interviewees stated that the new generations lack knowledge about these plants as they stop using and cultivating them. The function of providing products for use in agricultural production (f31) includes plant parts as tools (e.g. Genipa americana), forage (e.g. Gliricidia sepium), green manure (e.g. Erythrina caribaea) and pest control (e.g. Azadirachta indica). Domestic uses (f34) include use as insect repellents (e.g. Ocimum basilicum), utensils and containers (e.g. Cocos nucifera), basketry (e.g. Sabal mexicana) and fibre for ropes (e.g. Heliocarpus appendiculatus). Plants for handicrafts (f32) provide plant colorants (e.g. Bixa orellana), materials to elaborate handicrafts (e.g. Crescentia cujete), toys (e.g. Canna indica) and musical instruments like traditional drums (e.g. Persea americana). The functions of providing materials for domestic uses and handicrafts obtained the lowest scores (Additional file 3). Though interviewees mentioned the use of plants for these functions, they also observed that such plants are available in very few HGs and not in theirs. Rather, many materials traditionally used for these functions have been substituted by widely available and cheap plastic objects. As plants for other uses (f38), families mentioned shade trees (e.g. Terminalia catappa), melliferous plants (e.g. Lonchocarpus hondurensis and Melicoccus spp.), stinging plants (e.g. Phenax hirtus) and oil-providing plants (e.g. Cocos nucifera and Acrocomia aculeata). Temperature regulation scored highest among the ecological functions (f25). Interviewees commented on how tree cover filters the sunlight and prevents heating of the soil. Transpiration by plants is considered to lower the air temperature, as are the interception and introduction of air flows by trees, replacing hot air and mitigating high temperatures. Well-positioned trees protect houses and other structures from strong winds (f21). Owners also consider that HGs filter atmospheric contamination (f16), as leaves absorb and trap suspended particles.
Owners also mentioned the production of oxygen and the absorption of carbon dioxide as important functions (f18) and observed that HGs attract rainfall (f17), as does other forest-like vegetation. In their view, deforestation has shortened rainy seasons, lengthened dry periods and caused higher temperatures, affecting crop production and human health. Families consider that HGs conserve agrobiodiversity (f15), as they maintain plants that no longer occur in other spaces of the regional landscape. HGs are a source of seeds that colonize adjacent fields and receive seeds from other fields (f23). They contribute to conservation of associated agrobiodiversity by providing food and refuge for fauna (f14), including birds, mammals, reptiles, monkeys, native bees and other species that are tolerated or favoured. Families also mentioned that they routinely eliminate species that cause damage to the vegetation or domestic animals, or that are dangerous for humans, mentioning snakes, squirrels, rats, possums and rapacious birds. HGs' vegetation structure, with big and small trees in dense and open patches, generates variation in microclimatic functions (f22), and thus the provision of adequate conditions for species with different physiological requirements (f24). For example, Citrus spp. require direct sunlight for optimum fruit production, while cacao (Theobroma cacao) needs a certain degree of shade. Open spaces are often used for ornamentals and also for productive activities such as the drying of fresh cacao beans. HGs thrive on soils that vary widely in texture and fertility (f20). Families have broad knowledge of this variation, which orients the selection of sites for planting particular species and of management practices to adapt to limitations. For example, owners may maintain a cover crop on soils with a fine sandy texture to avoid high temperatures of such soils, conditions to which e.g. Citrus spp. and Pimenta dioica are not adapted. Soils function as the main reservoir of nutrients and water for plants (f13), and families also consider that rains, floods and air play a role in the provision of both. HG vegetation contributes to soil fertility by providing organic matter (f19), thus helping to maintain soil humidity and porosity and to avoid erosion. Though families recognize this function, the removal and burning of leaf litter is a general practice, and only a few families return decomposed organic materials to the plants. The main reason for burning is to avoid the spread of snakes and mosquitos. Owners consider that HGs act as vehicles to rehabilitate or maintain tree cover (f26). Their establishment frequently involves the replacement of some of the naturally occurring trees with trees of more useful species, but may also start from tree planting on formerly deforested areas.

Relations of HG agrobiodiversity and multifunctionality knowledge

Within each dimension, we standardized absolute scores to a potential maximum score of 45 (Additional file 3 "Functionality knowledge data.xlsx", sheet "function scores by dimension" [64]) and tested these scores for relations with species diversity and composition. In general, HG owners perceive that the more species of plants, the more functions are met that benefit the people and the environment. Correlation analysis on the data of all home gardens indeed showed significant correlations of richness and Shannon diversity indices with knowledge scores in particular dimensions (Table 2).
Species richness and Shannon H were significantly correlated with ecological functionality knowledge (rs = 0.68 and 0.68, P < 0.001); species richness was also correlated with economic functionality knowledge (rs = 0.47, P = 0.03), but not with the socio-cultural and the aggregated functionality knowledge scores (rs = − 0.18 and 0.44, P = 0.45 and 0.052). Shannon H was also not correlated with socio-cultural functionality knowledge (rs = − 0.24, P = 0.31). When separating species richness by the biogeographical origin of the species (see Additional file 1, Species list), the observed correlation pattern was maintained, i.e. independently of the origin, there were no correlations of richness with socio-cultural functionality and significant correlations with ecological functionality (Table 2). Additionally, there were significant correlations between aggregated functionality and richness of native species and of the sum of native and neotropical species richness. NMDS, applying the Bray-Curtis similarity index and three dimensions, gave a stress factor of 0.21, indicating a reasonable representation of variation in species composition. Species composition scores on the first and second axes and ecological functionality knowledge showed significant correlations, with rs = 0.49 and P = 0.03 on the first axis and rs = − 0.49, P = 0.03 on the second axis. Other functionality knowledge scores showed no correlation with NMDS scores. Exchange of knowledge on functionality and agrobiodiversity relations in workshops involving HG owners, the research team and NGOs allowed discussions on how to assemble ideal HGs. This exercise showed the desirability of enhancing and combining many of the functions mentioned in the function catalogue (Table 1). Owners emphasized the importance of knowledge on the production of wood and fruit in the economic dimension, on the provision of living space for native stingless bees and other biological groups in the ecological dimension, and on how to transmit knowledge from generation to generation in the socio-cultural dimension. Together, these components of functionality knowledge enhance agrobiodiversity conservation in HGs, thus allowing the knowledge to be maintained and evolve.

Discussion

Agrobiodiversity in our sample of 20 HGs in Tabasco was quite high (279 species) compared to the findings of other studies in México [65,66] and the tropics in general [2,3,11,24,37,67]. The richness of species native to Mesoamerica and the Neotropics and the fact that 21 species are listed in national and international conservation categories confirm HGs' high relevance for regional conservation of agrobiodiversity as well as ongoing species domestication [2,25,31]. The many uses of plant species distinguished by the home garden owners (33, Additional file 3) and the presence of many cultivars of HG species that are adapted to the regional environmental conditions reflect how the knowledge on functions of HGs is very much alive. HGs maintain the regional agrobiodiversity that people consider important [2-4, 24, 68]. Based on functionality knowledge, owners select the species and cultivars for their HGs [25,35]. In the study area, this notably includes species of different growth habits: 33% of the inventoried species were suffrutescents, climbers or perennial herbs. The combination of different growth habits allows owners to adjust species selection to the available area and explains why even small HGs are rich in species.
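The correlation analysis above pairs per-garden diversity measures (species richness, Shannon H) with per-dimension functionality knowledge scores. The following is a minimal sketch of that kind of computation in Python; the abundance tables and score values are invented placeholders, not the study's dataset.

```python
import math
from scipy.stats import spearmanr

# Hypothetical per-garden species abundance tables (species -> individual count)
gardens = {
    "HG01": {"Citrus sinensis": 12, "Theobroma cacao": 30, "Piper auritum": 4},
    "HG02": {"Theobroma cacao": 25, "Musa spp.": 10},
    "HG03": {"Citrus sinensis": 5, "Cocos nucifera": 7, "Bixa orellana": 2, "Sabal mexicana": 1},
}

# Hypothetical ecological functionality knowledge scores (standardized to max 45)
eco_scores = {"HG01": 38, "HG02": 21, "HG03": 34}

def shannon_h(abundances):
    """Shannon diversity index H' = -sum(p_i * ln(p_i))."""
    total = sum(abundances.values())
    return -sum((n / total) * math.log(n / total) for n in abundances.values())

names = sorted(gardens)
richness = [len(gardens[g]) for g in names]        # species richness per garden
shannon = [shannon_h(gardens[g]) for g in names]   # Shannon H per garden
scores = [eco_scores[g] for g in names]

# Spearman rank correlations (rs, P), the statistic reported in Table 2
rs_rich, p_rich = spearmanr(richness, scores)
rs_h, p_h = spearmanr(shannon, scores)
print(f"richness vs ecological knowledge: rs={rs_rich:.2f}, P={p_rich:.3f}")
print(f"Shannon H vs ecological knowledge: rs={rs_h:.2f}, P={p_h:.3f}")
```

With the real data the same pattern would be run once per knowledge dimension and per biogeographical subset of species richness.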
Families with smaller HGs compensated with higher numbers of individuals and species of perennial herbs, explaining why HG size showed no significant correlation with the number of plant individuals or total species richness. Due consideration of different growth forms in HG agrobiodiversity studies therefore allows a more complete view of their management [3]. Species abundance was highly skewed: 39.9% of the inventoried plants belonged to only 11 species that were present in most HGs, whereas 100 species were singletons or doubletons. Singletons and doubletons were not equally distributed over home gardens: three home gardens together held 36 singletons and more than a third of the doubletons. Only a few HGs had no singletons (one HG) or no doubletons (three HGs). This shows that HG owners intentionally care for rare species, and some dedicate special effort to this task, based on knowledge of their contribution to HG multifunctionality. Examples of rare species taken care of are Acrocomia aculeata, Annona purpurea, Aristolochia pentandra, Chrysophyllum mexicanum, Dioscorea composita, Garcinia intermedia and Smilax domingensis, among others. In this article, we have used the term "functionality knowledge" for knowledge in the three distinguished dimensions (ecological, economic, socio-cultural), which together add up to multifunctionality knowledge. As mentioned in the introduction, this knowledge is part of what has been referred to in the literature as Traditional Ecological Knowledge (TEK) or Local Ecological Knowledge (LEK); [38] considers these complex bodies of knowledge, belief and practice to be culturally transmitted and warns against regarding them as static. Rather, TEK (or LEK) is frequently reinvented and adapted to meet changing needs and is in this sense contemporary knowledge. Far from being static, it is reproduced, enriched and renewed continuously, integrating new elements [35,38,40]. Based on our results, we would prefer a term for TEK/LEK that reflects the integration of economic, ecological and socio-cultural aspects and the fusion of traditional, contemporary and scientific elements in the localized knowledge body, as they all influence natural resource management. We would avoid the exclusive epithet "ecological", as it narrows the integrality of local systemic knowledge. Multifunctionality knowledge guides practices and is transmitted through narratives, observations and learnings that are part of and renew a social memory [35]. HG owners had knowledge of 38 functions in the ecological, economic and socio-cultural dimensions (Additional file 3). In their vision, all are part of the same integral knowledge system, and for this reason some of the functions are transversal to the dimensions or play a role at different scales (Fig. 1), including the cultural landscape. We reported in the results that families give strong consideration to socio-cultural functions compared to ecological and economic functions. This confirms findings in the Catalan Pyrenees, where cultural aspects were also most valued [56]. A difference was that the Pyrenees owners did not consider climate regulation and provision of habitat as relevant, as HGs were small (on average 147 m²) compared to the surrounding forested areas. This contrasts strongly with the Tabasco context, where man-made forests like HGs and cacao groves provide most forest cover [45,47] and fulfil the ecological functions formerly provided by the natural vegetation. This shows how functionality knowledge in the three dimensions reflects the regional context.
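The abundance-skew figures reported earlier in this section (a few dominant species plus many singletons and doubletons) come from straightforward tallies over the pooled inventory. A minimal sketch of such a tally, using an invented abundance table rather than the study's data:

```python
from collections import Counter

# Hypothetical pooled inventory: species -> total individuals across all gardens
inventory = Counter({
    "Theobroma cacao": 120, "Citrus sinensis": 85, "Musa spp.": 60,
    "Aristolochia pentandra": 1, "Garcinia intermedia": 2, "Annona purpurea": 1,
})

total_plants = sum(inventory.values())
singletons = [sp for sp, n in inventory.items() if n == 1]  # exactly one individual
doubletons = [sp for sp, n in inventory.items() if n == 2]  # exactly two individuals

# Share of all individuals belonging to the k most abundant species
k = 3
top_share = sum(n for _, n in inventory.most_common(k)) / total_plants

print(f"{len(singletons)} singletons, {len(doubletons)} doubletons")
print(f"top {k} species hold {top_share:.1%} of all individuals")
```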
Functionality knowledge and agrobiodiversity are dialectically related: due to deforestation and broader societal changes, owners start to consider new functions as important and procure them in their HGs, which thus acquire new characteristics and may eventually influence the more general context. For example, it is notable how owners presently consider seed rain from home gardens to their species-poor surroundings as an ecological function (Additional file 3). Overall change in the socio-ecological context in Tabasco in recent decades (deforestation, oil industries, contamination, migration, urban-rural relations) has influenced the renewal of functionality knowledge and agrobiodiversity in HGs and, more broadly, in the cultural landscape. Updating of functionality knowledge makes use of the available sources in daily social interactions. One of these is the transmission of knowledge by elders to the new generations. HG owners referred frequently to this in wordings such as "our ancestors said", "gone generations knew", "the old tell us" and "our parents taught us". Another source is the knowledge of professionals transmitted through environmental and agricultural projects, referred to as "as the engineer says", "in the course they taught us" and "according to the technical officer". A further source is interaction, through meetings, cultural events and internet access, with academics, NGOs working on socio-ecological themes, and local organizations that have gone through processes of adjustment of practice and knowledge. Also, children and young adults transmit new knowledge, for example with regard to gardening with recycled materials. These instances of dialogue between different ways of knowing [61] and access to information allow the updating of multifunctionality knowledge applied to, and interacting with, HG agrobiodiversity. The high score of functionality knowledge in the socio-cultural dimension, and in particular of knowledge transmission to younger generations, indicates that HGs are a response to the ecological, economic and social changes in Tabasco [47], as has occurred also in other regions [12,67,69]. This response explains why we did not find a correlation between species richness/Shannon diversity index and functionality knowledge scores in the socio-cultural dimension: much of the considered functionality knowledge does not depend directly on the availability of species in the HG but rather refers to belonging to a regional culture and the desire to transmit it (for example, f1, f3, f4, f5, f7 and f11). Socio-cultural functionality knowledge is thus a strong asset for the strengthening of regional bioculture among the new generations. Economic functionality knowledge scores were low compared to those for socio-cultural and ecological functionality knowledge (Fig. 3). The relatively high coefficient of variation indicates that some HGs scarcely fulfil economic functions while others do so considerably (Fig. 3). This variation may be partly due to the small contribution of small HGs to the family economy, as indicated by a significant correlation of economic functionality knowledge scores with HG area (rs = 0.510, P = 0.026). Economic functionality knowledge was also significantly correlated with species richness. Scores were highest for wood and fruit provision functions and lowest for handicrafts and domestic uses, as substitutes for the latter are readily available in local shops [5,17,25].
HGs' contribution to the family economy through sales or savings in spending thus contributes to agrobiodiversity, but this may combine with the substitution of species. The relations of agrobiodiversity and functionality knowledge show many variations among families, as these participate differently in regional bioculture and in knowledge of the socio-ecological system elements. In general, HG species richness and owners' total functionality knowledge scores showed no significant correlation (rs = 0.44, P = 0.052) (Table 2). We had expected the contrary: owners managing more species would consider more functions, resulting in higher total functionality knowledge scores. The absence of such an overall correlation is due to several factors. As mentioned earlier, part of functionality knowledge goes beyond the individual home garden (Additional file 3) and is not necessarily related to species richness, as in the case of several socio-cultural functionalities. Also, the presence of multi-purpose species [33] explains why higher functionality knowledge scores are not necessarily associated with higher species richness, as does the situation where owners maintain trees without knowledge of their functionality [17,32,70]. Scores of the economic and ecological functionality knowledge showed significant correlations with species richness, and in the ecological dimension also with the Shannon H diversity index (Table 2). These correlations were also found when we separated species by their biogeographical origins, indicating that families experiment in their HGs with both regionally occurring and introduced species [71]. It is noteworthy that aggregated functionality knowledge was also significantly correlated with the richness of native species, reflecting the importance of this agrobiodiversity component in regional culture. Socio-cultural functionality knowledge scores were however not correlated with species richness. As observed above, these scores correlated positively with home garden area (rs = 0.71, P = 0.001), as was also the case for economic (rs = 0.522, P = 0.022) and total functionality knowledge scores (rs = 0.507, P = 0.027). A larger available area logically allows addressing more functions, often in specific home garden sections [72]. Several socio-cultural functions, such as receiving family, friends and neighbours (f4), as well as economic functions, require specific areas. The wide array of HG functions in the ecological, economic and socio-cultural dimensions that people distinguish reflects the importance of HGs in local livelihoods and the daily practices aimed at maintaining and adapting their socio-ecological systems. As such, multifunctionality knowledge provides concrete opportunities for agrobiodiversity conservation in the local and regional spheres. Multifunctionality knowledge-agrobiodiversity relations are a point of departure for working towards integral strategies of agrobiodiversity conservation and improved livelihoods [8,10,31,35,41,73]. Families in the study area are aware of this and therefore actively promote biocultural attachment among the new generations, dedicating time to conserving, learning and teaching about HG functionality and establishing alliances with NGOs, academics, consumers and institutions to do so.
Examples in Tabasco include initiatives that involve new generations in agricultural and conservation activities, such as the "School of peasant life" established in one of the study communities, as well as co-organized agroecology research and workshops. Sharing and advancing multifunctionality knowledge regarding HGs, as well as other socio-ecological systems, and their agrobiodiversity is therefore a starting point and a central element for improving and adapting local livelihoods [24,36,73].

Conclusions

The rich agrobiodiversity of home gardens cherished by rural families in Tabasco is positively correlated with HG owners' broad multifunctionality knowledge in the ecological and economic dimensions. Although socio-cultural functionality knowledge is not correlated with agrobiodiversity, its high scores underline the strong and general interest of local people in these aspects. The contemporary knowledge with regard to HG multifunctionality is a strong asset for the conservation of agrobiodiversity [43], as it is an integral part of local livelihoods. Its analysis should therefore be a starting point for policies and actions in this respect.

Acknowledgements

We are grateful to the families in Comalcalco for allowing us to work with them and for sharing their knowledge. Special thanks to José Jesús Angulo Córdova, Pablo González Arguedas and Fermín Espíndola González for their invaluable help in the fieldwork. The local NGO "Horizontes Creativos" shared their perspective on grassroots organization and the goodwill they earned in years of consistent activity. The Mexican National Council of Science and Technology (CONACYT) provided the M.Sc. grant of the first author and financed the project "Adaptability to climate change in rural mosaics" that funded fieldwork and workshops.

Data availability

The datasets supporting the conclusions of this article are included as additional files. The names of the species found in the sample of home gardens are provided in the spreadsheet named "species list.xlsx" [62]. It contains botanical family and species names, as well as common names, information on biogeographical distribution, growth habit and conservation status. The file "agrobiodiversity data.xlsx" [63] provides the detailed data on species and their abundance in the sample of home gardens. The file "functionality knowledge data.xlsx" [64] provides the data on functionality knowledge scores for all home gardens. Other data are available on request.

Permissions

Samples of plants unknown to the field team were collected for comparison with specimens in the herbarium of the Universidad Juárez Autónoma de Tabasco with the permission of the Mexican Secretaría de Medio Ambiente y Recursos Naturales, Subsecretaría de Gestión para la Protección Ambiental, Dirección General de Gestión Forestal y de Suelos, through document number SGPA/DGGFS/712/1367/17.
Multidrug Resistant Acinetobacter Isolates Release Resistance Determinants Through Contact-Dependent Killing and Bacteriophage Lysis

Antimicrobial resistance is an ancient bacterial defense mechanism that has rapidly spread due to the frequent use of antibiotics for disease treatment and livestock growth promotion. We are becoming increasingly aware that pathogens, such as members of the genus Acinetobacter, are precipitously evolving drug resistances through multiple mechanisms, including the acquisition of antibiotic resistance genes. In this study, we isolated three multidrug resistant Acinetobacter species from birds on a free-range farm. Acinetobacter radioresistens, Acinetobacter lwoffii, and Acinetobacter johnsonii were isolated from hens, turkeys and ducks and were resistant to 14 clinically relevant antibiotics, including several listed by the World Health Organization as essential medicines. Co-culturing any of the three Acinetobacter species with Acinetobacter baumannii resulted in contact-dependent release of intact resistance determinants. We also isolated several lytic bacteriophages and selected two of these phages for inclusion in this study based on differences in plaquing characteristics, nucleic acid content and viral morphology. Both phages released host DNA, including antibiotic resistance genes, during cell lysis, and we demonstrated that these resistance determinants were transferable to a naïve strain of Escherichia coli. This study demonstrates that contact-dependent competition between bacterial species can readily contribute to DNA release into the environment, including antibiotic resistance determinants. We also highlight that the constant lysis and turnover of bacterial populations during the natural lifecycle of a lytic bacteriophage is an underappreciated mechanism for the liberation of DNA and subsequent genetic exchange.

INTRODUCTION

In recent years, multidrug resistant (MDR) bacteria have become a serious concern for healthcare providers worldwide (Poirel et al., 2011; Bengtsson-Palme et al., 2017). Preeminent among these bacteria is Acinetobacter baumannii, a common nosocomial MDR pathogen resistant to desiccation and readily able to acquire MDR genes in both hospital and environmental settings (Poirel et al., 2008; Al Atrouni et al., 2016). In 2013, the CDC ranked carbapenem-resistant A. baumannii as the most concerning MDR pathogen requiring new antibiotics (Sievert et al., 2013), and the WHO published a similar report in 2017 (Lawe-Davies and Bennett, 2017). At the origin of this problem is the propensity of A. baumannii to become resistant to antibiotic treatment through acquisition of resistance genes (Wilharm et al., 2013). Although the extreme resistances of clinical isolates of A. baumannii are well documented (Valentine et al., 2008; Yang et al., 2015; Fernando et al., 2016), the prevalence of MDR isolates in agricultural settings has only recently begun to be explored (Maboni et al., 2020). Furthermore, there has been limited characterization of A. baumannii interactions with reservoir species, such as Acinetobacter radioresistens, which can serve as a source of carbapenem resistance in hospital settings (Poirel et al., 2008). Thus, identifying MDR gene reservoirs and modes of gene release to naïve pathogens will be critical to understanding and combating the spread of antibiotic resistance. In addition to encoding a number of antibiotic resistance determinants, A. baumannii is well equipped to outcompete neighboring bacteria through several mechanisms.
The genus Acinetobacter has been confirmed to carry type 1 (T1SS), type 2 (T2SS), type 4 (T4SS), and type 6 (T6SS) secretion systems, as well as >40 contact-dependent inhibition (CDI) systems, all of which are virulence factors that give Acinetobacter species a competitive advantage (Weber et al., 2013; Hayes et al., 2014; Souza et al., 2015; Harding et al., 2017; Kinsella et al., 2017; De Gregorio et al., 2019; Sgro et al., 2019). T6SS and CDI systems are two mechanisms of direct bacterial competition that Acinetobacter species encode (Harding et al., 2018). Briefly, the T6SS is a complex multi-component system anchored in the inner cell membrane that builds a spike, which, upon contact, injects toxins into a prey cell (Weber et al., 2013). CDI is a two-component system anchored in the outer membrane, which consists of a large transporter (CdiB) and a toxin (CdiA) that is released during contact with the prey cell (De Gregorio et al., 2018; Roussin et al., 2019). These virulence factors greatly increase the fitness of the acinetobacters that express them and enable the bacteria to thrive in diverse and competitive environments. In recent years, there has been an effort to isolate more bacteriophages that use ESKAPE pathogens, such as A. baumannii, as a host. Bacteriophages have been, and still are, an under-appreciated distributor of antimicrobial resistance elements. These microbial viruses, which have been reported to infect ∼10²⁴ bacterial cells per second globally (Deresinski, 2009), are the most abundant biological entity on earth (Suttle, 2005; Paez-Espino et al., 2016). Host gene transduction, including of MDR genes, has been described for some bacteriophages and phage-derived elements (Penades et al., 2015). Although the mechanisms are poorly understood, it has generally been accepted that upon infection, most non-transducing lytic bacteriophages will degrade host DNA to inhibit cellular activities or to repurpose nucleotides for viral use (Warren and Bose, 1968). However, there have been recent observations that some lytic phages release intact plasmid DNA (Keen et al., 2017). Keen et al. (2017) have designated these phages as "superspreaders" and demonstrated that drug resistance genes released by these phages can be acquired and expressed by unrelated bacteria found in the same environment. Although over 100 Acinetobacter-infecting bacteriophages have been isolated, this phenomenon has not been reported for this genus (Turner et al., 2017). A greater understanding of Acinetobacter phage impact on MDR dissemination is required before designing phage therapies to treat antibiotic resistant infections, except in the most desperate cases. Understanding how Acinetobacter species are exposed to exogenous genetic material is crucial to further understanding MDR spread. In this report, we describe MDR Acinetobacter species isolated from a free-range farm (Rothrock et al., 2016). Based on their susceptibility to phage predation and to killing when co-cultured with A. baumannii, we demonstrate that antibiotic resistance genes can be released by contact- and phage-lysed cells. In addition to identifying a potential reservoir of MDR-carrying Acinetobacter species in food animals, these findings highlight two underappreciated modes of DNA release that can partially account for the unchecked spread of antibiotic resistance within this genus.

Strain Isolation and Growth

Strains were isolated from fecal samples collected from layer hens, turkeys, and ducks on a free-range pastured poultry farm (Rothrock et al., 2016).
Fecal samples were diluted 2 mL/g in sterile phosphate-buffered saline and plated on Brilliance Campycount agar (Remel) under microaerobic conditions (10% CO₂, 5% O₂, 85% N₂) at 37 °C for 18 h. Isolated red colonies were re-cultured on Campy-Line selective agar (Line, 2001).

Strain Identification

Cell suspensions from agar plates were adjusted to an OD₆₀₀ = 0.5. Suspensions were floated on formvar-coated copper grids for 1 h and 5% paraformaldehyde (Electron Microscopy Sciences, Hatfield, PA, United States) was added to fix the samples before imaging. Transmission electron microscopy (TEM) was performed using a JEOL JEM1011 microscope (JEOL Inc., Peabody, MA, United States). MALDI-TOF VITEK™ MS (BioMérieux, Durham, NC, United States) and, when necessary, full 16S rRNA and rpoB sequencing (La Scola et al., 2006) confirmed the isolate identities with 99.9% confidence.

Acinetobacter Growth Conditions

After positive Acinetobacter identification, the three Acinetobacter species we isolated (A. radioresistens, A. johnsonii, and A. lwoffii), along with the laboratory strains of A. baumannii, were incubated aerobically using Luria-Bertani (LB) medium (Becton, Dickinson and Company) at 30 °C, with agitation at 200 RPM for liquid cultures.

Antibiotic Minimum Inhibitory Concentration (MIC) Testing

Antibiotic MIC values for all Acinetobacter isolates were assessed using the Vitek 2XL (BioMérieux) and Sensititre (Trek Diagnostic Systems, Cleveland, OH, United States) platforms. The cards/plates used were GN65, GN69 (BioMérieux) and TrekCOMPGN1F, TrekGN4GF (Trek Diagnostic Systems, West Sussex, United Kingdom), following the instructions of the manufacturers and the CLSI guidelines (CLSI M07-A10). Two different MIC systems were used since some isolates did not grow in the Vitek2 card system. Antibiotic break points for both MIC methods were determined by CLSI M100 or, when no CLSI break points were published, by EUCAST (http://www.eucast.org/clinical_breakpoints/). Isolates were considered MDR if they were resistant to ≥3 antibiotic classes (Manchanda et al., 2010).

Antibiotic Gradient Diffusion Testing

Imipenem resistance for A. radioresistens was also assessed using the imipenem ETEST® strip (BioMérieux). Briefly, a culture of A. radioresistens LH6 was adjusted to an OD₆₀₀ = 0.7 in Mueller-Hinton (MH) broth and 150 µL was spread onto MH agar plates. The E-test strip was placed on top of the plate and incubated at 37 °C overnight prior to imaging on the Chemidoc XRS+ imager (Bio-Rad).

Bacterial Killing Assay

The contact-dependent killing assay was adapted from a previously described bacterial competition assay (Weber et al., 2015). Briefly, strains were grown in liquid culture and adjusted to OD₆₀₀ = 1.0. Strains were washed once in LB to remove any antibiotic, mixed in a 1:1 ratio, and 5 µL was spotted onto LB agar. After 4 h, the agar containing the spot was excised and resuspended in 1 mL of LB broth. Ten-fold serial dilutions were made and spotted onto LB plates supplemented with 50 µg/mL kanamycin (GoldBio) to select for the prey strain and incubated overnight at 30 °C to enumerate the surviving strain. When assessing contact-dependent inhibition, a 0.45 µm nitrocellulose membrane (Bio-Rad) separated the strains: the prey strain was spotted on LB agar, the membrane was placed on the prey strain, and the predator strain was spotted on top of the nitrocellulose.
This allows the possible passage of small molecules between the predator and prey during the 4-h incubation, but prevents migration of the predator (results not shown). After the incubation time, the membrane and predator strain were removed, and the agar spot was excised and resuspended in PBS for extracellular DNA isolation.

Extracellular DNA Isolation

Following the co-culture experiments described above, cells were resuspended in phosphate-buffered saline. For all extracellular DNA isolations, cell suspensions and released DNA were filtered through a 0.22 µm membrane. The nucleic acids in the filtrate were isolated via phenol/chloroform (Thermo Fisher Scientific) extraction, precipitated with isopropanol and sodium acetate (Thermo Fisher Scientific), and resuspended in nuclease-free water. Care was taken to ensure that an equal fraction of the aqueous phase was taken from each sample so that the isolated DNA quantities would be proportional to each other.

PCR Detection of the Kanamycin Resistance Gene

Isolated extracellular DNA was PCR amplified using the kanamycin resistance gene primers KanF (CGCAGAAGGCAATGTCATAC) and KanR (CACTTTGAACGGCATGATGG). Taq polymerase (Thermo Fisher Scientific) was used according to the manufacturer's instructions, with a Tm of 55 °C.

Bacteriophage Isolation and Propagation

Bacteriophage isolation was performed as described (Gencay et al., 2017). All Acinetobacter strains were tested and only LH6 was capable of being infected by all phages. Thus, LH6 was used as the propagating strain; it was grown overnight at 30 °C with shaking at 200 RPM, the culture was then adjusted to OD₆₀₀ = 0.4 and infected with bacteriophages at a multiplicity of infection (MOI) of 0.0001. The infected culture was incubated at 30 °C with shaking at 200 RPM overnight. Afterward, the culture was centrifuged at 4255 × g for 15 min, the resulting supernatant was passed through a 0.22 µm filter, and the phage-containing filtrate was collected.

Bacteriophage DNA Release Assay

The gfp-plasmid containing strain LH6g (LH6 with gfp) was grown in LB liquid culture at 30 °C with shaking at 200 RPM overnight. The culture was adjusted to OD₆₀₀ = 1.0, infected with phages at MOI = 0.001, and incubated at 30 °C with shaking at 200 RPM for 18 h. DNA isolation was performed as described above.

Transformation of Released DNA Into Chemically Competent Escherichia coli Cells

Transformation of chemically competent TOP10 E. coli cells (Invitrogen) was done according to the manufacturer's instructions. Briefly, 10 µL of isolated released DNA was incubated with 40 µL of competent cells on ice for 15 min. The cells were then heat shocked at 42 °C for 45 s and placed back on ice for 2 min. Cells were mixed with 400 µL of LB medium and incubated at 37 °C, 200 RPM for 45 min. This was followed by plating the cells on LB plates supplemented with 50 µg/mL kanamycin and incubation overnight at 37 °C before counting the transformants.

Isolation of MDR Acinetobacter Species From Laying Hen, Duck, and Turkey Feces

While isolating campylobacters for our routine studies, colonies typically indicative of Campylobacter jejuni or Campylobacter coli from Brilliance™ agar, followed by sub-culturing on Campy-Line agar, were examined by TEM. These isolates did not show the characteristic curved rod morphology associated with C. jejuni and C. coli (results not shown).
To obtain species identification with 99.9% confidence for our isolates, we compared each isolate by MALDI-TOF mass spectrometry, entire 16S rRNA sequencing, and rpoB sequencing when necessary (see Supplementary Sequencing Data). We also examined the antibiotic MIC values for these strains, since we predicted these isolates would possess multiple resistances, allowing them to be cultured on C. jejuni/C. coli selective plates. Table 1 summarizes the species identification and MIC results. Three Acinetobacter species were isolated: A. radioresistens, A. johnsonii, and A. lwoffii. In total, the isolates were resistant to 14 antibiotics, seven of which are on the WHO list of essential medicines (ampicillin, cefazolin, ceftazidime, chloramphenicol, nitrofurantoin, rifampicin, and tetracycline) (WHO, 2017). Additionally, 8/10 isolates were MDR (Manchanda et al., 2010) and were obtained from all bird hosts sampled (laying hen, turkey, and duck). Nine MICs were found to be exclusive to one bird type (laying hen: ceftriaxone, chloramphenicol, rifampicin; turkey: ertapenem, ceftazidime, aztreonam, tetracycline, cefalexin and ticarcillin/clavulanic acid), indicated in gray in Table 1. These groupings suggested that drug resistances could be mobile within the respective laying hen and turkey populations, leading us to investigate drug resistance gene mobility further.

Contact-Dependent Cell Killing

We hypothesized that contact-dependent killing mechanisms present in Acinetobacter species could assist in obtaining DNA from neighboring organisms by accelerating the death and lysis of those cells. Therefore, we performed bacterial competition assays to determine whether the isolated Acinetobacter species showed a reduction in cell numbers after incubation with A. baumannii. The clinically relevant pathogen A. baumannii ATCC 19606, which has a constitutively active T6SS, was used as the predator (Weber et al., 2015). After introducing the gfp-containing plasmid, pBAV1K-T5-gfp, into our Acinetobacter prey strains (A. johnsonii LH2, A. radioresistens LH6, and A. lwoffii D16), competition assays were performed to observe the susceptibility of these strains to A. baumannii 19606. Killing was observed for all strains, with LH2g (g denotes the gfp-expressing variant) and D16g showing the most dramatic effect (Figure 1A). Incubating A. baumannii 19606 with D16g reduced the prey strain to the limit of detection, while LH2g and LH6g were reduced by ∼100-fold. In an effort to determine whether the killing occurred through general CDI or the T6SS, we obtained cdi1 and cdi2 deletion mutants in strain 19606 and an hcp (hemolysin co-regulated protein) mutant in A. baumannii ATCC 17978 (Weber et al., 2015; Harding et al., 2017) and compared their killing relative to the respective WT strains. The hcp mutation renders the strain unable to form the extracellular "needle" portion of the T6SS complex, causing the predator to be unable to interact with prey in a T6SS-dependent manner (Ho et al., 2014). We used both strains because we could not generate an hcp mutant in strain 19606, and bioinformatic analysis did not identify CDI systems in strain 17978. We observed minimal effect of the CDI knockouts on LH2g and LH6g, compared to WT, while the deletion of cdi1 resulted in greater survival of prey strain D16g, compared to WT and the cdi2 deletion mutant (Figure 1A). When testing the hcp mutant, there was a contribution to killing by the T6SS with LH6g and D16g, and minimal killing of LH2g (Figure 1B).
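Survival in these competition assays is quantified from the ten-fold dilution spots described in the methods. The following is a minimal sketch of that arithmetic with invented colony counts; it is not the authors' analysis code, and the 5 µL spot volume is carried over from the protocol above as an assumption.

```python
import math

def cfu_per_ml(colonies, dilution_exponent, spot_volume_ml=0.005):
    """CFU/mL from a countable spot: colonies / (volume plated * dilution factor).

    dilution_exponent: n for a 10^-n dilution of the resuspended agar spot.
    spot_volume_ml: volume spotted per dilution (5 uL here, an assumption).
    """
    return colonies / (spot_volume_ml * 10 ** -dilution_exponent)

# Hypothetical counts: prey alone vs. prey co-cultured with the predator
prey_alone = cfu_per_ml(colonies=42, dilution_exponent=6)       # counted at 10^-6
prey_cocultured = cfu_per_ml(colonies=35, dilution_exponent=4)  # counted at 10^-4

fold_reduction = prey_alone / prey_cocultured
print(f"prey alone: {prey_alone:.2e} CFU/mL")
print(f"co-cultured: {prey_cocultured:.2e} CFU/mL")
print(f"~{fold_reduction:.0f}-fold ({math.log10(fold_reduction):.1f} log10) reduction")
```

With these placeholder counts the sketch reports a ~120-fold (about 2 log10) reduction, the same order as the ∼100-fold reductions described for LH2g and LH6g.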
From these results, we concluded that LH2 has minimal susceptibility to CDI and T6SS, LH6 has minimal susceptibility to CDI but is susceptible to T6SS, and D16 is susceptible to both CDI and T6SS.

Detection of DNA Released During Contact-Dependent Killing of Acinetobacter Isolates

We next determined whether, after co-incubation with A. baumannii, the prey strains release intact DNA into the environment through contact-dependent killing (schematic in Figure 2A). The A. baumannii 19606 competition assays were therefore repeated with and without a nitrocellulose membrane between the two strains (to inhibit cell contact while facilitating the exchange of other molecules) and the DNA was isolated after co-culture (Figure 2B). To test whether the released DNA contains intact antibiotic resistance genes, the kanamycin resistance (KmR) gene on the gfp-plasmid was probed by PCR. Figure 2C and Supplementary Figure S1 show that the KmR gene was present in all recovered DNA samples from co-cultures, but not in cultures where contact was inhibited by the membrane or in control incubations. The relative intensities of the PCR products in each lane were measured by densitometry, and significantly more plasmid was released by LH2g and D16g during contact than when inhibited by a membrane (Figure 2D). This demonstrates that contact is necessary for cell killing and that an intact KmR gene is released from these cells.

Detection and Uptake of DNA Released During Bacteriophage-Mediated Cell Killing

During initial strain isolation, we also isolated several bacteriophages capable of propagating on A. radioresistens strain LH6, with two differing phenotypes: a typical "pinprick" and a "halo" plaque morphology. One representative phage from each phenotypic group (CAP1 and CAP3, respectively) was selected to determine whether intact KmR DNA is released after phage infection and cell lysis. The CAP1 phage is a DNA phage likely belonging to the Podoviridae and the CAP3 phage is an RNA phage belonging to the Cystoviridae. The characterization of the isolated phages will be described elsewhere (Crippen et al., in preparation). To test the ability of phages CAP1 and CAP3 to release host DNA that contains MDR genes, we performed a phage-mediated DNA release assay. The gfp-expressing A. radioresistens LH6g was infected with either CAP1 or CAP3 overnight until a clear culture was obtained, and the released host DNA was recovered along with encapsidated phage nucleic acids (Figures 3A,B). Bands at approximately 1 and 1.7 kb were determined to be cell RNA (Supplementary Figure S2). PCR amplification with KmR gene primers indicated that both CAP1 and CAP3 released more KmR DNA than was released during normal cell growth (Figure 3C and Supplementary Figure S3). This shows that these phages have the potential to accelerate the spread of antibiotic resistance genes through the release of intact resistance genes. We further wished to test the transferability of the released DNA by transforming chemically competent E. coli cells with equal amounts of released DNA and plating for gfp-expressing colonies on Km selective media (Figure 3D). We found that the released DNA did contain intact plasmids, which were transformed into E. coli cells. Together, these results show that phage-released DNA can be a source of resistance determinants for competent cells.

[Figure caption residue: bands in Supplementary Figure S1 were measured by densitometry using Image Lab™; the band produced by amplification of pBAV1K-T5-gfp was used as the standard (Rel. Intensity = 1.0). Averages are shown, error bars represent the standard error of the mean, and Student's paired t-tests compared each co-incubation to its corresponding membrane-separated control, with p-values indicated for the LH2g, LH6g, and D16g datasets.]
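The relative intensities reported for Figure 2D are simple ratios of band densitometry values to a standard band. A minimal sketch of that normalization, assuming raw intensity values have been exported from the imaging software (all numbers below are invented):

```python
# Hypothetical raw densitometry values exported per lane (arbitrary units)
raw = {
    "pBAV1K-T5-gfp standard": 15200,
    "LH2g contact": 13900,
    "LH2g membrane": 2100,
    "D16g contact": 14800,
    "D16g membrane": 1800,
}

standard = raw["pBAV1K-T5-gfp standard"]  # defined as Rel. Intensity = 1.0

for lane, value in raw.items():
    # Each lane is expressed relative to the plasmid amplification standard
    print(f"{lane}: rel. intensity = {value / standard:.2f}")
```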
DISCUSSION

In this study, we unexpectedly isolated three Acinetobacter species on C. jejuni/C. coli selective media from laying hen, turkey and duck fecal samples obtained from a free-range pasture-fed farm. Species such as C. jejuni are ubiquitous on poultry farms and have been shown to develop resistances even in the absence of a selective pressure (Luo et al., 2005; Luangtongkum et al., 2009). Previously, a high incidence of MDR E. coli and Listeria spp. was also detected in bird fecal samples from this farm; however, Acinetobacter species were not examined in that study (Rothrock et al., 2016). The species that we isolated were resistant to 14/31 clinically relevant antibiotics that were tested. Each of these Acinetobacter species has been previously reported to carry carbapenemase genes on blaNDM- or blaOXA-containing plasmids, the latter predicted to have actually originated from A. radioresistens (Poirel et al., 2008; Zong and Zhang, 2013; Yang et al., 2015). Given that those plasmids are found in divergent Acinetobacter isolates, it is unlikely that the genes are subject to exclusive vertical transmission. In this study, 8/10 of our reservoir isolates were MDR, with one A. lwoffii MDR isolate showing an additional resistance to the last-resort carbapenem, ertapenem, with an MIC of 8.0 mg/mL. Given the identification of MDR bacteria on an antibiotic-free farm, it is important to understand the dynamics of gene exchange between naïve pathogens and non-pathogenic MDR reservoir species in the absence of selective antibiotic pressure. Comparison of our drug resistance profiles to the previous study indicates a potential exchange of resistances across bacterial barriers. Most notably, tetracycline resistance was found in all the genera tested previously, including Campylobacter, Salmonella, Listeria, and Escherichia, and in both of our turkey-derived A. lwoffii isolates. Resistance to tetracycline is most commonly conferred by the activity of a drug efflux pump, which is typically found on mobile elements (Espinal et al., 2011). It will be interesting to sequence the genomes of the A. lwoffii and A. johnsonii isolates to compare the resistance profiles determined in this study with the genetic composition and extrachromosomal elements that these strains may possess. Additionally, resistance to β-lactam derivatives was observed in the previous study, while resistance to cephalosporins, which was widespread in our study, was not (Rothrock et al., 2016). These resistance phenotypes can be conferred by the activity of AmpC, a metallo-β-lactamase (MBL), or an oxacillinase, all of which have been reported to be associated with mobile genetic elements (Pfeifer et al., 2010; Evans and Amyes, 2014). We previously sequenced strain A. radioresistens LH6 described in this study. LH6 lacks plasmids, but encodes several putative MBLs, which can account for its resistance profile (Crippen et al., 2018). Interestingly, LH6 also encodes blaOXA-23 with 100% amino acid homology to the A. baumannii ASM74664v1 homolog, but no broad carbapenem, cephalosporin or β-lactamase resistance phenotype was observed (Table 1 and Supplementary Figure S4), which is normally associated with the presence of this gene (Pfeifer et al., 2010).
This could be due to the lack of IMP-1, OXA-58, and ISAcra1 in the LH6 genome, any of which are required for the trademark carbapenemase activity associated with the OXA-23 carbapenemase (Poirel et al., 2008; Higgins et al., 2013). Intermediate levels of nitrofurantoin resistance were found in Listeria spp. isolated in the Rothrock et al. (2016) study, while 9/10 of our isolates were resistant to nitrofurantoin, explained by the intrinsic nitrofurantoin resistance inherent to Acinetobacter spp. (Giske, 2015). This could be mediated by point mutations in nsfA or nsfB, which metabolize nitrofurantoin into the reactive intermediates that interfere with ribosomal subunits, or by oqxAB in some Acinetobacter spp., but the sequenced LH6 strain does not possess these efflux genes (Sekyere, 2018; Gardiner et al., 2019). Previously, a high incidence of fluoroquinolone resistance was also found among the Listeria spp., while aminoglycoside resistance was found in E. coli and Salmonella spp. (Rothrock et al., 2016). All of our isolates were susceptible to quinolone and aminoglycoside derivatives, indicating a barrier to genetic mobility that was potentially not present for tetracycline or β-lactam resistance elements. To further explore this phenomenon, we used a gfp-labeled resistance marker to demonstrate that plasmid DNA can be released both through contact-dependent killing by A. baumannii and through bacteriophage-mediated lysis. In our study, we performed bacterial competition assays with two strains of A. baumannii and tested CDI and T6SS deletion mutants. Co-incubation of the isolated strains with A. baumannii strain 19606 and its isogenic cdi1 and cdi2 mutants resulted in extensive killing of A. lwoffii, and less of A. radioresistens and A. johnsonii. We observed that deletion of cdi1 (Harding et al., 2017) did not increase survival of A. johnsonii and A. radioresistens, but did result in less killing of A. lwoffii. In this study, the cdi2 mutation did not increase survival for any of the isolated strains. These results indicate that CDI systems have a minimal impact on Acinetobacter interspecies competition, except in the case of CDI1 and A. lwoffii strain D16. When using the A. baumannii 17978 strain and its isogenic hcp mutant, we observed minimal killing of A. johnsonii, while A. radioresistens and A. lwoffii were extensively killed, and the T6SS could only account for part of this reduction. This killing could be due to an undescribed CDI system in that strain that does not resemble known CDI systems, or to other combat mechanisms expressed by the 17978 strain (Le et al., 2020). These recent findings are particularly relevant to consider in environments that are under constant selective pressure, such as the poultry gut (Singer and Hofacre, 2006). Rapid expansion of MDR has also been described in low-resource urban areas and rural farming areas where waste management is underdeveloped (Pehrsson et al., 2016). These observations make it important to understand the transfer of resistance determinants, prompting our subsequent experiments tracking an exogenously added resistance marker. We confirmed that release of the KmR gene was proportional to the amount of cell killing observed, consistent with observing the greatest reduction in A. lwoffii growth concomitant with the highest levels of KmR gene detection/release. Therefore, bacterial prey susceptibility to attack will likely impact the availability of new MDR genes for A. baumannii to acquire.
Due to these combinations of selective pressures and the diversity of bacterial inhabitants, reservoir species accumulate MDR genes that can be released by related pathogens such as A. baumannii, which has also been isolated from livestock in earlier studies (Fernando et al., 2016), and then incorporated into their own genomes. Bacteriophages have also been described as vehicles for MDR spread through the transduction of genes or mobile elements that contain MDR genes (Brown-Jaque et al., 2015). If we consider the quantity of DNA that can be released when approximately half of the earth's bacterial population is killed by bacteriophages every 2 days (Deresinski, 2009), then the exchange of MDR genes across species through phage release of DNA becomes not only likely, but inevitable. Evidence for this gene exchange was first demonstrated in the case of the E. coli "superspreader" bacteriophages (Keen et al., 2017). In addition to the findings of Keen et al. (2017), we have demonstrated the ability of the novel A. radioresistens bacteriophages CAP1 and CAP3 to release an intact KmR gene that could then potentially be incorporated by certain naturally competent bacteria that share the same environment, including A. baumannii. We further demonstrated this phenomenon by transforming the recovered DNA from phage-lysed cells into chemically competent E. coli. It is highly unlikely that these two phage isolates, from different taxonomic families, are unique in their ability to release intact MDR genes. Thus, great consideration must be given to all lytic phages when designing phage therapy treatments, because of the potentially accelerated risk of horizontal gene transfer. These findings provide evidence for alternative routes of rapid MDR gene spread in bacterial species, especially in food animals. Additionally, these types of studies should be replicated for phage-derived elements such as endolysins, which are also being considered as potential therapeutics along with intact phages (Fischetti et al., 2006). Because phage transduction can also be a mechanism for AMR determinant spread, we investigated the genomes of the isolated phages and their propagating strain, A. radioresistens LH6. Interestingly, 5/7 of the isolated phages were segmented RNA phages, which are known to undergo recombination at high rates (Simon-Loriere and Holmes, 2011). In addition, genomic sequencing of A. radioresistens LH6 (Crippen et al., 2018) identified two possible chromosomal prophages, including one Mu-like, that may also be involved in AMR gene transduction (Braid et al., 2004). One of these prophages was indeed isolated after induction with mitomycin C and is being described in a separate manuscript (Crippen et al., in preparation). The presence of this active prophage and the prevalence of prophages within the genus Acinetobacter indicate that prophages are also likely to participate in the dissemination of resistance determinants (Touchon et al., 2014; Costa et al., 2018). Microbial antibiotic resistance is an ancient defense mechanism that is developing into a major crisis for healthcare providers due to our great dependence on antibiotics for most medical procedures. The prevalence of and mechanisms associated with antibiotic resistance are comparatively well characterized, but the mechanisms of MDR gene transfer in the environment are less understood.
It is important to continue studying antibiotic resistance gene transfer so that innovative solutions to the current antibiotic resistance crisis can be found (Bush et al., 2011; Culp et al., 2020).

DATA AVAILABILITY STATEMENT

All datasets generated for this study are included in the article/Supplementary Material.

AUTHOR CONTRIBUTIONS

CC performed all the experiments, with the exception of the strain identification and resistance profiles, and drafted the manuscript. MR provided access to the farm and the fecal samples from which the bacterial strains and bacteriophages were isolated. SS facilitated the strain identification and determination of resistance profiles. CS coordinated all the experiments. All authors edited the manuscript.
Effects of Changing Rainfall and Soil Temperature on Population Density of Pratylenchus loosi in Tea Lands at Different Elevations

The climatic, elevational and edaphic factors are the major abiotic determinants of the survival and reproductive behavior of plant pathogenic nematodes and are thus responsible for their occurrence, population levels and severity of symptom development. The present study attempted to determine the relationship between rainfall, soil temperature and soil moisture and the soil and root population densities of Pratylenchus loosi, the key nematode pest of tea, in six different elevation regimes in Sri Lanka. Rainfall, soil temperature and soil moisture of six locations were recorded by standard methods over 18 months. P. loosi populations in soil and root samples obtained from the same locations were also monitored using standard methods. The fluctuating nematode population density was correlated with rainfall, soil temperature and soil moisture. There was a positive correlation of P. loosi population density with mean rainfall and a negative correlation with soil temperature and soil moisture content in the majority of the tested locations. Results also revealed an increase in mean soil temperature above the optimal range for development of P. loosi and a remarkable change in the soil temperature range of 18-24 °C. However, there were exceptions in some locations, indicating that factors other than temperature have influenced nematode populations. Nevertheless, the presence of P. loosi at soil temperature ranges beyond the accepted range was evident in certain locations, causing disease expression and damage to tea. Therefore, further investigations are warranted on the presence of new biotypes and the influence of other factors on the development of P. loosi populations, with a view to developing specific management strategies.

INTRODUCTION

Tea, Camellia sinensis (L.) O. Kuntze (Theaceae), is the major plantation crop grown in Sri Lanka. With its wide adaptability, tea is grown in a range of climates and soils in different agro-ecological regions (AERs). Productivity of tea depends on various environmental and biological factors irrespective of the cultivar grown. The crop and soil management practices adopted, changing weather conditions and pest and disease incidences also determine the overall crop productivity. Plant parasitic nematodes are considered one of the key pests limiting the establishment, growth and productivity of tea. Pratylenchus loosi is the most predominant nematode species causing economic damage to tea cultivation in Sri Lanka as well as in many other tea growing countries such as Japan, Iran, Bangladesh, China, and Korea. It is a perennial pest attacking both young and old plants and is thus a problem in tea nurseries, new clearings as well as in mature tea fields (Gnanapragasam and Mohotti, 2008). In Sri Lanka, the damage of P. loosi to the tea crop has been estimated to be in the range of 4% to 40% (Gnanapragasam and Mohotti, 2005). Nematodes are among the most sensitive animals in aquatic and soil ecosystems. Changes in soil temperature and moisture, as affected by rainfall, sunshine hours and the number of wet and dry days under climate change, influence the biology, morphology, locomotion, multiplication, establishment and survival of plant pathogenic nematode species (Liliane et al., 1999). Studies have demonstrated that the geographical distribution range of plant and animal parasitic nematodes may expand, and their spread to newer areas may occur, with global warming (Somashekar et al., 2010).
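The core analysis of this study is the correlation of monthly weather records with nematode counts per location. A minimal sketch of how such a correlation might be computed in Python follows (the study itself used SAS 9.1; the monthly values below are invented placeholders, not the recorded data):

```python
import pandas as pd

# Hypothetical monthly records for one location over part of the monitoring period
df = pd.DataFrame({
    "rainfall_mm":    [210, 95, 30, 180, 260, 310, 150, 60, 40, 120, 230, 280],
    "soil_temp_c":    [19.5, 20.1, 22.8, 21.0, 19.2, 18.6, 20.4, 22.1, 23.0, 21.5, 19.8, 19.0],
    "soil_moist_pct": [28, 22, 15, 25, 31, 33, 24, 18, 14, 21, 29, 32],
    "p_loosi_per_100g_soil": [340, 280, 120, 300, 420, 460, 260, 150, 100, 240, 390, 440],
})

# Pairwise correlation of each weather factor with nematode density
for factor in ["rainfall_mm", "soil_temp_c", "soil_moist_pct"]:
    r = df[factor].corr(df["p_loosi_per_100g_soil"])  # Pearson by default
    print(f"{factor} vs P. loosi density: r = {r:+.2f}")
```

In the study, the same analysis would be repeated for root populations and for each of the six locations (PL1-PL6).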
The severity of damage by nematodes to tea depends on the interaction of a number of factors, such as the prevailing climatic conditions, type of soil, cultural practices, and the age and vigor of the plant (Gnanapragasam, 1994). The distribution of P. loosi is determined mainly by soil temperature and soil moisture (Gnanapragasam and Mohotti, 2005). Sivapalan and Gnanapragasam (1975) reported that the highest populations and obvious pathogenicity symptoms of P. loosi had been encountered at altitudes with soil temperatures of 18-24 °C. Sivapalan (1972) also observed a lower rate of population build-up of P. loosi, and consequently reduced damage to tea, at temperatures above and below this range. Further, a marked periodic fluctuation of P. loosi population levels has been observed in different months of the year, where the variation was correlated with rainfall pattern and soil temperature (Sivapalan, 1972). In addition, Mohotti (2009) reported a significant shift in nematode populations and their distribution patterns with the probable elevation of soil temperatures in the respective tea-growing regions, which was supported by evidence from Wijeratne (2013) of clear changes in weather parameters in all tea-growing regions. Although P. loosi has been designated as an 'Up-country species of nematode', in the recent past it has been reported in the mid and low elevations of Sri Lanka. Nevertheless, P. loosi behaves as a species complex, as reported in Sri Lanka, Japan and Iran (Mizukubo, 1998; Mohotti, 1998), the reasons for which (weather, host plants, cultural practices, etc.) have not been clearly elucidated. Although climate change impacts have been implicated in alterations of the morphological, morphometric and molecular expressions of animals and plants, studies on nematodes are scanty. Hahn et al. (1994) reported R. similis races/pathotypes in Sri Lanka in relation to host status and localities. However, no such descriptive studies have been conducted on P. loosi.

These findings underline the importance of understanding the impact of climate change on soil nematodes and its implications for agricultural systems while developing mitigation and adaptation strategies to address the impact of climate change on agriculture. More importantly, the present evidence on the unusual existence and spread of P. loosi and its remarkable damage to tea in Sri Lanka warrants urgent interventions in managing the pest. Therefore, the present study was conducted to determine the effects of rainfall, soil temperature and soil moisture on population densities of P. loosi in tea lands at different elevation regimes, together with the correlation of each climatic parameter with nematode density.

METHODOLOGY

Selection of sampling sites

Sampling sites were selected by reviewing past records of P. loosi infestations in tea lands maintained by the Nematology Laboratory of the Tea Research Institute. Accordingly, Cicilton Estate, Hapugastenna Estate, Mahadowa Estate, Richiland Estate, Delmar Estate and a smallholder land at Nawalapitiya were selected, representing different elevations and soil temperature regimes (Table 1). Soil and root samples were collected from the six selected sampling sites during the period of February 2014 to May 2015 to assess nematode populations at monthly intervals. About 10-15 samples were collected from each location, with about 50 g of soil collected from each sampling point. The samples were taken 15 cm away from the base of the bush at a depth of 15-25 cm with an auger. Samples were pooled, and a composite soil sample of about 500 g was prepared. Root samples, each containing 5 g of feeder roots, were collected from the same sampling points along with the soil samples. The composite samples were brought to the laboratory for nematode estimation.

Collection of Weather Data

Soil temperature data were collected daily, twice a day at 8.30 am and 3.30 pm, at a depth of 10 cm using soil thermometers. Rainfall data were collected daily during the experimental period using rain gauges established in the respective locations.

Determination of Soil Moisture Content

Soil moisture content was determined at the time of sampling at monthly intervals. The oven-dry method was used: about 100 g of soil was taken into a moisture can and kept at 105 °C overnight.

Extraction and quantification of nematodes

Soil and root samples were processed for extraction of nematodes using the modified Baermann funnel technique and the Whitehead tray method, respectively (Southey, 1986). Processed samples were observed under a light microscope and P. loosi counts were taken. Identification of the species P. loosi was done based on morphological parameters. Data on P. loosi counts per 100 g of soil and per 5 g of roots were recorded separately.

Data Analysis

The soil temperature, rainfall and soil moisture data were analyzed using Statistical Application Software (SAS) 9.1. Correlations between weather data and soil and root populations of P. loosi at the six locations monitored over 18 months were computed using correlation analysis.
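The two computations described above, gravimetric moisture content and per-location correlation, are simple enough to illustrate in a few lines. The authors used SAS 9.1; the Python sketch below is only a stand-in, and all of the sample values in it are invented for illustration.

```python
from scipy.stats import pearsonr

def gravimetric_moisture(wet_g: float, dry_g: float) -> float:
    """Oven-dry method: water lost at 105 °C as a percentage of dry soil mass.

    A dry-mass basis is assumed here; a wet-mass basis is also common,
    and the text does not state which one was used.
    """
    return 100.0 * (wet_g - dry_g) / dry_g

# Invented monthly records for one hypothetical location.
rainfall_mm = [210, 180, 95, 60, 140, 230]           # mean monthly rainfall
soil_temp_c = [21.5, 22.0, 23.8, 24.6, 22.9, 21.1]   # mean soil temperature
nematodes = [34, 28, 12, 6, 19, 41]                  # P. loosi per 100 g soil

# Correlate each weather variable with the soil population density,
# mirroring the per-location correlation analysis described above.
for name, series in [("rainfall", rainfall_mm), ("soil temperature", soil_temp_c)]:
    r, p = pearsonr(series, nematodes)
    print(f"{name}: r = {r:+.2f}, p = {p:.3f}")

print(f"moisture = {gravimetric_moisture(wet_g=100.0, dry_g=82.0):.1f} %")
```

With these invented numbers the rainfall correlation comes out positive and the temperature correlation negative, the same qualitative pattern the study reports for most locations.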
RESULTS AND DISCUSSION

Data on rainfall, soil temperature and soil moisture monitored in the six locations (PL1-PL6) over the experimental period, recorded in order to determine the variation of parameters attributable to changes in nematode behavior, population densities and resultant disease expression, are presented in Table 2. As presented in Table 2, a wide variation in rainfall, soil temperature and soil moisture was observed among locations and within a given location. The highest mean values for rainfall, soil temperature (a.m.), soil temperature (p.m.) and soil moisture were reported from the PL3, PL6, PL6 and PL5 locations, respectively. In contrast, the lowest mean values for the above parameters were reported from the PL2, PL2, PL5 and PL2 locations, respectively. Such changes would probably influence the behavior, life cycle and population dynamics of P. loosi in any given location, which would in turn produce differences in disease expression, exhibited as symptoms, as well as adaptations of P. loosi. Therefore, in order to understand the impacts of the changed weather parameters on nematode population densities, the data were correlated.

Correlation between soil population of P. loosi and Soil Temperature

Soil temperature governs most biological functions of living organisms, and plant parasitic nematodes are no exception. Moreover, soil temperature has been identified as the most determining factor for P. loosi incidence in tea (Sivapalan and Gnanapragasam, 1975). The optimal soil temperature range for P. loosi has been determined as 18-24 °C, favored by a well-distributed rainfall. However, the results of the present study, based on data collected over a period of 18 months in six locations, revealed an increase in mean soil temperature above the optimal range for the development of P. loosi (Fig. 1); soil temperatures in the PL2 and PL6 locations were beyond 24 °C. Unusually, a remarkable change in soil temperature ranges was also seen in most locations except PL2. The highest and lowest mean soil temperatures were recorded from the PL6 and PL2 locations, respectively. The effect of soil temperature on P. loosi populations did not differ noticeably among the other locations (PL1, PL3, PL4 and PL5). A negative correlation between soil temperature and P. loosi population in soil was evident in all locations except PL1 (R² = 0.006) and PL4 (R² = 0.173) (Fig. 1).

Though the highest P. loosi populations have been encountered at altitudes with soil temperatures of 18-24 °C, as reported by Sivapalan and Gnanapragasam (1975), our results support the presence of P. loosi populations at locations above the optimum range. They are also on par with the results of Mohotti (2009) on the remarkable shift in P. loosi populations in the different agro-ecological regions. As such, the populations may behave as different isolates and express individual disease symptoms. Other factors responsible for the population fluctuation of P. loosi, however, need to be elucidated. The results of the present study also validate the potential of P. loosi to spread widely and dominate in all tea-growing regions. They also show the concomitant occurrence of P. loosi with other nematode species, such as R. similis, that require higher soil temperatures. In contrast, R. similis has been sensitive to such climate change scenarios: its occurrence has declined in general, and in certain locations previously known as critical it has been totally replaced by P. loosi (Mohotti, 2009). Hence, P. loosi has shown its virulence as a dominating nematode species with potential survival mechanisms and adaptations, becoming established and developing even under extreme weather conditions. This warrants further understanding as well as a search for novel management practices.

Fluctuation of soil population of P. loosi as Influenced by the Mean Rainfall

P. loosi densities fluctuated with mean rainfall at the different locations (Fig. 2). In the PL2 and PL6 locations, very low nematode densities (< 2 nematodes/100 g of soil) were recorded during the experimental period, even though PL6 had a reasonably well-distributed rainfall. The initial understanding of Hutchinson and Vythilingam (1963) was that large populations of P. loosi are encountered in areas with high and well-distributed rainfall. P. loosi has also been found more abundantly in tea-growing areas experiencing south-west monsoon rains than in those having north-east monsoon rains (Hutchinson and Vythilingam, 1963). The results of our study support these observations, except in locations PL2 and PL6. This indicates that rainfall is not the only environmental factor determining P. loosi density in tea soils.
Effects of altitude, soil temperature and soil texture have also been reported to play a significant role in the population densities of P. loosi (Choshali et al., 2015). It has also been reported that P. loosi populations are more abundant in tea fields above 1219 m altitude (Hutchinson and Vythilingam, 1963). However, we found a lower P. loosi population in location PL2, despite its altitude above 1219 m. This may be due to the lowest soil moisture content being recorded there in the present study (Table 2).

Correlation between Soil Moisture Content and P. loosi Population Densities

Soil moisture plays a major role in the soil biota, including parasitic nematodes, saprophytic nematodes and their biological control agents. In tea lands, soil moisture levels are influenced by rainfall, soil type and the cultural practices adopted. Data on P. loosi in the soil and roots of the study locations are described below in relation to the varying levels of soil moisture.

Correlation between Soil Moisture Content and P. loosi Population Densities in Soil

Fig. 3 presents the relationship between soil moisture content and P. loosi population levels in soil. All P. loosi populations were positively correlated with soil moisture content except in locations PL2 and PL5 (Fig. 3). The negative or absent relationships at the PL2 and PL5 locations (R² = -0.005 and 0.000, respectively) could be due to the extreme lowest and highest moisture contents in the respective soils, as shown in Table 2, which are detrimental to the survival of nematodes. This also suggests poor survival and adaptation mechanisms of the isolates of P. loosi at those locations under extreme soil moisture contents. In-depth studies to understand the responses of different P. loosi populations to varying soil moisture levels, and a search for nematode management practices specific to such isolates, are warranted.

P. loosi is a migratory endoparasite, and its root population density depends on the changing soil environment. The root nematode population therefore increases when soil conditions are unfavorable. Our results support the initial understanding of a positive relationship between higher soil moisture content and P. loosi populations in roots in all locations except PL1 and PL5 (Fig. 4). This confirms that the population increase of P. loosi depends not only on soil moisture but also on other factors. The type of soil and its management may have interfered with the magnitude of changes in the soil environment, which needs further investigation under controlled environmental conditions.

In tea, information on the impacts of weather factors on pests is scarce, apart from the documented responses of tea growth and productivity. Erratic incidences and significant fluctuations of nematode species in different tea-growing regions have been reported in the recent past; novel, location-specific nematode management strategies therefore need to be introduced. Accordingly, further attempts were made to study the effect of changing rainfall, soil temperature and soil moisture on the population dynamics of P. loosi in tea lands in six locations covering different agro-ecological regions and elevations, and under controlled environments.
CONCLUSIONS

Survival rates and pathogenicity of plant parasitic nematodes are sensitive to changing rainfall and soil temperature. However, in tea, information on the impacts of climate change on pests is scarce, although the responses of growth and productivity are evident. Erratic incidences and significant fluctuations of nematode species in different tea-growing regions have been reported in the recent past, warranting novel, location-specific nematode management strategies. Therefore, further attempts were made to determine the effects of changing rainfall and soil temperature on the population dynamics of P. loosi in tea lands in six locations covering different agro-ecological regions and elevations, and under controlled environments.

In comparison with past records, the results revealed a wide variation in rainfall, soil temperature and soil moisture in favor of P. loosi in the different study locations. As soil temperature is the most determining factor for P. loosi incidence in tea, the changes in soil temperature were significant in the majority of the study locations, with a rising trend from 19.3 to 31.2 °C; extreme soil temperature ranges were also seen, whereas the optimal range for P. loosi is 18-24 °C. As nematode incidence above economic threshold levels and the associated symptomatology continued, new adaptations and the emergence of new races and/or isolates of P. loosi are therefore possible. In general, a negative correlation between soil temperature and P. loosi population in soil was evident in all locations except PL1 (R² = 0.006) and PL4 (R² = 0.173). The results did not reveal any strong adaptations of P. loosi populations to the higher temperatures, and the species continued to behave as a subtropical nematode. P. loosi densities in soil varied with the rainfall pattern, and the results corroborated the findings of Sivapalan (1972) and field evidence.

These results also validate the potential of this species to spread widely and dominate in all tea-growing regions, as well as to occur concomitantly with other nematode species, such as R. similis, that require higher soil temperatures. In contrast, R. similis has been sensitive to such climate change scenarios: its occurrence has declined in general, and in certain locations previously known as critical it has been totally replaced by P. loosi (Mohotti, 2009). Hence, P. loosi has shown its virulence as a dominating nematode species with potential survival mechanisms and adaptations, becoming established and developing even under extreme weather conditions. This warrants further understanding as well as a search for novel management practices.

All P. loosi populations were positively correlated with soil moisture content except in locations PL2 and PL5. The poor relationships (R² = -0.005 and 0.000, respectively) in these locations suggest that those P. loosi populations are comparatively less sensitive, with possible adaptations and enhanced survival mechanisms. In-depth studies to understand the responses of different P. loosi populations to varying soil moisture levels, and a search for specific nematode management practices, are warranted.

The results support the initial understanding of a positive correlation between soil moisture content and P. loosi populations in roots in all locations except PL1 and PL5. The type of soil and its management may interfere with the magnitude of changes in the soil environment, which needs further investigation under controlled environmental conditions.
According to past experience, P. loosi is widely distributed in tea plantations in the elevation range of 750-1800 m, with the greatest damage in the range of 1200-1700 m, and is hence called the 'Up-country nematode species'. However, our results revealed that P. loosi can exist in locations irrespective of elevation range and can cause damage to tea.

Research evidence on the emergence and incidence of P. loosi and the decrease or disappearance of R. similis densities in tea lands warrants adequate attention to mitigation strategies and location-specific nematode management methods. Further, the results highlight the potential of nematodes as a sound biological indicator of climate change impacts in agriculture.

Fig 1. Correlation between soil temperature and P. loosi population densities in soil in locations PL1-PL6 (red line indicates the optimal soil temperature range preferred by P. loosi: 18-24 °C)

Fig 2. Mean rainfall during the experimental period (a) and the fluctuation of P. loosi densities (nematodes/100 g of soil) in soil (b) in locations PL1-PL6.

Fig 3. Correlation between soil moisture and P. loosi populations in soil in six locations (PL1-PL6)

Fig 4. Correlation between soil moisture content and P. loosi populations in roots in locations PL1-PL6
Congenital arteriovenous fistula producing carpal tunnel syndrome

A case of carpal tunnel syndrome resulting from congenital arteriovenous fistula is described. Carpal tunnel syndrome can result from a wide variety of causes. We wish to report a case of the syndrome resulting from a congenital arteriovenous fistula, believed to be unique.

Case report

A 45 year old man was admitted to hospital because of swelling and pain in the left hand and forearm present since the age of 12 years. He used to get pain and paraesthesiae intermittently in the radial three fingers of the left hand. Three weeks before admission he had persistent pain and paraesthesiae in the affected limb, with some relief on elevation of the left arm. The left radial pulse was of high volume. The left forearm and hand were swollen. The skin was bluish red, and there was local warmth. The swelling was not tender or pulsatile, and there was no bruit. However, it was compressible, and on elevation of the limb it decreased considerably and wasting of the thenar group of muscles became obvious. Motor power in the thenar group of muscles was grade 4. There was no objective sensory deficit. No other neurological or other systemic abnormality could be detected. Routine blood chemistry, haemogram, and cardiac investigations were within normal limits. Skin temperature over the dorsum of the left hand was elevated by 1 °C, and oscillometry showed that the fistula was situated at the level of the wrist. An arteriogram performed by percutaneous puncture of the left brachial artery in the antecubital fossa showed a diffuse arteriovenous fistula involving all fingers and the wrist (Fig. 1). Nerve conduction studies showed prolonged motor and sensory distal latencies on the left side. There was improvement in the distal latencies and motor nerve conduction velocities on elevation of the arm, and the amplitude of the evoked sensory potential of the left median nerve was increased on elevation of the arm (Table). The distal latency (2.0 ms) and motor nerve conduction velocity in the forearm segment (67 m/s) of the left ulnar nerve were within normal limits. The left carpal tunnel was explored surgically. There was extension of the arteriovenous fistula proximally through the carpal tunnel into the forearm. The carpal tunnel was full of engorged and tortuous vessels (Fig. 2). The median nerve was embedded between these vessels. Adequate decompression of the carpal tunnel was performed. Pain and paraesthesiae decreased considerably immediately after surgery, and subsequently there was complete relief of symptoms. Nerve conduction studies performed one month after surgery showed significant improvement in all the parameters, more so in the sensory conduction (Table).

Discussion

The clinical history, examination, and electrophysiological investigations of this patient showed evidence of compression of the median nerve at the carpal tunnel on the left side. There seems to be good evidence that intermittent symptoms in carpal tunnel syndrome are the result of ischaemia (Gilliatt and Wilson, 1953; Fullerton, 1963), but the more permanent changes, such as an increase in terminal latency, are not attributable to ischaemia. It has also been shown that a direct mechanical effect on myelin leads to conduction block and conduction delay. Thus, there seems likely to be a dual mechanism for nerve damage (Simpson, 1956; Fullerton, 1963; Anderson et al., 1970).
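As a side note on the measurements reported above: the forearm conduction velocity quoted for the ulnar nerve (67 m/s) follows from the standard latency-difference calculation. The sketch below is illustrative only; the distance and proximal latency are assumed values chosen to reproduce a velocity of that order, not measurements from this case.

```python
# Motor conduction velocity over a forearm segment:
#   velocity = distance between stimulation sites / (proximal - distal latency)
# mm / ms conveniently equals m / s.
distance_mm = 268.0        # assumed wrist-to-elbow distance (not from the case)
proximal_latency_ms = 6.0  # assumed latency stimulating at the elbow
distal_latency_ms = 2.0    # distal latency of the left ulnar nerve (reported)

velocity = distance_mm / (proximal_latency_ms - distal_latency_ms)
print(f"conduction velocity = {velocity:.0f} m/s")  # 67 m/s
```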
The intermittent excruciating acroparaesthesiae in the initial part of the illness in the present case can be explained by a vascular "steal" phenomenon caused by the fistula. In congenital arteriovenous fistula, arterial blood from a high pressure artery is shunted into a low pressure vein, thus decreasing venous pressure distal to the fistula (Noble, 1974). The increase in severity of symptoms in the present case in the recumbent position can be explained by pooling of blood in the fistula, thereby increasing its volume and resulting in compression of the median nerve. The improvement of sensory nerve conduction after decompression can occur within 30 minutes (Hongell and Mattsson, 1971). The rapidity of this recovery strongly suggests that relief of ischaemia is likely to be responsible. Thus, in the present case the improvement in the motor and sensory conduction parameters after elevation of the limb could be the result of relief both of the ischaemia caused by shunting and of the mechanical compression of the median nerve by engorged veins. In our patient the attacks of pain were relieved immediately after decompression. The nerve conduction velocities, both motor and sensory, returned to the normal range within one month. Distal venous stasis, caused by increased venous pressure resulting from the transmission of arterial pressure to the venous side, together with the leash of blood vessels constituting the arteriovenous fistula, increased the volume of the tunnel contents, presumably causing constant compression of the median nerve with reversible, demyelinating conduction block and conduction delay. Thus, in this case there is evidence to suggest a dual mechanism for the nerve damage.

Fig. 2 Photograph of the carpal tunnel full of engorged and tortuous vessels. The median nerve (arrow) was embedded between these vessels.
Toll-like receptors and their role in neuropathic pain and migraine

Migraine is a complex neurological disease of unknown etiology involving both genetic and environmental factors. It has previously been reported that persistent pain may be mediated by the immune and inflammatory systems. Toll-like receptors (TLRs) play a significant role in immune and inflammatory responses and are expressed by microglia and astrocytes. One of the fundamental mechanisms by which the innate immune system coordinates inflammatory signal transduction is through TLRs, which protect the host organism by initiating inflammatory signaling cascades in response to tissue damage or stress. TLRs reside at the neuroimmune interface, and accumulating evidence has suggested that the inflammatory consequences of TLR activation on glia (mainly microglia and astrocytes), sensory neurons, and other cell types can influence nociceptive processing and lead to pain. Several studies have shown that TLRs may play a key role in the etiology of neuropathic pain and migraine by activating microglia. The pathogenesis of migraine may involve TLR-mediated crosstalk between neurons and immune cells. Innate responses in the central nervous system (CNS) occur during neuroinflammatory phenomena, including migraine. Antigens found in the environment play a crucial role in the inflammatory response, causing a broad range of diseases, including migraine. These can be recognized by several innate immune cells, including macrophages, microglia, and dendritic cells, and can activate TLR signaling. Given the prevalence of migraine and the insufficient efficacy and safety of current treatment options, a deeper understanding of TLRs is expected to provide novel therapies for managing chronic migraine. This review aims to justify the view that TLRs may be involved in migraine.

Migraine

Migraine is a neurological disorder that manifests as a paroxysmal headache lasting approximately 4-72 h. This type of headache may be unilateral and is characterized as pulsating or throbbing. It is generally associated with nausea and/or vomiting and sensitivity to light and sound. It can be relieved by rest and aggravated by activity. If treated inactively or improperly, headache severity may progress throughout an attack and even develop into chronic migraine [1]. Migraine is divided into episodic (< 15 monthly headache days, MHDs) and chronic (≥ 15 MHDs, with migraine attacks occurring at least 8 days per month) forms, according to the frequency of headache days per month [2]. In the 2016 Global Burden of Disease Study, migraine was a leading cause of disability among patients under 50 years of age worldwide, second only to lower back pain [1,3]. However, the exact etiology and pathogenesis of migraine are still under discussion, resulting in limited treatment options. Recent studies have shown that Toll-like receptors (TLRs) are significantly associated with migraine. They mediate inflammatory pain and cause central sensitization by generating inflammatory mediators (e.g., TNF-α, IL-1β, and NO) [4].

Neuropathic pain

Neuropathic pain has been redefined as pain caused by a lesion or disease of the somatosensory system [5,6]. Its symptom severity and duration are often greater than those of other types of chronic pain [7], with 5% of patients debilitated despite the use of analgesics [8]. Therefore, in-depth study of the role of TLRs in neuropathic pain should lead to better treatment.
Several recent studies have demonstrated that TLRs are strongly associated with neuropathic pain [4, 9-17]. The underlying mechanism may be that they induce the activation of microglia or astrocytes and the production of proinflammatory cytokines in the spinal cord, resulting in the development and maintenance of inflammatory and neuropathic pain. In particular, primary sensory neurons express TLRs to sense exogenous pathogen-associated molecular patterns (PAMPs) and endogenous damage-associated molecular patterns (DAMPs) released after tissue injury and/or cellular stress.

History of TLRs

TLRs have been characterized by their essential contribution to innate immune signaling [18,19]. They were first discovered as genes in Drosophila melanogaster that control the dorsal-ventral axis during embryonic development [20]. Toll was further identified as a transmembrane interleukin-1 receptor homolog that initiates immune responses in Drosophila in vitro [21,22]. A human homolog of Drosophila Toll (Toll-like) was cloned and characterized as a transmembrane protein that can activate nuclear factor-κB (NF-κB), mediating transcription of the proinflammatory cytokines IL-1, IL-6, and IL-8 in human monocytes [23]. The discovery of this receptor provided preliminary evidence that TLRs are regulators of mammalian immunity [18,24]. The TLR genes were thus identified among the key genes in development. TLRs are type-I transmembrane receptors and pathogen pattern recognition receptors of the innate immune system. These receptors initiate immediate innate immunity by recognizing pathogens and can initiate adaptive immunity by activating signaling pathways. However, they are also expressed in many non-immune tissues, both throughout development and in adulthood. Several studies have indicated that TLRs not only exert immune functions but also have a wide range of roles in regulating cell fate, cell number, and cell shape [29-33]. These receptors also play a key role in regulating the survival of nerve and glial cells and in regulating synaptic plasticity in the central nervous system (CNS) [34].

Signaling pathways of TLRs

TLR ligands include exogenous pathogenic microorganisms and endogenous ligands released after tissue injury or damage. TLRs play an essential role in recognizing specific patterns of microbial components involved in the activation of innate immunity. At the same time, they can initiate a series of downstream reactions by binding to endogenous ligands during acquired immune activity. These noxious endogenous ligands are known as DAMPs (also called alarmins). MyD88 is essential for the induction of inflammatory cytokines triggered by all TLRs. TIRAP is specifically involved in the MyD88-dependent pathway via TLR2 and TLR4, whereas TRIF is involved in the MyD88-independent pathway that is mediated by TLR3 and TLR4. Thus, the diversity of TIR domain-containing adaptors provides the specificity and complexity of TLR signaling [35]. The TLR5, TLR7, TLR8, and TLR9 signaling pathways are MyD88-dependent. Research on TLR10 signaling is currently inconclusive. To date, TLR10 is the only TLR known to exhibit anti-inflammatory properties. Previously, TLR10 was thought to be an "orphan receptor", but many recent studies have identified ligands of TLR10 [25,38]. Some studies have suggested that TLR10 activation can promote inflammation by activating NF-κB, while others have shown that it suppresses inflammation by inhibiting NF-κB.
However, the downstream signaling pathway remains to be elucidated. The complexity of TLR10 signaling may be related to its ability to form TLR2/TLR10 heterodimers or TLR10/TLR10 homodimers.

Fig. 2 TLR signaling: TIR domain-containing adaptors and TLR signaling. MyD88 is an essential TIR domain-containing adaptor for the induction of inflammatory cytokines via all the TLRs. Upon stimulation, MyD88 recruits IL-1 receptor-associated kinase (IRAK) to TLRs. IRAK is activated by phosphorylation and then dimerizes with TRAF6, leading to the activation of two distinct signaling pathways and finally activating MAPK and NF-κB to elicit proinflammatory cytokines. TIRAP/Mal is a second TIR domain-containing adaptor that specifically mediates the MyD88-dependent pathway via TLR2 and TLR4, while TRIF specifically participates in the MyD88-independent pathway mediated by TLR3 and TLR4. TLR2 adds to the complexity of the signaling pathway by forming TLR2-TLR1 and TLR2-TLR6 heterodimers to initiate intracellular signal transduction. Both homodimers (TLR10/TLR10) and heterodimers (TLR10/TLR2) can recruit MyD88. TLR10 can reduce the production of IL-1β by directly inhibiting MyD88 or MAPK. Although several studies have suggested its inflammatory properties, TLR10 has also been shown to increase the production of IL-1Ra (an anti-inflammatory factor), but the underlying mechanism is still unclear, as indicated by question marks. Nucleic acids in endolysosomes activate TLR3, TLR7 or TLR9 and initiate distinct and overlapping signaling cascades.

TLRs and migraine

TLRs are normally expressed in immune and glial cells of the CNS [39,40]. In addition to pathogen recognition, TLRs also recognize the molecular patterns of ligands associated with cellular stress, tissue damage, or cell death [41-43]. A possible mechanism by which TLRs cause migraine is as follows: activation of TLRs leads to the upregulation of NF-κB while increasing the transcription of genes encoding IL-1 family cytokines and TNF [49,50]. After activation, Th1, Th2, and Th17 effector cells express a series of cytokines that act on innate immune cells to fight infection and may cause migraine [51-53] (Fig. 3).

TLR2 signaling

Residing at the plasma membrane, TLR2 is characterized by an exceptional diversity of compatible exogenous and endogenous ligands [18,54]. This is mainly because it can dimerize with TLR1 or TLR6, which increases the complexity of its ligand specificity. Structural studies have confirmed that TLR2 can distinguish various lipopeptides by forming TLR2/TLR1 and TLR2/TLR6 heterodimers [18,55,56]. Ligand-induced heterodimerization of the TLR2 extracellular domain brings the cytoplasmic C-terminal TIR domains into proximity and initiates intracellular signaling via the MyD88-dependent pathway. Through binding of the cofactor CD14, this leads to upregulation of NF-κB and increased transcription of genes encoding IL-1 family cytokines and TNF, which induces the production of inflammatory cytokines, resulting in pain [18,51,57]. The TLR2 signaling pathways are complex not only because TLR2 readily forms heterodimers, but also because there appears to be a large overlap between some endogenous ligands and their effects on TLR2 and TLR4, creating crosstalk between TLR2 and its downstream targets. The ability to directly attribute functional results to TLR2 therefore depends on the method used [18].
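Before turning to the individual receptors, the adaptor usage described in the preceding sections can be condensed into a small lookup table. The sketch below is a deliberate simplification (heterodimer partners, CD14, and cell-type context are ignored) and only restates the pathway assignments given in the text above.

```python
# Simplified adaptor usage per TLR, as described in the text above.
# TLR4 appears in both branches because it signals via MyD88 and TRIF.
ADAPTORS = {
    "TLR2": ("MyD88",),          # with TIRAP/Mal; pairs with TLR1 or TLR6
    "TLR3": ("TRIF",),           # MyD88-independent
    "TLR4": ("MyD88", "TRIF"),   # TIRAP/Mal on the MyD88 branch
    "TLR5": ("MyD88",),
    "TLR7": ("MyD88",),
    "TLR8": ("MyD88",),
    "TLR9": ("MyD88",),
}

DOWNSTREAM = {
    "MyD88": "IRAK -> TRAF6 -> MAPK / NF-kB -> proinflammatory cytokines",
    "TRIF": "IRF3 -> type I interferons (and NF-kB -> inflammatory cytokines)",
}

def describe(tlr: str) -> None:
    """Print every signaling branch recorded for the given receptor."""
    for adaptor in ADAPTORS[tlr]:
        print(f"{tlr} -> {adaptor}: {DOWNSTREAM[adaptor]}")

describe("TLR4")  # prints both the MyD88-dependent and TRIF-dependent branches
```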
TLR2 and neuropathic pain

TLR2 is found in many organisms, where it induces the generation of inflammatory cytokines by activating NF-κB, with consequent pain [51,63]. Although low levels are detected in astrocytes, oligodendrocytes, Schwann cells, fibroblasts, endothelial cells, and neurons, TLR2 is predominantly expressed on microglia and other macrophages in the peripheral and central nervous systems [49, 50, 64-66]. Several studies have shown that TLR2 activates microglia and astrocytes and drives the production of proinflammatory cytokines in the spinal cord following tissue and nerve injury, leading to the development and maintenance of inflammatory and neuropathic pain [4]. Several researchers have also found that Tlr2 knockout partially alleviated the mechanical allodynia and thermal hyperalgesia caused by nerve ligation [12,18,67].

TLR2 and migraine

RNA sequencing of the brain has revealed that Tlr2 gene expression is highly enriched in microglia compared with other cell types, and Tlr2 has been identified as a reliable marker of activated microglia in vivo, but its detailed role in microgliosis is still unknown [18,68,69]. Previous studies have suggested that TLR2 is involved in the pathogenesis of neuropathic pain and trigeminal neuralgia. However, the role of the TLR2 pathway during migraine attacks remains unclear [40]. Evidence suggests that TLRs are significantly associated with migraine. Transcriptomics has demonstrated that the expression of proinflammatory genes (e.g., TLR2, CCL8) in the calvarial periosteum is significantly increased in patients with chronic migraine [44]. In a study of migraine with aura, multiple cortical spreading depression (CSD) episodes induced significant HMGB1 release, and the HMGB1-TLR2/4 axis activated microglia [45]. Several studies have provided further evidence that both mast cells and T cells are activated and that the expression of chemokines and TLR2 is increased in migraine [51,70,71]. An increase in inflammatory cytokines leads to increased cell adhesion, production of chemical inflammatory compounds, and NF-κB dysfunction (Fig. 3). Therefore, reducing inflammatory symptoms in migraine may affect innate immune response pathways by modulating inflammatory cytokines, TLRs and NF-κB [44]. Clearly, research on TLR2 and migraine is still in its infancy. Further work is needed to elucidate the upstream and downstream molecular mechanisms in migraine.

TLR3 signaling

TLR3 is an intracellular receptor localized within the endosomal compartments. In addition to DRG neurons, TLR3 is thought to be expressed to varying degrees in microglia, astrocytes, oligodendrocytes, Schwann cells, fibroblasts, and endothelial cells [18]. Intracellular TLR3 is intrinsically capable of detecting nucleic acids. It acts within the endosomal compartment and can distinguish between host and foreign nucleic acids, a role exerted at specific stages of endosomal maturation and acidification. TLR3 recognizes double-stranded RNA (dsRNA) and signals in a MyD88-independent manner [72]; besides dsRNA, it can also recognize some ssRNA viruses [27]. It is unique among the TLRs in signaling through the TRIF pathway, resulting in the release of type I interferons via IRF3 and/or inflammatory cytokines via NF-κB [18,36].

TLR3 and neuropathic pain

Research on the mechanism of TLR3 involvement in pain is increasing [27], and there is some initial evidence suggesting that TLR3 modulates pain through both shared and distinct molecular mechanisms.
This is indirectly supported by the observation that cultured DRG neurons express TLR3. The TLR3-specific agonist poly(I:C) can increase TRPV1 expression and the functional activity of these sensory neurons, along with triggering an increase in the release of the pro-nociceptive prostaglandin E2 [18,73]. However, few studies have investigated the relationship between TLR3 and neuropathic pain. A recent investigation identified elevated TLR3 mRNA and protein levels in the rat spinal cord after nerve injury, along with increased activation of microglial autophagy. Intrathecal injection of the TLR3 agonist poly(I:C) significantly increased the activation of microglial autophagy and promoted neuropathic pain, effects that were dramatically reversed by TLR3 knockout [11]. Several studies have shown that TLR3 plays a substantial role in the activation of spinal microglia and the development of tactile allodynia after nerve injury [74]. TLR3-deficient mice exhibit moderately reduced allodynia in response to nerve injury, suggesting that modulation of TLR3 activation could be used to regulate neuropathic pain [12]. Tong Liu and colleagues demonstrated a critical role of TLR3 in regulating sensory neuronal excitability, spinal cord synaptic transmission, and central sensitization; central sensitization-driven pain hypersensitivity, but not acute pain, is impaired in Tlr3(-/-) mice [10]. However, the specific endogenous ligands of TLR3 and the mechanisms by which they induce neuropathic pain remain unclear.

TLR3 and migraine

Although little research has been conducted on the relationship between TLR3 and migraine, there is direct and indirect evidence for an association. Research has shown that TLR3 mediates inflammatory pain and causes central sensitization. The specific signaling pathway is as follows: activation of TLR3 in spinal cord microglia results in the activation of the nuclear factor κB (NF-κB), extracellular signal-regulated kinase (ERK), and p38 signaling pathways, leading to the production of inflammatory mediators, central sensitization, and chronic pain [4] (Fig. 3). However, there appear to be opposing conclusions regarding the association between TLR3 and migraine. Significant evidence suggests that TLR3 activation is neuroprotective and anti-inflammatory in CSD-induced neuroinflammation, and targeting TLR3 may be a novel strategy for developing new treatments for CSD-related neurological disorders [46]. This contradiction leaves considerable room for future investigation. Evidently, research on the relationship between TLR3 and migraine is insufficient, and more studies are needed.

TLR4 signaling

TLR4 is one of the most extensively characterized TLRs owing to its fundamental role in bacterial sensing and the resulting inflammatory response. The canonical ligand for TLR4 is lipopolysaccharide (LPS). The recognition of LPS by TLR4 is multifaceted and requires the coordination of multiple accessory proteins and coreceptors [18].

TLR4 and neuropathic pain

A growing number of studies have shown that TLR4 is a key receptor associated with persistent pain [18, 79-81]. The involvement of TLR4 in sciatic nerve-related neuropathic pain was confirmed by drug interventions in a chronic constriction injury model [82]. The TLR4 antagonist LPS-RS reversed mechanical hypersensitivity in a mouse model of arthritis pain [83].
While antagonism of TLR4 may help prevent dysregulated pain, TLR4 may also help orchestrate some aspects of tissue repair in the context of nerve injury [18,84,85]. Therefore, targeting TLR4 for the treatment of neuropathic pain needs to be confirmed cautiously through further in-depth research.

TLR4 and migraine

The findings of Rafiei et al. suggested that TLR4 polymorphism is a genetic risk factor for migraine [86]. Other evidence has indicated that TLR4 is associated with hyperalgesia in migraine. The TLR4 signaling pathway promotes the hyperalgesia induced by acute inflammatory soup delivery by stimulating the production of proinflammatory cytokines and activating microglia [87]. IL-18-mediated microglia/astrocyte interactions in the medullary dorsal horn likely contribute to the development of migraine-induced hyperpathia or allodynia [88]. In periorbital hypersensitivity in migraine, the TLR4 antagonist (+)-naltrexone blocked the development of facial allodynia after supradural delivery of inflammatory soup [89]. In addition, the relationship between the gut microbiota and migraine is currently a hot research topic. Significant research has shown that migraine is associated with functional gastrointestinal disorders (FGIDs), such as functional nausea, cyclic vomiting syndrome, and irritable bowel syndrome (IBS). Modulation of the kynurenine (L-kyn) pathway (KP) may provide common triggers for migraine and FGIDs involving TLR, aryl hydrocarbon receptor (AhR), and MyD88 activation. Meanwhile, TLR4 signaling has been observed to initiate and maintain migraine-like behavior through MyD88 in mice, and KP metabolites detected downstream of TLR activation may be markers of IBS. Therefore, TLR4 may play a role in the mechanism of migraine induced by FGIDs [47,48] (Fig. 3). Although the relationship between TLR4 and migraine is better studied than those of TLR2 and TLR3, the related upstream and downstream mechanisms still require significant research.

Conclusions and perspectives

Decades of work have shown that pain and inflammation are subtly entangled concepts. Here, we have presented evidence that TLRs are important for migraine development. Research thus far suggests that the TLR family members TLR2, TLR3, and TLR4 are associated with migraine, but the detailed underlying pathways and mechanisms remain unclear. Since the effect of each TLR on pain varies widely with its structure and cellular location, future studies should investigate the signaling properties of TLRs during migraine attacks at a deeper level, while seeking to translate preclinical insights into effective treatment. In studying the relationship between TLR2, TLR3, TLR4, and migraine, more attention should be paid to the detailed signaling pathways. We have further dissected how each TLR affects nociception and how its expression in glial cells and neurons, or crosstalk between the two, differentially affects the processing of migraine. In addition to TLR2, TLR3, and TLR4, future research should also address the roles of TLR5, TLR7, TLR8, and TLR9 in the etiology of neuropathic pain in migraine. Despite these challenges, continuing to elucidate the role of each TLR in the pain experience offers a very promising opportunity to relieve pain in migraine sufferers.
Comparative Genomics of Herpesviridae Family to Look for Potential Signatures of Human Infecting Strains

The Herpesviridae family is one of the significant viral families and comprises major pathogens of a wide range of hosts. This family includes at least eight species of viruses known to infect humans. The family evolved 180-220 million years ago, and the present study highlights that it is still evolving and that more genes can be added to its repertoire. Its core-genome includes important viral proteins, including glycoprotein B and helicase. Most of the infections caused by human herpesviruses have no definitive cure; thus, the search for new therapeutic strategies is necessary. The present study finds that the core-genome of human herpesviruses differs from that of the Herpesviridae family as a whole and of the non-human herpes strains of this family, and might offer putative targets for vaccine development. The phylogenetic reconstruction based upon the protein sequences of the core gene set of the Herpesviridae family reveals sharp splits of its different subfamilies and supports the hypothesis of coevolution of viruses with their hosts. In addition, data mining for cis-elements in the genomes of human herpesviruses results in the prediction of numerous regulatory elements which can be used for regulating the expression of virus-based vectors implicated in gene therapies.

Introduction

Human herpesviruses (HHVs) are among the major human pathogens and are known to cause several diseases, including herpes genitalis, infectious mononucleosis, and Kaposi's sarcoma. Herpes simplex virus type 1 (HSV-1) and herpes simplex virus type 2 (HSV-2) are the most common pathogens among HHVs and cause several infections, including genital or oral herpes, conjunctivitis, and encephalitis, commonly known as herpes simplex infection. This infection is incurable, and around 90% of the world's population is infected with one or both viruses [1]. If herpes simplex virus (HSV) induced encephalitis remains untreated, it has a very high (>70%) fatality rate [2]. Its management is also poor, which results in the death of a major proportion of patients, while only a minor proportion returns to normal function. In addition, Epstein-Barr virus (EBV) is another of the most common human pathogens and is implicated in a number of human malignancies. A previous study showed that EBV-attributable malignancies accounted for 1.8% of all cancer deaths in 2010, a percentage that increased by 14.6% over a period of 20 years [3]. There are no definitive therapies or drugs available for most HHV-induced infections. The global burden of HHV-induced infections is increasing rapidly, which calls for effective means of prognosis and therapeutics for better management. On the other hand, a few members of the HHVs, including the HSVs, are also employed as vectors for vaccine development and gene therapy of several diseases, namely Parkinson's disease and Alzheimer's disease. Cis-elements play a significant role in the regulation of these virus-vectors for desired gene expression. These aspects of HHVs make them significant for clinical and pharmaceutical research.

HHVs belong to the Herpesviridae family of the order Herpesvirales, under group I (dsDNA) in the virus classification hierarchy. Members of the Herpesviridae family are well characterized and are known to infect a wide range of hosts. In addition to humans, these hosts include mammals, birds, reptiles, amphibians, molluscs, and fish. At least eight species of HHVs are found to infect humans.
Based upon biological features and genomic attributes, members of the Herpesviridae family have been classified into three subfamilies, Alphaherpesvirinae, Betaherpesvirinae, and Gammaherpesvirinae, with their estimated origin being 180 to 220 million years ago [4]. The Alphaherpesvirinae subfamily includes important human pathogens such as HSV-1 and HSV-2 [5]. In recent times, a bloom in sequencing technologies has contributed to an increase in the number of publicly available genome sequences of several members of the Herpesviridae family. This has led us to investigate this family in the context of its genomic diversity and evolutionary aspects. In this study, we performed a pan-genome analysis and phylogenetic clustering of the publicly available complete genomes of 64 members of the Herpesviridae family. Further, a detailed analysis was conducted to explore the differentiating genomic attributes of HHVs in comparison to non-HHVs belonging to the Herpesviridae family. The core gene sets of HHVs were further screened for putative antigenic determinants which might be potential candidates for epitope-based vaccine development. In addition, we also carried out genome data mining of HHVs for regulatory cis-elements which might be crucial factors for modulating the expression of viral genes in vaccine development and gene therapy for fatal diseases.

Pan-Genome Analysis

The pan-genome is calculated using the OrthoMCL (OMCL) and clusters of orthologous groups (COG) methods implemented in the GET_HOMOLOGUES package [7] with default parameters. The intersection of these two algorithms is taken for determination of the pan-genome, which includes four compartments: core (genes contained in all considered genomes), soft core (genes present in 95% of the considered genomes), cloud (genes present in a few genomes), and shell (remaining genes contained in several genomes). The expansion of the pan-genome size is examined by plotting the number of genomes considered against the total number of genes, and the pan-genome plot is fitted with the Tettelin function available in the GET_HOMOLOGUES package [7] to estimate the size of the pan-genome. Similarly, the contraction of the core-genome size is examined by plotting the number of genomes considered against the number of shared genes, and the core-genome plot is fitted with the Tettelin function available in the GET_HOMOLOGUES package [7] to estimate the size of the core-genome.

Core-Genome Analysis

The core-genome is evaluated using the bidirectional best-hit (BDBH), OMCL, and COG clustering strategies implemented in the GET_HOMOLOGUES package [7] with default parameters. The intersection of these three clustering methods is taken as the stringent consensus core-genome.
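The Tettelin-style fits described in the two preceding subsections can be reproduced outside GET_HOMOLOGUES. The sketch below is not the package's implementation; it fits a Heaps'-law-style power function to pan-genome counts and a decaying exponential to core counts, with invented gene-count series standing in for the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

genomes = np.arange(1, 11)
pan_genes = np.array([110, 180, 235, 280, 322, 360, 395, 428, 459, 488])  # invented
core_genes = np.array([110, 60, 35, 22, 15, 11, 8, 6, 5, 4])              # invented

def pan_law(n, kappa, gamma):
    # Power-law growth; gamma > 0 means the pan-genome is open (no plateau).
    return kappa * n ** gamma

def core_decay(n, a, b, c):
    # Exponential decay toward an asymptote c as genomes are added.
    return a * np.exp(-b * n) + c

(kappa, gamma), _ = curve_fit(pan_law, genomes, pan_genes)
(a, b, c), _ = curve_fit(core_decay, genomes, core_genes, p0=(100.0, 0.5, 5.0))

print(f"pan-genome fit: {kappa:.1f} * N^{gamma:.2f} (open if gamma > 0)")
print(f"core-genome asymptote: about {c:.0f} genes")
```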
Epitope Prediction

Data mining is done for the identification of antigenic determinants in the core gene set of HHVs using the Immune Epitope Database (IEDB, http://www.iedb.org/). The core gene products of HHVs are screened for any kind of epitope involved in any human disease which can induce a human immune response. To achieve this, we searched the IEDB database (version as of 15 April 2016) with antigen (parameter organism: human herpesvirus species, namely HHV-1, HHV-2, HHV-3, HHV-4, HHV-5, HHV-6 (type A and type B), HHV-7, and HHV-8) and host (humans), using the other parameters at their default values.

Phylogenetic Reconstruction

The phylogenetic reconstruction is based upon the core gene set of the 64 members of the Herpesviridae family. To achieve this, glycoprotein B (gB) and helicase protein sequences are extracted from the proteomes and concatenated. Sequence alignment is done using the ClustalW module of MEGA6 [8] with default parameters. The evolutionary history is inferred using the Neighbor-Joining method. The bootstrap consensus tree inferred from 1,000 replicates is taken to represent the evolutionary history of the taxa analysed. The evolutionary distances are computed using the JTT matrix-based method and are in units of the number of amino acid substitutions per site. The analysis involved 64 amino acid sequences. All positions containing gaps and missing data are eliminated, leaving a total of 1,283 positions in the final dataset. Evolutionary analyses are conducted in MEGA6 [8].

Cis-Element Prediction

To predict the cis-regulatory regions in the DNA sequences of HHVs, the standalone version of the Cister (Cis-element Cluster Finder) tool [9] is used. Default parameters of the Cister tool are used, along with the default nucleotide count matrices for the selection of the 16 cis-elements available on the given webpage (http://zlab.bu.edu/~mfrith/NucFreqMat.html).

Pan-Genome Analysis of Herpesviridae Family

To determine the global gene repertoire of the Herpesviridae family, the number of new genes added by each genomic sequence is estimated. The expansion of the pan-genome is examined by plotting the number of genomes considered against the pan-genes observed, along with the Tettelin fit. The resulting pan-genome curve suggests its open nature, as it does not reach a plateau and grows by an average of 24 genes per genome (Figure 1(a)). This open pan-genome indicates the continuous evolution of the Herpesviridae family using different gene acquisition strategies, namely horizontal gene transfer and diversification. It points towards further expansion of the pan-genome of the Herpesviridae family as the number of additionally sequenced species increases. The open nature of the pan-genome of the Herpesviridae family is also consistent with the hypothesis that species inhabiting a wide range of environments tend to have an open pan-genome [10,11]. Further, the pan-genomes obtained by the OMCL and COG algorithms produce clusters of 2,094 and 2,271 genes, respectively, whereas their intersection results in a cluster of 1,785 genes. This gene cluster is further classified into four compartments: core (0.28%), soft core (0.50%), cloud (86.94%), and shell (12.54%) (Supplementary Figure 1). The core gene set of the Herpesviridae family includes the genes present in all 64 genomes, which are highly conserved [12] during the evolution of this family, whereas the soft core includes the genes present in at least 60 of the genomes taken in this study. Basically, the soft core estimates a more robust core allowing for the possibility of missing or truncated genes [13]. The shell component of the Herpesviridae family comprises the genes present in more than 2 but fewer than 60 genomes and represents limited conservation [12] during the evolution of this family. The gain and loss of these genes from a given genome is supposed to occur at a slower rate [14]. In contrast, the cloud component includes the genes which are gained and lost from the genomes at a faster rate [14] and are poorly conserved [12]. In our dataset, the cloud component comprises the genes present in ≤2 viral genomes of the Herpesviridae family.
In contrast to the expansion of the pan-genome, the contraction of the core-genome size is evaluated by plotting the number of genomes considered against the core genes observed, followed by fitting the plot with the Tettelin function. The core-genome size of the Herpesviridae family shows a well-fitted decaying exponential trend (Figure 1(b)). This implies that the number of core genes present in all considered genomes tends to decrease with the addition of genomes and reaches a saturation level after finding the minimal essential core set required for viral survival and growth.

Evaluation of Core-Genome of Herpesviridae Family

A highly stringent strategy is further employed to find a minimal essential core of the Herpesviridae family. Towards this, the core-genomes are examined by three clustering methods, the BDBH, COG, and OMCL strategies, resulting in clusters of 2, 8, and 6 genes, respectively, whereas their intersection produces a cluster of 2 genes (Figure 2(a)). This might be a minimal set of critical genes essential for the survival of all members of the Herpesviridae family taken in this study. These genes code for the glycoprotein B and helicase proteins. Glycoprotein B is a primary and crucial component of the herpesvirus fusion machinery, which is involved in the entry of herpesviruses into host cells [15]. Similarly, helicase is another crucial component of viral genomes, essential for several significant biological processes including viral genome replication, transcription, and translation [16].

Difference in the Genomic Attributes of HHVs and Non-HHVs of Herpesviridae Family

To differentiate the genomic attributes of HHVs and non-HHVs of the Herpesviridae family, their corresponding core-genomes are evaluated. Core-genome evaluation of HHVs is done using three clustering methods, BDBH, COG, and OMCL, resulting in three different clusters consisting of 5, 7, and 11 genes, respectively (Figure 2(b)). The intersection of these three methods results in a final cluster of 3 genes, which represents a minimal set of core genes of HHVs. This includes genes encoding glycoprotein B, helicase, and the major capsid protein. In the case of the non-HHV strains of the Herpesviridae family, the core-genome analysis results in BDBH, COG, and OMCL clusters of 2, 7, and 6 genes, respectively, whereas their intersection results in a cluster of 2 genes (Figure 2(c)). This minimal set of core genes of the non-HHVs of the Herpesviridae family encodes the glycoprotein B and helicase proteins. Though the core gene set of the non-HHVs of the Herpesviridae family is identical to that of the whole family, it differs from the core gene set of HHVs, which includes one additional gene encoding the major capsid protein. This implies that the major capsid protein may be critical for the HHVs infecting humans. The major capsid protein functions in the assembly of the capsid and the packaging of DNA into the capsid for new viral particles within the host [17]. A study carried out on human papillomavirus 16 [18] concluded that the yield of virus-like particles (VLPs) differs among viral strains with distinct sequences of the major capsid protein L1. In addition, the major capsid protein is also implicated in immunogenicity and thus becomes a suitable target for vaccine development. For instance, the major capsid protein (VP1) of Merkel cell polyomavirus [19] was found to be a major immune-activating factor inducing a robust polyclonal antibody response.
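The consensus step used throughout these core-genome evaluations amounts to a set intersection over the clusters returned by the three algorithms. A minimal sketch, with invented gene names standing in for the real clusters:

```python
# Invented stand-ins for the gene clusters returned by each method;
# the set sizes (2, 6, 8) mirror the family-level BDBH, OMCL, and COG counts.
bdbh = {"glycoprotein_B", "helicase"}
omcl = {"glycoprotein_B", "helicase", "major_capsid", "dna_polymerase",
        "terminase", "protease"}
cog = {"glycoprotein_B", "helicase", "major_capsid", "terminase",
       "dna_polymerase", "uracil_glycosylase", "ribonucleotide_reductase",
       "thymidine_kinase"}

# The stringent consensus core is the intersection of all three methods.
consensus_core = bdbh & omcl & cog
print(sorted(consensus_core))  # ['glycoprotein_B', 'helicase']
```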
Epitope Prediction in Core Gene Set of HHVs. We have investigated putative antigenic determinants in the core gene set of HHVs which might play a significant role in inducing the human immune system. From Table 2, it is evident that the envelope glycoprotein B of all HHVs except human herpesvirus 7 contains numerous epitopes, whereas antigenic determinants of the major capsid protein were identified in the genomes of all HHVs except HHV-3, HHV-7, and HHV-8. However, no antigenic determinant is located in the helicase protein of any of the HHVs. In addition, our analysis identified only three epitopes in the genome of HHV-7, which are present on the antigenic proteins including the ribonucleoside-diphosphate reductase large subunit-like protein U28, other human herpesvirus 7 proteins, and protein U3. The presence of a comparatively lower number of epitopes in the genome of HHV-7, and in particular the absence of antigenic determinants in its core proteins glycoprotein B and major capsid protein, indicates the limited pathogenesis of this virus (Table 2). This is corroborated by the fact that HHV-7 is not a known causative agent of any definitive disease, although it has been found to be implicated in febrile seizures with or without roseola infantum infection, along with the HHV-6 virus [20]. In addition, reviews by Long et al. [21] and Griffiths [22] also suggest the lower pathogenicity of HHV-7. Recently, epitope-based vaccination has been suggested as a promising measure to enhance protective immunity against several infections caused by HHVs and other viral species. For example, the YNND epitope present in the glycoprotein B of HCMV was found to be a significant target for vaccine development to induce protective immunity [23]. Similarly, a protective epitope peptide from glycoprotein D (gD) of HSV-1 was found to have immunomodulatory protective effects [24]. In another study, a multiepitope assembly peptide (MEAP) from HSV-2 was found to induce an efficient protective immune response in mice [25]. Similarly, an epitope vaccine based on an EBV-specific CD8+ T-cell peptide was found to be a potent vaccine against infectious mononucleosis in a phase I trial [26]. Thus, the putative antigenic determinants of glycoprotein B and the major capsid protein might be potential targets for epitope-based vaccine development for a protective immune response against infections caused by HHVs. Although no antigenic determinant is located in the helicase enzyme of HHVs, the complex of this enzyme with primase is suggested to be an efficient drug target against the infections caused by HSV-1 [27], HSV-2 [28], and HHV-3 [29]. Thus, helicase and the putative antigenic determinants of glycoprotein B and the major capsid protein of HHVs may become effective targets for drug and vaccine development, although experimental investigations are required to develop therapeutics and drugs using these targets.

Phylogenetic Reconstruction. Phylogenetic reconstruction of the Herpesviridae family is done using its core gene products, the glycoprotein B and helicase proteins. The phylogenetic tree based upon these two conserved proteins clearly resolves the splits between herpesvirus subfamilies and sublineages (Figure 3). The present study supports the previous observations of an early split of the Betaherpesvirinae and Gammaherpesvirinae subfamilies from the Alphaherpesvirinae subfamily [30].
It is also seen that a few lineages of herpesviruses cluster together: Elephant endotheliotropic herpesvirus 5 clusters with Elephantid herpesvirus 1, and Alcelaphine herpesvirus 1 clusters with Alcelaphine herpesvirus 2. These observations are consistent with the hypothesis of coevolution of viruses with their hosts [30].

Data Mining of the Genomes of HHVs for the Prediction of Cis-Elements. Cis-elements are regulatory sequences, namely promoters and enhancers, which regulate gene expression and control cellular dynamics in terms of structure and function. Cis-elements are usually composed of noncoding DNA and contain protein binding sites for transcription factors (TFs) or transcription regulators (TRs), which are essential to initiate and regulate the process of transcription. Cis-elements play a significant role in virus-induced pathogenesis by determining the range of cell types susceptible to viral infection and by modulating and resisting the host immune system [31]. In the present study, we have mined the genomes of 10 HHVs for the prediction of the 16 candidate cis-elements taken in this study (Table 3). The genome of HSV-2 is found to have the highest number of putative cis-elements (535), whereas the genome of human herpesvirus 7 strain RK is found to have the lowest number of putative cis-elements (105). This implies that herpes simplex virus type 2 has a complex regulation system compared to human herpesvirus 7 strain RK, which is consistent with previous findings showing that the genome of herpes simplex virus type 2 is comparatively more complex [32]. All 16 cis-elements are found to be present in the genomes of 7 HHVs. However, the genomes of 3 HHVs, human herpesvirus 3 strain Dumas, human herpesvirus 6A, and human herpesvirus 7 strain RK, are found to lack 3 (AP-1, ERE, and Myf), 4 (ERE, LSF, SRF, and Tef), and 3 (AP-1, CRE, and Myc) cis-elements, respectively (Table 3, which lists the counts of putative regulatory cis-elements predicted in the genomes of HHVs; for abbreviations of HHVs, refer to Table 1). Further analysis of the cis-elements shows that Sp1 is the most abundant cis-element, followed by the TATA, Ets, and Mef-2 cis-elements, which are present in all genomes of HHVs (Figure 4 and Table 3). Sp1 is implicated in the regulation of E6 promoter activity and governs the transcriptional activity of human papillomaviruses (HPVs) in epithelial cells [31]. In addition, Sp1 is found to be an essential component of the immediate-early enhancers of HSVs, which are implicated in upregulating the process of DNA replication [33]. The TATA box is another significant cis-element and is known to be implicated in the optimal expression of glycoprotein C (gC) and late gene expression in HHVs. In addition, a double mutation in the TATA box is found to reduce viral replication, suggesting its significance for maximal activity in adenoviruses [34]. Similarly, the Ets cis-element plays a significant role in the activation of early viral gene expression of human Cytomegalovirus (HCMV) [35]. This HCMV infection activates the pathway of mitogen-activated protein kinases/extracellular signal-regulated kinases (MAPK/ERK), which in turn regulates the host cell cycle and viral pathogenesis [35]. Also, the Ets cis-element is found to be involved in the latency and reactivation of herpes simplex virus 1 by stimulating its ICP0 promoter [36].
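Cister itself combines motifs in a hidden Markov model, but the core signal it builds on is a position weight matrix (PWM) match score for each cis-element. The sketch below shows only that underlying step, a log-odds PWM scan along a sequence; the 4-position count matrix and the toy sequence are made up, standing in for real nucleotide count matrices such as those Cister ships with.

```python
# Minimal PWM (log-odds) scan for a cis-element motif along a DNA sequence.
# The count matrix below is a made-up 4-position example, not a real Cister matrix.
import numpy as np

counts = np.array([   # rows: A, C, G, T; columns: motif positions
    [8.0, 1.0, 1.0, 10.0],
    [1.0, 9.0, 1.0, 0.0],
    [1.0, 1.0, 9.0, 1.0],
    [2.0, 1.0, 1.0, 1.0],
])
background = np.array([0.25, 0.25, 0.25, 0.25])

# Convert counts to a log-odds PWM with a +1 pseudocount per cell.
probs = (counts + 1) / (counts + 1).sum(axis=0)
pwm = np.log2(probs / background[:, None])

idx = {"A": 0, "C": 1, "G": 2, "T": 3}
seq = "TTACGTACGAACGTTT"            # toy sequence
width = pwm.shape[1]

scores = []
for start in range(len(seq) - width + 1):
    window = seq[start:start + width]
    scores.append(sum(pwm[idx[base], j] for j, base in enumerate(window)))

best = int(np.argmax(scores))
print("best hit at position", best, "with score %.2f" % scores[best])
```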
Besides these cis-elements, Mef-2 is also considered a significant regulatory entity, which recruits class II histone deacetylase upon binding with its transcription factor [37]. This enzyme determines the fate of latency in Epstein-Barr virus (EBV), which underlines its significance in the viral life cycle [37] and suggests the importance of inhibitors of Mef-2 transcription factors for the reactivation of EBV [37].

Conclusions

The Herpesviridae family contains significant human pathogenic strains causing several incurable diseases. In this paper, genome-based approaches have been employed to mine putative targets with therapeutic value in the genomes of HHVs. Regarding the evolutionary aspects of the Herpesviridae family, pan-genome analysis shows its open nature; that is, this family is still evolving, and more genes are yet to be added to its repertoire as new sequences become available. We have also estimated the core-genome of the Herpesviridae family, which differs from the core-genome of HHVs in that the latter has one additional gene, encoding the major capsid protein. Two genes of this core set (glycoprotein B and major capsid protein) may be used for epitope-based vaccine development, whereas the third gene, encoding helicase, also has therapeutic value as a drug target. Further, phylogenetic reconstruction based upon the protein sequences of the core gene set of the Herpesviridae family is consistent with previous studies and shows sharp splits among the Alphaherpesvirinae, Betaherpesvirinae, and Gammaherpesvirinae subfamilies. In addition, cis-elements are predicted in the genomes of HHVs, which can be used as modulators of gene expression in viral-vector-based gene therapies. This study is significant in the context of mining putative factors of HHVs which might have therapeutic value, although experimental investigations are required to validate the role of these putative factors in therapies for the different diseases caused by HHVs. Being a significant viral family comprising major human pathogens with the potential to infect a wide range of hosts, the Herpesviridae family requires further research for deeper insights into its evolution and pharmaceutical aspects.
Land Suitability Assessment for Pulse (Green Gram) Production through Remote Sensing, GIS and Multicriteria Analysis in the Coastal Region of Bangladesh

The agricultural potential of Bangladesh's coastal region has been threatened by the impact of climate change. Pulse crops with high nutritional value and low production costs, such as green gram, constitute an important component of a healthy and accessible diet for the country. In order to optimize the production of this important staple, this research aims to promote climate-smart agriculture by optimizing the identification of appropriate land. The objective of this research is to investigate, estimate, and identify the suitable land areas for green gram production based on the topography, climate, and soil characteristics of the coastal region of Bangladesh. The methodology of the study included a Geographic Information System (GIS) and a multicriteria decision-making approach, the Analytical Hierarchy Process (AHP). Datasets were collected and prepared using Landsat 8 imagery, the Center for Hydrometeorology and Remote Sensing (CHRS) data portal, and the Bangladesh Agricultural Research Council. All the datasets were processed into raster images and then reclassified into four classes: Highly Suitable (S1), Moderately Suitable (S2), Marginally Suitable (S3), and Not Suitable (NS). Then, the AHP results were applied to produce a final green gram suitability map with four classes of suitability. The results of the study found that 12% of the coastal area (344,619.5 ha) is highly suitable for green gram production, while the majority of the land area (82.3%) is moderately suitable (S2). The sensitivity analysis results show that 3.3%, 63.4%, 28.0%, and 1.2% of the study area are S1, S2, S3, and NS, respectively. It is also found that the highly suitable land belongs mostly to the southeastern part of the country. The results of this study can be utilized by policymakers to adopt a proper green gram production strategy, providing special agricultural incentive policies in the highly suitable area as a provision for the increased food production of the country.

Introduction

Bangladesh, due to its geographical location and social circumstances, is one of the world's most disaster-prone countries [1][2][3][4]. Various natural disasters, such as intense rainfall, cyclones, flooding, thunderstorms, tornadoes, storm surges, salinity intrusion, and others, have already occurred in this country, and the intensity of these disasters has been rising in coastal Bangladesh [5]. Coastal areas are more vulnerable to disasters than other parts of the world [6]. These hazards may lead to a variety of socioeconomic consequences in coastal areas, including loss of property and coastal habitats, reduced agricultural productivity, loss of tourism, transportation, recreation and industry, and harbor activities [7,8]. Non-climate stressors, such as urbanization, population migration, land-use change, pollution, and gender problems, have also been strong drivers of changes in coastal agriculture around the world. These will, in turn, have an effect on the long-term viability of coastal food security [9][10][11]. Such challenges can be addressed with the help of remote sensing, GIS, and information and communication technology [22]. In addition, the Analytical Hierarchy Process (AHP), which has the advantage of incorporating expert views to prioritize the criteria by weight in consistent judgments with GIS, is used to consider the influencing criteria for increasing green gram production.
The AHP method, based on remote sensing and GIS, is extensively used in spatial decision-making processes, such as land suitability analysis for cassava [23]; crop insurance premiums based on land suitability [24]; investigating drought hazard using microwave and infrared datasets [25]; mapping of flood hazard areas [26]; site suitability for aquaculture [27]; and site selection for industrial facilities, landfills, and biodigesters [28,29]. Land suitability assessment is highly needed for productive planning and long-term land use in climate-vulnerable countries. It is crucial because it provides information on the potentials and limitations of land for a specific land use type in terms of crop performance. Land suitability analysis, according to Halder [30], is a method of land assessment that evaluates the level of appropriateness of land for a specific use. Cropland suitability analysis is a crucial step in ensuring that the available land resources are used to their full potential in order to practice sustainable agricultural production [30,31]. GIS is one of the most important methods for mapping and analyzing land-use suitability. Several criteria are required for land assessment, including various soil properties, land use land cover (LULC), slope, elevation, rainfall, and temperature. Different criteria, such as geology and biophysical components (i.e., geology, soil characteristics, relief, atmospheric conditions, and vegetation), as well as economic and socio-cultural conditions, are considered in a multicriteria assessment of land suitability [32]. The primary goal of land assessment is to determine the best land use for each specified land unit while also promoting environmental resource conservation for future use [33]. Many researchers have attempted to develop a standard framework for the most appropriate and efficient use of agricultural land. The Food and Agriculture Organization (FAO) [34] developed a framework for land evaluation, dividing land into four classes, namely: highly suitable (S1), moderately suitable (S2), marginally suitable (S3), and not suitable (NS). The Analytical Hierarchy Process (AHP) method, introduced by Saaty in the mid-1970s and developed through the 1980s, is one of the best methods for performing land suitability analysis [35,36]. AHP has been used in a variety of fields around the world, including government, enterprise, industry, healthcare, and education [37]. Because of its ability to integrate a large amount of heterogeneous data, GIS-based AHP has become popular in research, and obtaining the necessary weights for analysis can be relatively simple, even for a large number of criteria [31]. The primary purpose of this study is to investigate and determine the suitable land area in four suitability classes (highly, moderately, marginally, and not suitable) through GIS, taking expert opinions into account, for green gram cultivation in the coastal region of Bangladesh. This suitability analysis could help the government design an effective subsidy program for crop production to enhance food security. This study could serve as a model for identifying appropriate sites or land areas for cultivating specific agricultural crops. Overall, it will assist policymakers and agricultural extensionists in their land-use planning, to maximize land use and achieve sustainable agriculture in the southern area of Bangladesh.

Study Area

The research is carried out in the coastal area, the southern part of Bangladesh, composed of 18 districts, viz.
Bagerhat, Barguna, Barishal, Bhola, Chattagram, Cox's Bazar, Feni, Gopalganj, Jessore, Jhalkathi, Khulna, Lakshamipur, Madaripur, Narail, Noakhali, Patuakhali, Pirojpur, and Satkhira. The whole study area is located between 89° and 93° E and between 21° and 23° N, and the surface area is 47,150 km² (Figure 1). Though the people of the coastal areas are mostly dependent on agriculture, cropland quality has already degraded and continues to degrade because of the occurrence of natural disasters and the impact of climate change. Coastal Bangladesh is a hotspot for hydrometeorological disasters, where cyclones, tidal waves, drought, floods, waterlogging, saltwater intrusion, and land subsidence are common phenomena. This has a direct impact on livelihoods, since agriculture employs more than 60% of Bangladesh's population [38], and it is also a major source of income for the 40 million people who live along the coast [39].

Multicriteria Evaluation of the Land Suitability Analysis

The assessment of land suitability for a specific use is known as land suitability analysis [40] (Food and Agriculture Organization [FAO] [41]). A land suitability assessment examines various criteria, such as the climatic, geographical, soil, vegetation, and other characteristics of lands, to determine suitable lands for specific uses [40,41]. One of the most critical aspects of this assessment is the definition of the parameters that influence land suitability [42]. Land suitability for agricultural uses can be assessed using a variety of parameters that take into account factors such as data availability, farming methods, the precision of evaluation, the crop type, and the environmental characteristics of the study region. For evaluating the land suitability of the pulse crop (green gram), eleven criteria (Table 1) are considered, belonging mostly to topography (slope and elevation), climate (rainfall, land surface temperature), land use land cover (LULC), and soil characteristics (topsoil texture, soil drainage, soil salinity, soil pH, soil depth, and inundation land type), based on a relevant literature review [42][43][44][45][46] and the opinions of experts such as agriculturists, agronomists, and government personnel from the agriculture ministry.

Topography Data

The topography of the study area refers to the slope and elevation properties of the land of the coastal area. The slope and elevation were calculated in the ArcGIS environment using the original Shuttle Radar Topography Mission (SRTM) digital elevation model (DEM), downloaded from the USGS EarthExplorer. The topographical maps were produced and their projection corrected using the Universal Transverse Mercator (UTM) projection and the WGS 84 datum (WGS 84 46N) in ArcGIS. The slope was determined by calculating the maximum rate of change between each cell and its neighbors, so that each cell in the output raster has a slope value. A lower slope value means the terrain is flatter, while a higher slope value indicates steeper terrain. Flat fields have a smooth surface, which is better for crop cultivation because it makes water distribution more even. From Figure 2a,b, it is observed that the slope of the study area ranges from zero to 77.44%, and the altitude ranges from zero to 255 m.

Rainfall

The rainfall data are collected from PERSIANN-CCS, from the CHRS data portal.
The PERSIANN-Cloud Classification System (PERSIANN-CCS) is a real-time global high-resolution (0.04° × 0.04°, approximately 4 km × 4 km) satellite precipitation product developed by the Center for Hydrometeorology and Remote Sensing (CHRS) at the University of California, Irvine (UCI). The PERSIANN-CCS system enables the categorization of cloud-patch features based on cloud height, areal extent, and the variability of texture estimated from satellite imagery. Rainfall raster data were downloaded for the year 2020 for the whole country, followed by an extraction by mask in ArcGIS to obtain the data for the study area for further reclassification. Before reclassification, the raster was resampled to obtain the desired cell size, compatible with the cell size of the other criteria.

Land Surface Temperature (LST)

Land surface temperature (LST) is an important factor that directly affects the growth and development of plants. In this study, the mean land surface temperature map is produced through a machine learning algorithm on the Google Earth Engine (GEE) platform for the years 2018-2020. A Landsat 8 Surface Reflectance Tier 1 dataset is used, provided by the United States Geological Survey (USGS). The atmospherically corrected surface reflectance from the Landsat 8 OLI/TIRS sensors is included in this dataset. These images include five visible and near-infrared (VNIR) bands, two short-wave infrared (SWIR) bands, and two thermal infrared (TIR) bands processed to orthorectified brightness temperature. The process used to produce the LST is shown in Figure 3. The generated LST map is exported to Google Drive and brought into the ArcGIS environment. The image of the study area is extracted with "extract by mask" and then classified into categories. The annual mean rainfall ranged from 1069 to 2360 mm (Figure 4a); the minimum mean land surface temperature (LST) is 23.21 °C, and the maximum mean LST is 31.88 °C (Figure 4b).

Soil Characteristics

The base maps for soil characteristics such as topsoil texture, soil salinity, soil pH, soil depth, soil drainage, and inundation land type are collected from the Bangladesh Country Almanac (BCA) as vector data and imported into ArcGIS. The study area is extracted using the extract-by-mask function. These data are georeferenced and projected to WGS 1984 UTM 46N through the projected coordinate system in ArcGIS. Next, these vector data are converted to raster data through the polygon-to-raster function in ArcGIS 10.7, followed by reclassification into various classes according to the FAO suitability class guidelines (Table 2), which illustrate the suitability classes for each parameter.

Land Use Land Cover

This study also includes the preparation of a Land Use Land Cover classification. This was likewise done on the Google Earth Engine platform, an analysis based on a machine learning algorithm, using the Landsat 8 dataset described as Landsat 8 Collection 1 Tier 1 calibrated top-of-atmosphere (TOA) reflectance, with a cloud cover below 1%. The process started by loading the Region of Interest (ROI) and collecting the Landsat 8 dataset for each ROI. Next, all images were merged using the mosaic function, followed by clipping the mosaicked data to the study area boundary. Then, training data were collected by sampling points across the entire study area. Sample points or features are assembled with a property storing the known class label and properties storing numeric values for the predictors; a hedged sketch of this classification workflow is given below.
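The paper runs this workflow interactively in Google Earth Engine and does not publish its script. The following is a minimal sketch in the GEE Python API under stated assumptions: the dataset ID matches the Collection 1 Tier 1 TOA product named above, the `roi` extent and the labeled `points` asset (with a hypothetical `landcover` property) are placeholders the user must supply, and the band list is illustrative.

```python
# Hedged sketch of the Landsat 8 TOA supervised classification described above.
# Assumes an authenticated Earth Engine account; `roi` and `points` are placeholders.
import ee

ee.Initialize()

roi = ee.Geometry.Rectangle([89.0, 21.0, 93.0, 23.5])                 # placeholder extent
points = ee.FeatureCollection("users/your_account/training_points")   # hypothetical asset

collection = (ee.ImageCollection("LANDSAT/LC08/C01/T1_TOA")
              .filterBounds(roi)
              .filterMetadata("CLOUD_COVER", "less_than", 1))

image = collection.mosaic().clip(roi)                # merge scenes, clip to study area
bands = ["B2", "B3", "B4", "B5", "B6", "B7"]         # illustrative predictor bands

training = image.select(bands).sampleRegions(
    collection=points, properties=["landcover"], scale=30)

classifier = ee.Classifier.smileCart().train(
    features=training, classProperty="landcover", inputProperties=bands)

classified = image.select(bands).classify(classifier)

# Resubstitution accuracy from a confusion matrix (the text reports ~99.3%).
matrix = training.classify(classifier).errorMatrix("landcover", "classification")
print("overall accuracy:", matrix.accuracy().getInfo())

task = ee.batch.Export.image.toDrive(
    image=classified, description="lulc_coastal", region=roi, scale=30)
task.start()
```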
A smileCart (CART) classifier was used, trained with the training data, followed by the classification of the image. The accuracy assessment was done using the confusion matrix function, and an accuracy of 99.3% was estimated. Finally, the produced image was exported. This land use land cover classification is a supervised classification. The map was then imported into the ArcGIS environment, followed by resampling of the raster data to make the cell size compatible with the other data, and a reclassification assigning the scores to be used in the AHP. Finally, the raster data were converted to a projected coordinate system with UTM WGS 84, and the area was calculated for each class. The flow chart of the process is shown in Figure 5. In the map, five Land Use Land Cover classes were produced, viz. forest, water, agricultural land, bare/fallow land, and settlement (Figure 6).

Reclassification of All Parameters

All the parameters described above were reclassified using the reclassify function in ArcGIS into four classes, namely, highly suitable (S1), moderately suitable (S2), marginally suitable (S3), and not suitable (NS), according to the FAO suitability guideline [50] (Tables 2 and 3). Finally, a score was assigned to each class of each parameter. The reclassified criteria illustrate the areal and spatial distribution of the various suitability levels of the criteria. The priority levels among the criteria were determined with the help of the AHP analysis, and a weight was assigned to each criterion. This was followed by the deployment of a weighted overlay in ArcGIS and the development of a final suitability map. The entire process was carried out in a model builder, which is shown in Appendix A. The area in hectares was calculated for each individual class of each criterion and for each class of the final suitability map.

Preference of the Criteria/Parameters in the Decision-Making Process

The preference of parameters can be described by the weights, assigning a weight (value) to each criterion. The objective of weighting is to represent the relative importance of each criterion to the others in terms of its influence on the growth and development of plants and on crop yield. Based on the literature review and opinions from experts, especially agriculturists and agronomists, critical requirements for pulse (green gram) production are identified and the relative importance of each criterion to the others is determined. This process is referred to as the multicriteria decision approach, and it contains several phases. At first, various factors and constraints for crop production were identified [51]. Next, a pairwise comparison matrix was constructed using the above-mentioned factors. Among the various approaches to developing weights, the Analytical Hierarchy Process (AHP), a pairwise comparison matrix approach in the context of a decision-making process, was used in this study. The comparison determines the relative importance of two criteria in determining the suitability for the stated objective [51].

Analytical Hierarchy Process (AHP)

The Analytic Hierarchy Process (AHP), developed by Saaty [52], was applied to resolve highly complex decision-making problems involving multiple scenarios, criteria, and factors [53]. In terms of both the qualitative and quantitative aspects of decision-making, the AHP is one of the most powerful and flexible decision-making processes, allowing people to set priorities and make the best decision [54].
The AHP is a commonly used protocol that is widely recognized as a highly reliable multicriteria decision-making technique [55]. The method was applied to a series of parameters in order to create a hierarchical structure by assigning a weight to each parameter in the entire decision-making process [56]. As a result, a number of decision-making approaches attempt to determine the relative value, or weight, of the alternatives in terms of each parameter in each decision-making problem. According to Saaty [57], the AHP establishes a structural foundation for quantifying the robust comparison of design factors in a pairwise technique, thereby decreasing the complexity of the decision-making process. The priority of the variables (elevation, slope, rainfall, land surface temperature, inundation land type, soil pH, soil drainage, topsoil texture, soil salinity, soil depth, and LULC) was determined using weights, and the suitability of various land uses for pulse production was determined using these weights. For the weighted overlay applications using GIS, the resulting AHP weights were used to calculate the priority of each factor. In the first stage of the analysis, the parameters/criteria of the decision model were arranged into a hierarchy for land suitability. In the second stage, the criteria were scored using pairwise comparisons and relative importance scoring scales (Table 4). A fundamental 9-point measurement scale is used in AHP to express individual preferences or judgments [57], creating a matrix of pairwise comparisons (Table 5). These pairwise comparisons simplify the decision-making process by allowing independent analyses of each factor's impact [58].

Table 4. The comparison scale in AHP [56]:
- 1 (equal importance of i and j): the two activities contribute equally to the goal.
- 3 (moderate importance of i over j): experience and judgment give one activity a modest advantage over the other.
- 5 (substantial importance of i over j): experience and judgment give one activity a considerable advantage over the other.
- 7 (remarkable importance of i over j): in practice, one activity is greatly preferred and its dominance is evident.
- 9 (absolute importance of i over j): the evidence favoring one activity over the other is of the greatest possible quality.
- 2, 4, 6, 8: intermediate values, used when a compromise is necessary.
- Reciprocals: if activity i is assigned one of the above nonzero numbers when compared with activity j, then j has the reciprocal value when compared with i.

The comparative result for each pair is expressed as a number ranging from 1 (equal) to 9 (extremely different), where a higher value means that the chosen criterion is more important than the other factor. A score of 9 means that the row factor is more important than the column factor. A rating of 1/9, on the other hand, means that the importance of the row factor is far less than that of the column factor [59]. A value of 1 is given when the column and row elements are equally important. When comparing the soil salinity and slope parameters, for example, a score of 1 means they are equally significant in assessing suitability, while a score of 9 indicates that soil salinity is far more significant than slope. The diagonal and reciprocal scores were placed in the lower left triangle of a pairwise comparison matrix, which included all of the scores. When the row factor was found to be less significant than the column factor, reciprocal values (1/3, 1/5, 1/7, and 1/9) were used (Table 5).
Third, we calculated the matrix and double-checked the consistency of the pairwise comparison factors. The AHP also included measurements for calculating the normalized values of each factor, as well as the normalized principal eigenvalue and priority vectors. The pairwise matrix can be expressed as

$A = [a_{ij}]$, with $a_{ii} = 1$ and $a_{ji} = 1/a_{ij}$ for $i, j = 1, \dots, n$.

Next, the sum of each column of the pairwise matrix was calculated:

$S_j = \sum_{i=1}^{n} a_{ij}$.

To create a normalized pairwise matrix, each element of the matrix is divided by its column total:

$x_{ij} = a_{ij} / S_j$.

Finally, the weighted vector of priority factors is calculated by dividing each row sum of the normalized matrix by the number of factors n:

$w_i = \frac{1}{n} \sum_{j=1}^{n} x_{ij}$.

Consistency: it is vital to double-check the consistency of the judgments after they have been entered. The concept of consistency is best illustrated by the following example: if an orange is twice as good as a lemon, and a lemon is twice as good as a guava, how much would we prefer an orange with respect to a guava? The answer is 4, which is mathematically consistent. Similarly, if in the pairwise comparison matrix we give the first criterion a value of 2 over the second and the second criterion a value of 3 with respect to the third, the value of preference for the first criterion with respect to the third should be 2 × 3 = 6. If the decision-maker assigned a value of 4, 5, or 7 instead, there would be some inconsistency in the matrix of judgments. In AHP analysis, some inconsistency is expected and accepted [60]. Some errors in the final matrix of judgments are unavoidable because the quantitative values are derived from individual subjective choices; the question is how much inconsistency is acceptable. In this regard, AHP derives a consistency ratio (CR) by comparing the consistency index (CI) of the matrix in question (the one containing our judgments) to the consistency index of a random-like matrix (RI). A random matrix is one in which the judgments are entered at random, and it is therefore likely to be highly inconsistent. To be specific, the Random Index (RI) is the mean CI of 500 randomly filled matrices [60]. Table 6 shows the RI values for matrices of various sizes, as calculated by Saaty [61]. The consistency ratio in AHP is denoted CR, where CR = CI/RI. According to Saaty [61], a consistency ratio (CR) of 0.10 or below is sufficient to continue the AHP analysis. If the consistency ratio is greater than 0.10, it is necessary to review the judgments in order to identify and address the source of the inconsistency. The initial consistency vector was calculated by multiplying the pairwise matrix by the weights vector:

$Cv_i = (A w)_i / w_i$.

The principal eigenvalue ($\lambda_{max}$) was computed as

$\lambda_{max} = \frac{1}{n} \sum_{i=1}^{n} \frac{(A w)_i}{w_i}$.

Now, the consistency index (CI) can be calculated as

$CI = \frac{\lambda_{max} - n}{n - 1}$,

where CI denotes the consistency index, n indicates the number of factors used for the comparison in the matrix, and $\lambda_{max}$ is the highest or principal eigenvalue of the matrix. If the consistency index does not meet the threshold, the comparison results are re-examined. The consistency judgment must be checked via CR for the appropriate value of n to ensure the consistency of the pairwise comparison matrix. The CR coefficients are computed using Saaty's methodology:

$CR = CI / RI$,

where RI indicates the average consistency index of randomly generated matrices [62]; CR coefficients of less than 0.1 indicate the pairwise comparison matrix's overall consistency [60][62][63][64]. Table 6 shows the RI values for various values of n. A minimal numerical sketch of this weight and consistency computation is given below.
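As a check on the formulas above, the following sketch computes the AHP weights, λmax, CI, and CR with NumPy. The 3×3 judgment matrix and the RI value for n = 3 (0.58, from Saaty's table) are standard illustrative values, not the paper's actual 11-criterion matrix.

```python
# AHP weights and consistency check for a small example judgment matrix.
# The 3x3 matrix below is illustrative only; the paper uses 11 criteria.
import numpy as np

A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])
n = A.shape[0]

col_sums = A.sum(axis=0)          # S_j
X = A / col_sums                  # normalized matrix x_ij
w = X.mean(axis=1)                # priority weights w_i (row means)

Aw = A @ w
lam_max = np.mean(Aw / w)         # principal eigenvalue estimate
CI = (lam_max - n) / (n - 1)      # consistency index
RI = 0.58                         # Saaty's random index for n = 3
CR = CI / RI                      # consistency ratio

print("weights:", np.round(w, 3))
print("lambda_max=%.3f CI=%.3f CR=%.3f" % (lam_max, CI, CR))
assert CR < 0.10, "judgments too inconsistent; revise the matrix"
```

For this example the weights come out near (0.63, 0.26, 0.11) with CR well below 0.10, so the judgments would be accepted under the Saaty threshold described above.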
A lower CR ratio indicates greater consistency. If CR is greater than 0.10, the weight values in the matrix show inconsistencies; in this case, the AHP may not produce useful findings unless the judgments are re-examined and changed to reduce the inconsistency to less than 0.10 [57,63,65].

Weighted Overlay Analysis for Land Suitability

All the reclassified raster data, classified by suitability level, were then put into the weighted overlay process. Once the weight of each criterion was determined through the AHP process, the weights were employed in the weighted overlay process in the Spatial Analyst tool in ArcGIS. Finally, a green gram suitability map was produced (Figure 7). All the reclassified raster data were projected to WGS 1984 UTM Zone 46N to achieve the same geographic extent. The area in hectares and the percentage of each class were calculated for all suitability classes of the final suitability map and all reclassified maps.

Sensitivity Analysis

The weights assigned to the various parameters have a significant impact on the overall result. A "what-if" strategy can be used to see how the final results would have changed if the parameter weights were different. This procedure is termed sensitivity analysis. Sensitivity analysis helps us figure out how robust the initial decision was and which factors affected it (i.e., which factors influenced the original outcomes). This is a vital step in the process, and no final decision should be taken without conducting a sensitivity study [61]. In this research, equal weights were assigned to each parameter in the weighted overlay process, and a corresponding suitability map was generated. This was done to check to what extent the areal and spatial distribution of each suitability class varied when the weights of the criteria were changed.

Reclassification

The results of the reclassification of the eleven criteria considered for green gram production are described by the four classes of suitability for each parameter. The determined proportional, areal (Table 7), and spatial distributions (Figure 8a-k) of the classified criteria are discussed below. The reclassification result for the soil salinity criterion indicates that 67.8 percent of the study area has no saline soil, which is highly suitable for agricultural crop production, including mung bean, while only 1.2% of the land area is not suitable. As much as 13.9% and 17.1% of the coastal area have slightly saline and moderately saline soils, respectively. From Figure 8a, it is found that the salt concentration is very high in the southwestern coastal area. In terms of soil drainage, Table 7 shows that soil drainage conditions are unfortunately very poor in the majority of the area. More than 72% of the area has a poorly drained soil, whereas only 8.3% of the area has a well-drained soil, indicated as highly suitable. Only the eastern part of the study area shows a well-drained soil, while the western area shows a very poorly drained soil (Figure 8b). The soil of the majority of the coastal area (79.5%) is in the loam, silt, and silt loam category, considered highly suitable, whilst 18.5% of the area is predominantly clay soil, which is marginally suitable for plant growth. Only 1.2% of the area belongs to the silty clay loam and silty clay class, which ranks second in terms of well-textured soil. Few areas with muck, peat, and sandy soils were found.
It is observed that 60.5% of the land area has a deep soil (highly suitable), while 38.3% of the land is considered moderately deep. Around half of the land contains a slightly acidic to slightly alkaline soil, with a pH ranging from 5.5 to 7.3, which is regarded as highly suitable, while 40% of the soil is moderately suitable, being slightly to moderately to highly acidic or alkaline. The results for the land inundation parameter indicate that around 80% of the land area is highly suitable (high land to medium-high land), and the remaining area is medium-low land to low land (moderately to marginally suitable). For topographical criteria such as slope and elevation, it is fortunate that more than 96% of the land area is highly suitable; only a few areas are moderately, marginally, or not suitable. In terms of climatic factors, the results of the Land Surface Temperature (LST) reclassification show that the majority of the area (85.3%) has an annual mean temperature of 26-30 degrees Celsius and is thus highly suitable for green gram cultivation, while 12.6% of the area is moderately suitable, with an annual mean LST of 25-26 degrees Celsius (Table 7, Figure 8i). From Figure 8j, it is observed that the annual mean rainfall is higher (>1800 mm) near the seashore, which is considered marginally or not suitable. As much as 45.2% of the area is found to be highly suitable (1069-1500 mm rainfall) for crop cultivation (Table 7). Land Use Land Cover determines the particular land area occupied by a particular component, such as vegetation, crops, buildings, and so on. Most vegetated areas are highly suitable for any kind of crop production, while urban areas are not suitable. The land use land cover classification shows that around 28% of the total land area is agricultural land (highly suitable), and 19.6% of the land is moderately suitable (fallow/bare land). Forests and built-up areas (not suitable) cover 35.1% and 16.8% of the land area, respectively (Table 7).

AHP

In the AHP analysis, we calculated the weights of the 11 criteria to determine their relative importance (priority), which influences the final decision process of green gram suitability. Accordingly, the criteria were also ranked based on the estimated weights. From the AHP result, it is found that soil salinity makes the highest contribution (29%), while slope, elevation, and LULC make the lowest contributions (2%) to the final decision process. From Table 8, it is observed that the criteria are ranked according to weight: soil salinity first, soil drainage second, soil pH third, soil texture fourth, soil depth fifth, and inundation land type, rainfall, and LST sixth, seventh, and eighth, respectively. LULC, elevation, and slope share the same rank (ninth position). Our judgments and preferences determine the priority of the factors rather than their being assigned arbitrarily. These priorities are both mathematically correct and intuitively interpretable as measurement values derived from a ratio scale.

Final Green Gram Suitability Map

The final green gram suitability map was produced through the weighted overlay process, utilizing the weights of the criteria; a minimal sketch of such an overlay is shown below.
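The weighted overlay itself reduces to a per-cell weighted sum of reclassified suitability scores. The sketch below reproduces that arithmetic with NumPy on tiny random rasters; the class breaks are placeholders rather than the FAO thresholds of Table 2, and the weights are three of the AHP-style values discussed above, renormalized for illustration.

```python
# Minimal weighted-overlay sketch: reclassify raw rasters to scores 1..4,
# then combine them with (renormalized) AHP-style weights.
# Rasters, thresholds, and the weight subset are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
salinity = rng.uniform(0, 16, (4, 4))     # fake soil salinity raster (dS/m)
drainage = rng.integers(1, 5, (4, 4))     # fake ordinal drainage classes 1..4
ph = rng.uniform(4.0, 9.0, (4, 4))        # fake soil pH raster

def reclassify(raster, breaks, scores):
    """Map raster values to suitability scores via np.digitize."""
    return np.asarray(scores)[np.digitize(raster, breaks)]

# Scores: 4 = S1 (highly suitable) ... 1 = NS (not suitable).
sal_score = reclassify(salinity, breaks=[2, 6, 10], scores=[4, 3, 2, 1])
drn_score = 5 - drainage                           # invert the ordinal class directly
ph_score = reclassify(ph, breaks=[5.5, 7.3, 8.5], scores=[3, 4, 2, 1])

weights = {"salinity": 0.29, "drainage": 0.19, "ph": 0.14}   # subset of AHP weights
total = sum(weights.values())                                 # renormalize the subset

suitability = (weights["salinity"] * sal_score
               + weights["drainage"] * drn_score
               + weights["ph"] * ph_score) / total

# Round the continuous overlay back to the four FAO-style classes.
final_class = np.clip(np.rint(suitability), 1, 4).astype(int)
print(final_class)
```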
In the statistical analysis of the final land suitability classification, it is found that the majority of the land area, 82.3% of the coastal region (2,282,800.5 ha), belongs to the moderately suitable land for pulse crop (green gram) production, while only 12.4% of the total study area, covering 344,619.5 ha, is determined to be highly suitable land for green gram production. More than 5% of the total area (144,246.0 ha) is estimated to be marginally suitable, and a small part of the area (less than 1%) is considered not suitable (Table 9). In terms of spatial distribution, it is observed that highly suitable lands are confined mostly to the easternmost part of the country, while the largest portion of marginally suitable land is in the southwestern region (Figure 9a). From these results, it can be inferred that land closer to the sea, the Bay of Bengal, is less suitable, which is caused by the cumulative effect of climate change, whereas lands farther from the coast are more suitable for agriculture.

Sensitivity Analysis

The sensitivity analysis, carried out using equal weights for all criteria, produced significantly different suitability results. It is estimated that 3.3% of the land (91,546.2 ha) is highly suitable, while the majority of the land, 67.4% of the area with 1,869,628.5 hectares, is found to be moderately suitable for green gram cultivation. In addition, 778,642.9 hectares of land (28.1%) belong to the marginally suitable category, and 1.2% of the land area falls in the not suitable class (Table 9 and Figure 9b).

Discussion

This study examined the suitable land for pulse (green gram) production in coastal Bangladesh, and this section describes the suitable land area with respect to some important criteria, based on the study results. The suitability level of each reclassified criterion significantly affects the final green gram suitability. Soil salinity, soil drainage, soil pH, and topsoil texture are important criteria among them. In terms of soil salinity, it is found that the salt concentration is very high in the southwestern coastal area. The land very close to the coast is highly affected by soil salinity, as natural disasters enhance salinity intrusion. The salt concentration in the soil can impair nutrient uptake by the plant and eventually deter its growth and development. Salinity is expected to reduce crop production in up to 20% of irrigated lands around the world, and this loss is projected to rise to about 50% of arable land by the middle of the twenty-first century [66]. Several studies have recently shown that soil salinity stress reduces the physiological attributes of crops like mung bean (Vigna radiata L.) [67]. Hence, an increase in soil salinity implies a decrease in suitability for crop production. Soil drainage conditions directly affect the growth and development of crop plants. Agricultural soils need good drainage conditions to increase agricultural crop production by maintaining the water balance [68]. Drainage plays a key role in aeration of the root system, affecting crop growth. The majority of the study area does not have well-drained soil and is thus highly prone to water stagnation, which inhibits green gram cultivation. Soil texture is a physical property and a significant factor in crop development and field management [69].
The coastal area of Bangladesh contains predominantly loam, silt, and silt loam soils, which are very favorable soil conditions for producing almost any plant, including green gram. A few areas in the eastern part contain muck and peat soils, in which green gram cannot be produced. Soil depth also has a significant effect on green gram cultivation. Root penetration may be physically limited by any discontinuities in the soil profile, from sand or gravel layers to bedrock, which can also cause issues when using irrigation. To grow and maintain physical fertility, soil macro- and mesobiota need sufficient soil depth [70]. Most of the coastal area is found to contain soils of adequate depth. In the case of soil pH, alkaline soil is found in almost every corner of the study area, which is not a favorable condition for green gram production. The middle part of the study area, including the Barishal region, contains predominantly alkaline soils, with pH values greater than 7. Soil pH is one of the most important factors affecting the plants' absorption of trace elements, with a higher soil pH resulting in higher adsorption (and thus lower availability) [71][72][73][74]. High land (HL), medium high land (MHL), medium low land (MLL), low land (LL), and very low land (VLL) are the five inundation landforms in Bangladesh [75]. The majority of the study area is covered with high land to medium high land. Low land and very low land are highly vulnerable to floods, which lead to significant crop losses. As regards topography, the slope is considered the most important parameter in land suitability studies for agricultural crops in all areas, although slope usually contributes only indirectly to crop cultivation. When it comes to mechanization, slope, an integral feature of landform, plays a significant role. According to Navas and Machin [76], only land with a slope of less than 8 degrees should be used, to prevent soil erosion and other problems associated with the use of machinery. According to Grealish et al. [77], in an Australian analysis of the soils and land suitability of agricultural growth areas, green gram is highly suitable on slopes of 0-10 percent, moderately suitable at 11-20 percent, slightly suitable at 21-35 percent, and not suitable on slopes above that. Generally, higher elevations are less suitable for mung bean production, and vice versa. Green gram grows best at heights of 0-1600 m above sea level [78], with a maximum elevation of 2000 m [78,79]. A higher annual mean rainfall is found along the seashore, while comparatively lower rainfall, considered a favorable condition, is observed mostly in areas distant from the coastline. A suitable land surface temperature is observed in almost all parts of the study area. Mung bean (green gram) is a warm-season crop, which can be cultivated at temperatures between 20 and 40 degrees Celsius [80]. However, seed germination and plant growth are best at temperatures between 28 and 30 degrees Celsius [80][81][82]. From the final green gram suitability map, it is found that only 12 percent of the land area is highly suitable for green gram production. This is due to the fact that a significant land area is affected by soil salinity, and the majority of the area contains poorly drained soil. These two parameters have a remarkable impact on the decision-making process: according to the AHP analysis (used in the weighted overlay process in ArcGIS), their weights were 29% and 19%, respectively.
From the spatial distribution point of view, it is found that the highly suitable area belongs mostly to the eastern part of the country. This spatial distribution of highly suitable land also coincides with the spatial distribution of the "agricultural crop land" class in the land use land cover map, where agricultural cropland is found above all in the eastern part of the country. Additionally, from the above discussion, it can be concluded that the eastern part of the study area is highly suitable in terms of all the criteria considered, when compared to other parts of the area. The impact of the weights of the criteria can be measured by the sensitivity analysis, which consequently provides a validation of the GIS-based multicriteria decision-making model. In the sensitivity analysis, when the weights of the criteria are changed, the final suitability results also change: the areas at the different levels of suitability are significantly altered when equal weights are assigned to each parameter.

Conclusions

In order to ensure the food security of the coastal region of Bangladesh, a land evaluation system is required to find the potential land area for the cultivation of a specific crop. This study was carried out to find the most suitable land for pulse (green gram) crop cultivation in the coastal region of Bangladesh. The study used GIS and remote sensing with multicriteria analysis, considering 11 parameters associated with topography, climate, and soil. The study identified soil salinity as a major constraint for pulse production, with the highest importance (30%). Another important parameter is soil drainage, with a weight of 19%, causing a decrease in the area belonging to the highly suitable class. In the final assessment, it is found that only 8.36% of the study area is highly suitable (S1) land, while the largest area, representing 74% of the land, is moderately suitable (S2). Along with its higher weight value, a poor soil drainage condition covering more than 73% of the land results in less land area in the highly suitable (S1) class in the final suitability map. Additionally, it is observed that the highly suitable (S1) land area belongs mostly to the southeastern part of the country. The results of this study can be of great importance for policymakers of the agriculture ministry of Bangladesh, as they will help them formulate and implement the necessary policies to optimize pulse production. The government should prioritize the southern part, for example the Chittagong, Cox's Bazar, and Noakhali districts, to enhance green gram cultivation by providing incentives such as seed and fertilizer to the farmers, and avoid the peripheral coastal area. In addition, the government, donor agencies, and NGOs need to implement strategies to reduce soil salinity and improve soil drainage to improve pulse production. This study can be used as a model for land evaluation for many other agricultural crops in the country and abroad.
Single-digit-micrometer-resolution continuous liquid interface production

To date, a compromise between resolution and print speed has rendered most high-resolution additive manufacturing technologies unscalable, with limited applications. By combining a reduction-lens optics system for single-digit-micrometer resolution, an in-line camera system for contrast-based sharpness optimization, and continuous liquid interface production (CLIP) technology for high scalability, we introduce a single-digit-micrometer-resolution CLIP-based 3D printer that can create millimeter-scale 3D prints with single-digit-micrometer-resolution features in just a few minutes. A simulation model is developed in parallel to probe the fundamental governing principles in optics, chemical kinetics, and mass transport in the 3D printing process. A print strategy with tunable parameters informed by the simulation model is adopted to achieve both the optimal resolution and the maximum print speed. Together, the high-resolution 3D CLIP printer has opened the door to various applications including, but not limited to, biomedical devices, MEMS, and microelectronics.

Image analysis scheme for extracting 2D printed features from SEM images

An image analysis scheme is applied to extract the line width from the SEM images. Shown here is a sample line edge profile extracted from a 15 µm wide line design obtained from SEM. The SEM images are first imported into ImageJ and a single line or hole edge profile is extracted (Fig. S2). The edge profile is then subjected to a simple algorithm of peak and valley extraction. The critical dimension (CD) extracted from the SEM is based on the 50% intensity threshold method [66], where the reported line width is based on the x-coordinates extracted at the 50% intensity threshold at the outer edge.

Model parameters used in the transport and kinetics model

The final solution of the coupled partial differential equations (PDEs) for both the unreacted monomer concentration and the oxygen concentration is obtained through the MATLAB PDE solver, and the parameters used can be found in Figures 5(a-d). While some parameters are directly obtained from references, other values were estimated or directly measured. We provide a brief discussion of the estimated parameters. The initial exposure time for resin to cure onto the build platform is roughly 3 s experimentally, depending on the design. This gives us a rough estimate of H of around 10 µm, given that 3 s is sufficient for the printed part to adhere to the build platform so that the continuous print can proceed. Note that while we do not have a direct measurement of the exact light intensity from the 3.5 µm or 1.5 µm lens, we made a rough estimation from the intensity configured for the 30 µm lens. The intensity I0 is obtained using the known maximum intensity of our light engine for a 30 µm printer, which is around 40 mW/cm². The current value is assumed to scale linearly with the related Lightcrafter 0-255 control; nonlinearities in the LED itself as well as temperature fluctuations in the final light intensity have not been considered. To obtain the rough estimate of the light intensity, we took into account the single-pixel projected area reduction (30 µm lens: 30; 3.5 µm lens: 3.75; 1.5 µm lens: 1.5) along with the f-number differences (30 µm: 1.3; 3.5 µm: 12; 1.5 µm: 16). With these details, we estimated the initial exposure UV intensity I to be 1.1 W/m² for the 3.5 µm lens and 4 W/m² for the 1.5 µm lens.
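Referring back to the image-analysis scheme above, the 50% intensity threshold step reduces to finding where a line's edge profile crosses the midpoint between its minimum and maximum intensity. The sketch below illustrates this on a synthetic edge profile; the profile shape and pixel size are made up, and real SEM profiles would first be exported from ImageJ as described.

```python
# 50%-intensity-threshold critical dimension (CD) from a 1D edge profile.
# The synthetic profile below stands in for a line profile exported from ImageJ.
import numpy as np

pixel_um = 0.05                     # assumed pixel size in micrometers
x = np.arange(600) * pixel_um
# Synthetic bright line on a dark background with smooth sigmoidal edges:
profile = (40
           + 160 / (1 + np.exp(-(x - 7.0) / 0.2))
           - 160 / (1 + np.exp(-(x - 22.0) / 0.2)))

half = 0.5 * (profile.min() + profile.max())      # 50% intensity level
above = profile >= half
# Indices where the profile crosses the threshold (rising and falling edges):
crossings = np.flatnonzero(np.diff(above.astype(int)) != 0)

def interp_cross(i):
    """Linear sub-pixel interpolation of the threshold crossing between i and i+1."""
    return x[i] + (half - profile[i]) * (x[i + 1] - x[i]) / (profile[i + 1] - profile[i])

left, right = interp_cross(crossings[0]), interp_cross(crossings[-1])
print("line width (CD): %.2f um" % (right - left))   # ~15 um for this profile
```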
The [PI] concentration is obtained from the known photo-initiator concentration of 2.5 wt% used in our system. The oxygen concentration at the surface of the window is estimated to be 3 times the concentration at a PDMS surface, due to the fact that Teflon AF 2400 has a 3-times-higher permeability to oxygen than PDMS [55]. Further experimental validation of the modeling parameters is crucial for a more accurate prediction. We also note that several key elements of oxygen transport are currently ignored, including the solubility of oxygen in the TMPTA resin as well as the permeability of oxygen through the Teflon AF 2400 window. We use model parameters obtained from known references as a rational framework to understand the CLIP printing process and dead-zone formation.

Derivation of lubrication theory applied to CLIP technology - Newtonian fluid

From the CLIP schematic in (Fig. 4), the lubrication-theory derivation proceeds by applying the appropriate scalings, $z \sim h_0$, $(x, y) \sim L$, and $\epsilon = h_0 / L$, with the velocities and pressure nondimensionalized accordingly, and assuming $\epsilon \ll 1$. The simplified governing equations for a Newtonian fluid are as follows.

Continuity equation:

$$\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial w}{\partial z} = 0. \quad (1)$$

Momentum equations at leading order:

$$\nabla p = \mu \frac{\partial^2 \mathbf{u}}{\partial z^2}, \qquad \frac{\partial p}{\partial z} = 0, \quad (2, 3)$$

where ∇ is the gradient operator in the x-y plane and u is the in-plane velocity. The corresponding boundary conditions for the velocity are no slip at the window, u = 0 and w = 0 at z = 0, and u = 0, w = w(h) at z = h, where w(h) describes the velocity of the top plate. From Eq. (3), the pressure is a function of (x, y) only. We can then integrate Eq. (2) and apply the boundary conditions to obtain the Poiseuille-type profile

$$\mathbf{u} = \frac{1}{2\mu} \nabla p \,\left(z^2 - z h\right). \quad (4)$$

To determine p, we integrate the continuity equation (1) across the gap from 0 to h and apply the boundary conditions, which, for a constant gap thickness h, yields the Reynolds (squeeze-film) equation

$$w(h) = \nabla \cdot \left( \frac{h^3}{12 \mu} \nabla p \right). \quad (5)$$

Normalizing such that w(h) = 1 and h = 1 at the plate surface, the velocity profiles in both the lateral and vertical directions in the dead-zone regime follow from Eqs. (4) and (5). We can solve for the pressure field within the dead-zone regime if we first assume that the part footprint is instantaneously a cylinder of radius L. Integrating Eq. (5) in the radial coordinate then gives

$$p(r) = \frac{3 \mu \, w(h)}{h^3} \left( r^2 - L^2 \right). \quad (6)$$

We can then integrate the pressure over the circular build area to obtain the Stefan force:

$$F = \int_0^L p(r) \, 2 \pi r \, dr = -\frac{3 \pi \mu \, w(h) \, L^4}{2 h^3}. \quad (7)$$

Derivation of lubrication theory applied to CLIP technology - non-Newtonian fluid

For a non-Newtonian power-law fluid we assume a shear-rate-dependent viscosity, $\eta = m \dot{\gamma}^{\,n}$. Note that for a Newtonian fluid, n = 0.

Resin stress relaxation time and print radius

The transient stress relaxation time required for the resin (TMPTA + 0.3 wt% BLS1326 + 2.5 wt% TPO), with print diameters ranging from 0.4 cm to 2.2 cm, is plotted in Figure S5(a). The longest relaxation time is extracted by replotting Figure S5(a) on a semi-log scale to extract the average longest relaxation time within the transient stress-relaxation process. It is found that the stress-relaxation time increases with increasing diameter. Finally, we are aware that resin shrinkage during curing has a potential impact on the stress relaxation. However, in Figure S4(a), within the 100 ms exposure time there is no observable relaxation. Ongoing efforts include using Optical Coherence Tomography (OCT) to better elucidate this effect.
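To connect the closed-form results above to numbers, the sketch below evaluates the squeeze-film pressure profile and the resulting Stefan force, and checks the analytical force against numerical integration of the pressure. The viscosity, gap height, plate speed, and footprint radius are illustrative placeholders, not the paper's measured values (only the viscosity is loosely matched to the 0.2 Pa·s resin mentioned below).

```python
# Evaluate the lubrication pressure field and Stefan force for a circular
# footprint. All parameter values are illustrative placeholders.
import numpy as np

mu = 0.2       # resin viscosity, Pa*s (order of the TMPTA resin discussed here)
h = 20e-6      # dead-zone / gap thickness, m
W = 10e-6      # plate separation speed w(h), m/s
L = 5e-3       # footprint radius, m

r = np.linspace(0.0, L, 2001)
p = 3.0 * mu * W / h**3 * (r**2 - L**2)       # suction (negative) inside r < L

# Stefan force: integrate p over the disk and compare with the closed form.
F_numeric = np.trapz(p * 2.0 * np.pi * r, r)
F_closed = -3.0 * np.pi * mu * W * L**4 / (2.0 * h**3)

print("peak suction: %.1f kPa" % (p[0] / 1e3))
print("numeric F = %.3f N, closed-form F = %.3f N" % (F_numeric, F_closed))
# The h**-3 scaling is why thin dead zones dominate the separation force.
```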
Resin re-flow and print defects under insufficient interlayer time

Based on our prediction of the time required for resin to reflow across the 8.9 mm by 5.6 mm print area for a resin (TMPTA + 0.3 wt% BLS1326 + 2.5 wt% TPO) with a viscosity of approximately 0.2 Pa·s at a shear rate of 0.1 s⁻¹, we have conducted a characterization of the defects versus the interlayer time. The measurements show that the defects disappear once the interlayer time exceeds approximately 200 ms (Fig. S4).
Fly ash geopolymer concrete durability to sulphate, acid and peat attack

The durability of concrete has a profound impact on the service life of structural elements. Indonesia has extensive peat soils, which provide a highly aggressive environment for concrete structures. Geopolymer concrete has demonstrated good durability when exposed to acid/sulphate conditions similar to those encountered in peat soils. This paper investigates the performance of geopolymer concretes produced using Indonesian type F fly ash under sulphate and acid chemical attack. Geopolymer concrete specimens have been exposed for 12 months to a range of solutions: 5% sodium sulphate, 5% magnesium sulphate, 1% and 3% sulphuric acid, and a simulated peat solution. The mechanical and durability properties of the specimens, together with a control concrete, have been monitored for compressive strength, change in mass, water absorption and volume of permeable voids, ultrasonic pulse velocity, air and water permeability, pH profile, and microstructure (XRD, SEM/EDS). The control immersed in water achieved 56.93 MPa at 12 months of age. Magnesium sulphate exposure had a significant deterioration impact on the compressive strength of geopolymer concrete, demonstrating an 11% reduction in strength, while specimens exposed to sodium sulphate had an 8.9% increase in strength. Specimens exposed to the peat solution displayed a slightly increased strength, and those in acid conditions a 1.2% and 4.5% decrease in 1% acid and 3% acid, respectively. In general, the geopolymer concrete displayed a high level of resistance against sodium sulphate, 1% sulphuric acid and simulated peat attack.

Introduction

The industrial sector produces large quantities of waste products, creating disposal problems and contributing to global warming, one of the most pressing environmental hazards. This problem is exacerbated by cement manufacture, which releases greenhouse gases such as carbon dioxide. Accordingly, various studies are investigating the use of industrial waste products as a substitute for ordinary Portland cement (OPC). Fly ash-based geopolymer material is one of the most popular alternatives to conventional concrete due to such factors as availability [1], superior mechanical properties [2,3], low environmental costs [4], and better durability [5] than conventional Portland concrete.

The demand for construction materials to have a long service life and low-cost maintenance requires durable concrete, particularly under aggressive conditions. A significant chemical deterioration reaction in concrete is sulphate attack, due to expansive chemical reactions. Studies have shown fly ash geopolymer concrete (FAGC) has excellent resistance to sulphate attack compared to cement-based materials [6][7][8]. S. Wallah et al. [9] confirmed that the high alkali content in geopolymer improves resistance to sulphate attack. From a short-term study (up to 5 months of exposure), Bakharev [10] indicated the deterioration of FAGC is more significant in sodium sulphate than in magnesium sulphate solution. That study also noted very different durability of FAGC specimens when interacting with sulphate solutions, where FAGC prepared using sodium hydroxide as an activator had better performance than with sodium silicate or a mix of sodium hydroxide and potassium hydroxide. This observation is in agreement with the results reported by Albitar et al.
[5], where FAGC with a combination of sodium hydroxide and sodium silicate as activator exhibited a decrease in strength in sodium sulphate media; the detrimental impact caused by sodium sulphate is attributed to leaching of sodium hydroxide from the geopolymer specimens when exposed to sodium sulphate. In contrast, Cho et al. [11] concluded that fly ash geopolymer mortar mass and compressive strength were not affected by 10% sodium sulphate and 10% magnesium sulphate solutions after 1 year of exposure, while Elyamany et al. [12] reported a 72.7 to 80.6% residual compressive strength of fly ash geopolymer mortars in 10% magnesium sulphate solution after 48 weeks.

An acidic environment is another aggressive mechanism that deteriorates concrete. Past studies generally concluded that sulphuric acid has a negative impact on compressive strength development and causes mass loss in FAGC, but compared to OPC, geopolymeric materials have superior performance in an acid environment [13]. Song et al. [14] investigated low-calcium FAGC; approximately 61-67% residual compressive strength was reported after 56 days of exposure to 10% sulphuric acid, and residual strengths in the range of 72 to 90% after 1 year of exposure to 1% sulphuric acid. Mehta et al. [1] reported on FAGC resistance in 2% sulfuric acid solution and observed a mass loss of 4.28% at 3 months and 12.97% at 1 year. Moreover, only a few studies have been conducted to observe the resistance of geopolymer material to a peat acid environment. Peat is the accumulated organic remains of dead plants [15], containing humic and fulvic acid [16]. Peat can have a very low pH due to pyrite oxidation [15]. A study on the early strength performance of various FAGC, including FAGC in a peat environment, conducted by Olivia et al. [17], reported that geopolymer shows a slow gain of early-age strength until 28 days compared to OPC, high volume fly ash, and blended cement concrete subjected to peat water. Felix Wijaya et al. [18] noted that a geopolymer hybrid (a mix of low-quality fly ash containing >15% carbon and Portland cement) shows an increase in strength and a decrease in porosity. Conversely, Satya et al. [19] studied the performance of blended geopolymer mortars (90% fly ash and 10% palm oil fuel ash, mass ratio) in peat water (pH 4-5) for 120 days. They concluded that peat water generally decreases the strength and increases the porosity and sorptivity of the blended geopolymer mortars.
Research Significance

Past studies have investigated the durability of FAGC in sulphate and acid solutions. However, there are conflicting reports in these studies on the performance of fly ash geopolymer, particularly under sulphate and peat acid attack. This study addresses this gap, providing comprehensive data regarding the deterioration of FAGC when exposed to sodium and magnesium sulphate, sulphuric acid, and a simulated peat solution for up to 12 months. The findings of this study are significant for the utilization of fly ash in Indonesia and for the durability of FAGC in the native Indonesian environment. Indonesia has a significant area of peat and acid sulphate soils, where concrete is subject to an aggressive environment and extensive durability issues due to acid and sulphate attack. This work reports the compressive strength, change in mass, water absorption and volume of permeable voids, ultrasonic pulse velocity, and air and water permeability of FAGC in these aggressive environments. The study employs various analytical and chemical methods for the analysis of the FAGC, including XRD, SEM and EDS.

Experimental Procedure

Materials

Type F fly ash obtained from Paiton Power Station, East Java Province, Indonesia, is used, with a specific gravity of 2.21, a specific surface area of 1041 m²/kg and a loss on ignition of 0.32 (ASTM D7348-13). X-ray fluorescence (XRF) is used to evaluate the chemical composition. Table 1 gives the XRF result; according to ASTM C618-19, the fly ash is classified as class F (low calcium). Fig. 1 shows the XRD patterns, identifying 17.5% crystalline phases (quartz, hematite, maghemite, and mullite). Fig. 2 shows the spherical microsphere particles, irregular mineral fragments, and large enclosed plerospheres containing microspheres observed in the Scanning Electron Microscopy (SEM) analysis of the fly ash. Uncrushed river sand is used as fine aggregate, with a fineness modulus of 2.65 (ASTM C33/C33M-18) and a specific gravity of 2.60 (ASTM C128-15). The sand is dried at 110 °C for 24 h in the oven, then cooled to room temperature. The 7 and 10 mm sizes of crushed coarse aggregate have SSD specific gravities of 2.585 and 2.641, and water absorptions of 0.80% and 0.64%, respectively. A mix of sodium silicate (Na2SiO3) and sodium hydroxide (NaOH) is used as the alkaline reagent in liquid form. The Na2SiO3 is supplied by PQ Chemicals Australia (14.7% Na2O and 29.4% SiO2), with a specific gravity of 1.53. The 15 M NaOH is manufactured by Australian Chemical Reagents, with a specific gravity of 1.45.
Concrete synthesis

The FAGC is made with a Na2O dosage of 10% and an alkali modulus (AM) of 1.375, where AM is the mass ratio of SiO2 to Na2O in the alkaline solution. The ratio of fly ash to aggregate and the ratio of water to solids (fly ash, plus solids in the alkaline solution) are fixed at 0.808 and 0.35, respectively. The aggregate volume is a combination of 40% of 10 mm maximum size, 19% of 7 mm maximum size, and 41% sand. FAGC specimens are prepared by mixing the coarse aggregates in a 120-litre mixer for 3 minutes. Fine aggregate is then added and mixed for another 3 min, followed by the addition of fly ash and mixing for 5 min. The liquids (Na2SiO3, NaOH and 80% of the additional water) are added, stirred to homogenize, and left for 5 min. The remaining water is added, followed by mixing for another 5 min. The fresh mix is poured into Teflon moulds in 2 layers and vibrated for 20 s on a vibration table for each layer. The concrete specimens are stored for 24 hours at room temperature (21 ± 1 °C), followed by heat curing at 80 °C for 24 hours in the oven. After the heat curing, the specimens are demoulded and stored in a humidity chamber (22 °C, 70% RH) for 1 month prior to placement in solutions for chemical exposure.

Sulphate and acid exposure

Once they reach 28 days, the 100×100×100 mm³ cube specimens are immersed in water for 24 hours to obtain water-saturated specimens before being exposed to the chemical solutions, and the initial saturated weights of the specimens are measured. The specimens are exposed to 5% sodium sulphate (Na2SO4) and 5% magnesium sulphate (MgSO4) solutions, 1% and 3% sulfuric acid (H2SO4), and a peat acid solution (0.49% humic acid, 0.49% fulvic acid, and 0.03% sulfuric acid, pH 2.5). The specimens are then kept immersed for 12 months in a humidity-controlled room at a temperature of 22 °C and a relative humidity of 70%. The specimens are immersed in 52 L sealed containers with 20 L of chemical solution. The sulphate solutions are renewed every 3 months, the peat solution is refreshed monthly, the sulphuric acid solutions are refreshed every 6 months, and the pH is monitored monthly. A set of control specimens immersed in water is prepared for the same duration.
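Returning to the mix parameters in the concrete-synthesis step above, the activator quantities they imply can be back-calculated. The sketch below is our own illustration; it assumes the Na2O dosage (10%) is expressed by mass of fly ash, and uses the stated solution compositions (Na2SiO3: 14.7% Na2O, 29.4% SiO2; 15 M NaOH with specific gravity 1.45, hence about 41% NaOH by mass).

```python
# Sketch: activator masses per kg of fly ash for a 10% Na2O dosage and an
# alkali modulus AM = SiO2/Na2O = 1.375 (mass ratio), assuming the dosage
# is expressed relative to the fly ash mass.

M_NAOH, M_NA2O = 40.0, 62.0       # molar masses, g/mol

fly_ash  = 1.0                     # kg
na2o_tot = 0.10 * fly_ash          # total Na2O supplied by the activator
sio2_tot = 1.375 * na2o_tot        # AM fixes the SiO2 requirement

# All activator SiO2 comes from the sodium silicate solution (29.4% SiO2):
na2sio3_soln = sio2_tot / 0.294
na2o_from_ss = 0.147 * na2sio3_soln

# The balance of Na2O comes from NaOH (2 NaOH -> Na2O + H2O):
na2o_from_naoh = na2o_tot - na2o_from_ss
naoh_solid = na2o_from_naoh * (2 * M_NAOH) / M_NA2O

# 15 M NaOH with SG 1.45: 1 L = 1.45 kg of solution containing 0.600 kg NaOH
naoh_wt_frac = 15 * M_NAOH / 1000 / 1.45     # ~0.414
naoh_soln = naoh_solid / naoh_wt_frac

print(f"Na2SiO3 solution: {na2sio3_soln:.3f} kg per kg fly ash")
print(f"NaOH (15 M) solution: {naoh_soln:.3f} kg per kg fly ash")
# -> roughly 0.468 kg sodium silicate and 0.097 kg NaOH solution per kg ash
```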
Testing method

The compressive strength test is performed using Technotest concrete testing equipment at a loading rate of 0.33 MPa/s (ASTM C109/C109M). The reported 1-month and 12-month compressive strength values are the average of three cube specimens (100×100×100 mm³). The specimens are taken from the solutions and stored at room temperature for 24 hours prior to compressive strength testing, and are tested in semi-saturated condition. The density, absorption and volume of permeable voids test is undertaken following ASTM C642-13. The air and water permeability tests are performed using the Autoclam Permeability System; both permeability tests are conducted at 1 month and 12 months after casting, on three prism specimens of 100×300×300 mm³. The average of three cube specimens of 100×100×100 mm³ is measured monthly to evaluate the mass change of the concrete, measured in saturated surface-dry condition. Ultrasonic measurement is conducted on 100×200×200 mm³ cylinder specimens using a portable ultrasonic non-destructive digital indicating tester (Proceq Pundit PL-200PE) with a 54 kHz transducer. All tests are conducted at 1 month and 12 months after casting. Three samples are used in each test, and the results presented are the average of the data obtained. The suspension method is applied to obtain the pH profiles, by mixing the concrete powder with deionised water at a powder-to-suspension ratio of 2:1 and stirring for 15 min immediately after grinding each depth. The pH of the solution is measured using a calibrated pH probe. X-ray fluorescence (XRF) is carried out using a Bruker S4 Pioneer, while X-ray diffraction (XRD) data is acquired using a Bruker AXS D4 Endeavor wide-angle X-ray diffractometer with a copper anode at 40 kV and 35 mA. The powder samples for XRD and pH profiles are taken at intervals of 3 mm using a profile grinder from Germann Instruments. The microstructure morphology of the fly ash and FAGC is observed using an FEI Quanta 200 SEM employing secondary electrons. An energy-dispersive X-ray spectroscopy (EDS) detector is used to observe the elements, further analyzed with Aztec 4.3 software.

Fig. 3 depicts the compressive strength of the control and geopolymer concrete specimens subjected to sulphate and acid solutions at 12 months. The notation for the data is: control specimens in water (W), 5% sodium sulphate (Na), 5% magnesium sulphate (Mg), 1% sulfuric acid (1SA), 3% sulfuric acid (3SA), and peat acid (P). The compressive strength of the control concrete at 28 days of age is 46.54 MPa, corresponding to an increase of more than 22% over the 1-year duration in the control specimen. Under sulphate exposure, an 8.9% strength increase is measured in the sodium sulphate solution, but an approx. 11% strength decline in magnesium sulphate. In the case of immersion in acid solutions, FAGC exhibits a decline compared to the control specimens, corresponding to approx. 1.2% and 4.5% for 1SA and 3SA, respectively, at 12 months, but an increase of approx. 1.25% for the peat specimens. The percentage mass loss of the geopolymer concrete specimens is presented in Fig. 4.
A minor mass loss of 0.1% is observed in the water specimen over the 12-month period. When exposed to the sulphate solutions, the Na and Mg specimens presented a slight gain in mass over the exposure time, 0.24% and 0.25%, respectively. However, a significant decrease in mass is noted in both specimens subjected to sulphuric acid attack. The 1SA specimen had a mass loss approaching 2% and the 3SA specimen more than 3% at the end of 12 months. Specimens submerged in the peat solution gave a slight mass loss of approx. 0.27%.

UPV, water and air permeability, water absorption and volume of permeable voids

Table 3 displays the UPV, water and air permeability, water absorption and volume of permeable voids of the room specimens. The results show the air permeability index is between 0.1 and 0.5 Ln(mbar)/min at 1 month and 12 months, conforming to good quality concrete [20]. The control concrete is classified as low water-permeability concrete at 1 month and 12 months, as the water permeability index did not exceed 1.3×10⁻⁷ m³/√min. The concrete displays an enhanced UPV with age. The values are between 3000 and 3500 m/s, which is identified as medium quality concrete [21]. This implies that the concrete contains defects, such as voids or cracks, which may adversely affect long-term performance; a higher UPV value would indicate a higher solid density and lower porosity. The water absorption of the FAGC is more than 5% in the first month and decreases to less than 5% at the end of 12 months. In conventional concrete, water absorption greater than 5% is classified as highly permeable concrete [22]. Thus, the geopolymer concrete indicates a highly porous external surface at an early age. The volume of permeable voids shows a similar trend to water absorption, decreasing with time. In PC concrete, a volume of permeable voids less than 13% is classified as good quality concrete, while greater than 18% is classified as poor quality concrete [23]. Thus, the geopolymer concrete, which has a volume of permeable voids less than 13%, indicates good concrete, with limited pore interconnectivity between the capillary pores, gel pores and air voids within the structure.

Visual observation

Fig. 5 shows the appearance of the geopolymer concrete immersed in the sulphate and acid solutions for a period of 12 months, together with the control specimens. The results illustrate that the visual degradation is minimal, and the specimens generally remain structurally intact in the control and test solutions. Rough surfaces and a more porous, less dense structure are observed in the sulphuric acid specimens, with greater delamination in 3% sulphuric acid.

XRD

The XRD analysis results for the first 1 mm from the surface of the geopolymer concrete specimens are presented in Fig. 6. The quartz, microcline, muscovite and albite observed are attributed to minerals derived from the aggregates used in the mixtures. A broad hump is visible between 25° and 35° 2θ in all XRD spectra, confirming the formation of amorphous N-A-S-H gel reaction products. Specimens immersed in the sulphuric acid solutions indicate mineralogical changes due to the formation of gypsum. There is no evidence of ettringite in the acid and sulphate samples.
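For reference, the water absorption and volume-of-permeable-voids figures discussed above follow from the four weighings prescribed by ASTM C642. The sketch below illustrates that calculation, per our reading of the standard, using made-up masses; it is not the study's data.

```python
def astm_c642(A, B, C, D):
    """Water absorption and volume of permeable voids per ASTM C642.
    A: oven-dry mass; B: saturated mass after immersion;
    C: saturated mass after immersion and boiling;
    D: apparent (immersed) mass; all in grams."""
    absorption_immersion = (B - A) / A * 100.0          # % by dry mass
    absorption_boiling   = (C - A) / A * 100.0
    permeable_voids      = (C - A) / (C - D) * 100.0    # % by bulk volume
    return absorption_immersion, absorption_boiling, permeable_voids

# Hypothetical 100 mm cube weighings (illustrative only):
abs_imm, abs_boil, vpv = astm_c642(A=2300.0, B=2410.0, C=2425.0, D=1380.0)
print(f"absorption (immersion): {abs_imm:.1f}%")   # ~4.8%
print(f"absorption (boiling):   {abs_boil:.1f}%")  # ~5.4%
print(f"permeable voids:        {vpv:.1f}%")       # ~12.0%
# Values under 5% absorption and under 13% voids would fall in the ranges
# classified above as low-permeability, good quality concrete.
```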
EDS

Table 4 shows a summary of the atomic percentages and atomic ratios of elements from the EDS analysis, averaged over three locations. The Si/Al atomic ratio of the room sample is 2.66, while that of the specimen in water is slightly higher, at 3.15. For both sulphate samples and the peat sample, the Si/Al ratio on the surface is within the range 3.0 ≤ Si/Al ≤ 4. After exposure to sulphuric acid for 1 year, the top layers of the 1SA and 3SA samples exhibit significantly increased Si/Al ratios of 6.79 and 10.98, respectively. The EDS spectra detect the presence of sulphur in the Mg, 1SA, and 3SA specimens, but the Na specimen is free from sulphur, as is the control specimen.

Durability properties

FAGC stored in water shows good strength development over time, an increase of approx. 24% between 1 month and 12 months, which indicates ongoing geopolymerization. The increasing strength is coupled with a decrease in the permeable voids ratio, as demonstrated by an increase in UPV and a decrease in water absorption. It is also supported by the decrease in the air and water permeability indices over the same period. SEM images exhibit pores with a range of sizes at 1 month. The larger pores may be caused by air bubbles introduced in the mixing process. At 12 months, the pores are reduced in number and relative size. This is attributed to the continuing chemical reaction between fly ash and the alkaline solution, forming N-A-S-H gel, which fills the pores and densifies the matrix [24]. Moreover, there is no discernible difference between the 1-month and 12-month specimens based on visual observations, and there is no significant change in mass over the 12-month period, with less than 0.3% mass gain being observed.

Sulphate resistance

The FAGC shows superior resistance to sodium sulphate compared to magnesium sulphate. The study notes an increase of nearly 9% in compressive strength in the sodium sulphate solution relative to the control specimen at 12 months of age. The increase in strength is likely due to continuation of the geopolymerization reaction in the sulphate solution [25]. Cho et al. [11] reported an enhanced strength of FA geopolymer mortar after 1 year of exposure to 10% sodium sulphate media. However, there is a significant reduction in compressive strength in the magnesium sulphate specimens (residual strength of 89%) compared to the control specimens. This suggests the formation of magnesium aluminium silicate hydrate (M-A-S-H) gel in the geopolymer matrix. Long et al. [26] stated that M-A-S-H gel has lower strength than N-A-S-H gel and is produced by the reaction of magnesium sulphate with N-A-S-H gel. The EDS analysis of the Mg specimen depicts high Mg and S concentrations in the matrix of the geopolymer concrete, evidence of magnesium and sulphur ion migration into the concrete, while Na ions are not detected, likely due to migration of Na ions into the solution [10]. Similar to this study, Thokchom et al. [27] reported up to 56% compressive strength loss of fly ash geopolymer mortar after 24 weeks of exposure to 10% magnesium sulphate, while Elyamany et al. [12] observed up to 20% compressive strength reduction after 48 weeks with a similar concentration of magnesium sulphate solution. According to Ismail et al.
[28], the cation accompanying the sulphate anions plays a key role in controlling the deterioration mechanism for geopolymers. It appears sodium sulphate favors the structural maturity of the binding phases, while magnesium, as the cation accompanying the sulphate anions, promotes decalcification and destruction of the main binding phase.

The data demonstrate negligible changes in mass with exposure to sulphates: an increase of less than 0.24% for sodium sulphate and 0.25% for magnesium sulphate over the 12 months. Elyamany et al. [12] reported weight losses in the range of 1.13-1.49% for FA geopolymer mortars exposed to 10% magnesium sulphate solution for 48 weeks. Meanwhile, Bakharev [10] stated that FA geopolymer specimens gained between 3.1% and 4.7% in mass over 5 months of exposure to 5% sodium sulphate solution, and 1.4% to 5.3% when exposed to 5% magnesium sulphate solution.

X-ray and SEM investigation identified no new crystalline phases, such as ettringite or gypsum, in the specimens exposed to sodium and magnesium sulphates. The absence of the expansive products ettringite and gypsum could account for the relatively minor increase in mass observed. This finding is consistent with previous studies [25,29], but contrary to Bakharev [10], who observed the formation of ettringite in specimens exposed for 5 months to sulphate solution using a sodium silicate activator. Elyamany et al. [12] reported the formation of gypsum crystals in fly ash geopolymer mortars after 48 weeks of 10% magnesium sulphate exposure. Sata et al. [30] suggested the formation of ettringite and gypsum may depend on the calcium content of the precursors and aggregates, where a higher calcium content leads to the formation of expansive products when attacked by sulphates.

Acid resistance

A reduction in compressive strength in 1% and 3% sulphuric acid of approx. 1.2% and 4.5%, respectively, is noted in comparison to the control specimen after 1 year. Valencia-Saavedra et al. [31] reported a compressive strength loss of almost 68% after 1 year submerged in sulphuric acid at pH 0, while Albitar et al.
[5] noted a 12% loss over 9 months in 3% sulphuric acid. According to Mehta and Siddique [1], the decrease in mechanical performance is influenced by the stability of the N-A-S-H network under sulfuric acid attack. Song [14] explained that the deterioration mechanism of sulphuric acid is initiated by the penetration of hydrogen and sulphate ions from the acid solution into the concrete matrix. This is verified in this study by the low pH profile and the EDS spectra, which identify S ions on the surface of the specimen. Proton exchange occurs and depletes cations such as Na ions from the geopolymer matrix, as supported by the EDS analysis, where the surface of the specimens is free from Na ions. Acid exposure also results in the partial removal of aluminium ions from the geopolymer gel, which results in the destabilization of Si-O-Al bonds and the formation of Si-OH and Al-OH bonds, such that a silicon-rich amorphous layer remains. The major increase in the Si/Al ratio for both sulphuric acid specimens supports this proposed mechanism, indicating dealumination of the gel network. Meanwhile, more than 3% mass reduction is measured over 12 months of exposure in 3% sulphuric acid, while approx. 2% mass loss is observed in 1% sulphuric acid. The sulphuric acid specimens retain their shape structurally, with minor delamination on the surface and at the edges of the specimens. This may be due to a concentration of sulphate ions combined with different cations in voids near the surface [14], such that the specimens did not experience the severe visual deterioration commonly seen in OPC concrete specimens attacked by sulfuric acid. The results are consistent with previous experiments, which reported mass losses in the range from 0.51% to 5% and reductions in compressive strength between 30 and 66% in fly ash geopolymer concrete [14,31].

On the other hand, specimens in the peat solution generally show good performance, with a 1.25% increase in strength noted at the end of 12 months compared to the control samples, and a 0.26% loss in mass. No noticeable deterioration is observed on the surface of the specimens. The surface of the peat specimen is darker than that of the control specimen, attributed to the dark colour of humic acid. Unlike in the sulphuric acid specimens, the EDS spectra indicate the specimens are free from S ion migration, and Na ions still exist in the geopolymer matrix. Nonetheless, compared to the control specimens, the Na ion concentration in the peat specimens is lower, corresponding to a higher Si/Na ratio and a lower Na/Al ratio, indicating the possibility of the proton exchange noted in the sulphuric acid specimens, though to a significantly lesser extent than observed in the acid specimens.

Conclusions

Based on the laboratory analysis of the deterioration of FAGC subjected to sulphate and acid attack, the main findings can be summarized as follows:

1. FAGC demonstrates very different resistance to the two sulphate attacks. In sodium sulphate solution, compressive strength develops to a higher value (9%), while in magnesium sulphate media FAGC experiences strength loss (11%) when compared to control specimens. However, neither medium contributes to an appreciable mass change (0.25% increase), supported by the absence of expansion products in the FAGC specimens.
2. Based on the microstructural analysis, the migration of ions in the matrix of FAGC exposed to sulphate solutions demonstrates that sodium cations in sodium sulphate solution improve the structural maturity of the binding phase, while magnesium cations in magnesium sulphate solution degrade the binding phase.

3. FAGC decreases in compressive strength in 1% and 3% sulphuric acid solutions by 1.2% and 4.5%, respectively, with mass loss of up to 3%. Meanwhile, FAGC in peat media demonstrates good performance, with a 1.25% increase in strength and a 0.26% loss in mass.

4. Based on the microstructural data, when exposed to the sulphuric acid solution FAGC indicates a proton exchange mechanism, leading to instability of the N-A-S-H network, while the peat media appears to have little or no effect on the structure of the matrix.

Fig. 3. Compressive strength of FAGC specimens exposed to various sulphate and acid solutions at 12 months.

Fig. 4. Measured change in mass over the 12-month duration.

Fig. 7. SEM secondary electron images at 12 months.

Acknowledgments

This project is funded by an ARC-ITRH (Australian Research Council - Industrial Transformation Research Hub) research grant (IH200100010) allocated for the Transformation of Reclaimed Waste Resources to Engineered Materials and Solutions for a Circular Economy (TREMS). We acknowledge PT. Lestari Anugrah Tritunggal, Probolinggo, Indonesia for providing fly ash from PT. PJB Paiton. RMIT University, which provided the microscopy, X-ray and microanalysis facilities for this study, is also acknowledged.

Table 1. Chemical composition of raw fly ash.

Table 2. Measured pH of concretes at different depths. The pH values are measured at intervals of 3 mm from the top surface of the 100-mm cube concrete specimens. For concrete in water, the pH is 11.72. The pH of the first 3 mm from the top of specimens in sodium and magnesium sulphate solutions decreased to 10.92 and 10.29, respectively. A significantly lower pH is detected in specimens exposed to sulphuric acid. The surface pH of specimens in 3% sulphuric acid is less than 5, but increases to 10.24 at 12 mm depth, while in 1% sulphuric acid the pH is 5.5 at the surface and 11.17 at 12 mm. In peat solution, the pH changes are similar to those in sulphate, where the pH in the initial 3 mm is approx. 10.50 and increases to pH 11.46 at a depth of 12 mm.

Table 4. Atomic elements and atomic ratios from EDS analysis after 12 months of exposure.
AN EXPERIMENT IN HIRING DISCRIMINATION VIA ONLINE SOCIAL NETWORKS

We investigate whether personal information posted by job candidates on social media sites is sought and used by prospective employers. We create profiles for job candidates on popular social networks, manipulating information protected under U.S. laws, and submit job applications on their behalf to over 4,000 employers. We find evidence of employers searching online for the candidates. After comparing interview invitations for a Muslim versus a Christian candidate, and a gay versus a straight candidate, we find no difference in callback rates for the gay candidate compared to the straight candidate, but a 13% lower callback rate for the Muslim candidate compared to the Christian candidate. While the difference is not significant at the national level, it exhibits significant and robust heterogeneity in bias at the local level, compatible with existing theories of discrimination. In particular, employers in Republican areas exhibit significant bias both against the Muslim candidate, and in favor of the Christian candidate. This bias is significantly larger than the bias in Democratic areas. The results are robust to using state- and county-level data, to controlling for firm, job, and geographical characteristics, and to several model specifications. The results suggest that 1) the online disclosure of certain personal traits can influence the hiring decisions of U.S. firms and 2) the likelihood of hiring discrimination via online searches varies across employers. The findings also highlight the surprisingly lasting behavioral influence of traditional, offline networks in processes and scenarios where online interactions are becoming increasingly common.

Introduction

The rise of the Internet and of social media services like online social networks has created new channels through which employers and job candidates can find information about each other. Those channels can facilitate and improve the matching between firms and workers. However, job seekers also reveal, online, information that would not be easily discovered during the interview process, and which may even be illegal for employers to request or use in the hiring process. Thus, although new online tools can facilitate labor market matching, they may also create a new arena for labor market discrimination.

To date, no field data has demonstrated how online information affects the hiring behavior of U.S. firms. In surveys, employers admit to using various online services to research job candidates. 1 However, the Equal Employment Opportunity Commission (EEOC) has cautioned firms about risks associated with searching online for protected characteristics, 2 and thus may have dissuaded some firms from using social media in the hiring process. Some states have even drafted bills limiting employers' ability to access candidates' online information. 3 Thus, whether hiring bias results from personal information posted online remains an open question. Furthermore, in surveys, employers claim to use social media merely to seek job-relevant information about candidates. 4 However, much more private information can be gleaned from the online presences of prospective hires. On a social media profile, a status update can reveal a candidate's place of worship, a comment can suggest sexual orientation, and a personal photo can reveal ethnic origins.
In a country known for its cultural assimilation of immigrants (Vigdor 2008), private differences that have traditionally been scrubbed out for work or education might now be more visible online. Whether U.S. employers react to such online personal information, rather than merely to the online professional information they may seek, is not known.

We present a randomized field experiment testing the joint hypothesis that (i) firms search online for information about job applicants and (ii) change their hiring behavior according to manipulated online personal information. The experiment relies on a methodology consisting of the creation and careful design of online presences of fictional individuals. We design social media profiles for four job candidates - a Muslim versus a Christian candidate, and a gay versus a straight candidate - to manipulate personal information that may be hard to discern from résumés and interviews, and that may be protected either under federal or some state laws (henceforth referred to as "protected information"). 5 We manipulate the candidates' personal information exclusively via their online profiles, using material revealed online by actual members of popular social networking sites and job seeking sites. Candidates' professional background and résumés are kept constant across conditions. After vetting the realism and quality of candidates' online profiles in a randomized online pilot experiment (henceforth the "online pilot"), we submit résumés and cover letters on behalf of those four candidates to over 4,000 U.S. job openings, with a single application sent to each employer (henceforth the "field experiment").

1 Various surveys suggest that U.S. employers search job candidates online, but the reported frequency of searches varies considerably across surveys. See "Survey Shows 48% of Employers Conduct Social Media Background Checks." EmployeeScreenIQ. Accessed February 26, 2016. http://www.employeescreen.com/iqblog/social-media-2/48-of-employers-conductsocial-media-background-checks/; "Employers Using Social Networks for Screening Applicants." Wikibin. Accessed February 26, 2016. http://wikibin.org/articles/employers-using-social-networks-for-screening-applicants.html; "Ponemon Institute/Littler Mendelson Study." International Association of Privacy Professionals. Accessed February 26, 2016. https://www.privacyassociation.org/publications/2007_12_ponemon_institute_littler_mendelson_study; Johnston, Stuart J. 2010. "Microsoft Survey: Online 'Reputation' Counts." Internet News, January 27. Accessed February 26, 2016. http://www.internetnews.com/webcontent/article.php/3861241/Microsoft+Survey+Online+Reputation+Counts.htm.

2 See Theodore Claypoole, "EEOC Regulations Spotlight Social Media," Womble Carlyle, last modified May 24, 2011, accessed February 26, 2016, http://www.wcsr.com/Insights/Alerts/2011/May/EEOC-Regulations-Spotlight-Social-Media. On the potential risks of using social media in hiring for firms, see Harpe (2009).

3 For instance, Texas S.B. 118 aims to prohibit an employer from "requiring or requesting access to the personal accounts of employees and job applicants through electronic communication devices; establishing an unlawful employment practice." See S.B. 118, 83rd Leg., Reg. Sess. (Tex. 2013).

4 See surveys listed in footnote 1.
The résumés and letters contain no references or links to the candidates' manipulated personal information: to be treated by our manipulation of online profiles, the employer must independently choose to search online for information about the candidate using the name indicated on the submitted résumé. The main dependent variable in the field experiment is the number of interview invitations each candidate receives (i.e., callbacks). We compare the callback rate for the Christian candidate to that for the Muslim candidate, and the callback rate for the straight candidate to that for the gay candidate. We control for independent variables used in related literature (e.g., Bertrand and Mullainathan 2004; Tilcsik 2011), including firm characteristics, job characteristics, and geographical characteristics of the job's location. We also use Google and LinkedIn data to estimate the frequency with which employers search the candidates online.

We test for a stronger callback bias in states and counties that have a higher prevalence of demographic traits associated with bias in prior research. More negative attitudes to both Muslims and gay people have been found among survey respondents who are Republican, older, and who do not personally know Muslims or gay people (Arab American Institute 2012; Pew Research Center 2012; Pew Research Center 2013). Thus, we test for stronger bias in firms located in states and counties that have a high fraction of Republican voters, a high median age, and low fractions of Muslims (in the Muslim-Christian manipulation) or gay people (in the gay-straight manipulation).

Nationwide, we detect no difference in callback rates for the gay candidate compared to the straight candidate, but we find 13% fewer callbacks for the Muslim candidate compared to the Christian candidate. While the difference is not significant at the national level, it exhibits significant and robust heterogeneity in bias at the local level. We find that discrimination a) against the Muslim candidate and b) in favor of the Christian candidate varies significantly and robustly with employer characteristics in manners predicted by both theoretical and empirical previous work. In counties with a high fraction of Republican voters, the callback rates for the Muslim and Christian candidates are, respectively, 6.25% and 22.58%. In contrast, in the Democratic counties, the callback rates for the Muslim and Christian candidates are, respectively, 12.37% and 12.13%. The results are robust at the state level. Using the callback rates for the gay and straight candidates as a benchmark, we find that our results are driven by both a negative bias against the Muslim candidate, and a positive bias toward the Christian candidate.

5 Different types of personal information enjoy different levels of protection across U.S. states. Some personal traits cannot even be inquired about in interviews, while others cannot be used in the hiring decision. Some are protected across all states, and others only in some states. For simplicity, we refer to information about these traits collectively as "protected" information, but investigate state-level differences in the degree of their protection through our empirical analysis.
Furthermore, we find that the findings are robust to the inclusion of firm characteristics (including firm size and ownership status) and a host of additional controls, as well as to different categorizations of Republican, mixed, and Democratic states or counties based on electoral results and Gallup Organization surveys. These results are consistent with those of the online pilot, where we find significant bias against the Muslim candidate, relative to the Christian candidate, among subjects with hiring experience who self-identified as Republican and Christian.

These findings make a number of contributions to the literature. First, an emerging stream of work in information systems research investigates the impact of online information systems on offline behaviors (for instance, Chan and Ghose 2014 investigate the potential role of online platforms such as Craigslist in the transmission of STDs). The findings of our experiment suggest that, while hiring discrimination via online searches of candidates may not yet be widespread, online disclosures of personal traits can significantly influence the hiring decisions of a selected set of employers. At the same time, the results suggest an intriguing phenomenon for scholars of online transactions: to the extent that group-beneficial cooperation plays a role in explaining the findings, the experiment highlights the lasting behavioral influence of traditional offline networks of people physically close to each other. Even as online networks and interactions become more common, they may sometimes facilitate parochial cooperation in local, physical networks.

Second, the findings highlight an emerging tension between modern information systems and institutional regulation written for a pre-Internet world. The latter aims to preclude certain information from being used in the hiring process; the former can effectively bypass legislation by allowing individuals to make their information openly available to others online. A similar tension is being studied in the growing empirical (Miller and Tucker 2009; Goldfarb and Tucker 2011) and theoretical (Acquisti, Taylor, and Wagman 2015) information systems literature on privacy and its economic aspects (Stigler 1980; Posner 1981).

Third, the paper introduces a methodology - the manipulation of Internet profiles - for field experiments on digital discrimination. One advantage of this method, which we exploit here, is its suitability for investigating discrimination associated with traits that may be protected under a country's laws or that may be difficult to realistically manipulate in résumé-only or audit studies (as most job candidates may refrain from revealing certain types of personal information on résumés). In fact, this methodology (creating online presences to test dynamics in the offline world) may be used not only in studies of discrimination in the job market, but also in comparable studies of bias in access to credit, housing, or educational opportunities.

Fourth and finally, this paper illustrates a wrinkle in the literature investigating the role of information in economic outcomes. Economists have long been interested in the role of information (Stigler 1962) and signaling (Spence 1973) in job market matching.
Recent work has highlighted how hiring mechanisms that reduce candidates' information available to employers (for example, by making the candidate anonymous) may increase interview opportunities for certain categories of applicants (Goldin and Rouse 2000; Aslund and Skans 2012) and may raise social welfare (Taylor and Yildirim 2011). In this manuscript, conversely, we find that Internet and social media platforms can affect interview opportunities for some categories of applicants at the expense or benefit of others, by making more information about candidates available to employers. 6 The extent to which this new information channel will improve labor search efficiency (Kroft and Pope 2014) and reduce labor market frictions (by allowing better matching of employers and employees), or will in fact lead to more discrimination, is likely to be an issue of increasing public policy relevance.

Public Disclosures, Social Media, and Social Networking Sites

The rise of social media has both fueled and been fueled by an arguably unprecedented amount of public sharing of personal information. The shared information frequently includes disclosures and revelations of a personal, and sometimes surprisingly candid, nature. In certain cases, personal information is revealed through fully identified profiles. 7 Other times, users provide personal information under pseudonyms that may still be identified. 8 Some social media users take advantage of privacy settings to manage and restrict their online audiences (Stutzman, Gross, and Acquisti 2012). Others think they do, but actually fail to protect their information. 9

Employers can access shared personal information in many ways. Some job candidates make their online profiles (on social media or blogging platforms) openly accessible to strangers. 10 Others are more selective, but sensitive information such as religious affiliation, sexual orientation, or family status may still be indirectly inferable from seemingly more mundane data. 11 Finally, some employers engage in social engineering (such as using friends-of-friends connections to view a candidate's profile), or even ask for the candidates' passwords to access their profiles. 12

Although the legal consensus seems to suggest that seeking information online about job candidates (or employees) may not violate U.S. law, such searches do raise privacy and legal issues, such as the discriminatory behavior that may result from discovering protected information (Sanders 2011; Sprague 2011; Sprague 2012). The above-cited surveys suggest that U.S. employers have been using social media to screen prospective job candidates. However, no prior controlled experiment has measured the frequency of firms' usage of online profiles in hiring decisions, and how profile information actually affects those decisions. The study that comes closest to our experiment is Manant, Pajak, and Soulié (2015), who investigate the role of social media in the French labor market. Other than differences in the country of study (USA vs. France) and sample size (about 4,000 versus about 800 employers), Manant et al (2015) focus on a different research question from our study: this manuscript focuses on testing the joint hypothesis that firms search online for information about job applicants and change their hiring activities based on the personal information they find; Manant et al (2015) focus on investigating the impact of different search costs for finding candidates' information via social media. In addition, Bohnert and Ross (2010) use a survey-based experiment to investigate how the content of social networking profiles can influence evaluations of job candidates. Garg and Telang (2012) analyze how job applicants can use LinkedIn to find connections that may lead to a job offer. Kluemper, Rosen, and Mossholder (2012) assess the relationship between the content of a person's Facebook profile and her future job performance.

Discrimination and Résumé Studies

Following Bertrand and Mullainathan (2004), experiments using written applications to real employers have found evidence of discrimination against people with various traits. Pertinent prior literature has found discrimination against job candidates with Muslim rather than Swedish names in Sweden (Carlsson and Rooth 2007), candidates who openly signal Muslim beliefs (relative to other religious beliefs) on their résumés in New England, and candidates who explicitly or implicitly signal same-sex sexual orientation (Weichselbaumer 2003; Ahmed and Hammarstedt 2009; Tilcsik 2011), 13 but no evidence of discrimination against Muslims (and little systematic discrimination by caste in new economy sectors) in India (Banerjee et al 2009).

One crucial difference between existing résumé studies and our approach is that we focus on employers' volitional choices to search online for candidates' information. A second important difference is that we focus on candidates revealing information in personal online social profiles, rather than volunteering personal traits in a professional context. This approach facilitates the investigation of discrimination based on protected information that candidates do not frequently provide in their résumés or during interviews.

Specification

Prior literature on job market search strategies has highlighted employers' use of formal or informal information networks (Rees 1966), and their reliance on intensive or extensive searches (Barron, Bishop, and Dunkelberg 1985). A central privacy concern in online job market search is that Internet information exchanges may bundle job-relevant information and personal information to an extent not seen in traditional labor market practices. This raises the question we investigate in this manuscript: Is there evidence that employers search for and discriminate on the basis of publicly posted yet personal information?

We test for the existence of hiring discrimination that stems jointly from two, possibly dependent, actions: first, each employer decides whether to search online for a candidate's information; and, second, each employer who finds the candidate's profile is unknowingly treated to the experimental manipulation and chooses whether or not to interview the candidate. It is important to note the difference between our question and more frequently asked questions in the economics literature on discrimination: Does discrimination exist, to what extent, and in what form? That literature has developed sophisticated methods for testing whether or not discrimination is actually present and for empirically separating taste-based from statistical discrimination (Mobius and Rosenblat 2006; Neumark 2012; Ewens, Tomlin, and Wang 2014). In this paper, we do not attempt to separate bias from search, let alone separate taste-based discrimination from statistical discrimination or other forms of bias. We instead focus on the more basic question of whether or not social media profiles can impact hiring via the combined effects of online search and bias.

Using a between-subjects design, we test for an effect of random assignment to treatment conditions on callbacks:

Callback_i = ß0 + ß1 A_i + γ'x_i + δ'z_i + ɛ_i (1)

where A_i is an indicator of random assignment of employer i to either the Muslim condition (A_i = 1) compared to the Christian condition (A_i = 0) or to the gay condition (A_i = 1) compared to the straight condition (A_i = 0), ß0 and ß1 are unknown parameters, x_i and z_i are vectors of, respectively, observed and unobserved regressors capturing employer i's traits, γ and δ are vectors of unknown parameters, and ɛ_i is an error term. Callback_i equals one if employer i contacts the candidate for an interview and zero if there is no response or an explicit rejection.

6 In competitive markets, firms may overinvest and collect an "excessive" amount of information in equilibrium, because of a contrast between social incentives and firms' data-gathering incentives (see Burke, Taylor, and Wagman 2012). In a labor market search model, Wagman (2014) finds that firms may search for negative news about applicants and end up collecting too much information, resulting in applicants inefficiently matched with firms of lower productivity. Seabright and Sen (2014) examine how reductions in the cost of applying for jobs may increase the number of applications to a degree that adversely affects firms.

7 For instance, an overwhelming majority of Facebook users in a sample of North-American college students surveyed by Acquisti, Gross, and Stutzman (2014) used real first and last names on their profiles.

8 Numerous examples exist of techniques through which seemingly pseudonymous online profiles can be re-identified across a variety of platforms and scenarios. See, for instance, Narayanan and Shmatikov (2009).

9 Consider the gap between stated and actual privacy settings of online social network users reported by Acquisti and Gross (2006). Similar results have been subsequently found by Madejski, Johnson, and Bellovin (2012).

10 For instance, using data collected for this study (see Section 4.2), we estimate that, in 2011, 42% of all profiles of members of the Facebook network of a major North-American college shared "likes" publicly (where a like could be an interest, book, movie, music, or a TV program). Data reported in Acquisti, Gross, and Stutzman (2014) also shows that a majority of Facebook members in a North American city used facial images in their primary profile photos (which are public by default). Facebook data analyzed by Johnson, Egelman, and Bellovin (2012) indicates that around 54% of Facebook members made available to strangers at least some of their profile information.

11 For instance, Jernigan and Mistree (2009) show that a Facebook member's sexual orientation may be inferable from knowledge of her friends' network.

12 Cases in which employers asked job candidates to provide passwords to their online profiles have been reported in the media. See Stern, Joanna. 2012. "Demanding Facebook Passwords May Break Law, Say Senators." ABC News. Last modified March 26. Accessed February 26, 2016. http://abcnews.go.com/Technology/facebook-passwords-employers-schools-demand-access-facebooksenators/story?id=16005565#.T7-fVMWQMQo.

13 See also Drydakis (2009) and Hebl et al (2002).
Assuming successful randomization of A_i, the estimate of ß1 will be an unbiased estimate of the effect of manipulated online profiles on callbacks. It is analogous to an intent-to-treat effect, but we refer to it as an assignment effect to emphasize that the main goal of our design is testing the joint effect of (i) choosing to search (and thus unknowingly self-selecting into our experiment's treatment) and (ii) being treated, whereas researchers in the intent-to-treat literature are primarily interested in isolating the treatment effects. We do not expect search rates to differ across condition assignments, because employers are treated by our manipulated profiles after they choose to search. However, it is likely that both search and discrimination probabilities will vary with employer characteristics, leading to heterogeneous assignment effects. 14 We test for heterogeneous assignment effects in our regression analysis by including interactions between the random assignment and regressors that prior literature predicts may interact significantly with our manipulations.
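A minimal estimation sketch of this specification, including the assignment-by-local-politics interaction used to test for heterogeneous assignment effects, is given below. It is our own illustration on synthetic data; the variable names and the linear probability model are assumptions, and the paper's actual estimation may differ (e.g., additional controls or other model specifications).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical employer-level dataset (illustrative only):
# callback   : 1 if interview invitation, 0 otherwise
# muslim     : 1 if assigned the Muslim condition, 0 if Christian
# republican : 1 if the employer's county leans Republican
# firm_size  : log number of employees (example control)
rng = np.random.default_rng(0)
n = 4000
df = pd.DataFrame({
    "muslim": rng.integers(0, 2, n),
    "republican": rng.integers(0, 2, n),
    "firm_size": rng.normal(4.0, 1.0, n),
})
p = 0.12 - 0.08 * df.muslim * df.republican \
    + 0.05 * df.republican * (1 - df.muslim)
df["callback"] = rng.binomial(1, p.clip(0.01, 0.99))

# Linear probability model with an assignment x local-politics interaction.
# The coefficient on `muslim` is the assignment effect in Democratic-leaning
# counties; the interaction captures the additional effect in Republican ones.
model = smf.ols("callback ~ muslim * republican + firm_size", data=df)
result = model.fit(cov_type="HC1")   # heteroskedasticity-robust SEs
print(result.summary().tables[1])
```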
Finally, we include controls that are specific to our experimental design's online dimension, such as a measure of Facebook penetration across states (a proxy for the likelihood that an individual in a particular state may be inclined to search online for the candidate's social media profile).

Interpreting a Null Result

Given that our design tests jointly for online searching and for discrimination on the basis of manipulated online profiles, a null effect in the field experiment may signal low levels of either of these behaviors. In addition, the experiment could fail to reject the null hypothesis in a number of additional ways. First, employers may search the candidates online but fail to find our profiles. Our design addresses this concern by using name-selection criteria that produce high-ranked search results, and by checking, for the duration of the experiment and across a variety of platforms, that candidate name searches produce the desired results (see Section 4.2). Another possibility is that the manipulations may fail to signal the traits we intended to manipulate. To address this possibility, prior to conducting the field experiment, we use an online pilot to test whether we successfully manipulated beliefs about the candidates' personal traits. A further concern is that the candidates may be either under- or over-qualified, creating floor or ceiling effects in callback rates. We address this possibility by designing the résumés and professional backgrounds of the candidates in concert with human resources professionals, and testing their quality during the online pilot. An additional possibility is that employers may not pursue the candidates because of suspicions that they are fictional. Consequently, we design résumés and profiles using information that real individuals (including members of job search sites and members of the same social network the candidates were purported to belong to) had also made publicly available on their profiles or on job search sites. Furthermore, we choose privacy and visibility settings that were common among the profiles of real members. The online pilot tests the success of our efforts at creating realistic profiles by asking questions aimed at discerning whether online subjects had doubts about the veracity of the candidates (see Section 5.1). Finally, a null effect may be consistent with scenarios in which employers may search, but at a later stage in the hiring process (for instance, after interviews with the candidates). Our design would not capture this behavior, and would not be able to distinguish this scenario from one with little or no discrimination.

Design

We implemented a between-subjects design with four treatment conditions consisting of social media profiles for a single job candidate. We manipulated religious affiliation (a Christian versus a Muslim male) and sexual orientation (a gay versus a straight male).15 Thus, the experimental conditions represent a range of traits that include federally protected information (religious affiliation)16 and information protected only in certain states (sexual orientation).17 While these traits enjoy different levels of protection under U.S. law, job candidates can signal this information online both in an explicit manner (e.g., self-descriptions on a profile indicating one's sexual orientation) and an implicit one (e.g., a primary profile photo suggesting the individual's religious affiliation).
For each of the four candidates, we designed: (i) a résumé; (ii) a profile on LinkedIn (a popular online professional network commonly used by human resource professionals and job seekers, henceforth referred to as "PN," as in "Professional Network"); and (iii) a profile on Facebook (a popular social networking site commonly used for socializing and communicating, henceforth referred to as "SN," as in "Social Network").18 We designed candidates' résumés and PN profiles to show identical information across conditions, except for the candidates' names. In contrast, the candidates' SN profiles manipulated information about the candidates' personal traits using various profile fields, including those that some employers admit to using during the hiring process in semi-structured interviews (Llama et al. 2012). Each applicant's name corresponds to an experimental condition; the names link together a candidate's résumé, PN profile, and SN profile.19 Because a single name links all of these materials, we can submit just one application to each employer in our sample.20 Submitting multiple candidates' applications to the same employer would increase the risk of an employer detecting social media profiles which have identical photos and generic information, yet come from candidates with different names and condition-specific information.

Design Approach and Priorities

We designed profiles that (i) were realistic and representative of the population of SN users with the traits we were manipulating, and (ii) held constant direct signals of productivity outside of beliefs stemming from the trait itself. As described in the rest of this section, we populated the résumés and online profiles using existing information posted online by actual SN members demographically similar to the candidates, and by individuals who listed their résumés on job searching sites. The SN profiles are the vehicles for the manipulations, and responses to these profiles comprise our study's core. The remainder of this section discusses our design in more detail.

Candidates' Information

Names. We designed first and last names representing a U.S. male for each of the conditions used in the experiments. We chose first names common among U.S. males in the same age group as our candidates, and assigned identical first names to candidates in matching pairwise conditions (the gay and straight candidates were both named Mike, and the Muslim and Christian candidates were both named Adam).21 We then designed last names by altering letters of existing but similarly low-frequency U.S. last names. The last names were chosen to have the same number of syllables, identical lexical stress, and similar sounds across matching pairwise conditions (see Appendix Table A1 for more detail). A potential concern is that Christian and Muslim names often differ. However, roughly 35% of the about 2.35 million American Muslims are, like our candidate, US-born, and many have Anglo-Saxon names.22

We iteratively tested several combinations of first and last names until we found a subset that satisfied three criteria. Criterion 1 was that the exact first and last name combination would be unique on SN and PN: no other profile with that name should exist on the network. Criterion 2 was that SN and PN profiles designed under each name would appear among those names' top search results conducted with the most popular search engines (Google, Bing, and Yahoo), as well as searches conducted from within the SN and the PN social networks.
We continuously monitored the fulfillment of these criteria for the experiment's duration.23 Criterion 3 was the most critical one: names alone should not elicit statistically significant differences in perceptions of the manipulated traits. We conducted two checks of this criterion. First, we tested that names, by themselves, did not influence perceptions of the traits manipulated in the profiles. We recruited 496 subjects from Amazon Mechanical Turk (or, MTurk) -a popular platform for behavioral experiments24 -and presented each with one first and last name combination, randomly chosen from the list of possible names. Each subject then responded to a questionnaire on associations between that name and the traits manipulated in the field experiment. We selected names for the field experiment that did not elicit statistically significant differences in perceptions of traits (see Appendix Table A5), and then randomly assigned them to candidates. The second check of Criterion 3 combined names, résumés, and professional online profiles, and checked that names and professional information did not elicit differential propensity to invite a candidate to an interview (see Section 5.1).

Email addresses and telephone numbers. For inclusion in the résumé and cover letters sent to companies, we designed email addresses using a consistent format across the profiles and registered a telephone number for the applicants. We used automated message recordings for the number's voice messaging service. This message was identical across conditions and recorded with the voice messaging service's standard greeting. We also used the same postal address for all candidates, corresponding to a residential area of a mid-size North-American city.

Résumés. The résumés contained information depicting a professional and currently employed candidate. Each résumé contained the candidate's contact information, educational background, and work experience, as well as technical skills, certifications, and activities. The résumés were held constant across conditions, except for the names of the applicants and their email addresses.25 Hence, professional and educational backgrounds did not vary across experimental conditions, thus holding constant the candidates' job market competitiveness. The information included in the résumé was modeled after résumés found on websites such as Monster.com and Career.com for job seekers demographically similar (in terms of age and educational background) to the candidates. The résumé represented a candidate with a bachelor's degree in computer science and a master's degree in information systems.26

Creating a single SN and PN profile per candidate constrained the types of jobs to which we could apply. Hence, all résumés needed to be consistent with the constant information provided in the candidates' social media profiles (for instance, all candidates' profiles exhibited the same master's degree in information systems). Therefore, the openings (and respective résumés) tended to be technical, managerial, or analytic in nature. Two human resource recruiters vetted the résumés for realism, professionalism, and competitiveness before they were tested in the online pilot. A design objective (also tested during the online pilot) was to create a candidate sufficiently competitive to stimulate the level of interest necessary to generate an online search, but not so competitive as to outweigh any potential effect arising from an employer's perusal of the candidate's online profile.
The résumés did not include links to the candidates' personal profiles or references to the personal traits we manipulated on the SN. The experimental design relied entirely on the possibility that employers would autonomously decide to seek information about applicants online, searching for their names either on popular search engines or directly on popular social networks.

25 Research assistants blind to the experimental conditions were allowed to add up to two technical skills (for instance, Java) or certifications (for instance, CISSP certification) to a résumé if the job description required those skills or certifications as preconditions. This fine-tuning always occurred before a candidate's name was randomly added to the résumé, and therefore before the candidate was assigned to a job application.

26 We prepared 10 versions of the same résumés, focusing on slightly different sets of expertise: web development, software development, quality assurance, project or product management, medical/healthcare information, information systems, information security, business intelligence, business development, and analytics. A sample résumé is presented in Appendix Table A2.

PN ("Professional Network") profiles. We designed PN profiles for each name, maintaining identical profile information across conditions. The content of the profiles (see Appendix Table A3) reflected the information provided in the résumés. To increase realism, we also designed additional PN profiles for other fictional individuals and connected them to the candidates' profiles, so that they would become "contacts" for our candidates. We took great care to avoid any possible linkage between the actual candidates' profiles.27

27 We also created websites, email accounts, and PN presence for some of the companies reported in the candidates' résumés, as well as other workers at those companies and potential letter-writers, in order to have a complete, believable background should anyone search. The candidates were also registered as alumni from the institution that granted their degrees. As noted, we took down these materials after the completion of the experiment.

SN ("Social Network") profiles. We populated the profiles with data extracted from actual profiles of social network users who were demographically similar to the candidates; we made sure that the overall amount of public self-disclosure in our profiles would be equivalent to the amount of self-disclosure in actual social network profiles of users demographically similar to our candidates; we designed profiles to include a combination of information that changed based on the experimental condition and information that was held constant across all conditions; and finally, we posted the same amount of manipulated and constant information for each candidate (therefore, the Christian and Muslim candidates' profiles, and the gay and straight candidates' profiles, presented equivalent information about the strength of their religion, or sexual orientation).

First, we downloaded the public profiles of 15,065 members of the same Facebook college network from which the candidates purportedly graduated. The vast majority of those profiles belonged to individuals with a similar age range (their 20s), current location (a North-American mid-size city), and educational background as our candidates. As further detailed below, among that set of profiles we then focused on those with the same gender (male) as our candidates, and who self-disclosed traits matching the ones manipulated in the experimental conditions (that is, Christian, Muslim, straight, or gay). Then, we designed several components of each SN profile based on the information we had mined from real online profiles; this personal information was made public,28 and therefore visible to a human resources professional. To avoid potential confounding effects from overdisclosure, we refrained from showing certain fields that only a minority of users of the network actually publicly show, such as posts, status updates, and friends list. Appendix Table A3 presents the resulting information included in each of the four SN profiles.

By design, some of these profile components (the non-condition-specific traits) were constant across the different profiles and thus across the different experimental conditions. Other components (the condition-specific traits) were manipulated across conditions. We chose a mix of condition-specific and non-condition-specific traits that replicated the combination and balance we observed in the actual profiles we mined and analyzed (as described above). The manipulation of fields representing different types of personal information was meant to increase the realism and ecological validity of our experimental design -which we tested in our pilot experiment. We discuss each type of component in the following sub-sections.

Information kept constant across SN profiles. By design, we made the candidates' primary profile photo, secondary photos, current location, hometown, age, education, employment history, and friend list constant across conditions. Basic personal information (such as the SN member's current location and city of birth, educational background, employment history, hometown, and age) was made publicly visible on the profile and kept constant across the conditions. That information was made consistent with the content of the résumé. The candidate was presented as U.S. born and English speaking.30

The photo on the candidates' social media profiles was of a Caucasian male with dark hair and brown eyes. We picked the photo after recruiting non-professional models on Craigslist and asking them to submit a portfolio of personal photos. About forty subjects (recruited via MTurk) rated all models who submitted their photos along two 7-point Likert scales, indicating their perceived attractiveness and professionalism. We selected one male model who received median perceived attractiveness and professionalism ratings.

29 Johnson, Egelman, and Bellovin (2012) report that only 14.2% of the users of a popular social network whose profile information they mined had a public wall (that is, visible status updates and comments). That noted, we reiterate that the profiles used in the experiment contained equivalent amounts of information across conditions, so that any difference in callback rates could not be merely imputed to a higher amount of disclosure by one type of candidate over another. Furthermore, we note that several fields (such as name, gender, primary photo, profile photo, and networks) are mandatorily public on the network we used for the experiment.

30 The Muslim candidate's social network profile also presented him as speaking both Arabic and English.

We designed numerous "friends" profiles to connect to the candidates' profiles. Again, the number and identities of friends were identical across conditions.31 The set of friends showcased a mix of names, backgrounds, and profile visibility settings.
We set the friends list to "private" -that is, not visible to an employer -because of extant research suggesting that the overwhelming majority of members of the network do not publicly disclose their friend lists.32

31 Also in the case of the SN, we attempted to create a number of friends that would match the median number of connections of U.S.-based SN profiles at the time the experiment started (around 100). However, we were only able to create 49 friends, due to technical barriers such as (as noted earlier) the need to provide unique mobile phone numbers to create new profiles on the network. As noted, we prevented cross-linkages of the candidates via their common networks of friends by appropriately selecting privacy and visibility settings for the social network profiles.

32 See Johnson, Egelman, and Bellovin (2012). Although we made the list of friends private, we did not delete the friend profiles because 1) some of those friends' comments appeared on the candidates' profile photos, adding realism to the profiles; 2) the presence of a network of friends decreased the probability that the candidates' profiles could be identified as fake and deactivated by the network (none of the candidates' profiles was deactivated during the duration of the experiment).

Information manipulated across SN profiles. Some fields in the profiles were manipulated across conditions: close-ended text fields (e.g., interests), open-ended text fields (e.g., quotations), and the background image. Furthermore, we manipulated the candidates' sexual orientation by filling out the field "interested in" (either male interested in females or interested in males), and we manipulated religious affiliation through the "religion" field (either Christian or Muslim, with no specific denomination).

We abided by a number of principles in designing the manipulated information. First, we constructed "baseline" profile data, which was constant across all conditions, including information such as specific interests and activities statistically common among the SN profiles we had previously mined ("baseline information"). We then augmented the profiles with additional data (such as additional interests or activities) specific to the traits that we wanted to manipulate ("treatment information"). Both "baseline" and "treatment" information were extracted from real, existing SN profiles. The profiles therefore represented realistic pastiches of actual information retrieved, combined, and remixed from existing social media accounts.33

33 Technical limitations inherent in designing an experiment via online social networks precluded us from the possibility of randomizing the entire content of the candidates' profiles by creating thousands of different profiles for each candidate. On the other hand, our approach achieves ecological realism by relying on existing profile data.

We took care to avoid confounding the trait we were manipulating with signals of worker quality beyond signals inherent in the manipulated trait itself. Furthermore, the different candidates' profiles were designed to disclose the same amount of personal information, including the same quantity and types of data revealing their sexual orientation or religious affiliation, so as not to create profiles with unbalanced condition-specific disclosures. For similar reasons, we took great care to ensure that the information revealed by the candidates would not be construed as over-sharing, or as a forced caricature of what a Muslim (or Christian, or gay, or straight) profile should look like. The combination of baseline and treatment information of true existing profiles was one of the strategies we used to meet this goal; using only information existing in other SN profiles, and making certain fields publicly inaccessible, were two others. The online pilot experiment, with its open-ended questions about the profiles, tested whether our goal was attained (see Section 5.1 and Appendix A).

Close-ended text fields such as interests and activities were extracted using a combination of manual and statistical analysis from real SN profiles of people who shared demographic characteristics with the candidate (for the "baseline information") and who exhibited the same characteristics we manipulated (for "treatment information"). For example, when text involved countable objects such as one's favorite books or movies, we calculated the most popular items listed by the overall population of SN profiles and used that as the "baseline information" for the candidate. Then, we repeated the operation, focusing on the subset of the public disclosures of over 15,000 SN profiles that displayed the same traits we manipulated, in order to create the "treatment information." For instance, we constructed a portion of the profile text indicating activities and interests for the Christian male profile using statistical analysis of the entire sample of over 15,000 profiles that also included non-Christian ones; the remaining portion of the profile text concerning his activities and interests was constructed using statistical analysis of the sub-sample of profiles of Christian males at his university. If our sample did not provide enough information (for instance, movies) for individuals with a given trait, we complemented the statistical analysis with a manual analysis of the same SN profiles. We also extracted open-ended fields (such as personal self-descriptions, personal quotations, or non-textual fields such as profile background images) through manual analysis of existing profiles, since the extreme variety of styles and contents across open-ended texts made a statistical approach unfeasible.
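As an illustration of the frequency analysis just described, here is a short, purely hypothetical Python sketch; the data structure, field names, and sample records are invented, and the original analysis was performed on the mined profile data rather than with this code.

```python
# Illustrative sketch: pick the most popular close-ended items (e.g.,
# favorite movies) over all mined profiles as "baseline information",
# then repeat on a trait-specific sub-sample for "treatment information".
from collections import Counter

def top_items(profiles, field, k=5):
    """Return the k most frequently listed items for a profile field."""
    counts = Counter(item for p in profiles for item in p.get(field, []))
    return [item for item, _ in counts.most_common(k)]

# Stand-in records; the real inputs were ~15,000 mined SN profiles.
profiles = [{"movies": ["Inception", "Avatar"], "religion": "Christian"},
            {"movies": ["Avatar"], "religion": "Muslim"}]

baseline = top_items(profiles, "movies")  # whole sample
treatment = top_items([p for p in profiles
                       if p.get("religion") == "Christian"], "movies")
```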
Online Pilot Experiment

Before submitting résumés to actual job openings at U.S. firms, we conducted a pilot experiment, consisting of a randomly assigned questionnaire with online participants and a between-subjects design. The online pilot was designed to test whether the treatment conditions successfully manipulate relevant beliefs, such as the religious affiliation or sexual orientation of the candidates. Furthermore, its open-ended questions were designed to test the perceived realism of the profiles. Finally, the pilot experiment was designed to test whether, in the absence of the manipulated personal profiles (SN), the candidates' names, résumés, and professional profiles elicited differential propensities to invite the candidate for an interview. Details of the design, analysis, and findings of the online pilot are presented in Appendix A and are summarized here. We recruited over 1,750 U.S. residents as participants using Amazon Mechanical Turk. Participants were presented with the candidates' social profiles and résumés prepared for the actual field experiment. Participants included individuals with previous hiring experience -something we exploited in the analysis of the results.

Participants were randomly assigned to one of four conditions (gay, straight, Christian, or Muslim candidate), and were provided links to one candidate's résumé, PN profile, and SN profile. The survey instrument had four elements: (i) introduction and randomized manipulation; (ii) measurement of perceived employability; (iii) measurement of beliefs for the purpose of manipulation check; and (iv) open-ended questions and demographic characteristics. Participants were asked to evaluate the candidate. The main dependent variables were hypothetical willingness to call the candidate for an interview and perceptions of the candidate's suitability for the job.

The online pilot found, first, that the treatment conditions successfully manipulated relevant beliefs, such as the religious affiliation and sexual orientation of the candidates. Second, open-ended questions checked for perceived realism of the profiles; they provided no evidence of doubt, among the participants, that the candidates were real. Third, the online pilot tested whether, in the absence of links to the manipulated personal profiles (SN), the candidates' names, résumés, and professional profiles elicited differential propensities to invite the candidate for an interview; we found no evidence that the candidates' names and professional materials elicited different responses in the absence of the manipulated online social media profiles. Finally, responses of hypothetical hiring behavior and judgments of employability provided evidence to complement the findings of our field experiment. Consistent with our field findings (presented further below in this section), manipulated profiles in the online pilot elicited no bias against the gay candidate, relative to the straight candidate. However, among subjects with hiring experience, we found highly significant bias against the Muslim candidate relative to the Christian candidate, especially among those who self-identify as Republican and Christian.

Field Experiment

The field experiment consisted of a between-subjects design in which each employer was randomly assigned to receive one job application from either the Muslim, Christian, gay, or straight candidate. The job application period lasted from early 2013 through the summer of 2013. Samples used in résumé studies (see Section 2.2) have ranged from a few hundred to a few thousand employers. We aimed for the higher end of the spectrum, ultimately applying to every U.S. job opening that fit our pre-defined criteria of a reasonably suitable position for our candidates. This amounted to 4,183 U.S. job applications (or roughly 1,045 employers per experimental condition). Ten applications failed to meet criteria we had defined in our procedures for acceptable and complete applications, leaving us with 4,173 usable applications.

We extracted job openings from Indeed.com, an online job search site that aggregates jobs from several other sites.34 We selected positions that fit the candidates' backgrounds -namely, positions that required either a graduate degree or some years of work experience. For each position, we sent the most appropriate version of our résumé. (Recall from Section 4.2 that we designed different versions of the résumé to fit ten different job types covering a combination of technical, managerial, and analytic positions.) We defined several criteria that jobs and companies had to pass for us to apply to them. We focused on private sector firms.
Primarily, the job had to be related to the candidates' background and level of experience, although we also included (and controlled for) positions for which the candidates could be considered slightly over- or under-qualified. In addition, we carefully avoided sending two applications to the same company, or to companies that were likely to share HR resources such as databases of applicants (for instance, parent companies of firms to which we already applied).35 We also excluded staffing companies, companies located in the same geographic region as the candidates' reported current location, and companies with 15 or fewer employees (to limit the costs imposed on them by the process of searching and vetting fictional job candidates).36 All applications (résumés and cover letters) were submitted online, either by email or through employer-provided web forms.

34 Appendix Table A4 lists the search terms used to find different types of jobs.

We recorded the city and state listed on job postings, and, when possible, we recorded the city and state where the job would be located. When job location was not provided, we used the location of the company's headquarters. We obtained this measure for all observations and employed it for our state-level analysis. From the company name, we were able to find the street address of the company headquarters for all but a few hundred observations. We used ArcGIS37 to match this street address to its county. We then merged our data with county-level data from the American Community Survey38 based on the county where the company headquarters is located.

We used a combination of two data sources to estimate the frequency with which employers searched for the candidates online. One data source consisted of Google AdWords "Keyword Tool" statistics. These publicly accessible statistics capture the number of times a certain term is searched on Google from various locations. We used this tool to estimate the number of times the exact names of the candidates were searched from U.S. IP addresses. The second data source consisted of statistics provided by the PN network (LinkedIn) via so-called "Premium" accounts. If a user subscribes to a Premium account, that user will be able to get information such as the count of visits to its PN profiles, and in some cases the actual identity of the visitors. We subscribed to Premium accounts for each of our candidates' profiles in order to track visits to those profiles.

Search Trends

Each of these sources of data is imperfect. Google Keyword Tool statistics provide aggregate monthly means for searches of a given term, rather than raw data. Furthermore, if the mean is higher than zero but below 10 searches per month, no exact count is provided. Similarly, "Premium" PN accounts do not actually track all visits to that account (for instance, in our tests, visits from subjects that had not logged onto the network went undetected and uncounted; visitors who only viewed the summary profile of the candidate, rather than his full profile, also were not detected or counted; in addition, certain Premium accounts may allow "invisible" views of other LinkedIn profiles). Nevertheless, considered together, these data sources do offer a rough estimate of the proportions of employers searching the candidates online. We tracked Google Keyword Tool statistics and LinkedIn data over a period of several months.
Based on the documented searches we detected on Google and on the PN, we can estimate a minimum lower threshold of employers who searched for the profiles at 10.33%, and the likely proportion of employers who searched at 28.82% (see Appendix B for details). These estimates appear consistent with the results of a 2013 CareerBuilder survey of 2,291 hiring managers and HR managers, according to which 24% claimed to "occasionally" use social media sites to search for candidates' information; 8% answered "frequently;" and only 5% answered "always."40

The search rates highlighted in this section are sample-wide, aggregate estimates, because (as noted) in most cases we could not identify the specific employers visiting the profiles. The sub-sample of cases where we could directly capture the identity of an employer searching for our profiles is small (N = 121), but, unsurprisingly, represents employers who are strongly interested in our candidates. The overall callback rate for the four candidates in this sub-sample was 39.67%. The callback rates for the straight and gay candidates were 31.25% and 40.00%, respectively, and were not significantly different from each other. However, the callback rates for the Christian and Muslim candidates were 54.84% and 32.14%, respectively, and were significantly different at the ten-percent level (using a chi-squared test). This is consistent with the main results presented in the remainder of this section, where we do find evidence of callback bias in favor of the Christian candidate compared to the Muslim candidate among certain types of employers, but no evidence of bias against the gay candidate compared to the straight candidate.
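As a quick check of the Christian-versus-Muslim comparison in this sub-sample, the reported rates of 54.84% and 32.14% are consistent with 17 callbacks out of 31 applications and 9 out of 28; those counts are our reconstruction from the percentages, not figures taken from the paper's data. A Pearson chi-squared test on that table reproduces significance at the ten-percent level:

```python
# Reconstructed 2x2 table implied by the reported percentages (assumed
# counts, not the authors' data). Pearson chi-squared without Yates'
# continuity correction.
from scipy.stats import chi2_contingency

#                 callback  no callback
table = [[17, 31 - 17],   # Christian condition
         [9,  28 - 9]]    # Muslim condition

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(round(chi2, 2), round(p, 3))  # ~3.08, p ~0.079 -> significant at 10%
```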
Employer Callbacks

Our primary dependent variable is a callback dummy that equals one if the candidate was contacted for an interview and zero if the candidate received a rejection, no response, or a scripted request for more information. Of the 4,173 observations from both manipulations, 11.20% were callbacks, 15.86% were rejections, 69.29% had no response, and 3.32% were scripted requests for more information.

Timing of Callbacks. Figure A1 (in the Appendix) shows the geographical distribution of applications across the United States. Figure A2 (in the Appendix) shows the timing of callbacks. As discussed earlier, surveys find more bias among respondents who are Republican, older, and who know fewer members of the relevant group (Muslims or gay people). We thus test for differences in bias according to geographical differences in the prevalence of these traits in the population. Although we cannot identify the individual making the callback decision, and thus cannot observe his or her traits, following the literature we use demographic measures in the firm's geographical area. Specifically, we indicate states and counties with: (i) a higher median age than the U.S. median age, (ii) a lower fraction of the population that is Muslim (or gay) than the U.S. fraction, and (iii) a high fraction of voters who voted for Mitt Romney in the 2012 Presidential election. Our measure of political party support in (iii) follows Tilcsik (2011), who used Presidential election data in his audit study, and the Gallup Organization, which regularly produces lists of the ten most Republican and ten most Democratic states based on survey data. Voting results are available at the county level and are based on real behavior. We thus indicate, respectively, the ten states with the highest fractions of 2012 Romney voters and the ten states with the lowest. We refer to the remaining states as politically mixed.42

41 We did not apply to job openings that had been posted for over 30 days. It is possible that some job openings may have been filled or were otherwise inactive. We expect these cases to be uncorrelated with the condition assignments.

42 We use Presidential election data from Leip (2013), which is compiled, where possible, from final election results from official sources (e.g., certified election results posted by state election boards). We found that election data from news organizations is often not updated to include final election results. In Section 5.2.3 we show that the results are robust with respect to using the Gallup Organization's classification of Republican and Democratic states for the year of 2012, according to daily tracking of self-reported political party identification.

The age interaction is less robust than the political party support interaction: it is not present at the state level (we show in Section 5.4 that theories of underlying mechanisms can fully explain the age interaction, so it may be a statistical artifact). Finally, we find no significant differences in bias between high and low Muslim counties or states. In the gay-straight manipulation, we find no differences in bias across any of the geographical areas. For instance, in Republican states, 15.38% of the straight candidate applications and 14.29% of the gay candidate applications received callbacks (see Table 2, Columns (4) and (5)). In Democratic states the callback percentages for the straight and gay candidates were 11.24 and 11.71, respectively.44

43 Consider an employer search rate of 30% (see our search rate estimates in Section 5.2.1). Suppose there are three types of employers: Type 1 rejects the application and does not search the candidate online; Type 2 considers the candidate further and searches online; and Type 3 considers the candidate further but does not search online. Suppose that 65% of the employers are Type 1, 30% are Type 2, and 5% are Type 3. In this example, the overall search rate is just the prevalence of Type 2 employers, i.e., 30%. If we assume that the callback rate by Type 3 employers (who do not search) equals the average of the callback rates for the two candidates by Type 2 employers (who do search), then we have two equations with two unknowns: P1*0 + P2*M + P3*(M+C)/2 = 0.02 and P1*0 + P2*C + P3*(M+C)/2 = 0.17, where P1, P2, and P3 are the probabilities of employers being Types 1, 2, and 3, and M and C are the callback rates for the Muslim and Christian candidates among employers who search. Sample-wide callback rates of 2% for the Muslim candidate and 17% for the Christian candidate can be explained by callback rates among employers who search of 2% for the Muslim candidate and 52% for the Christian candidate, and a callback rate when employers consider the candidate but do not search of 27%. It is worth noting that if, in Section 5.2.1, we underestimated actual employers' search rates, or if employers in Republican states have a higher than average search rate, then this example implies a smaller callback bias for employers who search.
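Footnote 43's back-of-the-envelope example reduces to a two-equation linear system, which can be checked numerically; the short sketch below simply re-derives the 2%, 52%, and 27% figures stated in the footnote.

```python
# Numerical check of footnote 43's example. With P1 = 0.65, P2 = 0.30,
# P3 = 0.05, solve for M and C (callback rates among searching employers):
#   P2*M + P3*(M + C)/2 = 0.02
#   P2*C + P3*(M + C)/2 = 0.17
import numpy as np

P2, P3 = 0.30, 0.05
A = np.array([[P2 + P3 / 2, P3 / 2],
              [P3 / 2, P2 + P3 / 2]])
b = np.array([0.02, 0.17])
M, C = np.linalg.solve(A, b)
print(M, C, (M + C) / 2)  # ~0.02, ~0.52, ~0.27, as stated in the footnote
```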
We also test whether the biases found in Table 2 stem from both the Muslim and Christian conditions or only from one. We compare callbacks in each of these conditions to callbacks in the pooled gay-straight conditions -using the latter as benchmarks. Consider columns (6), (7), and (8) of Table 2: both the Muslim and the Christian conditions contribute to the biases shown in Panel A (i.e., in the bias-prevalent areas). Columns (7) and (8), respectively, test for differences in the number of callbacks in (i) the Muslim condition compared to the pooled gay/straight conditions and (ii) the Christian condition compared to the pooled gay/straight condition. The first row shows that in Republican states, the Muslim callback rate of 2.27% is significantly lower than the pooled gay/straight callback rate of 14.77%. The remaining rows of Panel A show that in Republican counties, older counties, and states and counties with a lower fraction of Muslims, the callback rate in the Christian condition is significantly higher than the callback rate for the pooled gay/straight condition. Thus, both the Muslim and the Christian conditions play significant roles in generating callback bias in areas with a high prevalence of types who show bias in surveys. The result that both the Muslim and the Christian conditions contribute significantly to the bias in Republican areas is not sensitive to which sample is chosen as the comparison sample. We find similar results when we separate the pooled gay/straight candidate sample into just the gay candidate sample or the straight candidate sample. Regardless of whether we use the sample with the gay candidate or the sample with the straight candidate, we find that the callback rate for the Muslim candidate in Republican states is significantly lower and the callback rate for the Christian candidate in Republican counties is significantly higher.

In the next sub-section, we present regression analysis using both the state-level and county-level measures of Republican, politically mixed, and Democratic areas. We present the state-level analysis first. However, the state-level analysis has two drawbacks. First, taking all four conditions together, there are 1,745 employers in 10 Democratic states or districts (including Washington DC), 2,244 in 31 mixed states, and just 184 in 10 Republican states. Second, we cannot fully control for state-level covariates with state-level definitions of political areas. Thus, we address these issues by also using county-level regressions with state fixed effects. The county-level measures, in addition to allowing state controls (and thus focusing on within-state variation), have a larger sample size in the Republican areas. The sample sizes in the Democratic and Republican counties are 2,475 and 228, respectively. Finally, in Section 5.2.3, we test for robustness with respect to changes in estimators and types of standard errors, and with respect to additional specifications and measures of political areas.

Regression Analysis. In Column (1) of Table 3, the interaction between the Muslim assignment effect and the Democratic states dummy is positive, 0.145, and significant at the five-percent level. The interaction between politically mixed states and the Muslim assignment dummy is also positive, 0.120, and significant at the ten-percent level. Column (2) is the same as Column (1), except that it includes additional state-level control variables (see Table 3 notes). The Muslim assignment effect and the interaction effects are similar to Column (1). Column (3) is the same as Column (2) except that it adds firm- and job-level controls to the regressions (see Table 3 notes).46 These results are, again, consistent with the results of Columns (1) and (2). Column (4) is identical to Column (1), except that it uses county-level measures with state fixed effects.47
The effect of the Muslim assignment dummy in the default category -namely, Republican counties where the median age is higher and the fraction Muslim lower -is negative and significant at the one-percent level. The interactions between the Muslim assignment dummy and the politically mixed and Democratic counties are positive and significant at the ten- and five-percent levels, respectively. The interaction between a low median age and the Muslim assignment dummy is positive and significant at the five-percent level. Column (5) is the same as Column (4), except that it includes additional county-level control variables analogous to the state-level controls that were included in Column (2) (see Table 3 notes). Column (6) is the same as Column (5), except that it adds the firm-level controls that were included in Column (3). The results in Columns (5) and (6) are consistent with the Column (4) results just summarized. Furthermore, looking across Columns (4)-(6), one can see, in rows 2 and 3, that the callback rate for the Christian candidate is slightly lower in politically mixed and Democratic counties than it is in Republican counties.

45 Table 3 presents robust standard errors with no clustering. The results are robust with respect to different specifications of the model, as discussed in Section 5.2.3 (see Appendix Table A16 for probit estimates and Appendix Table A17 for OLS estimates with clustered standard errors).

46 Fewer than 500 employees is a widely used cutoff standard for identifying small firms in U.S. Small Business Administration programs.

47 The county-level sample is smaller because we were unable to identify counties for all of our observations. We lose a few additional observations in the county-level regressions because of missing county-level demographic data.

Table 4 presents regression results for our sexual orientation manipulation. In contrast to the results of the religion manipulation, there are no significant interactions between the treatment assignment and the interacted geographical areas. In short, the regression analysis highlights results that are robust to a variety of specifications.

The results in Tables 2, 3, and 4 are consistent with prior results that were suggestive of similar patterns but not definitive. First, they are consistent with our online pilot experiment (see Section 5.1): the online pilot showed significant bias against the Muslim candidate relative to the Christian candidate among subjects with hiring experience, but no bias against the gay candidate relative to the straight candidate (see Appendix A). Second, in a Republican area, Gift and Gift (2014) find evidence (significant at the one-percent level in a one-tailed test) of discrimination against job candidates who signal Democratic Party affiliation on résumés compared to those who signal Republican Party affiliation. In a Democratic area, there is a directionally consistent bias against job candidates who signal Republican Party affiliation, but it is not significant at the five-percent level in a one-tailed test. This is consistent with our finding of stronger discrimination in Republican areas. In Section 5.2.4, we consider a theory that explains why bias is stronger in some places, and derive testable predictions about where to expect stronger bias. The lack of evidence of bias against the gay candidate differs from the results reported by Tilcsik (2011), who found a significant effect of sexual orientation on callbacks.
Rather, our finding is consistent with more recent results presented by Bailey, Wallace, and Wright (2013), who do not find evidence of discrimination against gay men or lesbians. Evolving attitudes toward gay people may explain the differences between our results and Tilcsik's (2011), whose experiment was conducted in 2005. According to our analysis of General Social Survey data,48 acceptance of gay marriage increased among self-identified Republican, Independent, and Democratic respondents from 2006 to 2012.

Robustness checks

Despite having a binary dependent variable, we used OLS in our analysis above because of concerns about interaction effects in probit regressions (Ai and Norton 2003). Appendix Table A16 is identical to Table 3, except that it reports probit estimates. Appendix Table A19 presents regression results using the Gallup list (Column 1) and Combined list (Column 2) definitions of Republican, Democratic, and politically mixed states. Columns (1) and (2) show that the results are robust to these alternative definitions of political areas. Columns (3) and (4) show the smallest and largest interactions with the Democratic states that we could find by deleting each of the 50 states (including Washington, DC), one regression at a time. Column (3) removes California from the sample and Column (4) removes Idaho. The results remain robust with respect to dropping one state at a time. Even the smallest interaction with the Democratic states dummy is significant at the five-percent level. Columns (5) and (6) are the same as Column (2) except that they add, respectively, the additional controls included in Columns (2) and (3) of Table 3. Again, the results are robust with respect to the inclusion of these additional controls.

50 Gallup produced this list of states from a sample of 321,233 respondents surveyed by Gallup Daily tracking over the course of the year at the rate of 1,000 respondents per day.
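The drop-one-state-at-a-time check described above is straightforward to sketch. The following illustrative loop (not the authors' code; the file and column names are invented) records the Democratic-states interaction coefficient from each leave-one-state-out regression and reports the extremes:

```python
# Illustrative leave-one-state-out robustness check: re-estimate the model
# once per excluded state and track the interaction coefficient of interest.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("applications.csv")  # hypothetical employer-level data
coefs = {}
for state in df["state"].unique():
    sub = df[df["state"] != state]
    fit = smf.ols("callback ~ muslim * democratic_state + muslim * mixed_state",
                  data=sub).fit(cov_type="HC1")
    coefs[state] = fit.params["muslim:democratic_state"]

print(min(coefs, key=coefs.get), min(coefs.values()))  # smallest interaction
print(max(coefs, key=coefs.get), max(coefs.values()))  # largest interaction
```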
Limitations

A number of limitations are inherent to the experiment's design. First, a null effect in the Gay-Straight manipulation may not necessarily be interpreted as an absence of discrimination, because a number of potential explanations may exist. Second, heterogeneity in the assignment effect according to firm characteristics cannot be separated into firms' differences in search rates and differences in the treatment effect. Third, the results stem from candidates with certain characteristics, applying to certain types of jobs, with certain types of employers; they may not apply to other types of candidates, positions, or organizations. Fourth, we measure employers' traits with state- or county-level data; we do not capture the traits of the actual employers or human resource decision makers. Finally, discrimination based on information posted online may stem partly from the signal that the posted information sends about a candidate's judgment in choosing to post that information. However, the disclosures we investigated in this paper took place in personal profiles that employers must actively seek out, rather than in résumés in which candidates intentionally reveal potentially sensitive information to employers; the consequences of the fact that people use online social networks to disclose this information in this way are part of what we sought to understand.

Our experimental approach -namely, designing profiles to replicate information found in real social media profiles of individuals demographically comparable to our candidates (see Section 4.2) -increased the realism and ecological validity of our experimental design, and decreased the chances that employers may interpret the profiles as "over-sharing" beyond the levels that we observe in real online social network profiles.

Conclusion

We set out to answer the question: do hiring decision makers seek online information that should not be used in the hiring process, and are they affected by what they find? We focused on the effects of religious affiliation and sexual orientation. We used randomized experiments to answer those questions, capturing discrimination both in the behaviors of real employers and in the attitudes of survey subjects. Our results were broadly consistent across the online pilot and the field experiment. They suggest that while hiring discrimination via Internet searches and social media may not be widespread for the companies and jobs we considered in this experiment, revealing certain traits online may have a significant effect on the hiring behavior of self-selected employers who search for the candidates online.

The findings provide further evidence of a phenomenon increasingly studied in the information systems literature: the impact of online IT services on offline outcomes and behaviors. They also highlight the novel tensions arising between regulatory interventions designed in a non-digital world (that attempt to protect personal information), and technological innovations that bypass those protections (by making said information otherwise available).

This work suggests various directions for future research. There is a tension between a potentially efficiency-enhancing effect of online information on labor market search and a potential for that same information to increase labor market discrimination. To the extent that online markets facilitate not just matching but also discrimination, people in disadvantaged categories may face a dilemma: if they censor their online activity, they may be able to protect their privacy, but a limited online presence might, by itself, signal that there is something to hide, or that something is missing. Imagine a charismatic Muslim candidate with both a strong social network in his religious community and many professional contacts. If he were to reduce his online presence to hide his friendships in his religious community, at least three problems arise.

Table notes: There are no significant differences in callback rates shown in Columns (4) and (5). * p < 0.10, ** p < 0.05, *** p < 0.01. Dep. Var. = 1 if the candidate is contacted for an interview, zero otherwise. Numbers reported are OLS coefficients (robust standard errors in parentheses). Additional state-level geographical controls included in columns 2 and 3 are: 2012 unemployment rate, fraction foreign-born, natural log of median income, fraction non-white, fraction college educated or more, fraction evangelical Christian, fraction urban, Facebook penetration, and legal protection from religious discrimination. Additional county-level geographical controls in columns 5 and 6 are: 2012 unemployment rate, fraction foreign-born, natural log of median income, fraction non-white, fraction college educated or more, fraction evangelical Christian, and rural-urban continuum code dummies. Firm-level controls included in columns 3 and 6 are: dummies for women and minority owned, public firm, large firm (500 employees or more), and federal contractor; included application-level characteristics are dummies for entry level position, references required, preferred salary required, master's degree required, one or more years of experience required, multiple fields of experience required, and 9 dummies for field of employment. The continuous state and county variables -namely, 2012 unemployment rate, fraction foreign-born, natural log of median income, fraction non-white, fraction college educated or more, fraction evangelical Christian, fraction urban, and Facebook penetration -are centered on their means. The omitted categories for dummies capturing variables with more than two categories are: Republican states or counties; counties with less than 20,000 people; and jobs in information systems. The remaining variables are binary. Full regression results are available in the Appendix. Notes for the sexual orientation regressions are as in Table 2, with the exception that columns 2 and 3 control for legal protection from sexual orientation discrimination, instead of legal protection from religious discrimination. Full regression results are available in the Appendix.
Increased Levels of Sphingosylphosphorylcholine (SPC) in Plasma of Metabolic Syndrome Patients

Recent developments in lipid mass spectrometry enable extensive lipid class and species analysis in metabolic disorders such as diabesity and metabolic syndrome. The minor plasma lipid class sphingosylphosphorylcholine (SPC) was identified as a ligand for lipid-sensitive G-protein coupled receptors playing a key role in cell growth, differentiation, motility, calcium signaling, tissue remodeling, vascular diseases and cancer. However, information about its role in diabesity patients is sparse. In this study, we analyzed plasma lipid species in patients at risk for diabesity and the metabolic syndrome and compared them with healthy controls. Our data show that SPC is significantly increased in plasma samples from metabolic syndrome patients but not in plasma from patients at risk for diabesity. Detailed SPC species analysis showed that the observed increase is due to a significant increase in all detected SPC subspecies. Moreover, total SPC and individual SPC species correlate strongly and positively with both body mass index and the acute-phase low-grade inflammation marker soluble CD163 (sCD163). Collectively, our study provides new information on SPC plasma levels in metabolic syndrome and suggests new avenues for investigation.

Introduction

Metabolic syndrome (MetS) is characterized by a combination of different metabolic abnormalities including abdominal obesity, dyslipidemia, increased fasting glucose, and hypertension [1]. These metabolic disorders collectively lead to more complex diseases including type-2 diabetes (T2D) and cardiovascular diseases (CVD) [1]. Several lines of evidence support the role of lipids in metabolic health. The advancement of plasma lipidomic analysis has allowed the identification of numerous lipid classes and species as surrogate markers for age-related diseases, including hypertension, T2D, and CVD [1][2][3]. Previous studies have reported the association of several lipid molecules, such as cholesteryl esters (CE 18:2, CE 16:0), the phosphatidylethanolamine glycerophospholipid species (PE 36:2), and the diacylglycerol species (DG 36:2, DG 34:0), with MetS risk factors [1,4,5].

The study included healthy controls (n = 12), patients at risk for diabesity (n = 19), and MetS patients (n = 33), classified according to the WHO criteria [28]. MetS patients met at least three out of the five WHO criteria: central obesity (BMI > 30 kg/m²), elevated triglycerides (TG; ≥150 mg/dl), increased fasting plasma glucose (≥100 mg/dl), and decreased HDL levels (<40 mg/dl). Risk patients were those who met fewer than three criteria, and healthy controls did not meet any of the criteria. Characteristics of the subjects are presented in Table 1.

Clinical chemistry analysis

A standard clinical chemistry analyzer (ADVIA-1800, Siemens) was used to determine the levels of cholesterol, triglycerides, HDL-cholesterol, LDL-cholesterol, and VLDL-cholesterol.

Lipid analysis

Extraction of lipids was performed in the presence of non-naturally occurring lipid species as internal standards, following the protocol described by Bligh and Dyer [29]. Plasma lipid species determination was completed using direct flow injection ESI-MS/MS in positive ion mode as described previously [30,31]. A precursor ion of m/z 184 was used for phosphatidylcholine (PC), lysophosphatidylcholine (LPC), and sphingomyelin (SM) [30,32].
A fragment ion of m/z 264 was used to analyze Cer, while a fragment ion of m/z 369 was used for the analysis of free cholesterol (FC) and CE after selective derivatization of FC [31,33]. PE and PI were analyzed following neutral loss fragments of 141 and 277 Da, respectively [34,35]. The analysis of PE-based plasmalogens was done as described by Zemski-Berry [36]. On the other hand, SPC and sphingosine-1-phosphate (S1P) analysis was done using our previously established liquid chromatography-tandem mass spectrometry (LC-MS/MS) protocol [11,37]. Self-programmed Excel macros were used for data analysis of all lipids [30,38]. Lipid species were annotated according to the LipidomicNet proposal for shorthand notation of lipid structures derived from mass spectrometry [39]. Glycerophospholipid species annotation was based on the assumption of even-numbered carbon chains only. Sphingomyelin species were assigned based on the assumption of a sphingoid base with 2 hydroxyl groups. Statistical analysis Results are expressed as means ± standard errors (SE). Statistical analysis was performed using the IBM SPSS 20 software package. Comparisons between the different groups were evaluated using ANOVA followed by Dunnett's test. Linear relationships were studied using Pearson's correlation coefficient. The level of significance was set at 0.05. Plasma lipid profile in patients at risk for diabesity and MetS Plasma lipid species levels from all groups (control, risk, and MetS) were analyzed using previously published ESI-MS/MS or LC-MS/MS procedures as described in the methods section. Interestingly, while the alterations observed in total levels of the majority of lipid classes did not reach significance, total levels of SPC and LPC were significantly different between the different groups (Figs 1A and 2A). In order to decipher the specific species responsible for the observed alterations, plasma from the studied patients was further analyzed and the levels of SPC and LPC species were determined. The most significant difference was seen in the levels of SPC species. As compared to controls, the total SPC levels in the plasma of MetS patients were significantly elevated by 3.8-fold (p<0.001) (Fig 2A) and associated with a similar increase in all detected SPC species, i.e., SPC d16:1 (4.7-fold, p<0.0001), SPC d18:1 (3.6-fold, p<0.001), and SPC d18:2 (4.3-fold, p<0.001) (Fig 2B). However, the increase of total and individual SPC species in the plasma of patients at risk for diabesity did not reach significance (Fig 2A and 2B). The level of S1P, which either originates from ceramide under conditions of increased sphingosine levels or from SPC under the action of autotaxin or other ecto-lysosphingomyelinase(s), was also measured in the plasma of all groups. Although the plasma level of S1P was decreased in patients at risk for diabesity as compared to controls, the decrease only reached significance (1.5-fold, p<0.05) in MetS patients (Fig 2C). Although the decrease in total plasma levels of phosphatidylcholine (PC) did not reach significance between the different groups, significant alterations in LPC levels were observed. When compared to healthy subjects, the plasma levels of LPC were significantly decreased in MetS patients (p<0.05) (Fig 1A). The observed decrease was due to a pronounced decrease in individual LPC species, i.e., LPC 18:1, LPC 18:2, LPC 20:3, LPC 20:4, LPC 20:5, and LPC 22:5 (p<0.05) (Fig 3), while changes in the levels of the other detected species were not significant (data not shown).
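To make the statistical workflow concrete, the sketch below shows how a comparison of this kind (ANOVA across the three groups followed by Dunnett's test against controls) could be reproduced in Python. It is a minimal illustration only: the values are simulated placeholders, not study data, and scipy.stats.dunnett requires SciPy 1.11 or newer.

# Minimal sketch of the per-lipid group comparison: one-way ANOVA across
# control / risk / MetS, then Dunnett's test of each patient group vs. control.
# All values are simulated placeholders, not data from this study.
import numpy as np
from scipy.stats import f_oneway, dunnett

rng = np.random.default_rng(0)
control = rng.normal(50, 15, size=12)    # e.g., total SPC (nM); n = 12 controls
risk = rng.normal(65, 20, size=19)       # n = 19 patients at risk for diabesity
mets = rng.normal(190, 60, size=33)      # n = 33 MetS patients

f_stat, p_anova = f_oneway(control, risk, mets)   # omnibus test
res = dunnett(risk, mets, control=control)        # each patient group vs. control

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3g}")
for name, p in zip(["risk", "MetS"], res.pvalue):
    print(f"Dunnett vs. control ({name}): p = {p:.3g}")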
Neither total nor individual LPC species, on the other hand, were affected in patients at risk for diabesity. Correlation of SPC, S1P and LPC with obesity and inflammation The relation between lipids, obesity, low-grade inflammation, and MetS is established [25]. MetS is strongly associated with increased risk of T2D and CVD. Therefore, we tried to elucidate in our patient groups the relation of SPC, S1P, and LPC with both BMI and the low-grade inflammation marker sCD163. Table 2 shows a strong significant positive correlation of BMI with total SPC (r = 0.694, p<0.0001) and the individual species SPC d16:1 (r = 0.573, p = 0.004), SPC d18:1 (r = 0.693, p<0.0001), and SPC d18:2 (r = 0.724, p<0.0001). Also, total SPC (r = 0.422, p = 0.045), SPC d18:1 (r = 0.429, p = 0.041), and SPC d18:2 (r = 0.439, p = 0.036) correlated significantly with sCD163, while no correlation existed with SPC d16:1 (Table 2). In contrast to SPC, S1P showed a significant negative correlation with BMI (r = -0.422, p = 0.045), while the correlation with sCD163 did not reach significance (Table 3). A significant negative correlation was also observed between several LPC species and both BMI and sCD163 (Table 4). Whereas total LPC did not significantly correlate with BMI, a significant negative correlation with BMI was observed for six LPC species, including LPC 18 species (r = -0.580, p<0.0001) and LPC 22:6 (r = -0.473, p<0.0001). Comparison of the different species showed that unsaturated LPC species, including mono- and polyunsaturated species, but not saturated LPC species, negatively correlated with BMI and sCD163 (Table 4). Discussion High levels of toxic lipid intermediates play a major role in the progression of many diseases including obesity, type-2 diabetes (T2D), and cardiovascular diseases (CVD) [40]. The aim of this study was to monitor changes in the plasma lipidome of patients at risk for diabesity and MetS patients and to evaluate the contribution of these changes to the evolution of disease pathology. The classification of the subjects used in this study was based on the WHO guidelines and is summarized in Table 1. In accordance with our previously published data [41], this study shows that, versus controls, the alterations of PC, SM, PE, PE P, and Cer levels did not reach significance in the plasma of MetS patients. Our data related to Cer levels are in contrast to the data presented by Meikle et al. [6], showing a significant increase in the levels of Cer in prediabetes and T2D. However, the average age of the prediabetes patients in the former study was 69 (58-74) years, which differed from the average age of the patients at risk for diabesity and MetS in our study, which was 48.5 ± 2.3 and 54.9 ± 1.8 years, respectively. Furthermore, the size of our study groups is considerably smaller than that of Meikle et al. [6]. The observed discrepancy may, therefore, be age and/or study size dependent. Glycerophospholipids and their metabolites, lysophospholipids (LPLs), are involved in signal transduction mechanisms regulating cell proliferation and apoptosis, as well as in disease development [42,43]. LPLs are generated by the enzymatic reaction of members of the phospholipase A2 family [44] and by the action of lecithin-cholesterol acyltransferase (LCAT) activity [45]. While the decrease in PC levels did not reach significance, a significant decrease of LPC plasma levels was observed in MetS patients.
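The correlation summaries in Tables 2-4 reduce to computing Pearson's r and its p-value between each lipid variable and the clinical covariates. A minimal sketch follows, assuming a hypothetical per-subject pandas DataFrame df whose column names (BMI, sCD163, and the lipid species) are illustrative rather than taken from the study files.

# Sketch of a Tables 2-4 style summary: Pearson's r and p for each lipid
# variable against BMI and sCD163. `df` and its columns are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

def correlation_table(df, lipid_cols, covariates=("BMI", "sCD163")):
    rows = []
    for lipid in lipid_cols:
        for cov in covariates:
            pair = df[[lipid, cov]].dropna()        # pairwise-complete subjects
            r, p = pearsonr(pair[lipid], pair[cov])
            rows.append({"lipid": lipid, "covariate": cov,
                         "r": round(r, 3), "p": round(p, 4)})
    return pd.DataFrame(rows)

# Example call with illustrative column names:
# table = correlation_table(df, ["SPC_total", "SPC_d18:1", "LPC_18:2", "S1P"])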
The widespread reduction in LPC species in the plasma of MetS individuals was not surprising, as we [41] and others [46] have previously reported a similar trend. In line with our earlier findings [41], this study shows a significant negative correlation between individual LPC species and both BMI and the low-grade inflammatory marker sCD163 (Table 4), further supporting a potential anti-inflammatory role of LPC in diabesity and MetS patients. In the last 10 years more than 150 articles have been published on SPC, but none reported its modulation in the plasma of MetS patients. Therefore, to our knowledge this is the first study demonstrating a significant increase in SPC levels in the plasma of MetS patients (3.8-fold vs. control, p<0.001). Detailed analysis showed that all detected species, SPC d16:1, SPC d18:1, and SPC d18:2, were increased by more than 3.5-fold in the plasma of MetS patients with respect to their normal counterparts. SPC occurs naturally in plasma at a concentration of 50 ± 15 nM [10,11]. SPC is also generated intracellularly from SM under the action of SM deacylase (Fig 4). An increase in SPC levels could represent a spillover mechanism upon increased SM hydrolysis or could result from a decrease in SPC metabolism. SPC is metabolized to sphingosine-1-phosphate (S1P) by autotaxin (ATX), an exoenzyme with lysophospholipase D activity, or by other ectonucleotide pyrophosphatase/phosphodiesterases (ENPPs) [13,47,48]. The occurrence of ATX in the plasma suggests that SPC could be directly metabolized to S1P [48]. ATX is a multifunctional protein that has the ability to hydrolyze membrane or secreted glycerophospholipids to produce bioactive mediators such as LPA and S1P [13]. ATX is also established as an adipose-derived secreted enzyme that controls adipose tissue expansion, brown adipose tissue function, as well as energy expenditure [49]. A significant down-regulation in the expression of ATX was reported in the retina of diabetic rats and in human subcutaneous fat, along with a significant reduction in its levels in the serum of obese patients [49]. Our study shows a significant decrease in plasma S1P levels in MetS patients as compared to control subjects, while no significant changes in the levels of plasma SM were observed. Fig 4. Hypothetical scheme depicting SPC and S1P synthesis. Ceramides (Cer) are generated from the sphingomyelinase pathway and other pathways such as the salvage pathway and the de novo synthesis pathway (not shown in this scheme). Sphingosine (SPH), which can be formed from the degradation and recycling of complex sphingolipids and glycosphingolipids in an acidic environment (salvage pathway), may also contribute to Cer metabolism. SPH is phosphorylated by SPH kinase to sphingosine-1-phosphate (S1P), a bioactive lipid intermediate with several effects. In addition to Cer, SPC is another biologically active lipid metabolite generated from sphingomyelin (SM) under the action of a SM-deacylase. Secreted SPC is likely a substrate for autotaxin (ATX), an exoenzyme with lysophospholipase D activity, which leads to S1P generation. Alternatively, S1P can be converted from SPC by ectonucleotide pyrophosphatase/phosphodiesterases (ENPPs). SPC and S1P can both induce their effects through binding to G-protein coupled receptors present on different cell types. It cannot be excluded that an intracellular SPC → S1P conversion also occurs by a yet unidentified SPC-ase, and that not SPC but rather S1P is predominantly secreted from cells.
Extracellular S1P might be converted back to SPC through not yet identified enzyme(s). Substantial amounts of extracellular SPC and S1P are loaded onto preβ-HDL particles and therefore they may contribute to the composition and/or maturation of α-HDL lipoproteins. UC: unesterified cholesterol, PC: phosphatidylcholine, LPC: lysophosphatidylcholine, PM: plasma membrane. Therefore, an increase in SPC levels could, at least in part, be due to a decrease in its metabolism. The significantly lower levels of LPC species in MetS are likely the consequence of the lower levels of HDL-cholesterol in MetS and, likewise, of the decreased LCAT activity. The latter is a major determinant of plasma LPC levels. Interestingly, although S1P and SPC are both constituents of HDL [11,14,50], the significant decrease in HDL levels observed in MetS patients (Table 1) inversely correlated with plasma SPC levels (Table 2) but showed no correlation with plasma S1P levels (r = 0.359, p = 0.101). As SPC might act through different types of receptors differentially distributed on many cell types [51], an increase in SPC plasma levels may lead to a defect in diverse cell functions in MetS patients. Since studies on SPC plasma levels are scarce, a better understanding of the function of SPC might help to clarify the relevance of its elevation in the plasma of MetS patients. SPC is a pleiotropic lipid mediator involved in many physiological and pathological effects depending on the type of tissue and/or disease [14,52]. SPC plays a crucial role in calcium (Ca2+) regulation [15,16]. This function is of utmost importance for the function of the cardiovascular system. Consequently, any disturbance of this function may lead to disease development. SPC acts as an extracellular and cellular Ca2+ mediator [51]. This has been proven by the fact that extracellular application of SPC leads to a rapid increase in intracellular Ca2+ in different cell types [15,16]. The detrimental effects of a sustained increase in Ca2+ levels are well known. While a transient increase in cytosolic Ca2+ is necessary for cell functions such as contraction in contractile cells, a constitutive elevation of Ca2+ is per se pathogenic. For instance, a sustained increase in intracellular Ca2+ levels leads to the switching of vascular smooth muscle cells (VSMC) from a contractile to a synthetic phenotype, which is a hallmark of atherosclerosis [53]. Interestingly, SPC induces the proliferation and migration of VSMC in the nM range, an important effect that leads to the formation of a neointima in blood vessels [14]. SPC also induces the contraction of VSMC [54] and plays a major role in the pathogenesis of abnormal contraction of cerebral arteries [20,55]. It has been recently reported that the extent of SPC-induced VSMC contraction correlates with total plasma cholesterol [20]. In this study we did not see a correlation between SPC and cholesterol, as the levels of the latter were not significantly different between the different groups (Table 1), probably because some of the MetS patients were under lipid-lowering therapy. On the other hand, total and individual SPC species strongly correlated with TG levels (Table 2). In addition to the stimulatory effects of SPC on proliferation, migration, and contraction of VSMC, SPC has also been found to act as a proinflammatory mediator in the vascular system.
SPC increased the release of the inflammatory chemokine monocyte chemoattractant protein-1 (MCP-1), both in cultured VSMC and in cerebral arteries [12]. Addition of SPC to VSMC also induced the release of TNF-α, an inflammatory cytokine involved in cardiovascular disease [14]. This study shows a significant correlation between SPC and the acute-phase, low-grade inflammation marker sCD163 (Table 2), a significant predictor of coronary atherosclerosis [27]. Nonetheless, the precise molecular mechanism of SPC in the acute-phase response is unclear and merits further investigation. Finally, it is noteworthy that the changes observed in the lipid classes in the plasma of patients at risk for diabesity did not reach significance. Based on this finding, one could speculate that a significant increase in TG and sCD163 and a decrease in HDL are essential to induce significant changes in the plasma lipid profile. This is further supported by our findings showing that SPC correlates significantly with TG and sCD163, and inversely correlates with HDL (Table 2). Conclusion Lipidomic analysis of major and minor lipid classes was performed by mass spectrometry in order to gain more insights into the changes affecting circulating lipids in the plasma of patients at risk for diabesity and MetS. This study confirms and extends previous findings on LPC levels in obese and MetS patients and shows, for the first time, a significant increase in SPC plasma levels in MetS patients. Our data show, collectively, that increased plasma SPC levels, along with their strong correlation with BMI and sCD163, may be a reporter for the progression of the disease associated with inflammation and the risk of cardiovascular dysfunction. The fact that there were no significant changes in the plasma levels of SM and Cer, alterations already observed in T2D, underlines the significance of our findings on SPC as a potential early biomarker. Future studies in larger cohorts are needed to 1) better understand why SPC is increased in MetS patients, and 2) investigate its potential as a biomarker for MetS-associated cardiovascular dysfunction and T2D. Supporting Information S1 File. Plasma lipidomic analysis from controls and patient groups used in the study. (XLSX)
Subsets of Visceral Adipose Tissue Nuclei with Distinct Levels of 5-Hydroxymethylcytosine The reprogramming of cellular memory in specific cell types, and in visceral adipocytes in particular, appears to be a fundamental aspect of obesity and its related negative health outcomes. We explored the hypothesis that adipose tissue contains epigenetically distinct subpopulations of adipocytes that are differentially potentiated to record cellular memories of their environment. Adipocytes are large, fragile, and technically difficult to efficiently isolate and fractionate. We developed fluorescence nuclear cytometry (FNC) and fluorescence-activated nuclear sorting (FANS) of cellular nuclei from visceral adipose tissue (VAT), using the levels of the pan-adipocyte protein peroxisome proliferator-activated receptor gamma-2 (PPARg2) to distinguish classes of PPARg2-Positive (PPARg2-Pos) adipocyte nuclei from PPARg2-Negative (PPARg2-Neg) leukocyte and endothelial cell nuclei. PPARg2-Pos nuclei were 10-fold enriched for most adipocyte marker transcripts relative to PPARg2-Neg nuclei. PPARg2-Pos nuclei showed 2- to 50-fold higher levels of transcripts encoding most of the chromatin-remodeling factors assayed, which regulate the methylation of histones and DNA cytosine (e.g., DNMT1, TET1, TET2, KDM4A, KMT2C, SETDB1, PAXIP1, ARID1A, JMJD6, CARM1, and PRMT5). PPARg2-Pos nuclei were large with decondensed chromatin. TAB-seq demonstrated that 5-hydroxymethylcytosine (5hmC) levels were remarkably dynamic in gene bodies of various classes of VAT nuclei, dropping 3.8-fold from the highest quintile of expressed genes to the lowest. In short, VAT-derived adipocytes appear to be more actively remodeling their chromatin than non-adipocytes. Introduction There is a critical need to perform cell-type-specific epigenetic analyses of adipocytes within adipose tissues because of their likely direct role in obesity and its comorbidities. In earlier work on brain, we identified neuronal cellular nuclei that were decondensed and expressed exceptionally high levels of stem cell, cell cycle, neurotrophic, synaptotropic, and chromatin-modifying factor markers, relative to the majority of neuronal and non-neuronal nuclei [28]. Overexpression of this machinery may be associated with the rapid turnover of chromatin modifications in cell types most likely to be potentiated to respond to their environment and more rapidly record cellular memories [29]. Using FNC and FANS to study cellular nuclei as surrogates for isolated cells is still in its infancy, but these technologies are relatively simple to employ for "problematic tissues" and have the potential to reveal a great deal about epigenetically distinct cell populations within adipose tissues. There is mounting evidence that adipocytes within adipose tissues may be epigenetically programmed in response to obesity, obesity-related diseases, exercise, diet, and sleep, as well as when adipocytes exit from the cell cycle and proceed through adipogenesis [30][31][32][33][34][35][36]. Our long-term working hypothesis is that adipose tissue contains epigenetically distinct subpopulations of adipocytes, which are differentially potentiated to record cellular memories. However, there is currently limited information about subpopulations of adipocytes from within tissues or their mechanisms of cellular memory.
A number of nucleosomal histone side chain modifications, as well as modifications to DNA cytosine residues, are correlated with the adipogenic program and postulated to play a role in programming preadipocytes and mature adipocytes (see Discussion). Of particular interest is the recent evidence that gene-region- and enhancer-region-specific 5-hydroxymethylcytosine (5hmC) at CG dinucleotides may define genes poised to change their expression, or already having increased expression, in part through localized loss of 5mC [37,38]. During the in vitro differentiation of 3T3-L1 preadipocytes to adipocytes there is a 2-fold increase in 5hmC levels in activated vs. repressed gene regions and as much as a 10-fold increase in 5hmC in the fatty acid binding protein 4 (FABP4) gene [39]. Hence, the oxidation of 5mC to 5hmC is strongly associated with adipogenesis. The cyclic turnover of 5´-modified cytosine is summarized in Fig 1. DNA methyltransferases (DNMTs) methylate DNA cytosine to 5mC, while ten-eleven translocation methylcytosine dioxygenases (TETs) catalyze its conversion to 5hmC and to other more oxidized forms (5fC, 5caC); 98% of TET activity is restricted to modified cytosine residues in the CG dinucleotide context. Thymine-DNA glycosylase (TDG) acts on 5fC or 5caC to generate an abasic site (-OH). The base excision repair (BER) pathway and factors like the GADD45s recognize a G residue in the antiparallel DNA strand and restore cytosine. In general, 5hmCG dinucleotides mark a small subset of antiparallel CGs, which may or may not also be 5mC modified (5mCGs) in the antiparallel strand, such that increases in one of these two modifications at a site are not always correlated with the loss of the other. TET dioxygenase-catalyzed oxidation of 5mC to 5hmC at constitutive CTS (CTCF binding sites) and PPARg enhancers (PPAREs) appears to be part of, and perhaps may be essential to, adipogenesis [40,41]. The ADP-ribose polymer attached to parylated PPARg binds TET enzymes to catalyze the localized conversion of 5mC to 5hmC [42], which begins to outline a mechanism connecting 5hmC modification and adipogenesis. Enhancer cytosine hydroxymethylation appears to be tissue-specific, where it acts on adipocyte-specific enhancers during a 3T3-L1 cell's differentiation to an adipocyte and on neuronal-specific enhancers during neurogenesis in a cultured neural progenitor cell type [39]. Little is known about the precise role of gene-region distribution and changes in 5hmC in adipocytes, although in neurons it has been proposed that high levels of gene-region 5hmC "creates pre-modified sites that are poised for subsequent demethylation and activation at a later developmental stage," prepared for "on demand gene regulation" [38,43]. Initial pioneering studies in situ suggest that large-scale gene silencing by DNA methylation might be essential to the commitment to adipogenesis. Fig 1. Turnover cycle for DNA cytosine modification at CG dinucleotides and its potential impact on adipose tissue (diagram modified from Dubois-Chevalier, 2015 and Kohl, 2013). A model is suggested in which the dynamic modification cycle of DNA cytosine residues (C) is linked to ubiquitous (CTCF) and adipocyte-specific (PPARg) transcription factor enhancement of gene expression during adipogenesis and in mature adipocytes. CTCF and PPARg recruit TET enzymes to promote 5mC hydroxymethylation and activate transcription of PPARg.
The lower panel shows the cyclic turnover of modified cytosine (C) residues and emphasizes that TETs catalyze the rate-limiting step of removing 5mC by oxidation to 5-hydroxymethylcytosine (5hmC). TET activity further oxidizes 5hmC to 5-formylcytosine (5fC) and 5-carboxylcytosine (5caC). The essential roles of other factors include DNMTs in the methylation of C to 5-methylcytosine (5mC), thymine DNA glycosylase (TDG) and methyl-CG binding domain protein 4 (MBD4) in the excision of 5fC or 5caC by creating a single nucleotide gap, and gap repair back to a C residue by base excision repair (BER) machinery such as the GADD45s. The gene-region-specific balance of these activities determines the levels of C, 5mC, and 5hmC. The diagram was modified from those in previous publications [41,115]. In particular, using 5-aza-2´-deoxycytidine to inhibit DNMT-catalyzed cytosine methylation during the contact inhibition and licensing (i.e., specification) stage prior to differentiation causes a severe reduction in the efficiency of subsequent adipogenesis [44]. After licensing and the addition of a differentiation cocktail of growth factors, there is a brief period of mitotic clonal expansion. The differentiation to lipid body-rich mature adipocytes proceeds after preadipocytes exit the cell cycle [44]. Two days after the differentiation of 3T3-L1 cells begins, global DNA cytosine methylation levels increase, as do 5mC levels at the CEBPA promoter region [44]. DNMT1 levels increase rapidly during the first 24 hours after inducing differentiation [45] and decline later as mature adipocytes are formed. But DNMT1 is defined as a maintenance enzyme, not a de novo methyltransferase. Hence, DNMT1 levels may account for the increase in 5mC by more efficiently maintaining methylation and by supporting increases in de novo methylation. Small interfering RNA silencing of the de novo cytosine methyltransferase DNMT3a in 3T3-L1 preadipocytes significantly blocks adipogenesis [44], emphasizing a positive role for DNA methylation. However, additional work stands against this simple positive role for increased global 5mC levels in adipogenesis. Small interfering RNA silencing of the maintenance methyltransferase DNMT1 in 3T3-L1 preadipocytes accelerates adipogenesis [45]. In addition, treating bone marrow-derived MSCs, a normal precursor of adipocytes, with 5-azacytidine (5-azaC), a cytidine analog and inhibitor of methyltransferases, decreases both cell proliferation and differentiation into adipocytes and results in concomitant down-regulation of PPARg [46]. Finally, treating atrial cardiac cells with 5-azaC, an inhibitor of all DNMT activity, reduces 5mC to produce an interesting outcome, wherein these cells trans-differentiate into lipid body-containing adipocytes [47]. Among the likely possibilities that might explain these complex results, the starting epitype of the progenitor preadipocyte cell undoubtedly affects its developmental potential, as does its existing chromatin modification machinery. Additionally, the C-residue sequence specificity and regulation of the cytosine modification cycle will affect the genes being altered and the developmental outcome. Hence, the role of the DNA cytosine modification cycle appears distinct during stem cell differentiation into preadipocytes, during adipogenesis, and in mature adipocytes. The roles of cytosine modification in adipogenesis are more complex than simply removing its silencing effects on appropriate adipogenic gene-regions and enhancers.
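Viewed abstractly, the modification cycle of Fig 1 is a small state machine over cytosine forms. The toy Python model below is purely illustrative, with enzyme assignments paraphrased from the figure; kinetics, strand context, and regulation are deliberately omitted.

# Toy state machine for the DNA cytosine modification cycle of Fig 1.
# Each state maps to (next state, enzymes named for that step in the figure).
# Illustrative only; TDG/MBD4 can also act one step earlier, at 5fC.
CYCLE = {
    "C":    ("5mC",  "DNMT1 / DNMT3A (methylation)"),
    "5mC":  ("5hmC", "TET1/2/3 oxidation (rate-limiting for 5mC removal)"),
    "5hmC": ("5fC",  "TET1/2/3 (further oxidation)"),
    "5fC":  ("5caC", "TET1/2/3 (further oxidation)"),
    "5caC": ("C",    "TDG/MBD4 excision + BER gap repair (e.g., GADD45s)"),
}

def walk_cycle(start="C", steps=5):
    """Print one full pass around the modification cycle."""
    state = start
    for _ in range(steps):
        nxt, enzymes = CYCLE[state]
        print(f"{state:>5} -> {nxt:<5} via {enzymes}")
        state = nxt

walk_cycle()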
In any case, we focused on the formation of 5hmC, because it appears to be the rate-limiting step in removing 5mC at CG dinucleotides and hence rate-limiting to the turnover of modified cytosine. Herein, we extend FNC and FANS to the analysis of adipocyte nuclei within adipose tissue. We developed techniques to rapidly isolate cellular nuclei from fixed adipose tissue, such that both nuclear structure and chromatin modification would be preserved. We showed that cytometry was easily applied to characterize subpopulations of adipose tissue nuclei. Nuclear sorting identified subpopulations of adipocyte and non-adipocyte nuclei that differentially expressed a significant fraction of the epigenetic machinery we assayed. Adipocyte nuclei were identified that had highly elevated levels of factors involved in the regulation of histone methylation and DNA cytosine modification, and in particular displayed widely divergent levels of 5-hydroxymethylcytosine (5hmC) across the gene body of different groups of genes. Sus scrofa tissues Animals (Sus scrofa, 6 months old, 220-280 lbs) were slaughtered at UGA's abattoir, which is a USDA-licensed facility (Establishment #7421A). All institutional and USDA guidelines for the care and use of animals were followed. Fresh kidney-associated visceral adipose tissue (VAT) was harvested and chilled on ice for no more than 2 h prior to being processed to purify nuclei, or flash frozen in liquid nitrogen and stored at -80°C until use. Pigs were hybrids from PIC line 29 sows and PIC line 337 semen. Protocol for isolating cellular nuclei from adipose tissue The following rapid protocol for isolating cellular nuclei from adipose tissue is an extension of the simplified method described recently for brain cell nuclei [28]. Freshly dissected and minced adipose tissue was treated for 1 hour at RT to 2 months at 4°C in four volumes (w/v) of 0.3SPBSTA (0.3 M sucrose in PBSTA: 20 mM KH2PO4, 20 mM Na2HPO4, pH 7.2, 137 mM NaCl, 3.0 mM KCl, 0.1% Triton X-100, 0.1% sodium azide), plus 3.7% freshly added formalin. The protocol works on fresh tissue, but the yield of nuclei is lower than for fixed tissue. The fixed tissue is heated briefly to 60°C to solubilize the fat (5 to 10 min depending upon the sample size). Typically 50 g of tissue was homogenized in a prewarmed (60°C water) Polytron (Fisher Scientific) for 2.5 min at a setting of 6.5 in 8 volumes (w/v) of 0.3SPBSTA. The homogenate was filtered through large pieces (10 in. sq.) of Miracloth (Calbiochem, #475855) stretched loosely over a funnel. This filtration step prevented nuclei from being trapped with large pieces of cytoplasmic debris during the subsequent centrifugation and increased the yield of nuclei several fold. The filtrate was placed in centrifuge bottles or tubes and under-layered with 0.25 volumes of 1.4 M sucrose in SPBSTA. Nuclei were centrifuged in a pre-chilled rotor (4°C) through the sucrose cushion at 3,000 × g for 20 min. The supernatant was removed gently by pouring it out from one of two holes made through the hardened fat layer. The nuclear pellet under the fat floatation pellet was gently re-suspended in 0.3SPBSTA. 3-5 ml aliquots were pressed slowly through 25 mm diameter Swinnex nylon net filters with a 41-μm pore size (EMD Millipore). The yield of nuclei from freshly fixed VAT tissue was approximately 1.0 × 10^6 nuclei/g tissue, and a few-fold less from frozen tissue fixed subsequent to thawing.
Nuclei were stored for up to one year at 4°C in PBSTBA (PBS + 0.1% Triton X-100 + 5% BSA + 0.02% azide) with freshly added 4% formaldehyde. Storage did not seem to alter the quality of immunofluorescence staining for several markers assayed, but storage for more than three months did lower the yield of RNA. All reagents were purchased from Thermo Fisher (Waltham, MA), unless stated otherwise. In various experiments, this protocol has been scaled from 50 g to 50 mg of VAT, and the versions differed only in that the heat treatment may be omitted prior to homogenizing very small samples. This isolation protocol worked similarly, but with slightly lower yield, when using fresh unfixed or -80°C frozen adipose tissue that was immediately fixed after thawing. The protocol has worked as well with rat and mouse VAT, SAT, and BAT as it did with porcine VAT and SAT. Western blot analysis Protein was extracted and resolved on SDS-PAGE gels as described previously for brain nuclei [28]. Equal loading of total protein amounts in adipose tissue homogenates (H) and enriched nuclear fractions (N) was predetermined by Coomassie blue staining of equivalent samples electrophoresed through the stacking gel and for a brief period into the resolving gel. Relative protein loading could not be quantified if the Coomassie gel was run as long as it was for the western blot gel, because few of the bands aligned in the VAT homogenates and nuclear samples and many bands were too weak to be compared, as noted previously for similar comparisons in brain [28]. IFM, FNC, and FANS analysis of nuclei Immunochemical labeling of nuclei followed exactly the protocol used for brain nuclei [28]. For FNC, 100,000 to 400,000 nuclei were incubated in 200 μl blocking solution with primary antibodies (S1 Table) at dilutions of 1:100 to 1:500 w/v for 1 h at room temperature. For FANS, where as many as 100 times more total nuclei were labeled in small volumes, the antibody concentration was much higher and was estimated based on the number of nuclei being examined (0.5 to 1.0 μg antibody per 10^6 nuclei) and not based on the volume of buffer. In a typical FANS experiment, 15 μg of rabbit polyclonal antibody to PPARg2 (ab45036) was incubated with 20 × 10^6 nuclei in 500 μl blocking solution for 1 hr. After 3 washes with PBSTBA, samples were co-stained with DAPI or propidium iodide (PI) at 20 μg/ml for 30 min. Photographic images of nuclei and tissue sections were made on a Leica TR600 epifluorescence microscope using a Hamamatsu ORCA-CR camera and Hamamatsu SimplePCI image analysis software to process images and measure nuclear areas and fluorescence intensities. FNC and FANS were conducted as described previously [28]. The nuclear population was first gated for size and shape (S1A Fig) and DNA content (S1B Fig) to reduce the number of contaminating particles sorted. The fraction of 4C nuclei appeared low to undetectable in most VAT nuclear preparations, but there was a large percentage of nuclei that showed higher than 2C staining with DAPI. This signal may result from decondensed nuclei that have a very high RNA content, because DAPI has a modest fluorescence enhancement with dsRNA [48], as was observed for decondensed brain nuclei [28]. No significant population of doublet nuclei was detected during sorting (S1C Fig), and therefore a doublet gate was not applied so as not to discriminate against large decondensed nuclei [28].
Furthermore, a pulse-width gate was not applied because of the concern that it might eliminate some very large decondensed nuclei that were of interest to this research. Figures of FNC and FANS data were prepared using FlowJo software version 9.7.6 (Treestar, Inc., Ashland, Oregon). RNA, cDNA, and qRT-PCR RNA from formalin-fixed nuclei was prepared and reverse transcribed into cDNA for qRT-PCR analysis as described previously [28]. Primers are listed in S2 Table. Multiple primer pairs were designed and assayed for each porcine target RNA, and only those showing efficient amplification of product from total VAT RNA were selected. The primer pairs selected also had a product dissociation curve with only one peak (i.e., only one cDNA was amplified). Among several commonly used control transcripts that were examined [49], beta-actin and RPL13 were relatively equivalently expressed among the four nuclear fractions, when qRT-PCR assays were normalized for equivalent cDNA input, and were suitable as endogenous controls. Each assay was run in triplicate, and the relative quantity (RQ) of transcript was calculated based on the ΔCt method, including the standard deviation from the mean [50]. We were unable to find bona fide MBD4 or TDG sequences in the porcine genome database, and hence their transcripts were not assayed by qRT-PCR. TAB-seq and quintile expression data DNA sample preparation. DNA was isolated from sorted PPARg2-High, pooled PPARg2-Med & -Low, and PPARg2-Neg porcine kidney VAT nuclei (~2 to 4 × 10^6 nuclei per sample) using a DNeasy kit (Qiagen, Frederick, MD, USA, #69504) according to the manufacturer's recommendations. A heat treatment of 90°C for 1 h was included, after the proteinase K digestion, to hydrolyze off the formalin. DNA was quantified using a Qubit 2.0 fluorometer (Invitrogen) with the Qubit dsDNA assay kit (Life Technologies #Q32853). TET-assisted bisulfite sequencing (TAB-seq) was performed as we previously described [51]. 0.5 ng of methyltransferase M.SssI-methylated lambda DNA and 0.25 ng of 5hmC-containing pUC19 DNA were added per 1 μg of nuclear DNA prior to treatment as C/5mC/5hmC controls. 5hmC-containing pUC19 DNA was produced using PCR amplification with 5hmdCTP. After beta-GT-mediated glucosylation and Tet-mediated oxidation, the sequencing libraries were prepared following the MethylC-seq protocol [52]. DNA sequencing was performed using an Illumina NextSeq500 instrument at the University of Georgia's Genomics Facility, with coverage estimated to range from 0.35 to 0.41 genome equivalents among the various samples (S3 Table) [53]. Due to the high cost associated with deep coverage of the pig genome using WGBS, we chose an alternative strategy, looking instead at 5hmC metagene plots for hundreds to thousands of genes (groups of genes). Detailed 5mC data for the even larger maize genome were obtained using low-coverage whole-genome bisulfite sequencing and metagene plots [54]. To demonstrate that the metagene approach and our levels of coverage (i.e., 0.4X genome equivalents) for 5hmC were robust, we downloaded a published high-coverage (>13X genome equivalents) 5hmC dataset [38] for the mouse brain frontal cortex and then subsampled the number of reads, ranging from 13X down to 0.2X genome equivalents of coverage. We then plotted the 5hmC distribution for the six gene groups examined in this paper for each level of coverage.
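Computationally, this robustness check amounts to repeatedly thinning the read counts and recomputing the binned, weighted 5hmC level across gene bodies. The sketch below is a simplified illustration in Python, assuming a hypothetical per-CG table sites with columns gene_pos (position rescaled to 0-1 across the gene body), hmc_reads, and total_reads; it is not the pipeline actually used.

# Sketch of the coverage-subsampling check: thin reads to mimic lower
# coverage, then recompute the binned (metagene) weighted 5hmC level.
# `sites` is a hypothetical per-CG DataFrame; column names are illustrative.
import numpy as np
import pandas as pd

def metagene_profile(sites, n_bins=20):
    """Weighted 5hmC level per gene-body bin: sum(hmC reads)/sum(total reads)."""
    bins = np.minimum((sites["gene_pos"] * n_bins).astype(int), n_bins - 1)
    grouped = sites.groupby(bins)
    return grouped["hmc_reads"].sum() / grouped["total_reads"].sum()

def subsample_reads(sites, fraction, seed=0):
    """Keep each read independently with probability `fraction`."""
    rng = np.random.default_rng(seed)
    out = sites.copy()
    hmc = rng.binomial(sites["hmc_reads"], fraction)
    non = rng.binomial(sites["total_reads"] - sites["hmc_reads"], fraction)
    out["hmc_reads"], out["total_reads"] = hmc, hmc + non
    return out[out["total_reads"] > 0]

# for frac in (1.0, 0.5, 0.1, 0.015):   # roughly 13X down to ~0.2X coverage
#     profile = metagene_profile(subsample_reads(sites, frac))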
As can be seen in S2 Fig, coverage did not meaningfully alter the patterns of 5hmC distribution in metagene plots, even for coverage as low as 0.2X and for gene groups with as few as 60 genes. TAB-seq data analysis. The raw sequence data were trimmed for adapters, preprocessed to remove low quality reads, aligned to the Sscrofa reference genome Sscrofa10.2 (GCA_000003025.4, http://www.ensembl.org/Sus_scrofa/), and analyzed as we described previously for TAB-seq analysis of 5hmC [55]. The reference genome assembly is based on DNA from a single Duroc pig, T J Tabasco. The control 5mC-modified lambda DNA sequence was used to calculate the 5mC non-conversion rate upon Tet and bisulfite treatment. Non-CG dinucleotide sites were used to compute the non-conversion rate of unmodified cytosines upon bisulfite treatment (S3 Table). The 5hmC-containing pUC19 DNA was spiked into the genomic DNA as an internal control to evaluate the protection rate in the real samples (S3 Table). The protection rate is a measure of the percentage of 5hmCG that is protected from TET oxidation by using beta-glucosyltransferase. This value is used to estimate true 5hmCG in the genome, as it corrects for varying degrees of protection in this type of assay. For this analysis, only cytosines in the CG context were considered. Quintile expression data. We obtained extensive transcript expression data for porcine adipose tissue based on RNA-seq, covering a large dynamic range of expression levels [56]. Expression levels from the 16 adipose tissue samples presented were averaged to obtain a list of 25,321 expressed transcripts. Because in RNA-seq the number of reads mapped to a gene is also a function of the total exonic length, the average expression level was divided by the exonic length of each gene to normalize expression levels. This list was broken into quintiles based on exon-normalized mRNA expression levels, resulting in "quintile of expression" gene lists with 5,064 to 5,065 genes in each list. For each quintile of transcripts, the level of 5hmC was determined using the weighted methylation level calculation [57] for each of 20 bins upstream, 20 bins within genes (between annotated TSS and TTS), and 20 bins downstream of genes. Each of the upstream and downstream bins spanned 5 kb, for a total of 100 kb spanned in each direction. The within-gene regions, no matter what their length, were evenly divided among the 20 bins. Figures were prepared using ggplot2 [58]. Availability of supporting data The TAB-seq data set supporting the results of this article is available in the NCBI GEO repository, accession number GSE73684. A unique persistent identifier and hyperlink to our dataset is http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?token=uridyaiefdidpan&acc=GSE73684. Statistical analysis Bar graph data are presented with the mean ± standard error of the mean (SEM). The data for nuclear area and qRT-PCR were analyzed by one-way ANOVA with post hoc Tukey's HSD test using Statistica software 7.1 (StatSoft; Tulsa, OK, USA). For particularly valuable statistical comparisons, a value of p<0.01 is denoted with * while a value of p<0.001 is denoted with **. Results Isolating and sorting adipocyte nuclei from adipose tissue We considered possible nuclear protein markers that could be used to identify subsets of adipocyte nuclei.
Members of the peroxisome proliferator-activated receptor (PPAR) subfamily of nuclear receptor transcription factors positively control transcription, the metabolism of glucose and lipids, and ultimately cell division and differentiation [59]. One of the three PPAR subtypes, PPAR gamma (PPARg), is induced during and is essential to adipogenesis, and it cooperates with many other factors in the process. PPARg binds to PPAR enhancers and activates genes involved in adipocyte differentiation, lipid synthesis, and lipid storage [60]. While the best characterized isoform of PPARg protein, PPARg1, is strongly expressed in the nuclei of maturing and mature adipocytes (MMAs), it is also expressed in other cell types in adipose tissues, including mesenchymal adipocyte lineage-committed cells, dedifferentiated fat cells, endothelial cells, and leukocytes (e.g., T-cells, neutrophils) [61][62][63][64]. Because of its essential roles in adipogenesis and a lack of quantitative evidence to the contrary, we considered the possibility that PPARg1 protein might be most highly expressed in the most active adipocytes and therefore be a good marker to quantitatively distinguish highly active adipocyte nuclei from the nuclei of other cell types. A protocol for the rapid isolation of adipocyte nuclei was developed as outlined in Fig 2A and detailed in Materials and Methods. DIC microscopy of isolated SsVAT nuclei and co-staining of DNA with DAPI showed that the enriched preparation of nuclei contained only modest amounts of cellular debris (Fig 2B). The method required only minor modifications from that for the isolation of brain cellular nuclei [28]. To further test the enrichment of nuclei, we compared proteins in crude VAT homogenate to proteins in purified nuclei using Western blotting (Fig 2C, lanes H and N). The nuclear fraction (N) was highly enriched for the nuclear protein histone H3, and the total VAT homogenate (H) was greatly enriched for the cytoplasmic protein actin. Our long-term interest was in the epigenetic alterations to chromatin structure; therefore, we focused on nuclei isolated from formalin-fixed fresh tissue. We characterized nuclei that had been sorted based on PPARg1 levels. A relatively large fraction of all VAT nuclei stained strongly with antibodies to PPARg1, but there was a wide dynamic range in the staining, consistent with the differential expression of PPARg1 among cell types (S3A Fig). Using qRT-PCR assays for cell-type-specific transcripts, we found that nuclei with higher levels of PPARg1 were not significantly enriched for adipocyte transcripts relative to transcripts marking other cell types (S4 and S5 Figs). For example, PPARg1-High nuclei had high levels of transcripts encoding IKAROS, a leukocyte-specific marker, higher than the levels in PPARg1-Neg nuclei. Note that the levels of most nuclear transcripts are strongly linearly correlated with their levels in total cellular RNA (R correlation coefficient = 0.94, supplementary data in Deal et al. [22]). Hence, neither the cell-type specificity nor the relative levels of PPARg1 protein expression were sufficient to identify subsets of adipocyte nuclei within adipose tissues. An alternate upstream promoter and alternate RNA splicing produce a slightly longer isoform of PPARg, PPARg2, with a 28-amino-acid extension on the N-terminus relative to PPARg1 [65].
Although less well characterized than the shorter PPARg1 isoform, PPARg2 appears to be an essential adipose tissue-specific enhancer of adipocyte development that is produced throughout adipogenesis [66]. Ectopic overexpression of PPARg2 alone is sufficient to induce pluripotent stem cells to differentiate into adipocytes [67], making PPARg2 both necessary and sufficient for adipogenesis. We used antibodies targeting the distinct amino terminus of PPARg2 to distinguish subsets of adipocyte nuclei from non-adipocyte nuclei prepared from adipose tissue. A small subset of VAT nuclei was strongly stained with PPARg2 antibodies, and there were also populations of intermediately stained and unstained nuclei (S3B Fig). Examination of the highly stained PPARg2-High nuclei for DAPI staining morphology revealed that many were larger and more decondensed, with diameters exceeding twice that of the smallest PPARg2-negative nuclei (S3C Fig). We previously defined decondensed to mean that the nuclei had a larger DAPI staining area with less intense staining per unit area as compared to the intensely staining normal-sized 2C nuclei [28]. FNC showed that there was an approximately 100-fold dynamic range in the PPARg2 immunofluorescence staining intensity among VAT nuclei (Fig 3) above the background staining observed with secondary antibody alone (S1D Fig). VAT nuclei were subjected to FANS based on the levels of PPARg2 immunostaining (Fig 3). Four populations of nuclei were sorted (PPARg2-Neg, -Low, -Med, and -High) based on PPARg2 staining, each spanning an approximately 5-fold increment in PPARg2 immunostaining intensity above background staining (Fig 3A). The low background staining from the secondary antibody alone was used to define the PPARg2-Neg class of nuclei (S1D Fig). In repeated FANS experiments, we gated to collect approximately 20%, 40%, 30%, and 10% of the nuclei in the -High, -Med, -Low, and -Neg categories, respectively. The sorted nuclear fractions were rephotographed without re-staining (Fig 3C-3F). Besides the obvious difference in PPARg2 staining intensities, there is variation in nuclear morphologies. The PPARg2-Neg nuclei are mostly spherical, while the PPARg2-Med and -Low fractions contained many ovoid- and spindle-shaped nuclei typical of nuclei in the thin cytoplasmic layer surrounding the lipid body in mature adipocytes. The PPARg2-High nuclei were generally oval or round, and predominantly larger and more decondensed relative to the other populations. However, it is worth noting that some strongly stained PPARg2-High nuclei are small, reflecting some heterogeneity in morphology (white arrows, Fig 3F and 3H). The PPARg2-Neg nuclei in Fig 3C had a diameter of approximately 6.5 μm, typical of 2C mammalian nuclei [26]. By comparing PPARg2-Neg nuclei (Fig 3C) to PPARg2-High nuclei (Fig 3H), where the PPARg2-High nuclei are viewed for DAPI staining alone, the larger diameters of these nuclei were easily seen. When the average two-dimensional cross-sectional area of the original images of DAPI-stained PPARg2-Neg nuclei was set to 1.0, the nuclear areas of the PPARg2-High, -Med, and -Low populations were 2.8-, 2.0-, and 1.6-fold larger, respectively (Fig 3G). There were statistically significant differences among pairwise comparisons of nuclear areas of all the fractions (*p<0.01; **p<0.001) except for the comparison of the PPARg2-Low and PPARg2-Neg fractions. Hence, the nuclear volumes of adipocyte nuclei may be computed to vary over a 20-fold range.
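Expressed as code, this four-way gate is just a set of intensity thresholds spaced in ~5-fold increments above the secondary-antibody background. The short Python sketch below is illustrative only; the background level and event intensities are hypothetical placeholders rather than instrument values.

# Sketch of the four-way PPARg2 gate: successive ~5-fold intensity
# increments above secondary-antibody-only background. Values are
# hypothetical placeholders, not instrument data.
import numpy as np

background = 100.0                               # arbitrary fluorescence units
edges = background * 5.0 ** np.arange(1, 4)      # thresholds at 5x, 25x, 125x
labels = ["PPARg2-Neg", "PPARg2-Low", "PPARg2-Med", "PPARg2-High"]

def gate(intensities):
    """Assign each nucleus to a PPARg2 class by staining intensity."""
    return [labels[i] for i in np.digitize(intensities, edges)]

events = np.array([120.0, 800.0, 4000.0, 30000.0])   # four example nuclei
print(gate(events))   # ['PPARg2-Neg', 'PPARg2-Low', 'PPARg2-Med', 'PPARg2-High']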
PPARg2 transcript levels were assayed among the four fractions of nuclei by qRT-PCR. PPARg2 RNA was expressed at significantly higher (~200-fold) levels in the PPARg2-High and PPARg2-Med classes of nuclei relative to the PPARg2-Neg fraction (*p<0.01, Fig 3B). These data are in reasonable agreement with the approximately 25- to 125-fold higher levels of PPARg2 protein immunofluorescence detected in these two fractions relative to the PPARg2-Neg fraction used as the basis for FANS (Fig 3A). Fig 3. B. PPARg2 transcript levels were assayed among the four fractions of nuclei by qRT-PCR. C, D, E, F. Merged immunofluorescence microscope images of four isolated fractions of nuclei without re-staining. G. Comparison of the average nuclear area for the four fractions from one experiment (N = 100); there were statistically significant differences between the nuclear areas of all of the fractions except between the PPARg2-Low and PPARg2-Neg fractions. H. PPARg2-High nuclei, shown for DAPI staining alone to reveal decondensed nuclei; some strongly stained PPARg2-High nuclei are small, reflecting some heterogeneity in their morphology, as indicated by white arrows. For antibodies see S1 Table. A p value of p<0.01 is denoted by an asterisk (*) and a p value of p<0.001 is denoted by a double asterisk (**). Transcript profile of sorted visceral adipocyte nuclei Approach. Considering that neither FANS nor PPARg2 staining had been implemented previously to separate and characterize adipose tissue nuclei, we profiled the relative levels of a few sets of transcripts that were potentially informative as to the phenotypes of these four classes of nuclei. Transcripts encoding cell-type-specific markers. The four classes of adipose tissue nuclei were assayed for the relative quantity (RQ) of transcripts encoding proteins that were reasonably specific markers of adipocytes, endothelial cells, and leukocytes, and of their potential for cell cycle activity. Among the four fractions, beta-actin (ACTB) mRNA was determined to be an equivalently expressed endogenous control relative to total cDNA input. Therefore, the expression levels of various marker transcripts were compared to actin set to 1 (Materials and Methods). For most of the cell type markers examined, significant differences in transcript expression levels were found that distinguished the various PPARg2-Pos classes of nuclei (PPARg2-High, -Med, -Low) from the PPARg2-Neg class of nuclei (*p<0.01, **p<0.001, Fig 4). The adipocyte-specific transcripts ADIPOQ, SREBF1, and FABP4 were approximately 4- to 20-fold more highly expressed in most of the PPARg2-Pos classes of nuclei relative to the PPARg2-Neg fraction, as shown in Fig 4. The leukocyte and progenitor cell markers IKZF1 and IHH were estimated to be 10- to 100-fold more highly expressed in the PPARg2-Neg fraction of nuclei than in the PPARg2-Pos fractions (Fig 4). IHH, a suppressor of adipocyte development, was extremely highly expressed in PPARg2-Neg nuclei. Thus, the PPARg2-Neg fraction appears enriched for some nuclei from progenitor cell types not committed to adipocyte development. Interestingly, transcripts for the transcription factor GATA2, which promotes the differentiation of MSCs into adipocytes [68], and ERG3, a sterol C5-desaturase involved in cholesterol biosynthesis [69], were more highly expressed in the PPARg2-Low and -Med classes of nuclei than in either the PPARg2-Neg or PPARg2-High nuclei.
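Relative quantities of this kind follow from the ΔCt method described in the methods: each target Ct is normalized to ACTB and then expressed relative to a calibrator fraction (here PPARg2-Neg). A minimal sketch, with placeholder Ct values chosen only to illustrate the arithmetic:

# Sketch of dCt-based relative quantification (RQ), with ACTB as the
# endogenous control and the PPARg2-Neg fraction as the calibrator.
# Ct values are placeholders, not measurements from this study.

def relative_quantity(ct_target, ct_actb, ct_target_cal, ct_actb_cal):
    """RQ = 2^-(dCt_sample - dCt_calibrator), assuming ~100% PCR efficiency."""
    d_ct_sample = ct_target - ct_actb
    d_ct_calibrator = ct_target_cal - ct_actb_cal
    return 2.0 ** -(d_ct_sample - d_ct_calibrator)

# Hypothetical triplicate-averaged Ct values (target, ACTB) per fraction:
ct = {"High": (21.0, 18.0), "Med": (21.5, 18.2), "Neg": (28.6, 18.1)}
cal_target, cal_actb = ct["Neg"]
for fraction, (ct_t, ct_a) in ct.items():
    rq = relative_quantity(ct_t, ct_a, cal_target, cal_actb)
    print(f"{fraction}: RQ = {rq:.1f}")   # Neg is 1.0 by construction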
Perhaps the cells from which PPARg2-High nuclei were derived have finished their development from MSCs and are no longer synthesizing as much lipid, which would be consistent with the lower GATA2 and ERG3 signal in that fraction. In summary, the PPARg2-Pos nuclei appear to be derived from adipocytes, and the PPARg2-Neg nuclei from non-adipocytes. Summary information and references on the properties of the marker genes assayed are given in S4 Table. Transcripts encoding factors involved in multipotency and the cell cycle. Highly decondensed nuclei are associated with the elevated expression of genes for multipotency and chromatin remodeling machinery, as shown for decondensed neuronal nuclei in the brain [28] and a differentiating hematopoietic stem cell line [70]. Three markers of cellular multipotency, proliferation, and cell cycle activity were examined in the four nuclear fractions (KLF4, MYC, and PCNA, as defined in S4 Table). They were 2- to 6-fold more highly expressed in most of the PPARg2-Pos adipocyte fractions relative to the PPARg2-Neg non-adipocyte fraction of nuclei. Thus, although there was significant variation in the expression of these markers, they were in general more highly expressed in PPARg2-Pos nuclei. Transcripts encoding chromatin-remodeling proteins. To begin testing the first part of our hypothesis, namely that adipose tissue contains epigenetically distinct subpopulations of adipocytes, we analyzed transcripts encoding factors responsible for programming a range of chromatin modifications (Fig 5). Highly decondensed nuclei are associated with an elevated transcription level of chromatin remodeling machinery and genes of multipotency, and in the brain with elevated expression of markers for learning and memory [28]. Therefore, although the cellular memory of adipocytes might be biochemically quite different from that of neurons, subsets of adipocyte nuclei still might have different capacities to record cellular memories and, hence, be distinctly potentiated to respond to their tissue environment. We performed qRT-PCR assays on transcripts from 19 genes encoding chromatin-remodeling factors, which are broken into four sets. First, we assayed factors involved in DNA cytosine modification, including two DNA cytosine methyltransferases (DNMT1 and DNMT3A), which catalyze the synthesis of DNA 5mC, and three ten-eleven translocation methylcytosine dioxygenases (TET1, TET2, and TET3), which catalyze the oxidation of 5mC to 5hmC (Fig 5A). TETs are the major enzymes controlling the removal and turnover of 5mC [71]. Of these, DNMT1, DNMT3A, TET1, and TET3 transcripts were 3-, 6-, 1.5-, and 4-fold more highly expressed in the PPARg2-Pos (PPARg2-High, -Low, and -Med) adipocyte fractions, respectively, relative to the non-adipocyte fraction. TET2 and AICDA were 19- and 40-fold more highly expressed in the PPARg2-Pos fractions. Second, we considered histone side chain acetylation (Fig 5B), but the differences among nuclear fractions appeared less dynamic. Transcripts for two histone lysine acetyltransferases, KAT2B and KAT3B, and three histone deacetylases, SIRT1, HDAC2, and HDAC3, were assayed. Only the transcripts encoding HDAC2, KAT3B, and the SIRT1 deacetylase were notably more highly expressed (e.g., ~2-fold) in PPARg2-Pos nuclei than in negative nuclei, although there were also quantitative differences among the nuclear fractions for KAT2B and HDAC3. Third, transcript levels for several factors involved in nucleosomal histone side chain methylation (S4 Table) were quantified (Fig 5C).
These included the lysine-specific demethylase KDM4A, the lysine-specific methyltransferase KMT2C, the histone H3K9 methyltransferase SETDB1, PAXIP1, a cofactor that promotes histone lysine methylation, and the Swi/Snf-related helicase ATPase ARID1A, which is known to modulate H3K4me1 nucleosomes. Fourth, we examined the protein arginine methyltransferases CARM1 and PRMT5 and demethylases, including the histone arginine demethylase JMJD6. The transcripts of these last two classes of genes were 6- to 65-fold more highly expressed in PPARg2-Pos nuclei than in PPARg2-Neg nuclei (Fig 5D). Clearly, for the PPARg2-Pos nuclei there were much higher levels of factors involved in histone methylation than factors involved in histone acetylation. IFM analysis of chromatin modifications in isolated nuclei Because strong differential expression of transcripts encoding proteins involved in DNA and histone methylation was observed, one DNA and two histone modification products were assayed. First, the TET-catalyzed oxidation of 5mC to 5hmC and 5hmC levels themselves are often dynamically regulated in the development of stem cells, germ cells, T cells, and neurons [38,72,73]. Therefore, we performed a semi-quantitative IFM analysis of 5hmC levels [74,75] among the various fractions of VAT nuclei. Preliminary experiments showed 5hmC was concentrated in large decondensed nuclei (Fig 6). When we examined the co-distribution of PPARg2 protein with 5hmC, nearly all nuclei staining most strongly for PPARg2 also stained most strongly for 5hmC (white arrows, Fig 6A). The coordinate expression of PPARg2 and 5hmC was examined further by FNC (Fig 6B and 6C). The cytometer resolved a wide, nearly 100-fold range of positive staining for both markers. Nuclei stained with the secondary antibody alone used to detect 5hmC helped define background fluorescence. As PPARg2 also defined nuclear size (Fig 3H), perhaps this correlation of 5hmC with PPARg2 levels should not be surprising, considering the evidence that 5hmC marks decondensed euchromatin [76,77]. Fig 6. Distribution of 5hmC among adipose tissue nuclei. IFM and FNC were used to examine 5hmC levels among visceral adipose tissue nuclei. A. A field of VAT nuclei examined with various combinations of DAPI staining for DNA, and immunostaining with mouse anti-PPARg2 + goat anti-mouse Alexafluor488 and rabbit anti-5hmC + goat anti-rabbit Alexafluor633. White arrows indicate those large, decondensed nuclei that are stained strongly for both 5hmC and PPARg2. B. Flow cytometry of VAT nuclei immunostained as in A. C. The goat anti-rabbit secondary antibody used in B shows only modest background staining of nuclei. Nuclei were gated for DAPI (>2C DNA content) and for size and shape by light scattering as in S1 Fig. For antibodies see S1 Table. Levels and gene-region distribution of 5hmC In view of the large differences in expression levels of factors controlling DNA cytosine methylation and turnover via 5hmC (Figs 1 and 5A) and the association of 5hmC with decondensed, highly active chromatin, it seemed reasonable to consider that 5hmC levels might vary widely among the fractionated VAT nuclei and be essential to their epigenetic programming. We performed TAB-seq to evaluate 5hmC levels in three classes of VAT nuclei isolated by FANS (PPARg2-High, pooled PPARg2-Med and -Low, and PPARg2-Neg). The specificity of TET enzymes and their cofactors results in 98% of 5hmC being in the CG dinucleotide context.
Therefore, the data on 5hmC levels are reported as a fraction or percent of CG dinucleotides. The percent of 5hmCG ranged from 3.40% to 3.03% to 2.22% (scaled %5hmCG values are 6.50%, 6.00%, and 4.58%, respectively) of CG dinucleotides among these three classes of VAT nuclei (Table 1). The scaled %5hmCG values are the true 5hmCG levels in the genome, as scaling corrects for varying degrees of protection rates in the TAB-seq assay. The scaled %5hmCG levels in PPARg2-High and in PPARg2-Med+Low nuclei were significantly higher than that in the PPARg2-Neg population (Chi-square test p-value <0.05). We compared the gene-region distribution of 5hmCs among the three classes of VAT nuclei for 25,321 genes divided into quintiles based on RNA-seq expression data in adipose tissue [56]. Gene regions were divided into three parts: 100 kb upstream of the transcription start site (UTSS), 100 kb downstream of the transcription stop site (DTTS), and the gene body (GB) extending from TSS to TTS. 5hmC data were estimated from gene sequences divided into 20 equal bins for each region, and the fraction or percent 5hmCG per CG dinucleotide was calculated (a computational sketch of this binning is given below). For the highest quintile of expressed genes (5 of 5, Fig 7A), the pattern of 5hmCG distribution begins with a deep valley in 5hmCG levels at the TSS, rises to a high broad plateau across the gene body, and ends with another steep valley of 5hmCG at the TTS. Across all gene regions, 5hmCG levels were the highest for PPARg2-High nuclei and lowest for PPARg2-Neg nuclei, although the pooled PPARg2-Med and -Low nuclear populations contained only slightly lower levels than that of the -High population. For the 3rd and 4th quintile expression gene groups, the distribution of 5hmCG was relatively indistinct, although there was a small peak in 5hmCG levels right after the TSS. Surprisingly, the 5hmC levels drop across the gene body for the lowest two quintiles (1st, 2nd) for all three classes of cellular nuclei. As far as we are aware, a gene-region drop in 5hmC has not been reported for any other gene set. Fig 7B compares the wide range in 5hmC levels and differences in patterns of 5hmC distribution among the five quintiles for the PPARg2-High nuclei.
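As a concrete illustration of the binning just described, the following is a minimal sketch, assuming plus-strand genes and a simple (position, hydroxymethylated reads, total reads) representation of sequenced CG sites; the data structures and function names are illustrative assumptions, not the authors' actual pipeline.

```python
from collections import defaultdict

N_BINS = 20
FLANK = 100_000  # 100 kb flanking regions, as described in the text

def bin_index(pos, start, end):
    """Map a genomic position to one of 60 bins spanning
    UTSS (0-19), gene body (20-39), and DTTS (40-59).
    Plus-strand genes only, for simplicity."""
    if start - FLANK <= pos < start:                      # upstream flank
        return int((pos - (start - FLANK)) / FLANK * N_BINS)
    if start <= pos < end:                                # gene body
        return N_BINS + int((pos - start) / (end - start) * N_BINS)
    if end <= pos < end + FLANK:                          # downstream flank
        return 2 * N_BINS + int((pos - end) / FLANK * N_BINS)
    return None  # outside the gene region

def metagene_profile(genes, cg_sites):
    """genes: iterable of (start, end); cg_sites: iterable of
    (pos, hmc_reads, total_reads) per CG dinucleotide on the same chromosome.
    Returns %5hmCG per CG dinucleotide for each of the 60 bins."""
    hmc, total = defaultdict(int), defaultdict(int)
    for start, end in genes:
        for pos, n_hmc, n_tot in cg_sites:
            b = bin_index(pos, start, end)
            if b is not None:
                hmc[b] += n_hmc
                total[b] += n_tot
    return [100.0 * hmc[b] / total[b] if total[b] else 0.0
            for b in range(3 * N_BINS)]
```

Averaging such profiles over the genes in each expression quintile yields the metagene curves of the kind plotted in Fig 7.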
Manipulating cellular nuclei from adipose tissue

Considerable progress was made in simplifying the isolation of total cellular nuclei from within adipose tissue as a new tool for cell-type-specific analyses. Adipocyte, endothelial cell, and lymphoid cell nuclei were easily isolated from VAT using only slight modifications to an existing rapid protocol for isolating brain cell nuclei. The method required only benchtop centrifugation through a sucrose cushion and two filtration steps and did not require ultracentrifugation as in earlier established methods [78]. Nuclei had sufficient purity from cytoplasmic debris to greatly simplify analysis by IFM, FNC, and FANS. The PPARg2 isoform of PPARg was adipocyte cell-type-specific and expressed strongly enough to identify adipocyte nuclei, while the PPARg1 isoform was not. The relative cell-type purity of the PPARg2-Pos adipocyte nuclear populations was validated by the quantitative assessment of cell-type-specific transcripts. Hence, PPARg2 appears to be a reasonable choice as a pan-adipocyte marker, although PPARg2-labeled nuclei from other fat deposits and from other species will have to be examined.

Quantitative assessment of nuclear transcripts in the different sorted subpopulations of VAT nuclei revealed differential expression of some important markers over more than two orders of magnitude, providing significant resolution for expression studies. This result agrees with previous RNA expression studies on isolated sub-populations of cellular nuclei from plant roots and mouse brain [22,26,28,38]. We showed the utility of using PPARg2 as an adipocyte marker for IFM and FNC. More particularly, the three subsets of adipocyte nuclei that differed in ~5-fold increments in the levels of PPARg2 expression displayed significant differences in the expression levels of some markers, although most epigenetic markers of adipocyte identity and cell cycle activity simply distinguished adipocyte PPARg2-Pos from PPARg2-Neg nuclei.

Differential programming of distinct adipocyte populations

Our data give strong initial experimental support for the first part of our working hypothesis by showing that adipose tissue contains subsets of adipocytes that are epigenetically distinct. The four populations of VAT nuclei sorted based on PPARg2 protein levels differed significantly in the expression of transcripts encoding factors involved in chromatin modification and/or adipogenesis. Nineteen of the twenty-two transcripts associated with epigenetic control, pluripotency, and/or the cell cycle that were assayed showed 2- to 100-fold differences in their levels of expression among the four populations. Transcript levels were particularly distinct between PPARg2-Pos adipocyte nuclei and PPARg2-Neg non-adipocyte nuclei. IHH is the only hedgehog morphogen known to be expressed in preadipocytes, where it inhibits adipogenesis and promotes chondrocyte differentiation, proliferation, and maturation. Transcripts for IHH were far more highly expressed in non-adipocyte PPARg2-Neg nuclei than in adipocyte nuclei. IHH should not be expressed in maturing or mature adipocytes, and hence, served to confirm the identity of the sorted nuclear populations of preadipocytes. We found 2- to 50-fold differences in the expression of genes specifically associated with different stages of adipogenesis, including ADIPOQ, SREBF1, GATA2, ERG3, and FABP4. Three markers of pluripotency and cell cycle potential, KLF4, MYC, and PCNA, were 2- to 6-fold more highly expressed in adipocyte populations, suggesting perhaps there is reasonable developmental potential among diverse classes of adipocytes. The discussion of chromatin remodeling factors has been divided into two parts concerning the formation and removal of (1) histone modifications and (2) DNA cytosine methylation.

Histone PTMs. We analyzed transcripts for 13 enzymes or protein subunits of enzyme complexes involved in making or removing histone side chain PTMs and assayed two PTMs directly on nucleosomes in nuclei. Many of these PTMs have been correlated with transcription in adipocytes and with adipogenesis, if not with the suggestion that they are playing a direct causal role in cellular differentiation (S4 Table).

Histone lysine acetylation. The forced downregulation of histone deacetylases promotes adipogenesis [79,80]; however, only barely detectable changes are observed in global histone acetylation PTMs during adipogenesis of 3T3-L1 cells [81].
In agreement with these latter results, we found only small to insignificant differences in transcript levels for the representative deacetylases SIRT1, HDAC2, and HDAC3 and the histone lysine acetyltransferases KAT2B and KAT3B among the three classes of adipocytes (Fig 5B). Instead of examining the expression of histone acetyltransferase and deacetylase transcripts, a direct analysis of the various histones modified by acetylation within populations of PPARg2-Pos adipocyte nuclei by FNC might be more informative.

Histone lysine methylation. Transcripts for five of the chromatin remodelers assayed (ARID1A/BAF250, MLL3, PTIP, SETDB1, KDM4A) impact histone H3 methylation at lysine 4 and/or 9. They shared an interesting common profile of differential transcript expression, being much more highly expressed in PPARg2-Pos adipocyte nuclei than non-adipocyte nuclei, and were the most highly expressed in PPARg2-Med nuclei. The differential methylation of nucleosomal histone H3 appears to play major roles in regulating adipogenesis [82,83]. The nucleosomes associated with CEBPA, CEBPB, PPARg2 and aP2 gene sequences show significant increases in the levels of H3K4me1 in the later stages of 3T3-L1 preadipocyte differentiation into mature adipocytes [81]. H3K4me1 is found in the enhancers of genes potentiated for expression [84]. By contrast, nucleosomal H3K9me1 is found in the promoters of actively expressed genes [84]. Conversion to more highly methylated H3K9me2 at the PPARg locus is associated with repressed adipogenesis [30]. Surprisingly, these two PTMs are primarily associated with subsets of expressed genes after hyperglycemic 3T3 cells are stimulated with insulin [85]. Further confusing their potential stimulatory or inhibitory role in adipogenesis, both PTMs have been found to be associated with silenced genes in euchromatin [86][87][88]. Perhaps the relevant association of H3K4me1 and H3K9me1 is with decondensed chromatin, which appears proportional to PPARg2 expression in adipocytes. This discussion will focus briefly on the observed differential expression of three factors controlling H3K4me1 and H3K9me1 levels among classes of VAT nuclei: ARID1A, KMT2C, and PAXIP1 [85]. ARID1A is the large Swi/Snf ATPase subunit defining many BAF remodeling complexes, including complexes that methylate H3K4 to H3K4me1 [89,90]. ARID1A complexes regulate pluripotency genes and are essential to the conversion of ES cells into adipocytes [91]. KMT2C is a histone lysine methyltransferase that methylates H3K4 to H3K4me1 and me2 [92,93] and is physically associated with lineage-specific enhancers and cell-type-specific factors including PPARg and FABP4 [92]. KMT2C mutant mice have less white fat and are defective in adipogenesis [94]. PAXIP1 binds to histone 3 lysine 4 methyltransferases to influence the conversion of H3K4me1 to H3K4me3 in nucleosomes associated with the promoter regions of PPARg and CEBPA. PAXIP1 is essential to their induced expression, and hence, essential to adipogenesis [95], but its activity acts in opposition to ARID1A and KMT2C because it reduces H3K4me1 levels, whereas the latter increase it. Considering that all three factors are essential to adipogenesis and that PPARg is essential to this process, it is not surprising that PPARg2-Pos adipocyte nuclei express these remodelers at much higher levels than non-adipocytes.
Because turnover rates for chromatin modifications are a function of their synthesis and decay (i.e., removal) rates [29], the coordinately higher expression of these factors with opposite activities in adipocytes relative to non-adipocytes suggests more rapid turnover rates for H3K4me1. Next, we consider the two factors regulating the levels of nucleosomal H3K9me1, SETDB1 and KDM4A. SETDB1 is an H3K9 methyltransferase that represses PPARg transactivation via nucleosomal histone methylation at PPARg target genes. SETDB1 methylates H3K9 and H3K9me1 to H3K9me3, a modification associated with transcriptional repression [96]. Conversely, KDM4A is a lysine-specific demethylase that directly demethylates H3K9me3 to H3K9me1/2 [97,98]. Very early in the differentiation of 3T3-L1 preadipocytes, levels of the repressive H3K9me3 mark increase 2- to 3-fold, "licensing" preadipocytes to differentiate into mature adipocytes [44]. KDM4A is essential to recruiting PPARg to the many target genes expressed during adipocyte development [99]. It is reasonable to consider that KDM4A-catalyzed conversion of H3K9me3 to H3K9me1 directs adipogenesis to proceed. The higher levels of these two opposing activities in PPARg2-Pos adipocyte nuclei may result in an increased turnover rate for H3K9-related methylation.

Histone arginine methylation. We examined the transcript levels of two histone arginine methyltransferases, CARM1 (PRMT4) and PRMT5, and one arginine demethylase, JMJD6 [100,101]. We showed that CARM1, PRMT5, and JMJD6 transcripts were 7-, 18-, and 17-fold more highly expressed in PPARg2-Pos adipocyte nuclei than PPARg2-Neg non-adipocyte nuclei, respectively. Among their multiple activities on protein substrates, CARM1 and PRMT5 are both capable of generating monomethylarginine (MMA) and then, respectively, they may synthesize asymmetric dimethylarginine (ADMA) and symmetric dimethylarginine (SDMA). JMJD6 activity can catalyze the demethylation of MMA, ADMA, and SDMA residues in some protein contexts. All three enzymes appear essential to adipogenesis, based, for example, on the evidence that small RNA silencing of CARM1 [102], PRMT5 [103], or JMJD6 [104] each results in an approximate 90% reduction of in situ adipogenesis starting with embryonic stem cells or preadipocytes. By contrast, another member of the family of 9 PRMTs, PRMT7, is not needed for adipogenesis, emphasizing the specificity of CARM1/PRMT4 and PRMT5 [105]. The higher levels of these two opposing activities (i.e., the CARM1 and PRMT5 methyltransferases vs. the JMJD6 demethylase) in PPARg2-Pos adipocyte nuclei suggest an increased turnover rate for the methylation of some arginine residues in adipocytes relative to other adipose tissue cell types.

TET expression and 5hmC

We began with a preliminary examination of TET expression and 5hmC levels in nuclei fractionated based on PPARg2 levels. PPARg2 is the major transcription factor driving adipogenesis and lipid synthesis in mature adipocytes. It acts via its binding to PPARg enhancers (PPAREs). Using IFM we found that all three TET proteins (TET1, 2, and 3) were present at significantly higher levels in PPARg2-High adipocyte nuclei than in most adipocyte nuclei staining moderately for PPARg2 or in PPARg2-Neg non-adipocytes. Based on cytometry, total 5hmC appeared proportional to PPARg2 levels in nuclei. However, qRT-PCR data only showed moderate differences in TET RNA expression among fractionated nuclei, and only TET2 and TET3 levels were significantly higher in PPARg2-High nuclei.
DNMT1 transcript levels were relatively higher in all three classes of PPARg2-Pos nuclei compared to PPARg2-Neg, but the lowest level among these was seen in the PPARg2-High samples, perhaps reflecting the decline in DNMT1 reported for fully mature adipocytes [45]. Similarly, we found exceptionally high levels of 5hmC in PPARg2-High nuclei relative to the balance of VAT nuclei by IFM and observed their coordinate expression over more than an order of magnitude by FNC. Our TAB-seq data confirmed that PPARg2-Pos nuclei had the highest levels of 5hmC, significantly higher than PPARg2-Neg nuclei. The 2.2 to 3.4% 5hmCG per CG dinucleotides observed in VAT nuclei was low compared to the estimated 13% 5hmCG in adult brain, where 5hmC levels are the highest [106], but this represents an intermediate level among estimates for many other tissue types [107,108]. Yet, by TAB-seq there were only slightly higher levels of 5hmC in the PPARg2-High nuclei compared to the balance of PPARg2-Low/Med nuclei. The TAB-seq method undoubtedly provides one of the most quantitative and unbiased assessments of the relative levels of 5hmC. Confirmation of the absolute quantitative levels awaits analysis by a method such as LC-MS [107,109]. There are a few straightforward explanations for these differences among measurements made by qRT-PCR for TET RNAs, immuno-detection of 5hmC, and TAB-seq analysis of 5hmC. First, differential stability of TET RNAs and proteins might favor the accumulation of TET proteins in the PPARg2-High subset of cells, while TET RNA levels declined. Second, 5hmC is most concentrated in euchromatin in regions with decondensed structure [76,77,110]. We observed that PPARg2-High nuclei were extremely large and decondensed, and nuclear size appears to be proportional to the levels of both PPARg2 and 5hmC detected with antibodies. Both IFM and nuclear cytometry (FNC, FANS) showed a wide dynamic range for the immunological detection of PPARg2 protein and 5hmC. Perhaps a decondensed chromatin structure provides disproportionate access to immune reagents, amplifying the differences in immunochemical staining among the nuclear fractions relative to other nuclei, in which condensed chromatin blocks access. While this potential artifact would prevent precise quantitative interpretation of our immunochemical data, it may have contributed to the wide dynamic ranges of PPARg2 and 5hmC staining observed and aided in separating classes of PPARg2-stained nuclei by FANS.

Gene-region distribution of 5hmC

TAB-seq analysis of three classes of VAT nuclei showed that 5hmC was concentrated in the gene bodies of the highest quintile of expressed genes, above the levels in flanking regions, and was much higher for PPARg2-Pos nuclei than PPARg2-Neg. Perhaps this relationship between PPARg2 and 5hmC may not be too surprising, considering the recent evidence that PPARg bound to PPAREs attracts TET enzymes and this results in the chromatin-localized conversion of 5mC to 5hmC [42]. 5hmC levels increase during the development of 3T3-L1 preadipocytes into adipocytes. Small RNA silencing of any one or all three TETs prevents part of this increase, strongly supporting the view that all three contribute to 5hmC levels in adipose tissue [42].
Perhaps 5hmCGs program cellular memory in adipose tissues, tagging sites for demethylation or remethylation at a later time and creating a poised or potentiated state, as suggested for the development of neurons in the brain and for embryonic stem cells [37,38,111]. Constitutive CTCF enhancers that are active throughout adipogenesis and PPAREs that become active during adipogenesis are often concentrated in CG-rich regions. During adipogenesis, PPARg binding is associated with a dramatic decrease in 5mC levels and an increase in 5hmC levels at both constitutive enhancers and activated PPAREs. Changes in the methylation state of the CTCF and PPARg enhancers activate adjacent gene expression, with notable increases in expression of genes involved in glucose signaling and lipid metabolism [60]. Note that reduced 5mC at these enhancers is in contrast to increases in 5mC levels at enhancers in the C/EBP alpha promoter reported previously [44]. Because the standard whole genome bisulfite sequencing technology to determine 5mC does not distinguish between 5mC and 5hmC, it is reasonable to consider that some of the reported increases in 5mC included increases in promoter and gene-region 5hmC. By contrast, simply lowering 5mC levels via treatment with 5-azaC downregulates PPARg and halts adipogenesis of 3T3-L1 cells. One explanation for this complexity is that hydroxymethylation of these CG-rich enhancer regions is required for their subsequent activation, suggesting a possible direct cause-and-effect relationship, with 5hmC acting at a high level. TET2 protein does interact with both transcription factors, PPARg and CTCF, to promote DNA hydroxymethylation of 5mCGs at their associated enhancers, CCCTC-related sequences and PPAREs, respectively. Hence, TET activity appears to drive increases in constitutive and adipogenic gene expression during the development of 3T3-L1 preadipocytes into mature, lipid-rich adipocytes [40,41]. The likelihood of a specific relationship between PPARg and CTCF is further evidenced by the fact that, during adipogenesis, CTCF binds disproportionately to enhancer sites that are near PPAREs and at most genes induced by PPARg [40]. It has been suggested that 3-dimensional chromatin loops bring these two enhancers into proximity to promote coordinated activity [41]. The resulting specific relationship of PPARg, TET activity, and 5hmC levels in adipocytes may also help explain the lower than average levels of 5hmC we observed in the gene bodies of the lowest quintile of expressed genes. This would occur if PPARg-associated TET activity further oxidizes 5hmCG to 5fCG and 5caCG, which would not only lower 5hmC but could lead to higher levels of 5mCGs and gene silencing. Future studies will explore the more complex examination of 5hmC levels surrounding these and other enhancers.

A model for 5hmC activity

Considering our results in the light of other recent publications on 5hmC in the brain suggests a model in which 5hmC-enriched open chromatin in adipose tissue-specific gene regions enables appropriate patterns of adipocyte gene regulation. 5hmC levels in neurons are said to "potentiate" changes in gene expression and to prepare for rapid "on demand gene regulation," but are also proportional to steady-state transcript levels [38,43]. Although cause and effect are not yet well defined in adipose tissue, in the brain increases in a gene's 5hmC level are often a prelude to changes in gene expression.
Further, the loss of normal TET function causes aberrant gene expression in a number of tissues and organs. Recall that the levels of 5hmC observed in adipocytes are several fold lower than in brain, but still correlated positively with gene expression level. Perhaps the levels of gene-region 5hmC and the associated open chromatin environment potentiate the most active gene regions for more rapid changes in transcription, similar to warming up a gasoline engine prior to putting it in gear. In this context, 5hmC levels may act as a throttle regulating the relative transcriptional potential and activity in different regions of chromatin. However, the throttle may be set differently in different tissues, such that the idling speed is different. By this amendment to the model, the range of 5hmC-determined idling speeds would be broad among cell types and gene sets in adipose tissue, but still lower than in the brain, reflecting a lower rate of chromatin turnover and a slower rate of cellular memory formation in response to environmental influences relative to neurons. By measuring the relative turnover rates for 5mC and 5hmC in adipocytes and brain, the role of turnover in this model may be tested.

Conclusions

FNC and FANS offer the technical power to analyze cell-type-specific differences in chromatin structures for less accessible organs and tissues, such as adipose tissue. Cytometry provides vast numerical superiority over any other existing approach for analyzing the distribution of nuclear epitypes such as DNA cytosine or histone modifications. An examination of subpopulations of adipocyte and non-adipocyte nuclei derived from VAT demonstrated there is wide variation in nuclear morphology and size, chromatin structure, progenitor status, and perhaps the potential to form cellular memories, providing initial support for our hypothesis. The extreme variation in nuclear size among adipocyte nuclei is only partially explained by exceptional transcriptional and epigenetic activities, and warrants further examination, particularly in light of the data from other systems directly correlating large decondensed nuclear morphology with progenitor cell status. The large size of adipocyte nuclei may simply reflect more chromatin remodeling machinery and higher rates of chromatin remodeling, independent of multipotency. This is the first report of 5hmC levels across gene regions of adipocytes and non-adipocytes isolated from within visceral adipose tissue. We found a wide range in 5hmC levels in gene regions: 5-fold differences among genes and cell types ranked based on their quintile expression level and 4-fold within the quintile expression gene groups. This is twice the difference that has been reported between neurons and non-neurons, even though the total levels of 5hmC are much lower in VAT and adipocytes than they are in the brain and neurons. Some of the greater differences in 5hmC levels we report here may be due to the greater resolution obtained by comparing DNA from more highly enriched cell types. Most unexpected were the extremely low levels of 5hmC observed for weakly expressed genes in their gene bodies, below the levels found in flanking sequence regions. These distinctions suggest important activities for 5hmC in adipose tissue development and/or maintenance, but some of these activities may be different from those in the brain.
A small number of stimulating recent studies demonstrate changes in the genome-wide and gene-specific distribution of 5mC in human adipose tissue in response to obesity, metabolic syndrome, and extended exercise [31,34,[112][113][114]. Our results showing large differences in TET expression and 5hmC levels among classes of adipocytes suggest a complex role for the turnover of modified DNA cytosine in regulating gene expression in adipose tissues. It appears likely that distinct subpopulations of adipocyte nuclei within adipose tissue may be programmed with their own cytosine modification epitype. Each subset may respond differently to stresses in their tissue environment and contribute in different ways to metabolic health. A continued examination of subpopulations of adipose tissue nuclei should greatly improve the statistical significance of epitype data from VAT and should more accurately report epigenome-induced risk of disease.

S2 Fig. Due to the high cost associated with deep coverage of the pig genome using WGBS, we chose an alternative strategy to look at 5hmC metagene plots for hundreds to thousands of genes (groups of genes). To demonstrate that this approach and our level of coverage (i.e., 0.4X genome equivalents) is robust, we downloaded a published high-coverage 5hmC dataset including all genes in mouse frontal cortex and then subsampled reads ranging from 0.2X to 13X genome equivalents of coverage to plot the 5hmC distribution for six gene groups. As can be seen, coverage did not affect the patterns of the metagene plots, even for coverage as low as 0.2X. (TIF)

S3 Fig. SsVAT nuclei staining pattern for PPARg1 and PPARg2. A. Nuclei were stained with DAPI for DNA (blue) and PPARg1 (red). Nuclei were stained with monoclonal antibody to PPAR-gamma (Abcam Cat.# ab70405) and then goat anti-mouse IgG conjugated with R-PE (Invitrogen Cat.# P-852) and counterstained with DAPI (blue). B. Nuclei were stained with DAPI for DNA (green) and PPARg2 (red). Nuclei were stained with polyclonal antibody to PPAR gamma 2 (Abcam Cat.# ab45036) and then Alexa Fluor 633 goat anti-rabbit IgG secondary antibody (Life Technologies, A21070) and counterstained with DAPI (green). (TIF)

S4 Fig. Relative quantities of marker transcripts among the four classes of VAT cell nuclei isolated by FANS were determined by qRT-PCR. Using either Beta actin or RPL13A as endogenous controls gave similar results. Cell-type-specific markers ADN, SREBF1, GATA2, ERG3, IKAROS, and CD31 were examined. The properties of the marker genes and the oligonucleotide primers are described in S4 and S2 Tables, respectively. (TIF)

S1 Table. Primary and secondary antibodies used in this paper. Specific information on the antibodies used in this paper is listed. (DOCX)

S2 Table. Oligonucleotide primers for qRT-PCR analysis of Sus scrofa transcript levels in isolated nuclei. Sense and antisense primer sequences used in this paper are listed. (DOCX)

S3 Table. TAB-seq analysis metrics. The genome coverage achieved by TAB-seq is listed in the last column as a fraction of our coverage of the Sus scrofa reference genome Sscrofa10.2 (GCA_000003025.4). (DOCX)

S4 Table. Summary information and references on the properties of the marker genes assayed in this paper. The symbol, full name, and description of the genes assayed in this paper are listed. (DOCX)
v3-fos-license
2022-05-14T06:22:48.563Z
2022-05-12T00:00:00.000
248749237
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": null, "oa_url": null, "pdf_hash": "4464cc54483c83ed56a8036558adfb48d0fee2c9", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44717", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "sha1": "f2bcd448b2a2307782246bb7d03c2f70c0bba386", "year": 2022 }
pes2o/s2orc
Depression symptoms 6 years after stroke are associated with higher perceived impact of stroke, limitations in ADL and restricted participation

Late post-stroke depression symptoms are understudied. This study aimed to investigate depression symptoms 6 years after stroke, and associations with perceived impact of stroke, activities of daily living (ADL), and participation in social and everyday activities. Data was collected in a 6-year follow-up in a longitudinal study of stroke. Assessments included the Hospital Anxiety and Depression Scale (HADS) for depression symptoms, the Stroke Impact Scale 3.0 for perceived impact of stroke, the Barthel Index for ADL, and the Frenchay Activities Index for participation in social and everyday activities. The research questions were addressed by bivariate analyses (with HADS-D ≥ 4 as cut-off), and hierarchical multiple regression analyses using continuous HADS-D scores. Forty percent of the 105 participants (57% men, age 30-91) showed depression symptoms (HADS-D ≥ 4). Depression symptoms were associated with higher perceived impact of stroke, more dependence in ADL, and more restrictions in participation in social and everyday activities. Most of those with depression symptoms had low scores on HADS, indicating that even mild depression symptoms might be relevant to identify and target in treatment and rehabilitation of long-term consequences of stroke.

The terminology used here adheres to the International Classification of Functioning, Disability and Health (ICF) 5, which is based on the biopsychosocial model of disability. In the ICF, activity is defined as "the execution of a task or action by an individual" and participation as "involvement in a life situation". Depression is more common after stroke 6 than in the general population 7 and may contribute to poorer quality of life and functioning [8][9][10]. Previous research has reported both early and late onset of post-stroke depression (PSD), with the highest prevalence during the first year 6,8. The aetiology of PSD is still poorly understood but considered to be multifactorial, with biological and psychosocial components contributing to the development of depression symptoms 10. In recent years, increased attention has been given to evaluating the efficacy of treatments for PSD 10. Furthermore, different interventions to prevent PSD have been tested, including pharmacological, psychological, and non-invasive brain stimulation treatments, but the current evidence for efficacy of any of these treatments is weak 11. The prevalence of PSD has been reported up to 15 years 12, but there is still a scarcity of research including factors associated with PSD in a longer time perspective. As post-stroke depression has been related to worse functioning 10, relevant areas to investigate include perceived impact of stroke, activities of daily living (ADL) and participation in social and everyday activities. Therefore, the aims of this study were to investigate depression symptoms 6 years after stroke, and associations with perceived impact of stroke, ADL, and participation in social and everyday activities.

Methods

Participants. Participants from the longitudinal study "Life After Stroke Phase 1" (LAS-1), who took part in a 6-year follow-up, were eligible for inclusion. The LAS-1 was a prospective observational study on the rehabilitation process after stroke, described in detail elsewhere 13.
From an original sample of 349 patients diagnosed with stroke, consecutively recruited in 2006-2007 at stroke units at Karolinska University Hospital in Stockholm, Sweden, 183 persons who were still alive were approached by mail for participation. Informed signed consent was obtained from all participants. Ethical permission for the original study and the 6-year follow-up was granted by the Regional Ethics Committee in Stockholm (2011/1573-32, 2012/428-32), and procedures were conducted in accordance with the Declaration of Helsinki. In the current study, all participants from the LAS-1 study who consented to participate in the 6-year follow-up and who had completed the Hospital Anxiety and Depression Scale (HADS) 14 were included.

Procedure. Data was collected with structured face-to-face interviews by experienced occupational therapists and physiotherapists. The interviews were in most cases conducted in the participant's home. If needed, a next-of-kin was present during the interviews.

Data collection. Sociodemographic data was collected at baseline, within 5 days after stroke, and at the 6-year follow-up. The Barthel Index (BI) has shown good agreement with other stroke severity measures 15 and was used to classify stroke severity at baseline. A score < 15 was classified as severe, 15-49 as moderate, and ≥ 50 as mild stroke 16. Cognitive function was assessed with the Mini-Mental State Examination (MMSE) 17. One item from the Scandinavian Stroke Scale was used to assess the presence and severity of aphasia at onset 18. At 6 years, the following data were collected.

Hospital Anxiety and Depression Scale. Depression symptoms were assessed with the Hospital Anxiety and Depression Scale (HADS), which is a 14-item self-rating scale to screen for anxiety and depression among persons with physical ill-health 14. The items cover non-physical symptoms of anxiety and depression, with seven items covering anxiety (HADS-A) and seven items covering depression (HADS-D). The respondent is asked to rate agreement with the statements for a period of 1 week, on a four-point scale ranging from 0 = no symptom to 3 = maximum symptom. The maximum score on HADS-D is 21 and a commonly used cut-off for depression is > 8 19. However, it has been suggested that a lower score is more accurate to detect depression among persons with stroke 20 and a cut-off of 4 was therefore used in this study.

Stroke Impact Scale. Perceived impact of stroke was assessed with the Stroke Impact Scale (SIS) version 3.0 21, a 59-item scale assessing eight domains: strength, hand function, ADL, mobility, communication, emotion, memory and thinking, and participation. Responses on each domain are transformed into a 0-100 score, where 0 indicates maximal and 100 no perceived impact of the stroke. The scale also contains a single item reflecting perceived recovery from the stroke, which is rated on a visual analogue scale with a range of 0-100, where 0 reflects no, and 100 maximal, recovery.

Barthel Index. The Barthel Index 15 was used to assess ADL. The instrument consists of ten questions about activities regarding personal care and mobility. The total sum score is 0-100, where higher scores indicate more independence. Any score below 100 indicates some level of dependence.

Frenchay Activities Index. Participation in social and everyday activities was assessed with the Frenchay Activities Index (FAI) 22. The scale consists of 15 items covering domestic chores, outdoor activities, and leisure/work.
Each item is rated from 0 to 3, depending on the frequency of the activity during the past 3 or 6 months. A higher score indicates more frequent participation in social and everyday activities. A total score of < 15 is considered to indicate that the person is inactive/restricted 23.

Results

From the original cohort of 349 individuals in the LAS-1 study, 183 were still alive and eligible for the 6-year follow-up. Of these, 44 individuals declined participation, 18 were not possible to trace, 16 individuals had incomplete data, and 105 participated in the current study. The inclusion process is presented in Fig. 1 and demographics of the 105 participants are presented in Table 1. Compared with all 183 individuals that were eligible for the 6-year follow-up, study participants were younger (median age 69 versus 74, p = 0.008). There were no differences between groups with regard to stroke severity (n = 90/10/5 for mild/moderate/severe stroke versus n = 140/25/18, p = 0.055) or sex distribution (57% men versus 54%, p = 0.618). The participants were 45 women and 60 men, aged 30-91 (median = 69) at the time of the follow-up. Most of the participants had had a mild stroke (mild n = 90, moderate n = 10, severe n = 5). Among the participants, HADS-D scores varied between 0 and 12 (mean 3.25, SD 3.15, median 2, IQR 1-5). Forty percent had symptoms of depression (HADS-D ≥ 4) at the 6-year follow-up. The ratings are presented in Table 2. In bivariate analyses, there were no differences regarding age, sex, or presence of aphasia between participants with or without depression symptoms (Table 1). However, participants with depression symptoms had greater cognitive impairment, as reflected by lower scores on the MMSE. Participants with depression symptoms also perceived higher impact of stroke on all domains of the SIS, except for Memory and Thinking, and reported greater difficulties in ADL and social and everyday participation, as reflected by lower scores on BI and FAI (Table 1). The hierarchical multiple regression models are presented in Table 3 (see Supplementary Table S1 online for a more detailed description). Model 1, i.e., age, sex, and stroke severity, was significantly associated with all dependent variables, except the SIS domain ADL. Model 1 explained 4-35% of the variance of the dependent variables. When depression was added (Model 2), the explained variance increased significantly in all models, with ΔR2 ranging from 8 to 54% in added explained variance of the dependent variables (a sketch of this two-step model structure is given below). All models were statistically significant in the second step. Hence, depression symptoms were associated with higher perceived impact of stroke in all the domains of SIS, as well as more dependence in ADL as per BI and restrictions in participation measured by FAI.

Discussion

This study showed that depression symptoms (HADS-D ≥ 4) were present in 40% of the participants in this 6-year follow-up after stroke. Most participants had suffered from a mild stroke, and most had only mild symptoms of depression. Still, depression symptoms were consistently associated with worse outcomes in perceived impact of stroke, as well as more restrictions in ADL, and social and everyday participation. The study adds to the current literature on long-term follow-up 6,12 showing that depression symptoms are common also in the chronic phase after stroke. The depressive symptoms were negatively associated with perceived impact of stroke, ADL, and participation in social and everyday activities.
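To make the two-step hierarchical regression structure concrete, here is a minimal sketch with simulated data (the variable names, simulated outcome, and coefficients are illustrative assumptions, not the study's dataset): Model 1 enters age, sex, and stroke severity; Model 2 adds the continuous HADS-D score; and ΔR2 is the gain in explained variance.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 105
df = pd.DataFrame({
    "age": rng.normal(69, 12, n),
    "sex": rng.integers(0, 2, n),                              # 0 = woman, 1 = man
    "severity": rng.choice([0, 1, 2], n, p=[0.86, 0.09, 0.05]),  # mild/moderate/severe
    "hads_d": rng.integers(0, 13, n),                           # HADS-D scores, 0-12
})
# Simulated dependent variable, e.g., an SIS domain score (0-100)
df["sis"] = 80 - 2.5 * df["hads_d"] - 3 * df["severity"] + rng.normal(0, 10, n)

# Model 1: demographics and stroke severity only
m1 = smf.ols("sis ~ age + sex + C(severity)", data=df).fit()
# Model 2: add continuous depression symptom scores
m2 = smf.ols("sis ~ age + sex + C(severity) + hads_d", data=df).fit()

print(f"Model 1 R2 = {m1.rsquared:.3f}")
print(f"Model 2 R2 = {m2.rsquared:.3f}, delta R2 = {m2.rsquared - m1.rsquared:.3f}")
```

A significant ΔR2 in the second step indicates that depression symptoms explain variance in the outcome over and above age, sex, and stroke severity, which is the pattern the study reports.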
Therefore, the clinical implications of depression symptoms after stroke seem important to evaluate. An aspect to take into account is the similarities between depression and apathy. A recent study argued that post-stroke apathy is often mistaken for depression in PSD studies 24. However, in this study most of the depression items that were prominent among the participants with depression symptoms were clearly mood items (HADS-D items 2, 4, and 6, see Table 2) and not symptoms that could as well be expressions of, e.g., lack of energy or post-stroke apathy. Furthermore, participants with depression symptoms had lower scores on the MMSE, and the relationship between cognitive function and depression symptoms after stroke could be a relevant focus for future research.

Table 1. Characteristics of participants 6 years after stroke, and comparisons between participants with and without depression symptoms. a n = 103; b n = 101.

The HADS-D scores in the current study were similar to those reported in a representative sample of the Swedish population in the age group 65-80 25, indicating that depression symptoms are equally common in the general population. However, whether the negative associations between depressive symptoms, and ADL and participation in social and everyday activities are present also in the general population is not known and should be explored. The finding of the negative impact of mild depression symptoms on several outcome measures raises the question of how such symptoms might be prevented or treated. In a previous intervention review, evidence for antidepressant or psychological interventions in preventing depression after stroke was weak 11. However, a systematic review of cognitive behavioural therapy (CBT) for PSD showed positive effects of CBT, but the results should be interpreted with caution due to the quality of the included studies 26. Another systematic review and meta-analysis found evidence that exercise reduced depressive symptoms in neurologic disorders 27. Moreover, behavioural activation for PSD has been tested in a few studies with promising results, but more research is needed and recommended, especially for milder forms of depression 28. Thus, for mild depression symptoms after stroke, interventions to test in future studies might include exercise or behavioural interventions, which have few side effects and are easily applied at low cost. There were several methodological strengths of the study. The follow-up time of 6 years exceeded most previous studies of PSD. All stroke patients from Karolinska University Hospital's stroke units were eligible for inclusion in the original study and of these, all who were alive and reachable were invited to the 6-year follow-up. Data collection was performed with face-to-face interviews and no exclusion criteria were used, which enabled a broad participation. Valid and reliable outcome measures were used and included patient-reported outcomes. There were also several limitations of the study. First, the sample size was quite small and might not be representative for all persons 6 years after stroke. However, the percentage of deceased participants at the 6-year follow-up was similar to a large Swedish register-based study 29, which indicates that the original sample was approximately representative of the general stroke population in Sweden 6 years after stroke.
Those who chose to participate in the 6-year follow-up were younger than the whole group of eligible individuals alive after 6 years, which limits the representativeness of the sample. Second, causal relationships cannot be established; i.e., other life events than stroke may have contributed to the depression symptoms identified in this study. Furthermore, it is not possible to establish a causal relationship between the investigated variables; e.g., restrictions in participation might lead to depression and vice versa. However, as depression symptoms were associated with worse functioning, they are important to target.

Conclusion

The study adds to the literature by providing a long-term follow-up of depression symptoms after stroke and indicating a negative impact of even mild depression symptoms on everyday activities. Hence, long-term follow-up of persons with stroke, with sensitive screening to identify depression symptoms and initiate treatment, is warranted.

Data availability

Since data can indirectly be traced back to the study participants, according to the Swedish and EU personal data sharing legislation, access can only be granted upon request. Requests for access to the data can be put to our Research Data Office (rdo@ki.se) at Karolinska Institutet and will be handled according to the relevant legislation.

Table 3. The contribution of depression symptoms to perceived impact of stroke (each domain of the Stroke Impact Scale; SIS), ADL (Barthel Index) and participation in social and everyday activities (Frenchay Activities Index) (n = 103). a n = 105. b n = 101.
v3-fos-license
2023-11-29T16:18:08.839Z
2023-11-27T00:00:00.000
265486627
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2306-7381/10/12/674/pdf?version=1701059702", "pdf_hash": "0485c6c1600a7954aa43badc508327f21a780bd1", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44719", "s2fieldsofstudy": [ "Medicine" ], "sha1": "094cdcbef4c661f6430ff40bc8c944a76fb070c0", "year": 2023 }
pes2o/s2orc
Website Investigation of Pet Weight Management-Related Information and Services Offered by Ontario Veterinary Practices

Simple Summary

In addition to veterinary advice, pet owners rely on the internet for information on their pet's health. Effectively managing a pet's weight is often an underestimated component of wellness for pets, and there is an opportunity for veterinary practices to utilize an online platform to educate pet owners on the importance of weight management. The primary objective of this study was to describe the type of canine and feline weight management services, products, and information advertised or displayed on the websites of 50 veterinary practices in Ontario. An additional objective was to explore whether the size, company status, and location of the practice influences what is advertised or displayed. The preliminary results suggested that veterinary practices do not prioritize advertising weight management services, products, or educational material online for the public, and this was especially true for smaller practices with fewer veterinarians and veterinary technicians employed. Independently owned veterinary practices also seemed to advertise weight management products less than corporate practices. The exploratory findings of this study highlight the need for veterinary teams to provide educational and up-to-date online resources on weight management for pet owners. This, in turn, can provide trusted educational and accessible information for pet owners and contribute to client loyalty.

Abstract

Pet owners rely on information and advice from their veterinary practice to effectively manage their pet's weight. This study investigated weight management information and services displayed on practice websites in Ontario, Canada. Information collected from the websites of 50 randomly selected small and mixed-animal practices included practice and staff demographics and the type of weight management services, products, and information advertised or displayed. The most frequently advertised weight management service and product were nutritional counselling (34%) and therapeutic diets (25%), respectively. Current bodyweight measurement was advertised on just over half of the websites (54%), while physical therapy counselling was the least-advertised service (16%). Further statistical analyses were performed in an exploratory fashion to determine areas for future research. Binary logistic regression analyses were used to investigate the association between practice demographics and the type of weight management information advertised online. A maximum of two predictor variables were included in each regression model. Exploratory analyses indicated that when controlling for the number of veterinarians in each practice, having a higher number of veterinary technicians was associated with increased odds of a practice website advertising current bodyweight measurement by 80.1% (odds ratio (OR) = 1.80, p = 0.05). Additionally, when controlling for the number of veterinary technicians, having a higher number of veterinarians was associated with increased odds of a practice website advertising sales of therapeutic diets by 119.0% (OR = 2.19, p = 0.04). When using corporate practices as reference, independently owned practices had decreased odds of advertising sales of treats and weight management accessories on their practice websites by 78.7% (OR = 0.21, p = 0.03).
These preliminary results suggest that advertising weight management information is not prioritized on veterinary practice websites in Ontario, especially those with lower staff numbers. The findings of this study raise awareness on the current state of weight management promotion for pets on veterinary practice websites and highlight ways to improve upon a practice's online presence.

Introduction

Weight management in dogs and cats, an essential component of companion animal medicine that is often undervalued or overlooked, refers to the process of adapting long-term lifestyle changes to maintain a pet's healthy weight. This could involve weight loss and/or muscle and weight gain, while considering individualized factors such as breed, age, gender, and activity level of the pet [1,2]. An effective weight management plan includes both nutritional and physical therapy counselling to ensure appropriate caloric intake, diet selection, physical activity levels, and physical therapy where appropriate to ensure body fat loss and/or restore the use of muscles, bones, and the nervous system [3,4]. It also includes ways to modify the behaviors of both the owner and their pet to overcome barriers preventing them from reaching their goal [3]. Individualized weight management programs that provide consistent and healthy rates of weight loss or gain have been known to improve the quality of life for the animal by increasing lifespan and reducing the risk of or combatting already existing diseases or disorders, malnutrition, and certain types of cancers [3][4][5][6][7][8]. Despite knowledge that weight management is a vital component of wellness for companion animals, it remains a challenge for many pet owners to maintain a healthy weight in their pets. Canine and feline obesity is the most common nutritional disorder within veterinary practice and its prevalence has reached epidemic levels [4,9,10]. It is estimated that between 20 and 60% of adult cats and dogs in developed countries are overweight or obese [6,[9][10][11][12][13][14][15][16]. The prevalence of underweight cats and dogs is less reported in the literature. However, recent studies have shown the prevalence of underweight adult dogs and cats in developed countries to be 4.2-11 and 5.3-10%, respectively [12][13][14][15][16][17]. Considering these high percentages, primarily among overweight cats and dogs, it is likely pet owners are unaware of the negative impact that an over-conditioned state can have on their pet's health. For example, multiple studies have shown that pet owners continue to incorrectly believe their overweight or obese pet to be at an acceptable weight [18][19][20][21].
One way to combat this misunderstood aspect of healthcare in pets is to prioritize the promotion of weight management services and information in a method that is accessible and understandable for pet owners. Two recent studies were conducted using pet owner focus groups to understand pet owners' expectations when it comes to information exchange with their veterinarian [22,23]. Many pet owners mentioned visiting the internet for additional information on their pet's health following their veterinary appointment. They also expressed the desire for supplementary resources from their veterinarian, whether that be provided through the clinic or other reputable internet sources [22,23]. A recent online survey targeting pet owners in the United Kingdom found that the most common source for pet health information was the internet (449/571; 78.6%), followed by their veterinarian (441/571; 77.2%), yet veterinarians were considered the most trustworthy source of information [24]. An older study conducted by Hofmeister et al. (2008) showed that clients of veterinary practices consider the internet to be the third most popular source of information regarding pet health, following general practitioners and veterinary specialists [25]. Similar research from the United States has shown that 72.7% of pet owners surveyed (n = 1223/1683) considered the internet to be a supplemental source of information in addition to traditional vet care [26]. It is apparent that veterinary practice websites should provide informative and easy-to-understand resources and information about pet weight management, as well as the services and products they currently offer to assist in the weight management journey. Comprehensive and up-to-date websites can allow pet owners to gain confidence in their ability to manage their pet's weight and to understand what assistance can be provided by their veterinary health care team. Accessible information can result in pet owners becoming more comfortable asking questions, ultimately leading to a sense of responsibility and self-motivation to improve their pet's condition [24,26,27]. It can also help to establish veterinary clinics as trusted experts in pet health and wellness, further strengthening the relationship with pet owners and promoting client loyalty [22][23][24][25][26][27]. Finally, information regarding veterinary health care team members and their qualifications or interests in the field of pet weight management could be vital for pet owners when choosing the right veterinary practice for potential weight management consultations. A better understanding of what is advertised or displayed on veterinary practice websites is necessary to overcome current barriers pet owners face when understanding the importance of and pursuing weight management for their pets.
The authors hypothesized that online marketing of canine and feline weight management services is not a priority for companion animal veterinary practices in Ontario. Therefore, the primary objective of this study was to investigate the type of weight management services and information that are advertised or displayed on the websites of veterinary practices in Ontario and the frequency with which they are promoted. An exploratory analysis was also performed to investigate whether any veterinary practice and staff demographics influence the type of weight management services and information advertised or displayed on clinic websites. This exploratory analysis was meant to serve as an initial look into the influence of demographic factors on the online advertising abilities and/or priorities of veterinary practices and to suggest which factors might be important to consider for future research.

Data Collection

A total of 50 small and mixed-animal Ontario practice websites were investigated. The practices were selected from a list of 783 practices that have referred at least one patient (dog or cat) to the Ontario Veterinary College's Health Science Centre (OVC HSC) and were within a 75 km radius from the Ontario Veterinary College in Guelph, Ontario. Practices were randomly selected using a computer-generated randomization list via www.randomization.com (an equivalent sampling step is sketched below). The radius was chosen to ensure 50 practices could be randomly selected. The veterinary practices were not aware of the investigation at any time. The websites were initially investigated from January to March 2022 and re-investigated by a third party from November to December 2022. The websites were re-investigated to include a more in-depth search of weight management information for the purpose of this study. The only data used for this study was obtained from the most recent website investigation, and comparisons were not made between each time point. Information collected was separated into three categories: practice demographics, staff demographics, and pet weight management services and information displayed. All survey questions can be found in Supplementary Table S1. For this investigation, the only information used was that made available through websites, except for certain demographic information. Veterinary practice demographic information included the name, address, location (urban, suburban, or rural), type of practice (general, emergency and specialty, and mixed), species served, and the practice's company status (independently owned or belonging to a corporation). The name of the practice was entered into an internet search engine to access the practice's website. If the address was not listed on the website, the information was taken from the internet search engine. The location (urban, suburban, rural) was derived from the practice's address. Definitions used to define a practice as urban, suburban, or rural can be found in Table S1. If it was not explicitly stated on the website that the practice belonged to a corporation or was independently owned, lists of practices associated with all known veterinary corporations in Ontario were searched. If the practice was not listed, it was deemed an independently owned practice.
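As noted above, the random draw in the study was generated at www.randomization.com; the sketch below shows an equivalent, reproducible way such a sample could be drawn. The file name, column names, and distance field are invented for illustration and are not the study's actual data.

```python
import csv
import random

# Load the hypothetical referral list and keep practices within the 75 km radius
with open("referring_practices.csv", newline="") as f:
    practices = [row for row in csv.DictReader(f)
                 if float(row["distance_km"]) <= 75.0]

random.seed(2022)                     # fixed seed so the draw is reproducible
selected = random.sample(practices, 50)   # draw 50 practices without replacement
print([p["practice_name"] for p in selected])
```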
Staff demographic information included the total number of staff and number of veterinarians and veterinary technicians listed on the website. The number of veterinarians and veterinary technicians with nutrition and physical therapy credentials was also recorded, as well as whether any staff members had additional training in veterinary nutrition, as noted in the staff biographies on the website. Credentials were accepted when they consisted of an ongoing or completed board certification, graduate degree, or training program included in the American Association of Veterinary State Boards' Registry of Approved Continuing Education Programs focused on nutrition or physical therapy. Accepted nutrition and physical therapy credentials for veterinarians and veterinary technicians can be found in Table S1. Additional training for staff members in veterinary nutrition included those considered nutrition advisors and those with nutrition certificates from pet food companies or other online nutrition certifications. The pet weight management services, products, and information displayed on the practice websites included the advertising of a weight management service, nutritional counselling, physical therapy counselling, current bodyweight measurement, and sales of veterinary therapeutic diets, treats, food puzzles, or measurement tools and weight management accessories such as leashes and toys, and whether they displayed educational material on weight management. Links to webstores as well as information directly displayed on the website were used to investigate whether the practice advertised sales of therapeutic diets, treats, food puzzles, or measurement tools and weight management accessories. Note that for this study, the term "canine physical therapy" is used, which is interchangeable with the term "canine rehabilitation". Canine physical therapy encompassed both conventional physical therapy and complementary and alternative therapies. Physical therapy consists of conventional evidence-based treatments that help the patient to restore the use of muscles, bones, and the nervous system. Treatment modalities include physical activity, manual massage, passive range of motion, walking, hydrotherapy, joint mobilization, and heat and cold therapy [28]. Complementary and alternative therapies consist of a range of therapies that are not part of current standard veterinary medical practice but can be used in addition to conventional treatments. Treatment modalities include laser therapy, therapeutic ultrasound, acupuncture, and transcutaneous or neuromuscular electrical stimulation [29]. Additionally, for this study, nutritional counselling encompassed information on body composition assessments, calculating ideal bodyweight, collecting a diet history, and recommendations on diet selection, feeding amount, feeding frequency, feeding management, daily treat allowance, and supplementation [5].
Data Analysis

Descriptive statistics, including frequency (n) and percentage (%) data, were computed in Microsoft Excel version 16.71 for all veterinary practice demographics (location, company status, practice type, and type of species served), staff demographics (total number of staff, number of veterinarians and veterinary technicians, number of veterinarians and veterinary technicians with nutrition and physical therapy credentials, and whether any staff members had additional nutrition training), and pet weight management information and services displayed on the websites (whether the website advertised a weight management service, nutrition and physical therapy counselling, current bodyweight measurement, and sales of therapeutic diets, treats, food puzzles, or measurement tools and accessories, and whether it displayed educational material).

All other statistical analyses were performed in R Studio, version 4.2.2 (31 October 2022). Statistical models were selected based on the nature of the dependent variables of interest, which in this case were all binary. To guard against overfitting, the number of independent (predictor) variables entered into each model was limited according to the size of the smaller category of the binary dependent variable (present/absent). The smaller category of the following dependent variables of interest allowed for a maximum of two predictor variables per model: advertising a weight management service, current bodyweight measurement, sales of therapeutic diets, treats, and accessories, and displaying educational material. The smaller category of the following dependent variable of interest allowed for a maximum of one predictor variable per model: advertising nutritional counselling. The smaller category of the following dependent variables did not allow for any predictor variables to be included in the models, which were therefore not run: advertising physical therapy counselling and the sale of food puzzles and measurement tools.
Binary logistic regression models were designed to determine whether veterinary practice and staff demographics influenced the odds of a practice advertising pet weight management information and services.

Three models were developed for the following dependent variables, each measured as a binary present/absent outcome: advertising a weight management service, current bodyweight measurement, and sales of therapeutic diets, treats, and accessories. Independent variables considered in the first model were the numbers of veterinarians and veterinary technicians (continuous variables) listed on each practice website. Veterinarians and veterinary technicians were included in the same model because the number of veterinarians and the number of veterinary technicians employed within a practice are likely correlated. By including both staff types in the model, the influence of the number of veterinarians on a given outcome variable could be examined while controlling for the number of veterinary technicians, and vice versa. A linear relationship between the two numerical independent variables (number of veterinarians and veterinary technicians) and the logit transformation of each dependent variable was confirmed by visual inspection of scatter plots. The independent variable considered in the second model was the location of the practice (three categorical levels: urban, suburban, or rural). A categorical variable with N levels enters a binary regression model as N − 1 indicator variables and therefore consumes more than one predictor slot; consequently, location was the only independent variable permitted in that model. The independent variable considered in the third model was the company status of the practice (dichotomous variable: belonging to a corporation or independently owned).

Two models were developed for the dependent variable displaying educational material (measured as a binary present/absent outcome). The independent variable considered in the first model was location, and company status was considered in the second model. The numbers of veterinarians and veterinary technicians were not included as independent variables in a model because the linearity assumption was not met upon visual inspection.

Four models were developed for the following dependent variable, measured as a binary present/absent outcome: advertising nutritional counselling. Each model consisted of one independent variable due to the overfitting constraint, and the following independent variables were used: number of veterinarians and number of veterinary technicians listed on the websites, and location and company status of the practices.

For all analyses that considered location as the independent variable, two regression models were run. The first model used urban practices as the reference level, which allowed comparisons between urban and suburban practices and between urban and rural practices. The second model used suburban practices as the reference level, which allowed the final comparison between suburban and rural practices. For all analyses that considered company status as the independent variable, belonging to a corporation was used as the reference level. In total, 28 binary logistic regressions were run.
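The original analyses were run in R; purely as a hedged illustration, the following Python sketch (using the real statsmodels library) mirrors the modelling setup described above: a binary outcome regressed on staff counts, and a categorical location variable re-levelled to change the reference group. The data frame and its column names are hypothetical toy data, not the study's data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical toy data: one row per practice website.
df = pd.DataFrame({
    "advertises_bodyweight": [1, 0, 1, 1, 0, 0, 1, 0, 1, 1],  # binary outcome
    "n_veterinarians":        [3, 1, 4, 2, 1, 1, 5, 3, 2, 2],
    "n_technicians":          [4, 1, 5, 3, 2, 1, 6, 4, 2, 3],
    "location": ["urban", "rural", "urban", "suburban", "rural",
                 "urban", "urban", "suburban", "rural", "suburban"],
})

# Model 1: both staff counts, so each effect is adjusted for the other.
m1 = smf.logit("advertises_bodyweight ~ n_veterinarians + n_technicians",
               data=df).fit(disp=0)

# Model 2: location only, run twice with different reference levels,
# first 'urban' (urban vs. suburban, urban vs. rural), then 'suburban'.
m2_urban = smf.logit(
    "advertises_bodyweight ~ C(location, Treatment(reference='urban'))",
    data=df).fit(disp=0)
m2_suburban = smf.logit(
    "advertises_bodyweight ~ C(location, Treatment(reference='suburban'))",
    data=df).fit(disp=0)

print(m1.summary())
```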
Certain veterinary practice (practice type, type of species served) and staff (number of veterinarians and veterinary technicians with nutrition or physical therapy credentials, whether any staff members had additional nutrition training) demographic information could not be included as independent variables in the models due to the lack of variation in the data. Most practices provided general care for small animals (46/50), with only four providing emergency services and one providing care for small and large animals. There was also not enough variation in the type of species served, as all practices serviced cats and dogs, and only 15 serviced pocket pets and 4 serviced pocket pets and exotic species, along with cats and dogs. Finally, most practice websites did not report any veterinarians or veterinary technicians having nutrition and/or physical therapy credentials, or any staff members with additional nutrition-related training.

To understand the proportion of the variance in the dependent variables that could be explained by the predictor variables (number of veterinarians and veterinary technicians, location, and company status) in the models, pseudo-R² values were reported. Pseudo-R² values were calculated using the Cox-Snell, McFadden, and Nagelkerke formulations, and the resulting range of explained variance was reported. The statistical approach used does not yield a true R² value, so the pseudo-R² values reported should be interpreted with caution. Statistical significance was set at p ≤ 0.05; trends were recognized if p was >0.05 but <0.1. All tables and figures were created in GraphPad Prism 9.5.1.

Due to the large number of tests run, there is potential for an inflated type I error. Considering the exploratory nature of these analyses, it was chosen not to correct for multiple comparisons. The purpose of the regression analyses was to take an initial look into the potential influence of veterinary practice and staff demographics on the advertising of weight management information online. Therefore, the results should mainly be interpreted as insight for future, larger studies, and caution should be taken when examining the unadjusted p-values. Of the 28 binary regression analyses run, 5 were identified as having relevance for future research and are discussed below.

Demographic Information

To compare demographic information collected in this study to the greater population, the number of active companion animal veterinary practices and veterinarians in Ontario was collected from the College of Veterinarians of Ontario (Table 1) [30]. The number of active registered veterinary technicians in Ontario was collected from the Ontario Association of Veterinary Technicians; this number could not be filtered by species (Table 1) [31]. All 50 veterinary practices investigated served small animals, with 45 (90%) general practices and 5 (10%) emergency or specialty clinics. No practices investigated served both small and large animals. Many served only cats and dogs (34/50; 68%), while 15 (30%) and 4 (8%) served pocket pets and exotics along with cats and dogs, respectively. Only one (2%) practice served only cats. Information regarding the location and company status of the practices is summarized in Table 2.
Staff Demographics

Of the veterinarians and veterinary technicians listed on the practice websites, only two veterinarians reported holding any nutrition credentials in their biography. Additionally, very few veterinarians and veterinary technicians reported holding credentials in the field of veterinary physical therapy. Moreover, additional training in veterinary nutrition was not mentioned in most of the staff members' biographies (Table 2).

Weight Management Services, Products, and Information Displayed

More than half of the veterinary practice websites advertised nutritional counselling (66%; Figure 1A), and just over half advertised having a weight management service and current bodyweight measurement (54%; Figure 1A). Educational material related to weight management was displayed on just under half of the websites (46%; Figure 1A). Physical therapy counselling was the least-advertised weight management service (n = 8/50; 16%; Figure 1A). Regarding the sale of weight management products, therapeutic diets were advertised most frequently (50%; Figure 1B). The same veterinary practices that advertised the sale of treats also advertised the sale of weight management accessories, and these were advertised on just under half of the practices (42%; Figure 1B). The sale of food puzzles and/or measurement tools was not advertised on any of the practice websites.
Influence of Veterinarians and Veterinary Technicians on Displaying Weight Management Services, Products, and Information on Websites

On average, when controlling for the number of veterinarians in each practice, having a higher number of veterinary technicians was associated with increased odds of a practice website advertising current bodyweight measurement, by 80.1% (Table 3). When controlling for the number of veterinary technicians in each practice, the number of veterinarians did not have an influence on the advertising of current bodyweight measurement on practice websites. Regarding the advertising of current bodyweight measurement, pseudo-R² values revealed that 16.8-27.6% (Table S2) of the variance in the data could be explained by the combined number of veterinarians and veterinary technicians working in the practice. Additionally, on average, when controlling for the number of veterinarians in each practice, having a higher number of veterinary technicians tended to increase the odds of a practice website advertising a weight management service in general, by 63.5% (OR = 1.63, p = 0.08). When controlling for the number of veterinary technicians in each practice, the number of veterinarians did not have an influence on the advertising of a weight management service on practice websites. Regarding the advertising of a weight management service, 13.9-23.3% (Table S2) of the variance in the data could be explained by the combined number of veterinarians and veterinary technicians working in the practice.

On average, when controlling for the number of veterinary technicians in each practice, having a higher number of veterinarians was associated with increased odds of a practice website advertising the sale of therapeutic diets, by 119.0% (Table 3). When controlling for the number of veterinarians in each practice, the number of veterinary technicians did not have an influence on the advertising of selling therapeutic diets on practice websites. Pseudo-R² values revealed that 17.4-27.8% (Table S2) of the variance in the data could be explained by the combined number of veterinarians and veterinary technicians working in the practice. Binary logistic regressions did not reveal any influence of the number of veterinarians and veterinary technicians working in the practice on whether the practice websites advertised the sale of treats and weight management accessories.

Influence of Veterinary Practice Company Status on Displaying Weight Management Services, Products, and Information on Websites

In terms of weight management services, binary logistic regressions did not reveal any influence on whether practice websites advertised a weight management service, nutritional counselling, or current bodyweight measurement between practices belonging to a corporation and those that were independently owned.
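The percentage changes in odds quoted above follow directly from the fitted coefficients: for a one-unit increase in a predictor, the odds ratio is OR = e^β and the percent change in odds is (OR − 1) × 100, so OR = 1.63 corresponds to roughly a 63.5% increase. A one-line check against the reported figures can be scripted as follows; the β values are back-calculated from the reported odds ratios for illustration only.

```python
import math

def percent_change_in_odds(beta: float) -> float:
    """Percent change in odds per one-unit increase in a predictor."""
    return (math.exp(beta) - 1.0) * 100.0

# Back-calculated illustrative coefficients matching the reported odds ratios.
for label, beta in [("technicians -> bodyweight measurement", math.log(1.801)),
                    ("technicians -> weight management service", math.log(1.635)),
                    ("veterinarians -> therapeutic diet sales", math.log(2.190))]:
    print(f"{label}: {percent_change_in_odds(beta):.1f}% change in odds")
```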
Using corporate veterinary practices as the reference, independently owned practices had decreased odds of advertising the sale of treats and weight management accessories on their websites, on average, by 78.7% (Table 4). Pseudo-R² values revealed that 7.9-13.7% (Table S2) of the variance in the data could be explained by the company status of the veterinary practices. Binary logistic regressions revealed that being independently owned or belonging to a corporation did not influence whether a veterinary practice advertised the sale of therapeutic diets or displayed educational material on weight management on its website. Binary logistic regressions also did not reveal any influence of the location (urban vs. suburban; urban vs. rural; suburban vs. rural) of a veterinary practice on whether the practice website advertised a weight management service, nutritional counselling, current bodyweight measurement, or sales of therapeutic diets, treats, and accessories.

It was also investigated whether the location of a veterinary practice influenced whether the practice website displayed educational material in written, video, or blog format. When using urban practices as the reference, residing in a rural area tended to decrease the odds of a veterinary practice website displaying educational material by 86.1% (OR = 0.56, p = 0.08). There was no difference in displaying educational material on practice websites between urban and suburban practices. When using suburban practices as the reference, there was no difference in displaying educational material on practice websites between suburban and rural practices. Pseudo-R² values revealed that 6.3-11.1% (Table S2) of the variance in the data could be explained by the location of the veterinary practices in both models.
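For context on the Cox-Snell, McFadden, and Nagelkerke ranges cited throughout these results, here is a minimal sketch of the three standard pseudo-R² formulas computed from a fitted model's log-likelihoods. This is an illustration, not the authors' R code, and the input log-likelihoods are hypothetical.

```python
import math

def pseudo_r2(ll_model: float, ll_null: float, n: int) -> dict:
    """Standard pseudo-R-squared measures for a binary logistic regression.

    ll_model: log-likelihood of the fitted model
    ll_null:  log-likelihood of the intercept-only (null) model
    n:        number of observations
    """
    mcfadden = 1.0 - ll_model / ll_null
    cox_snell = 1.0 - math.exp(2.0 * (ll_null - ll_model) / n)
    # Cox-Snell cannot reach 1; Nagelkerke rescales it onto a 0-1 range.
    nagelkerke = cox_snell / (1.0 - math.exp(2.0 * ll_null / n))
    return {"McFadden": mcfadden, "Cox-Snell": cox_snell, "Nagelkerke": nagelkerke}

# Hypothetical log-likelihoods for a model fitted to 50 practices.
print(pseudo_r2(ll_model=-28.4, ll_null=-34.6, n=50))
```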
Discussion

The results of this study provide insight into the current state of accessible and reliable information that is available online to pet owners to aid their knowledge and decision making when it comes to their pet's weight management. On the veterinary practice websites evaluated in the present study, the most-advertised weight management service was nutritional counselling. Similarly, the most-advertised products were veterinary therapeutic diets, followed by treats and weight management accessories such as leashes and toys. This finding is supported by previous research investigating the types of pet obesity information that are present or absent online [32]. Making changes to a pet's diet to aid in weight management and the recommendation of a weight loss diet were mentioned most often [32]. Another qualitative study found that the pet health information pet owners most frequently search for online concerns medical issues and diet/nutritional information [23]. The weight management guidelines put forth by the American Animal Hospital Association (AAHA) primarily focus on the nutritional component of weight loss or gain and recommend that nutritional assessments be performed regularly by the veterinary team. Additionally, the World Small Animal Veterinary Association (WSAVA), along with the AAHA, has classified nutrition as the fifth vital sign following temperature, pulse, respiration, and pain [4,33,34]. Companion animal nutrition has been well studied, with research leading to detailed dietary and feeding guidelines [4,5]. The strength of these recommendations from the AAHA and WSAVA, along with the current state of research, should lead veterinary teams to feel most confident in pursuing the nutritional aspect of weight management with their patients. This perceived importance of nutrition in the health and well-being of pets may explain its prominence on veterinary practice websites when compared to other weight management services and products.

Measuring a patient's current bodyweight is an important first step in determining a weight management plan for pets [1,33], yet it was advertised as a service by only half of the websites. This aligns with the study mentioned above, conducted by Chen et al.
in 2020, who found that less than half of the online sources investigated mentioned measuring a pet's weight, with even fewer mentioning how to properly weigh a pet or describing a body condition score chart [32]. It is known that body condition scoring (BCS) and bodyweight measurement are two of the most common methods used by veterinarians to assess body composition [1]. These valuable tools can be used by both veterinary staff and pet owners to determine a pet's ideal bodyweight. Determining a pet's ideal bodyweight could help motivate pet owners to effectively manage their pet's weight with a tangible goal in mind [1,4]. However, multiple studies have shown that pet owners continue to inaccurately determine their pet's body condition score using the BCS chart [18][19][20][21]. A questionnaire sent to pet owners revealed that only about half of the respondents were able to correctly estimate their dog's bodyweight using electronic scales [21]. Veterinary practices may want to avoid the possibility of providing information online that has the potential to be misused, and as a result, they may avoid advertising the service altogether. Nevertheless, pet owners play an important role in the monitoring of their pet's health and wellbeing [35]. Providing online tools to help pet owners complete an accurate body composition assessment can contribute to the overall success of weight management in pets. Advertising current bodyweight measurement can also allow veterinary practices to raise awareness of its importance in the weight management journey. To further encourage the proper use of weighing scales and body composition assessment tools, detailed instructions could accompany all online resources to provide a step-by-step guide for pet owners.

Physical therapy appeared to be the least-prioritized service in terms of advertisement. In the present study, only a small number of veterinary practice websites advertised any type of physical therapy, including physical activity. In a similar study evaluating online pet obesity information, a small number of online sources advertised physical therapy, and making dietary changes was addressed significantly more often for weight loss than increasing levels of physical activity [32]. In the present study, weight management accessories that help promote physical activity, such as leashes and toys, were advertised on less than half of the practice websites. To date, there are limited studies investigating the benefits of incorporating physical therapy into a weight management plan for pets [36][37][38]. Increasing physical activity is often recommended by veterinarians in conjunction with dietary changes [39], but research regarding the effectiveness of such a program is scarce. In contrast to nutritional recommendations, there is a lack of evidence to help determine an ideal physical activity program for cats and dogs [4]. Determining the caloric expenditure of different types of physical activity is relatively unexplored, aside from walking in dogs [4,40]. A survey sent to veterinary colleges within the United States and Canada also revealed that less than half offered a dedicated course in integrative veterinary medicine, a combination of complementary and alternative therapies with conventional care. Of those that offered courses in this area, all but one offered it as an elective [41]. The lack of published research and guidelines on the use of physical therapy for weight management, along with the minimal education for veterinarians, could help to explain the low
advertisement of this service on practice websites. Regardless, practice websites could benefit from having a section dedicated to physical therapy and/or physical activity. If available, veterinarians with board certifications focusing on nutrition and physical therapy could assist with content creation, since disseminating knowledge on weight loss protocols involving physical therapy draws directly on expertise from these fields.

Aside from services and products, providing pet owners with educational information and resources on pet weight management is a trusted and accessible way to promote proper pet weight management care. A recent study using pet owner and veterinarian focus groups found that pet owners expressed the desire for information to be explained in multiple ways, and many deemed using a visual aid along with verbal explanation, as well as being directed to reputable internet sources, to be effective [22]. Veterinarians in all focus groups also felt that providing additional resources to pet owners is important [22]. Owners in another recent focus group study expressed the desire for veterinarians to provide trusted online resources to supplement the content learned during the appointment [23]. Unfortunately, in the present study, only about half of the veterinary practice websites displayed some type of educational material on weight management, whether in written or video format or as links to external sources. This aligns with previous research findings that few pet owners receive recommendations for online resources from their veterinarians, despite their willingness to use such recommendations [23,24,42,43]. Since both veterinarians and pet owners recognize the benefits of informative and trustworthy online resources, additional efforts should be considered by all veterinary team members in creating educational online content. Examples of ways to achieve this include creating original content, subscribing to online software that provides educational material for veterinary practices, and providing links to trusted external sources.
Exploratory analyses suggested that, when controlling for the number of veterinarians working in the practice, the number of veterinary technicians influenced the advertising of current bodyweight measurement and of a weight management service. Few studies examining the role of the veterinary technician during weight management assessments exist. A recent qualitative study was conducted to understand the perceptions of veterinary professionals in relation to their practice's weight management service. Many respondents believed that the qualified veterinary technicians within their practice provided nutrition- and weight-related information, most commonly following the veterinarians [44]. Another preliminary study targeting veterinary professionals, primarily veterinary technicians, noted that most respondents would initiate a discussion with the owner regarding their pet's weight, regardless of weight status, and many indicated that they would provide recommendations on caloric intake, measuring food, and exercise [45]. It is important to note that the results of these studies were not separated by practice, and the number of veterinarians and veterinary technicians working at each practice was unknown [44,45]. Preliminary results from the present study may suggest that utilizing the expertise and skills of veterinary technicians can further assist in promoting and raising awareness on the topic of weight management. Future research should investigate the benefits of delegating the promotion and advertising of weight management care to trained technicians or other veterinary support staff.

Exploratory analyses also indicated that, when controlling for the number of veterinary technicians in the practice, the number of veterinarians influenced the advertising of therapeutic diet sales. A recent survey given to small-animal veterinary health care team members noted that veterinarians were the most common source of nutrition-related information for pet owners, followed by veterinary nurses/technicians. There was also a significant relationship between the frequency with which veterinarians performed nutritional assessments and the establishment of a normal dietary regime, calculation of energy requirements, and formulation of nutritional plans [45]. Research also suggests that veterinarians remain the leading source of information when it comes to pet owners purchasing pet food [44][45][46][47]. If veterinarians and veterinary technicians are the two main sources of nutritional and dietary-related services and education, veterinary practices that employ a high number of these staff members may advocate more strongly for the online advertisement of nutritional counselling and of products such as therapeutic diets and treats. Additional studies should be performed on a larger scale to determine the impact of the size of the practice, specifically in terms of the number of veterinarians and veterinary technicians employed, on the advertising and promotion of nutritional and weight management products in general.
A final exploratory finding worth noting from this study was the influence of a veterinary practice's company status on the advertisement of products. Independently owned practices had decreased odds of advertising treats and weight management accessories compared to those belonging to a corporation. This is especially interesting as less than a third of the veterinary practices belonged to a corporation, and the remainder were independently owned. Incorporation has risen in popularity in recent years. Belonging to a corporation may increase the opportunities available to veterinary practices, in that staff members can focus on veterinary medicine rather than the management of a business and the associated marketing. Additional resources might be available for advancing the layout of corporate practice websites, including pre-existing templates that ensure available products are present online, as well as links to any online veterinary stores [48]. This could also include shared educational tools and handouts that are available to all practices within the corporation to help advertise products and services, and to help educate pet owners on all topics related to weight management. To build upon these preliminary results, studies could focus on comparing website design between independently owned practices and those belonging to a corporation. These studies could also determine whether a relationship exists between the advertising of services and products and sales profit.

There were a few limitations to this study, the first being that information presented on the websites may not accurately reflect what is offered in person in veterinary practices. It was unknown whether the websites were complete and up to date regarding all aspects of the practice. Statistical analyses were performed with an exploratory purpose to determine areas for future research, and there is potential for an inflated type I error as the results were not corrected for multiple comparisons. Both the sample size and the geographical radius of the veterinary practices chosen were small, which limited the number of inferential statistics that could be run in order to avoid overfitting the models. The small number of predictor variables permitted in the models also did not allow for the inclusion of covariates or confounders, which might have influenced the results. Possible covariates could include the education level/credentials of veterinarians and veterinary technicians in the field, the type of practice (emergency, specialty, etc.), and the species served. Possible confounders include the availability of services and/or products, knowledge of website design, and the economic status and level of establishment of the practice. Although the practices were uninformed of this investigation and were chosen at random, there is a possibility that they became aware of the study and modified their websites accordingly. A final limitation is that veterinary practices were chosen from a list of practices referring to the Ontario Veterinary College, which may have caused some practices within the 75 km radius that had never referred a patient to the college to be missed. Considering the small sample size, extrapolation of findings from this study to the greater population of Ontario should be made with caution. However, regardless of sample size, this study should act as a call for more research and action from veterinary practices and staff members to improve upon the sources of information provided to pet owners regarding
weight management. A broader website search should be conducted within a wider geographical range and without the prerequisite of being a referring practice, to avoid sampling bias. Further research could also compare the information found on the websites of veterinary practices to what is truly offered in-clinic, to evaluate online accessibility for pet owners seeking veterinary care or advice. The findings of this study highlight the need for veterinary practices to improve upon their weight management promotion. Providing content online can increase awareness about pet obesity and help owners more effectively manage their pet's weight from home.

Conclusions

With the growing use of the internet for pet health information, veterinary practices are presented with the opportunity to utilize an online platform to raise awareness on the importance of weight management in pets. However, based on the results, it seems that veterinary practices in Ontario are not prioritizing the advertisement of weight management resources, and there remains room for improvement. Services and products related to nutrition were advertised most frequently, with little priority given to the promotion of physical therapy and physical activity in relation to weight care. Educational resources on weight management were also not provided to pet owners on many websites. Exploratory analyses indicated that future research should consider the influence of a practice's size, location, and company status on the frequency with which it promotes weight management services, products, and educational material online. The findings of this study raise awareness of the current state of weight management promotion for pets on veterinary practice websites and highlight ways to improve a practice's online presence. This can ensure pet owners have access to up-to-date information and trusted resources they can rely on.

Figure 1. Weight management services and information displayed (A) and weight management products displayed for sale (B) on websites of 50 veterinary practices in Ontario.

Table 1. Demographic information relating to the number of veterinary practices, veterinarians, and veterinary technicians in this study and in Ontario.

Table 2. Veterinary practice and staff demographic information displayed on websites of 50 veterinary practices in Ontario. Staff demographic data are based on the total number of veterinarians (n = 162) and veterinary technicians (n = 166) mentioned on all veterinary practice websites.

Table 3. Binary logistic regression models exploring the association between (model 1) whether current bodyweight measurement is advertised on practice websites (yes/no outcome) and the number of veterinarians and veterinary technicians working in the practice and (model 2) whether the sale of therapeutic diets is advertised on practice websites (yes/no outcome) and the number of veterinarians and veterinary technicians working in the practice. Bolded values indicate significance (p < 0.05).
Table 4. Binary logistic regression model exploring the association between whether the sale of treats and weight management accessories is advertised on practice websites (yes/no outcome) and the company status of the veterinary practice. Bolded values indicate significance (p < 0.05).
v3-fos-license
2014-10-01T00:00:00.000Z
2002-06-11T00:00:00.000
1725467
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmcinfectdis.biomedcentral.com/track/pdf/10.1186/1471-2334-2-10", "pdf_hash": "d236cce277c847f96c170cd9b7c16b8bb5b40356", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44720", "s2fieldsofstudy": [ "Medicine", "Biology" ], "sha1": "61a6ff04caaadf4a48a42bf56bf332638fcd6b5b", "year": 2002 }
pes2o/s2orc
Kinetics of maternal immunity against rabies in fox cubs (Vulpes vulpes)

Background

In previous experiments, it was demonstrated that maternal antibodies (maAb) against rabies in foxes (Vulpes vulpes) were transferred from the vixen to her offspring. However, data were lacking from cubs during the first three weeks post partum. Therefore, this complementary study was initiated.

Methods

Blood samples (n = 281) were collected from 64 cubs (3 to 43 days old) whelped by 19 rabies-immune captive-bred vixens. Sera were collected up to six times from each cub. The samples were analysed by a fluorescence focus inhibition technique (RFFIT), and antibody titres (nAb) were expressed in IU/ml. The obtained data were pooled with previous data sets. Subsequently, a total of 499 serum samples from 249 cubs whelped by 54 rabies-immune vixens were fitted to a non-linear regression model.

Results

The disappearance rate of maAb was independent of the vixens' nAb-titre. The maAb-titre of the cubs decreased exponentially with age, and the half-life of the maAb was estimated to be 9.34 days. However, maAb of offspring whelped by vixens with high nAb-titres can be detected for longer by RFFIT than those of offspring whelped by vixens with relatively low nAb-titres. At a mean critical age of about 23 days post partum, maAb could no longer be distinguished from unspecific reactions in RFFIT, depending on the amount of maAb transferred by the mother.

Conclusions

The amount of maAb cubs receive is directly proportional to the titre of the vixen and decreases exponentially with age, falling below detectable levels in seroneutralisation tests at a relatively early age.

Background

Campaigns of oral vaccination of foxes (Vulpes vulpes) against rabies have been shown to be a powerful tool in vulpine rabies control [1,2]. However, in some areas, (temporary) setbacks have been observed. These have been linked in part to a low vaccination coverage of the young fox population, which is possibly a result of maternally transferred immunity interfering with active oral immunization of fox cubs [3,4]. However, until recently, no experimental evidence was available to support this hypothesis. Recently, after more than 20 years of oral vaccination campaigns, it was finally demonstrated that maternally transferred immunity in fox cubs does occur after oral immunization of vixens against rabies [5,6]. During previous studies on maternal antibodies (maAb) against rabies in foxes, blood samples were taken only from animals aged 23 days or older [5,6], hampering insights into the kinetics of rabies maAb. To overcome this shortcoming, in the present study blood samples from fox cubs were collected during the first six weeks after birth. By merging these results on rabies virus neutralising antibodies (nAb) with those obtained during previous experiments in 1998 and 1999 [5], it was possible to quantify the temporal decline of maAb against rabies in fox cubs in general. This decline was examined in relation to one of the most important parameters influencing the initial level of maAb: the rabies nAb-titre of the mother animal. Furthermore, we tried to answer the question of the age at which maAb are no longer distinguishable from unspecific reactions in the seroneutralisation test used.

Material and methods

In Spring 2000, 64 cubs whelped by 19 vixens at the Fur Animal Breeding station 'Gleinermühle' (Söllichau, Germany) were included in this study.
The vixens were orally vaccinated with the attenuated rabies virus vaccine, SAD B19, shortly before mating or during early pregnancy. All vixens received 1.5-2.0 ml of SAD B19 (10^6.7 FFU/ml) by direct oral instillation. The cubs and vixens were marked individually by electronic identification (Indexel® Iso Transponder, Rhone-Merieux GmbH, Laupheim, Germany). Blood samples (n = 281) were taken up to 6 times per cub at different ages ranging from 3 to 43 days post partum. The study was performed according to the German Animal Welfare Act (Tierschutzgesetz) of 25 May 1998, and the experimental design was approved by the appropriate authorities. For ethical reasons, depending on the general constitution of the new-born cubs, only a small number of cubs (n = 6) were bled between day 3 and day 5 post partum. These six animals were euthanised using 1 ml of a 105 mg/ml barbiturate, Eunarcon® (Parke-Davis, Freiburg, Germany). From those animals, blood samples were taken from the heart during necropsy, whilst from the others blood samples were taken by puncture of the Vena safena. The serum samples were tested for the presence of nAb using the Rapid Fluorescence Focus Inhibition Test (RFFIT) as described by Smith et al. [7], with the modifications of that method described by Cox & Schneider [8]. Prior to testing, sera were heat inactivated for 30 minutes at 56°C. The nAb-titres were determined as described elsewhere [9] and were converted to International Units (IU) by comparison with an international standard immunoglobulin (2nd human rabies immunoglobulin preparation, National Institute for Standards and Control, Potters Bar, UK) adjusted to 0.5 IU/ml, which served as a positive control [10].

The obtained data were pooled with the results obtained during 1998 and 1999. A Kruskal-Wallis test [11] was performed to test whether these data sets could be merged. In accordance with Gooding & Robinson [12] and Krakowka et al. [13], we assumed an exponential decline of maAb of fox cubs (y) with time (i.e., age [days] of cubs [x]). We further assumed that maAb titres of newborn fox cubs depend on the nAb-titre of the mother animal (VT) in a non-linear way. Thus:

y = e^(a + b·ln(VT) − n·x)    (1)

The pooled data sets were used to estimate the model parameters. Subsequently, model (1) was used to calculate the half-life of maAb in fox cubs, which is given by ln(2)/n, and the age of fox cubs (critical age) when maAb fall below the threshold of 0.5 IU/ml. The critical age (x_c), at which maAb equals 0.5 IU/ml, is given by

x_c = (a + b·ln(VT) − ln(0.5)) / n    (2)

Results

Prior to whelping, the Geometric Mean Titre (GMT) of the 19 vaccinated vixens (21 days post vaccination) was 11.32 IU/ml. In Spring 2000, of the 64 cubs born of rabies-immune vixens, 61, 57, 58, 56, 34 and 15 were bled 1, 2, 3, 4, 5 and 6 times, respectively, during the first 43 days post partum. Serum samples of 3 cubs could not be assigned to the respective litters due to dysfunction of the transponder. Ninety (32.02%) of 281 sera had maAb-titres ≥ 0.5 IU/ml; the GMT of all blood samples collected was 0.41 IU/ml. There was great individual variation in the maAb-titres of cubs, especially in the first days of their life, ranging from 0.1 to 10 IU/ml. However, this variance in maAb-titres declined with age up to 43 days. This diminishing variance was particularly obvious in those cubs having maAb-titres above the threshold of 0.5 IU/ml (Fig. 1).
The comparison of the serological data for the 13-day overlap period (days 31-43 post partum) of the studies conducted in 1998-2000 showed that they were not statistically different (Kruskal-Wallis test, P > 0.09). The data were therefore pooled and comprised 499 serum samples taken from 249 cubs whelped by 54 rabies-immune vixens. The majority of the cub sera (369 of 499) had maAb titres below the threshold of 0.5 IU/ml. A regression line was fitted to the data, with a significant linear (p < 0.0001) decrease of log(maAb) with increasing age (Figure 2). The estimates of the parameters of model (1) are: a = 0.314, b = 0.329 and n = 0.0727. The model predictions are given in Figure 3. The calculated half-life of maAb against rabies is 9.34 days. Maternal antibodies of offspring whelped by vixens with high nAb-titres can be detected for longer in RFFIT than those of offspring whelped by vixens with relatively low nAb-titres. Using equation (2), the mean critical age at which cubs' titres equal 0.5 IU/ml in RFFIT was 23 days (range: 14-38 days), depending on the nAb-titre of the immunised vixen (Figure 4).

Discussion

Detailed knowledge of the kinetics of maAb against rabies in fox cubs was missing, but is essential to optimise the timing of oral rabies vaccination campaigns in spring in order to achieve maximum vaccination coverage of the fox population [14]. The study presented here completes previous experiments conducted in 1998 and 1999 [5,6,9,15] by providing data on maternal antibodies in fox cubs during the first weeks post partum. Obtaining blood samples from cubs at such an early age is not without risks. It is known that vixens are very sensitive during the first days after birth, and frequent manipulation during this period can lead to behavioural disorders, which often results in the loss of complete litters. However, during this study it was shown that blood samples can be taken from cubs aged 6 days or older without complications. At first sight, the relatively low (<0.5 IU/ml) maAb titres observed during the first three weeks after birth were surprising. The great variance in individual maAb-titres during the first three weeks (Fig. 1) was similar to that observed in older fox cubs (Fig. 2). Large differences in maAb-titres were observed even among littermates; these could be a result of differences in suckling behaviour among the cubs [16]. The initial level of maAb is influenced by many factors, e.g., the quality and quantity of colostrum and milk intake as well as body constitution (condition, birth weight) [16][17][18][19]. In another canid species, the domestic dog (Canis familiaris), the major transfer of maAb takes place during ingestion of colostrum and milk by the new-born [16,17,20]. In these animals, a limited transfer of maAb also occurs in utero [13,16]. Additional studies have been initiated to clarify whether or not this also takes place in foxes. Unfortunately, these and other possible factors are most of the time extremely difficult to assess, mainly due to the previously mentioned extreme sensitivity of the vixen to disturbance immediately before and after parturition. Pollock & Carmichael [21] mentioned that increasing dog litter size negatively influenced maAb levels in puppies. This effect, however, could not be observed in foxes [9]. The model presented here clearly identified another important parameter determining maAb levels: the nAb-titre of the mother animal.
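As a minimal numerical sketch of the fitted kinetics, the snippet below evaluates model (1) with the reported estimates (a = 0.314, b = 0.329, n = 0.0727). The functional form used here is the one reconstructed from the text (an assumption, not the authors' code). Note that ln(2)/0.0727 ≈ 9.5 days, slightly above the reported 9.34 days, presumably because the published parameter estimates are rounded.

```python
import math

A, B, N = 0.314, 0.329, 0.0727  # reported parameter estimates

def maab_titre(age_days: float, vixen_titre: float) -> float:
    """Predicted cub maAb titre (IU/ml), assuming model (1) as reconstructed."""
    return math.exp(A + B * math.log(vixen_titre) - N * age_days)

def critical_age(vixen_titre: float, threshold: float = 0.5) -> float:
    """Age (days) at which the predicted titre drops to the RFFIT threshold."""
    return (A + B * math.log(vixen_titre) - math.log(threshold)) / N

print(f"half-life: {math.log(2) / N:.1f} days")
print(f"critical age at GMT 11.32 IU/ml: {critical_age(11.32):.1f} days")
print(f"predicted titre at day 23, VT = 11.32: {maab_titre(23, 11.32):.2f} IU/ml")
```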
A directly proportional relationship between the serum titre of the mother and her offspring has been identified in many studies on maternally derived immunity [18,[20][21][22]. Our results indicate that the subsequent disappearance rate of maAb in fox cubs was independent of the nAb-titre of the vixen. The exponential decline of maAb against rabies in foxes corresponded with the maAb decline observed for other viruses in canine species, whereby maAb persist for up to 8-10 weeks on average [12,13,16]. The half-life of maAb against rabies in foxes was estimated to be 9.34 days, similar to that observed for maAb against canine distemper virus (8.4 days) and canine parvovirus (9.7 days) [13,21]. However, the disappearance of maAb is also linked to the sensitivity of the serological techniques used and the threshold chosen to distinguish between positive and negative results (Fig. 4). At an international level, nAb at concentrations ≥ 0.5 IU/ml, an arbitrarily defined threshold, are considered positive, whilst nAb below this threshold cannot be distinguished from unspecific reactions [10]. Following rabies vaccination of female dogs, maAb in puppies could be detected up to 6-7 weeks post partum, on average [16].

Figure 3. Non-linear regression model fitted to the combined data set (1998-2000), showing the maternal antibody (maAb) titre of cubs in dependency on the neutralizing antibody (nAb) titre of the mother animal (VT) and the age of the cubs (days); cub titres are classed as < 0.5 IU/ml or > 0.5 IU/ml.

Taking into account an estimated mean period of 23 days after birth during which maAb can be distinguished from unspecific reactions in RFFIT (nAb-titre ≥ 0.5 IU/ml), there is evidence that maAb-titres in fox cubs are not as high as in puppies and therefore appear to decrease more quickly than in dogs. Considering, further, the spring whelping season, maternal immunity against rabies in young foxes is very difficult to detect under field conditions. Thus, the 9-20% of young foxes having nAb following spring vaccination campaigns [23][24][25] may result exclusively from active immunization by bait uptake. It has been shown, however, that the detection of rabies maAb by immunoblotting is much more sensitive than by RFFIT, and consequently, using the former method, maAb could be detected for a longer period of time (Müller, unpublished results). The relative longevity of maAb at a low level results in interference between passively and actively acquired immunity for up to 8 weeks post partum, which more severely affected the ability of fox cubs to resist a virus challenge [9]. Taking this into account, in spring vaccination campaigns baits should not be distributed in previously baited areas before most cubs are more than 8 weeks of age. Therefore, to achieve an optimal immune response in young foxes, vaccination campaigns should be adjusted accordingly, depending on the geographical region. In areas vaccinated for the first time, however, baits can be distributed earlier, as cubs are already immunocompetent at 5 weeks of age [14].

Conclusions

The kinetics of maAb against rabies in fox cubs is similar to that observed in dog puppies; the amount of maAb cubs receive is directly proportional to the titre of the vixen, and it decreases exponentially with age to below detectable levels in seroneutralisation tests.
Thus, antibody titres detected in sera of young foxes submitted for investigation after spring oral vaccination campaigns are most likely a result of active immunization by bait uptake. Young foxes without detectable levels of rabies nAb-titres were either whelped by non-immunized vixens, or their maAb-titres had already dropped below the level of detection.
v3-fos-license
2024-04-17T15:10:34.150Z
2024-04-01T00:00:00.000
269176877
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2072-6694/16/8/1502/pdf?version=1713159565", "pdf_hash": "a02b6f88137cd443db3372374687aa5225773908", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44721", "s2fieldsofstudy": [ "Medicine" ], "sha1": "b2947fae996a3220ab7e7ff0aec36096fa4e6419", "year": 2024 }
pes2o/s2orc
The Performance of Different Parametric Ultrasounds in Prostate Cancer Diagnosis: Correlation with Radical Prostatectomy Specimens

Simple Summary

A systematic review assessed multiparametric ultrasound [mpUS] modalities in diagnosing prostate cancer via radical prostatectomy specimens. Between 2012 and 2023, eleven studies evaluated grayscale TRUS, SWE, CEUS, and mpUS. Sensitivity ranged from 37.7% to 55% for grayscale TRUS, 55% to 88.1% for SWE, 59% to 81% for CEUS, and 74% for mpUS, with specificities ranging accordingly. Notably, sensitivity for clinically significant prostate cancer was 55%, 73%, 70%, and 74%, respectively, with varying specificities. The Quality Assessment of Diagnostic Accuracy Studies-2 [QUADAS-2] tool was used to evaluate bias and applicability. The review underscores the significance of mpUS modalities in prostate cancer diagnosis, highlighting their varying sensitivity and specificity in detecting both overall and clinically significant prostate cancer lesions.

Abstract

Background: Prostate cancer is a prevalent cancer among men. Multiparametric ultrasound [mpUS] is a diagnostic instrument that uses various types of ultrasound to diagnose it. This systematic review aims to evaluate the performance of different parametric ultrasounds in diagnosing prostate cancer by correlation with radical prostatectomy specimens. Methodology: A review of various ultrasound parameters was performed using five databases. Systematic review tools were utilized to eliminate duplicates and identify relevant results. Reviewers used the Quality Assessment of Diagnostic Accuracy Studies-2 [QUADAS-2] tool to evaluate the bias and applicability of the study outcomes. Results: Between 2012 and 2023, eleven studies were conducted to evaluate the performance of the different parametric ultrasound procedures in detecting prostate cancer using grayscale TRUS, SWE, CEUS, and mpUS. The highest sensitivities of these procedures were 55%, 88.6%, 81%, and 74%, respectively. The highest specificities were 93.4%, 97%, 88%, and 59%, respectively. This high sensitivity and specificity may be associated with large lesion size. The studies revealed that the sensitivity of these procedures in diagnosing clinically significant prostate cancer was 55%, 73%, 70%, and 74%, respectively, while the specificity was 61%, 78.2%, 62%, and 59%, respectively. Conclusions: The mpUS procedure provides high sensitivity and specificity in PCa detection, especially for clinically significant prostate cancer.
Introduction

One of the most frequently diagnosed forms of cancer in males is prostate cancer [PCa], and it is the second leading cause of cancer death after lung cancer [1]. Three approaches are available for its exploration: screening, histopathology, and medical imaging. The screening method involves the measurement of prostate-specific antigen [PSA] and the digital rectal exam [DRE]. PSA accuracy can be impacted by elevated PSA levels: when PSA levels are high, the test's accuracy in detecting prostate cancer can decrease significantly. Therefore, a high PSA level should not be taken as a definitive diagnosis, and additional testing may be required to confirm any potential results [2]. The subjective nature of the DRE examination leads to low sensitivity and specificity [3]. The standard method for detecting prostate cancer that has been used over the years is a biopsy, typically performed under transrectal ultrasound [TRUS] guidance. However, biopsies have drawbacks, such as a high likelihood of missing cancer and the risk of causing rectal bleeding. In addition, more than half of patients who undergo systematic biopsies later require radical prostatectomy [RP] [4][5][6][7][8]. As a result, alternative methods are being employed to overcome these restrictions and enhance the precision of prostate cancer diagnosis. Magnetic resonance imaging [MRI] and ultrasound [US] are more efficient techniques for detecting prostate cancer. Despite being more sensitive than US, MRI has its limitations. It is considerably more expensive and not suitable for all patients, including those with pacemakers or ferromagnetic implants and those suffering from claustrophobia. Ultrasound, by contrast, is a more cost-effective, non-invasive, real-time procedure that is suitable for all patients. To enhance the performance of PCa detection, various parametric methods have been developed in addition to grayscale ultrasound [GUS], such as Doppler ultrasound [DUS], elastography ultrasound [EUS], contrast-enhanced ultrasound [CEUS], and micro-ultrasound [9]. Grayscale ultrasound imaging, based on the density of organs, shows variations in brightness and darkness. Prostate cancer typically presents as isoechoic, meaning it has the same echogenicity as the surrounding tissue, due to the prevalence of stromal fibrosis in prostate cancer tissue. However, if stromal fibrosis is minimal, hypoechoic PCa may be observed [10,11]. Doppler ultrasound, in the case of PCa, identifies abnormalities by examining the microvascular appearance of suspected lesions. These microvascular abnormalities are caused by increased angiogenesis in neoplastic tissue [12][13][14][15][16]. Microvascular flow is typically below the resolution of Doppler shift detection, which reduces Doppler sensitivity. Therefore, to visualize the inflow and outflow of blood in suspected lesions, an intravenous microbubble contrast agent is administered [17]. In CEUS, multiple measurements are derived to differentiate between normal and abnormal tissue. The Time-Intensity Curve [TIC] shows the contrast agent signal over time after injection, from which the Time to Peak [TTP] and Area Under the Curve [AUC] are derived. Additionally, Rise Time [RT] and Mean Transit Time [MTT] are quantitative measurements that can indicate the presence of neoplastic lesions [18]. Although this test is effective, it may be limited in detecting small lesions, and some patients may be allergic to the microbubble contrast agent [19,20].
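To make the TIC parameters above concrete, here is a simplified, hedged sketch of how TTP, AUC, rise time, and MTT might be computed from a sampled time-intensity curve. Real CEUS software typically fits pharmacokinetic models to the curve; this illustration uses direct numerical estimates on synthetic data.

```python
import numpy as np

# Synthetic time-intensity curve: a gamma-variate-like bolus washing in and out.
t = np.linspace(0.0, 60.0, 601)                 # seconds
intensity = (t / 10.0) ** 2 * np.exp(-t / 8.0)  # arbitrary units

peak_idx = int(np.argmax(intensity))
ttp = t[peak_idx]                               # Time to Peak [s]
auc = np.trapz(intensity, t)                    # Area Under the Curve [a.u. * s]

# Rise Time: time for the wash-in to climb from 10% to 90% of peak intensity.
peak = intensity[peak_idx]
wash_in = intensity[: peak_idx + 1]             # monotonic segment up to the peak
t10 = t[np.argmax(wash_in >= 0.1 * peak)]       # first sample at 10% of peak
t90 = t[np.argmax(wash_in >= 0.9 * peak)]       # first sample at 90% of peak
rise_time = t90 - t10

# Mean Transit Time: intensity-weighted mean of time (first moment of the curve).
mtt = np.trapz(t * intensity, t) / auc

print(f"TTP = {ttp:.1f} s, RT = {rise_time:.1f} s, MTT = {mtt:.1f} s, AUC = {auc:.1f}")
```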
Elastography is an important ultrasound technique used to measure tissue elasticity. It can be used to identify prostate cancer lesions, which are stiffer than the surrounding tissue. Ultrasound elastography comprises two different procedures: strain elastography [SE] and shear-wave elastography ultrasound [SWEUS]. SE provides a color map to differentiate between hard and soft tissue by inducing stress through transducer compression. In SWE, an acoustic radiation force impulse [ARFI] is generated in the tissue, and its stiffness is assessed based on the shear-wave velocity [21].

This systematic review will study the performance of these modalities in prostate cancer detection. Two previous studies completed systematic reviews of the performance of multiple ultrasound parameters in detecting prostate cancer [22,23]. Postema et al. [22] examined how ultrasound can be utilized to detect prostate cancer and how these methods can be combined to improve detection accuracy, known as multiparametric ultrasound [mpUS]. They included elastography ultrasound, contrast-enhanced ultrasound [CEUS], and/or Doppler US in their search terms, besides grayscale TRUS. Only one research database was used in their study, and the reference standards were biopsy and radical prostatectomy. The Alghamdi et al. [23] study was published recently; they analyzed the accuracy of various ultrasound parameters in PCa detection. They included CEUS, micro-ultrasound, and both elastography techniques, with biopsy and radical prostatectomy as the reference standards.

This study aimed to systematically review the performance of multiparametric ultrasound in prostate cancer detection. It focuses on radical prostatectomy specimens as the reference standard and on shear-wave elastography ultrasound [SWEUS] rather than strain elastography [SE]. The objective of this study is to provide an accurate and comprehensive understanding of the performance of different parametric ultrasounds in the diagnosis of prostate cancer and their limitations.
Materials and Methods A systematic review of the various ultrasound parameters was conducted using five databases: PubMed, Scopus, Cochrane, MEDLINE, and Embase. The search results were filtered to include all articles published between January 2012 and the most recent articles available in 2023. This time frame was selected because the initial clinical study on SWEUS for prostate cancer detection was carried out in 2012 [24]. The primary focus of the research was to assess the effectiveness of various ultrasound techniques, such as grayscale ultrasound, shear-wave elastography ultrasound, Doppler ultrasound, contrast-enhanced ultrasound, and multiparametric ultrasound, in detecting prostate cancer [PCa]. This systematic review was registered with PROSPERO [PROSPERO Registration ID 467274]. MeSH terms were applied in all databases in order to collect more articles related to the detection of prostate cancer using medical imaging modalities. Additionally, the search results were screened based on the title and abstract of the articles. The search term included the following: "prostatic neoplasms/diagnosis" [MeSH Terms] OR "prostatic neoplasms/diagnostic imaging" [MeSH Terms] AND ["prostate cancer detect*" [Title/Abstract] OR "prostate cancer locali*" [Title/Abstract] OR "prostate cancer diagnos*" [Title/Abstract]] AND ["ultrasound" [Title/Abstract] OR "ultrasonography" [Title/Abstract] OR "TRUS" [Title/Abstract] OR "transrectal ultrasound" [Title/Abstract] OR "gray scale ultrasound" [Title/Abstract] OR "doppler*" [Title/Abstract] OR "shear wave elastogra*" [Title/Abstract] OR "SWEUS" [Title/Abstract] OR "USWE" [Title/Abstract]]. The search term included all possible keywords related to ultrasound techniques, and the use of NOT was avoided to ensure that no relevant articles were missed. This systematic review followed the PRISMA guidelines and used the checklists in Tables S1 and S2 to ensure compliance.

Systematic review tools were employed to remove duplicates and screen against the eligibility criteria. The inclusion and exclusion criteria are listed in Table 1. The primary requirement for a study to be included was that it involve a parametric US examination of patients enrolled for radical prostatectomy as the reference standard. The most important outcomes considered were sensitivity and specificity, while other outcomes such as positive predictive value [PPV], negative predictive value [NPV], area under the ROC curve [AUC], and accuracy were also recorded. No restrictions were placed on the study's country, race, or clinical institution. The reviewers used the Quality Assessment of Diagnostic Accuracy Studies [QUADAS-2] tool to evaluate the risk of bias and applicability of the study results. The tool consists of four domains: patient selection, index test, reference standard, and flow and timing [25]. Review Manager [RevMan] version 5.4 was used to perform the quality assessment. The QUADAS-2 signaling questions were interpreted by the authors and then answered accordingly [26,27].
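For readers who want to reproduce the database query, the following is a minimal sketch of how the PubMed portion of the search could be executed programmatically. It assumes Biopython's Entrez interface; the email address, retmax value, and the condensed form of the query string are illustrative assumptions rather than the review's actual retrieval code.

```python
# Minimal sketch: executing a condensed form of the review's PubMed query
# via Biopython's Entrez interface. Email and retmax are placeholders.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # required by NCBI; placeholder

query = (
    '("prostatic neoplasms/diagnosis"[MeSH Terms] '
    'OR "prostatic neoplasms/diagnostic imaging"[MeSH Terms]) '
    'AND ("prostate cancer detect*"[Title/Abstract] '
    'OR "prostate cancer locali*"[Title/Abstract] '
    'OR "prostate cancer diagnos*"[Title/Abstract]) '
    'AND ("ultrasound"[Title/Abstract] OR "ultrasonography"[Title/Abstract] '
    'OR "TRUS"[Title/Abstract] OR "shear wave elastogra*"[Title/Abstract])'
)

handle = Entrez.esearch(
    db="pubmed",
    term=query,
    mindate="2012/01/01",
    maxdate="2023/12/31",
    datetype="pdat",   # filter on publication date, matching the review window
    retmax=500,
)
record = Entrez.read(handle)
handle.close()
print(f"{record['Count']} records matched; first IDs: {record['IdList'][:5]}")
```

The same query string would need to be adapted to the syntax of the other four databases, which is why systematic reviews typically document the full search strategy per database, as in Tables S1 and S2.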
Results The database results and eligibility screening used to identify the included studies are briefly presented in Figure 1. Of the numerous studies screened, only 11 met the inclusion criteria and were included in this analysis. These studies were published between 2012 and 2023. Most of the excluded studies were related to MRI, CT, and PET scans for prostate cancer, and a significant number were excluded due to the absence of histopathological data from radical prostatectomy. Among the included studies, SWEUS was the modality most frequently evaluated against radical prostatectomy results. Only one study compared the performance of mpUS in PCa detection with radical prostatectomy results, and no studies were found that assessed the performance of Doppler ultrasound in PCa detection against histopathological results from radical prostatectomy. The author's name, year of publication, number of patients, lesion size, and study outcomes, such as sensitivity, specificity, PPV, NPV, accuracy, and AUC, of the included studies are presented in Table 2. The outcomes of each study were averaged, and the details were carefully considered.

Grayscale Ultrasound Grayscale imaging in the included studies showed low sensitivity, likely due to the imaging factors employed, such as the ultrasound frequency. Two studies [28,29] used low-frequency ultrasound for different objectives. For instance, Zhu et al. [28] aimed to compare the accuracy of real-time elastography [RTE] with grayscale imaging. They used a bi-plane ultrasound probe to examine 56 patients and found that RTE has slightly higher accuracy than grayscale ultrasound: although the true-negative ratio is high in grayscale imaging, the true-positive ratio in RTE is higher.

The performance of two ultrasound frequencies, 5 MHz and 21 MHz, was compared in 25 patients [29]. High frequency provides higher image resolution, but its penetration depth is limited, making it unsuitable for deeper organs. The lower-frequency ultrasound demonstrated low sensitivity, positive predictive value [PPV], and negative predictive value [NPV]; nonetheless, both frequencies showed similar specificity. To avoid bias in the results, not all patients started the examination with the same probe: some started with the high-frequency probe, and others with the lower-frequency one.

Mannaerts et al. [30] examined the performance of grayscale mpUS for different grades of clinically significant prostate cancer [csPCa] based on the Likert score, where a score of ≥3 referred to intermediate csPCa and a score of ≥4 referred to high csPCa. Grayscale imaging is the primary technique used to localize prostate cancer, but its performance is low. Grayscale imaging was examined and compared with the RP result in the peripheral zone [PZ] and transition zone [TZ]. Grayscale imaging is less sensitive for localizing high-grade csPCa in all prostate zones than for intermediate-grade csPCa. The data suggest that detection of prostate cancer [PCa] in the peripheral zone [PZ] is significantly higher than in the transition zone [TZ], with sensitivities of 57.1% and 21%, respectively. However, the specificity of detecting PCa in the PZ is comparatively lower, at 62%, than in the TZ, which has a specificity of 83.2%.
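To make the outcome measures reported throughout these results concrete, the short sketch below computes sensitivity, specificity, PPV, NPV, and accuracy from confusion-matrix counts. The counts used are hypothetical and do not come from any included study.

```python
# Illustrative computation of the diagnostic accuracy measures reported
# throughout this review. The confusion-matrix counts are hypothetical.
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Example: 40 true positives, 15 false positives, 35 true negatives,
# and 10 false negatives (hypothetical lesion-level counts).
for name, value in diagnostic_metrics(tp=40, fp=15, tn=35, fn=10).items():
    print(f"{name}: {value:.3f}")
```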
Shear-Wave Elastography Ultrasound Several factors need to be considered in shear-wave elastography performance, such as the cutoff value (the threshold used to decide malignancy), lesion location, and prostate size. Morris et al. (2021) [35] and Tyloch et al. (2023) [21] enrolled similar numbers of patients [36 and 30, respectively] using a similar ultrasound transducer; only the patient position differed. The cutoff value in [21] was 35 kPa, and the sensitivity and specificity were 71.8% and 70.2%, respectively. In contrast, the cutoff value in [35] was 91.4 kPa, which yielded a sensitivity, specificity, PPV, NPV, and AUC of 81%, 82%, 69%, 89%, and 0.48, respectively. Moreover, in [35], a 3D shear-wave elastography procedure was developed by acquiring more than 100 images at 1 to 1.5 degrees of angular spacing. Patients were scanned by SWEUS, ARFI, and B-mode. The mean elasticity of the peripheral zone [PZ] was lower than that of the central prostate [69.1 kPa versus 84.3 kPa, respectively], and the mean elasticity of PCa lesions was 108 kPa.

Dai et al. [34] reported performance at several cutoff values in a slightly larger population. The sensitivity, specificity, and AUC were 81.3%, 82.4%, and 0.816, respectively, at a cutoff of 84 kPa. At a cutoff of 71 kPa the AUC was 0.776, with sensitivity and specificity of 78.1% and 76.5%, respectively. At a cutoff of 60 kPa, the sensitivity and specificity fell to 68.8% and 70.6%, respectively. In addition, they showed a correlation between SWEUS elasticity and the Gleason Grade Group: the highest-grade group of PCa had an elasticity range of 84.1-117.2 kPa, while the lowest-grade group had an elasticity range of 41.6-67.3 kPa.

In 2023, Tyloch et al. [21] conducted a study comparing the performance of strain and shear-wave elastography and found that the Gleason score [GS] had a more significant impact on sensitivity than lesion size. For GS scores of 3, 4, and 5, the sensitivity was 54.6%, 81.36%, and 93.8%, respectively. The study showed that increasing the cutoff value could yield high specificity but low sensitivity. Additionally, the average elasticity of benign, low-grade, intermediate-grade, and high-grade prostate cancer [PCa] was 36.43, 43.41, 55.93, and 66.81 kPa, respectively. It is essential to note, however, that these values average both shear-wave and strain elastography.

Rouviere et al. [32] explored the impact of the size and location of prostate lesions on the results of elasticity testing. They found that 75% of patients did not have any elasticity data in the transition zone [TZ] due to a blind zone caused by an enlarged prostate gland. SWEUS detected more lesions in the peripheral zone [PZ] than in the TZ, particularly for lesions measuring 5 cm³ or larger. Although the stiffer lesions were found in the TZ rather than in the PZ, the elasticity data available for the TZ were limited. Finally, the study showed a difference in elasticity between the axial and sagittal planes, possibly due to increased pressure on the prostate gland during an axial scan.
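The cutoff-dependent trade-off described above can be illustrated numerically. The sketch below sweeps a stiffness cutoff over synthetic lesion elasticity values; the distributions are invented for illustration and only reproduce the direction of the trade-off, not any study's actual data.

```python
# Synthetic illustration of how raising the stiffness cutoff trades
# sensitivity for specificity, as reported for SWEUS above.
import numpy as np

rng = np.random.default_rng(0)
benign = rng.normal(loc=40.0, scale=12.0, size=200)     # hypothetical kPa
malignant = rng.normal(loc=95.0, scale=25.0, size=200)  # hypothetical kPa

for cutoff in (60.0, 71.0, 84.0):  # the cutoffs examined by Dai et al. [34]
    sensitivity = np.mean(malignant >= cutoff)  # malignant lesions flagged
    specificity = np.mean(benign < cutoff)      # benign lesions cleared
    print(f"cutoff {cutoff:5.1f} kPa -> "
          f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
```

As the cutoff rises, fewer benign lesions are misclassified (specificity increases) but more stiff cancers fall below the threshold (sensitivity decreases), which is the pattern Dai et al. and Tyloch et al. report.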
Wei et al. [33] performed a study to evaluate the effectiveness of shear-wave ultrasound elastography in detecting significant prostate cancer. They measured the elasticity of the prostate gland in relation to the Gleason Score [GS], using a high cutoff value of 82.6 kPa. The results showed that the test's sensitivity increased with higher GS and larger PCa lesion size, although there was no significant difference in PCa elasticity according to lesion size. The study concluded that the elasticity of low-, intermediate-, and high-grade PCa was 91.6, 102.3, and 131.8 kPa, respectively; the elasticity for each GS was also reported. Figure 2 shows a true-positive SWEUS result in PCa detection correlated with the RP result.

In a study by Mannaerts et al. [30], the effectiveness of SWEUS in detecting csPCa was examined. The study revealed that SWEUS may have limited sensitivity in detecting high-grade csPCa across all prostate zones compared to intermediate grade. However, when examined by zone, SWEUS demonstrated higher sensitivity in detecting PCa in the PZ [44.1%] than in the TZ [37%], while specificity was similar in both zones.

Contrast-Enhanced Ultrasound Two studies assessed the effectiveness of contrast-enhanced ultrasound [CEUS] in patients with prostate cancer [PCa] and evaluated the results against radical prostatectomy [36,37]. In both studies, a 2.4 mL bolus of SonoVue® microbubble contrast agent was injected intravenously. In the first study [36], the contrast agent's wash-in and wash-out were recorded for 3 min in the prostate gland. The study evaluated performance based on three readings: the slope of the time-intensity curve [TIC], rise time [RT], and mean transit time [MTT]. The TIC reading provided a sensitivity of 82%, specificity of 96%, PPV of 100%, and NPV of 57%. The MTT reading provided a sensitivity of 73%, specificity of 78%, PPV of 89%, and NPV of 41%. Finally, the RT reading showed a sensitivity of 58%, specificity of 81%, PPV of 80%, and NPV of 37%. Notably, the number of cases analyzed for these three measurements was unequal: of the 34 PCa lesions in total, 30/34, 28/34, and 23/34 were analyzed by TIC, MTT, and RT, respectively.
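As a rough illustration of the time-intensity curve readings named above, the sketch below derives simplified TTP, RT, MTT, and AUC values from a synthetic contrast-enhancement signal. Both the signal and the particular reading definitions (10-90% rise time; intensity-weighted mean transit time) are illustrative assumptions, since clinical implementations vary by vendor.

```python
# Simplified time-intensity curve (TIC) readings from a synthetic
# contrast-enhancement signal. The RT and MTT definitions used here are
# common illustrative choices, not any vendor's exact implementation.
import numpy as np

t = np.linspace(0.0, 120.0, 601)                          # seconds after injection
intensity = 100.0 * (t / 25.0) * np.exp(1.0 - t / 25.0)   # synthetic bolus curve

peak_idx = int(np.argmax(intensity))
ttp = t[peak_idx]                            # time to peak [s]
auc = np.trapz(intensity, t)                 # area under the curve
# Rise time: time from 10% to 90% of peak intensity on the wash-in side,
# which is monotonically increasing, so searchsorted is safe here.
wash_in = intensity[: peak_idx + 1]
t10 = t[np.searchsorted(wash_in, 0.1 * intensity[peak_idx])]
t90 = t[np.searchsorted(wash_in, 0.9 * intensity[peak_idx])]
rt = t90 - t10
# Mean transit time approximated as the intensity-weighted mean time.
mtt = np.trapz(t * intensity, t) / auc

print(f"TTP = {ttp:.1f} s, RT = {rt:.1f} s, MTT = {mtt:.1f} s, AUC = {auc:.0f}")
```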
In the second study (Postema et al., 2020) [37], the outcomes of CEUS and contrast-enhanced ultrasound Doppler imaging [CUDI] were compared for clinically significant prostate cancer [csPCa]. A low-frequency TRUS probe was used, and a contrast-specific power-modulation pulse scheme at 3.5 MHz was applied with a low mechanical index of 0.06 for contrast signal reading. The recording time was 2 min after contrast injection, and the inflow and outflow contrast readings were evaluated to diagnose csPCa. The sensitivity and specificity of CEUS, CUDI, and their combination were almost the same, with CEUS showing slightly higher specificity. Finally, it was found that raising the cutoff value of the cancer detection rate decreases sensitivity and increases specificity for CEUS, CUDI, and their combination.

Mannaerts et al. [30] used CEUS in the mpUS procedure. The ultrasound contrast agent was administered three times to define and detect the lesions, but no further details were provided on the type of reading used for lesion evaluation. The study detailed the outcome of CEUS for csPCa in the entire prostate gland, PZ, and TZ, at intermediate and high grade. The sensitivity and specificity of CEUS in detecting PCa in the PZ were found to be 58.3% and 70%, respectively, while the sensitivity and specificity in the TZ were 37.65% and 79%, respectively.

Multiparametric Ultrasound The average sensitivity of mpUS was clearly higher than that of the individual ultrasound parameters. Although only one study [30] reported mpUS performance, it provided results based on Likert scores [≥3 and ≥4] and lesion location [PZ and TZ]. Overall, the sensitivity of mpUS is higher than that of the other procedures individually, while its specificity is lower. The sensitivity of mpUS was higher for detecting PCa in the PZ, averaging 73.5%, whereas in the TZ it averaged only 48.3%. The average specificity of mpUS was 65.8% in the PZ and 77.16% in the TZ.

Other Results Five of the included studies correlated the ultrasound techniques' results with clinically significant prostate cancer. Three [21,33,34] were shear-wave elastography studies, one [37] used contrast-enhanced ultrasound, and the last used mpUS [30]. Table 3 shows the average outcomes of these techniques in detecting csPCa. The quality assessment of the selected studies showed a low risk of bias overall, as depicted in Figures 3 and 4. However, two of the selected studies had a high risk of bias in patient selection, as the selection was non-randomized [34,37], and three studies had a high risk of bias in flow and timing [28,30,37].

Discussion This systematic review aimed to evaluate the diagnostic accuracy and performance of various ultrasound techniques, including grayscale, shear-wave elastography, and contrast-enhanced imaging, in detecting prostate cancer. The analysis involved assessing the sensitivity and specificity of these modalities with respect to several factors.
There is debate about whether grayscale ultrasound is the best way to detect prostate cancer. While it is considered the gold standard among the US parameters for detecting neoplastic growths in the prostate, its ability to detect prostate cancer is limited even after a biopsy has confirmed the presence of cancer, regardless of the number of samples taken [38]. Based on the studies included in this systematic review, the sensitivity of prostate cancer detection ranges from 25% to 56%, while the specificity ranges from 37.7% to 55%. These ranges may increase in the presence of clinically significant prostate cancer [39]. High-frequency ultrasound is a valuable tool for detecting prostate cancer due to its exceptional sensitivity. Micro-ultrasound technology is a prime example, utilizing ultrasound waves with frequencies exceeding 29 MHz to achieve superior resolution and tissue characterization compared to traditional ultrasound methods [40,41]. In addition, the location and size of the PCa lesion influence sensitivity [28]. Studies have shown that a high percentage of PCa lesions are hypoechoic, while others are isoechoic or hyperechoic [10,11,42-44]; this diversity in PCa echogenicity is due to stromal fibrosis, as mentioned before. However, one study indicated that hypoechoic lesions have less stromal fibrosis than isoechoic PCa [10]. Another study [43] explained that hyperechoic PCa lesions are rare and are usually ductal adenocarcinomas containing central necrosis and calcification. The radiographic features of PCa under US are well documented, but PCa's appearance is mainly hypoechoic, which is one of the limitations in detection: other prostatic conditions, such as prostatitis and benign prostatic hyperplasia, can also cause hypoechogenicity, so further testing may be required [45]. Thus, correctly identifying non-cancerous hypoechoic lesions is essential, while missed neoplastic isoechoic lesions decrease sensitivity. This finding corresponds with previous studies that have examined the correlation of grayscale findings with biopsy [46,47]. Conversely, Sauvain et al. [48] compared the efficacy of grayscale and power Doppler sonography in detecting prostate cancer. The results obtained were superior to those of the reviewed studies; however, they were based on biopsy, and the specifics of the grayscale ultrasound procedure were not clearly outlined.
SWE has been used to assess lesion stiffness as an alternative that is independent of lesion echogenicity [24,49]. According to the current study, the sensitivity and specificity of SWEUS for detecting PCa generally range from 55% to 88.6% and from 69.1% to 97.3%, respectively, while for csPCa they range from 55% to 88.6% and from 61% to 97.3%, respectively. As mentioned before, the elasticity cutoff can either improve or degrade the performance of SWEUS [21,34,35]: if the stiffness cutoff is lowered, more patients with the disease are identified, but more normal cases are also misdiagnosed as abnormal [50]. Therefore, an optimal cutoff value must be determined to limit false-positive results. The lowest sensitivity may be due to missing SWE readings in the PZ caused by ultrasound heterogeneity. The highest specificity of SWEUS readings in the TZ, as shown in [30,32], is presumably due to the high stiffness of the TZ, which makes identifying the suspicious area easier.

This systematic review discussed the shear-wave elasticity of benign tissue, malignant PCa, and csPCa, and the findings should therefore be divided into general PCa and csPCa studies. The stiffness ranges of benign tissue, PCa, and csPCa were 11-65 kPa, 12-108 kPa, and 43.41-131 kPa, respectively, in agreement with several studies [49-52]. The lowest reported range for prostate cancer was attributed to the depth range of SWE [32].

Contrast-enhanced ultrasound is another advanced modality for prostate cancer detection that shows enhancing lesions in real time, regardless of their echogenicity [53,54]. The sensitivity and specificity of the CEUS procedure in detecting PCa ranged from 59% to 81% and from 63% to 88%, respectively. The defining result in both studies [36,37] was high sensitivity, while [36] also showed high specificity, probably due to the longer scan time after injection; [37] agreed that the recording time for PCa detection across the entire prostate gland was insufficient. In addition, this high sensitivity is related to lesion size: a large lesion has a greater blood supply and increased blood flow, which enhances the perfusion pattern and increases the ability to visualize the lesion under CEUS.
We believe this study has explained the performance of the different ultrasound modalities in PCa detection in detail and associated it with the most accurate histopathological reference. However, the study has limitations that should be considered. First, the number of selected studies was relatively small; as a result, the reported performances of the ultrasound parameters were closely clustered. Second, no studies were available that assessed the performance of Doppler ultrasound in detecting prostate cancer, because there is a lack of studies correlating Doppler ultrasound with radical prostatectomy results. From 2012 to 2023, four studies evaluated the performance of Doppler ultrasound in detecting prostate cancer, but their outcomes were based on biopsy results only [55-58]. The review was also intended to discuss the results of multiparametric ultrasound in detecting prostate cancer, but only one included study provided outcomes based on radical prostatectomy results. According to the search results, multiparametric ultrasound studies in prostate cancer detection first appeared in 2013; two studies [59,60] evaluated the performance of multiparametric ultrasound based on biopsy results. The limited number of studies and data, especially for multiparametric ultrasound, may make it difficult to compare the different procedures. Another limitation of the included studies is that the diagnosis of prostate cancer was initially confirmed by biopsy, which could introduce bias.

Based on the results of this systematic review, we recommend that future studies assess the performance of multiparametric ultrasound in detecting prostate cancer against radical prostatectomy results. There is also a need for further studies on grayscale, contrast-enhanced, and Doppler ultrasound in detecting prostate cancer, with performance evaluated against radical prostatectomy results. Moreover, to our knowledge, no study has addressed the influence of echogenicity on shear-wave measurements. Only a few studies focused on clinically significant prostate cancer in any modality, so further studies targeting clinically significant prostate cancer are needed. In addition, the accuracy of prostate cancer detection was not provided in 8 of the 11 studies, which should be addressed in future work.

Conclusions This systematic review presents qualitative results from studies examining multiple ultrasound modalities for prostate cancer detection based on histopathological results from radical prostatectomy. The modalities assessed include grayscale imaging, shear-wave elastography, and contrast-enhanced ultrasound. When used in combination, these techniques enhance the performance of ultrasound parameters in detecting prostate cancer, providing quantitative information that corresponds well with in vivo results.

Funding: This research received no external funding.
Figure 2. Histopathology results compared with 12-region SWE images obtained in a 73-year-old patient. (a) The whole set of prostate slices at 3 locations, including gland base, mid, and apex. (b) A 12-region prostate imaging template. (c) Representative ultrasound images of the apex, including the SWE image at the top and the b-mode image below [33].
Figure 3. Risk of bias and applicability graphs: review authors' evaluations of each domain, presented as percentages across the included studies.
Figure 4. Risk of bias and applicability concerns summary: review authors' judgments about each domain for each included study [21,28-37].
Table 1. Inclusion and exclusion criteria.
Table 2. Number of studies in each modality, with the average outcomes in each study.
Table 3. The performance of different ultrasound modalities in clinically significant prostate cancer diagnoses.
Pretty in Pink: The Susan G. Komen Network and the Branding of the Breast Cancer Cause The pink ribbon is a ubiquitous fixture on the consumer landscape of contemporary America. Emerging over the last two decades as the symbol for the fight being waged against breast cancer, the color and image now adorn packaging for everything from trash bags to cosmetics, cereal to cleaning products, postage stamps to guacamole. The already pink Energizer bunny now dons a pink ribbon as he keeps going and going to fight breast cancer as well as power the nation's electronic devices. The National Football League donned pink during October 2009 in support of National Breast Cancer Awareness Month, and Muslim women veiled themselves in pink hijabs for the annual Global Pink Hijab Day at the end of October. 1

While it has raised cognizance of the need for exams and screening, the pink ribbon phenomenon spearheaded by the Susan G. Komen machine reveals much darker realities about American marketing, consumerism, philanthropy, gender relations, and the perils of branding. The Komen brand has achieved virtually unrivaled cachet in the philanthropic world. With all of this quasi-consumer success, however, have come all the pitfalls inherent in such success. This analysis will show that while philanthropic brands must undertake many of the same strategies for success as corporate brands, and while philanthropic brands are not immune to the problems facing corporate brands, their cultural resonance and ultimate non-capitalist orientation do afford them a more readily earned and maintained social legitimacy than their corporate counterparts. This raises the question: are the capitalist strategies of corporate branding prettier in pink?

Background Susan G. Komen the network takes its name from Susan G. Komen the woman, a breast cancer victim who died of the disease in 1980. Out of her sister Nancy Brinker's grief came the organization that has shone a brighter light on the tragedy of breast cancer than any other advocacy group in the country. 2 Additionally, because Brinker's focus was always on her sister and her sister's memory, the network gave a face to the disease. 3 At a time when breast cancer was discussed in hushed tones and treated as taboo by its victims, their families, and the public at large, the Komen Network, building upon the work done by former first lady Betty Ford, removed the stigma, started the conversation, and prompted a complete reversal in public perceptions and attitudes. Today breast cancer is an openly discussed part of American culture, with the month of October devoted yearly to its eradication in the United States for nearly a quarter century.

With Susan G. Komen as the personification of breast cancer's everywoman, the network launched its advocacy efforts in 1982. Prior to launching the network, Brinker had been a member of the executive training program for Neiman Marcus, a talk-show host, and a director of public relations for the Hyatt Regency Hotel in Dallas. More recently, Brinker served as United States Ambassador to Hungary and Chief of Protocol in the George W. Bush Administration (Leone 2009). She took her experience and success in the corporate arena and applied it to the non-profit sector. The result was the Susan G. Komen Breast Cancer Foundation (which changed its name to Susan G. Komen for the Cure in 2007), an organization that boasts more than 100,000 volunteers working through a network of 125 United States and international affiliates (Collins 2009).
The structure and attitude of the network, as well as its unparalleled success, reveal sometimes unfortunate realities of corporate America and women's place in it as much as they reflect the tragedy of breast cancer. Nancy Brinker set out to found an organization of women, for women, in which they would be empowered not just to fight a disease intimately associated with femininity, but to run a multi-million-dollar, multinational organization committed to the eradication of that disease. According to the Susan G. Komen for the Cure website, "we're proud of the fact that we don't simply dump funds and run. We create activists - one person, one community, one state, one nation at a time - to try and solve the number one health concern of women" (Brinker 2010).

The Network's claim that breast cancer is the "number one health concern of women" alludes to both the character and the critique of the Komen Network's activism. By the numbers, breast cancer should not be the number one health concern of women. According to the American Heart Association, half of all women who will die this year will die from heart disease or stroke: 500,000 per year compared to 40,000 from breast cancer. Yet 67% of women name breast cancer as their biggest health concern, compared to 7% for heart disease and 1% for stroke (Mosca et al. 2003). Thus, breast cancer is the health threat about which women are most aware. Additionally, though men can get and are getting breast cancer in increasing numbers, the disease is generally perceived as a female affliction. Thus, breast cancer activism targets women, and when it reaches out to men, as it frequently does, it is typically in the context of helping women. Women have been victimized by breast cancer, but spouses, fathers, brothers, and sons can take up the fight to protect and/or save women from this disease by participating in breast cancer philanthropy.

The Komen for the Cure website claims that every major advance in the fight against breast cancer has been touched by the network, its people, and its advocacy. Komen for the Cure has "helped train more than 400 breast cancer researchers and funded more than 1,800 research projects over the past 26 years." They have provided more money for breast cancer research and community health programs than any entity besides the United States government, and Komen for the Cure's goal is to "energize science to find the cures" ("Why Komen?"). The Komen Network has raised 1.3 billion dollars for research, education, and health services. Today Komen for the Cure has members and conducts activities in over 50 countries.
The measure of Komen for the Cure's success in the battle against breast cancer is found as surely in these numbers of billions of dollars raised for research as in the survivorship rates of those stricken with the disease. In these two sets of numbers we see the two faces of the Komen organization. The former is the face of high finance and corporate America, where the skills Nancy Brinker honed in her for-profit past have been put to good use in her not-for-profit present. These numbers encompass an advertising/marketing juggernaut in which dozens of high-profile national sponsors help Komen for the Cure raise millions annually to continue its work against breast cancer. Komen's Million Dollar Council, for example, is comprised of twenty businesses with million-dollar annual contributions. Corporations such as Avon, General Electric, Bristol Myers Squibb, Ford Motors, and Lee Jeans are among the ranks of Komen's corporate sponsors (Million Dollar Sponsor). On the other end of the philanthropic/activist spectrum are the tens of thousands of grassroots volunteers, many of them breast cancer survivors, who take the Komen message from Wall Street to Main Street and personalize the battle being waged against this disease. It is through the efforts of this latter group, the everyday activists, that the Komen Network achieves and maintains much of its social legitimacy, a legitimacy sometimes threatened and even eroded through corporate sponsorship.

Grassroots activism Many of the everyday pink ribbon volunteers, participants in Komen for the Cure activities, and consumers of pink ribbon products are motivated to participate in Komen's quest for a cure for breast cancer because the disease has personally affected them. The Race for the Cure events, held annually in scores of cities around the country, are likely the most well-known and most effective elements of Komen's advocacy and awareness-raising campaigns. They attract some serious runners and tens of thousands of walkers. Each participant's admission and/or pledges provide the basis of the fundraising effort. Equally important to the revenue raised, however, is the politicized character of the races, which take on many of the sociological characteristics of a march as opposed to a fun run. The racers occupy a public space; by their sheer numbers and location they garner media and popular attention. Additionally, due to the prominent place afforded current patients and survivors in the races, they are truly empowering events that succeed in turning an everyday activity, and its participants, into activists marching for a cure. As evidence of the widespread success of the Races for the Cure, Komen announced on March 10, 2009 the first annual Global Race for the Cure, which funds breast cancer programs for the medically underserved throughout the National Capital Area and abroad ("International Races").
The runners and walkers in the dozens of Races for the Cure that take place annually remind all who see them of the human tragedy that is cancer, and as such they form a crucial moral and empathetic bulwark of the Komen for the Cure initiatives. It is unquestionable that the Komen Network could not have reached its present level of success without the invaluable assistance of the members of its Million Dollar Council, but it is these tens of thousands of runners and walkers who form the sociocultural structure upon which the marketing campaigns of the iconic Komen brand find resonance with American consumers. As shall be discussed below, the pink ribbon affixed to a box of cereal or bottle of detergent prompts us, the American consumers, to purchase said cereal or detergent not because it symbolizes the corporate beneficence of Kellogg's or Tide, but because it reminds us all of the mothers, daughters, sisters, and friends who have been afflicted by this disease and those who run or walk on their behalf, or perhaps in their memory, every year. As we shall see, the corporate component of the Komen agenda is formidable and lucrative, but much of that strength and success rests on the individuals whom the disease has affected and who take to the streets to march for the cure.

Marketing a disease When the noble actions of these running, walking, buying activists are juxtaposed with the far more questionable actions of corporate profiteering, the Komen for the Cure organization becomes the subject of greater scrutiny and the focus of legitimate criticism. The Komen Network has been questioned, even vilified, for a marketing strategy that at best makes it a pawn to the corporate mandate and at worst makes it complicit in the manipulation of American consumer behavior and philanthropic impulse. Those who question it point out that Komen is profiting from a disease that it claims it wants to eradicate. If this disease is indeed eradicated, how will the Komen Network sustain itself? Inherent in all the philanthropic rhetoric surrounding the organization are this "conflict of interest" and the fact that the organization is using for-profit corporate marketing strategies and making millions of dollars. To understand this conflicted polarity and the development of this conflict of interest, we must examine the history of the Susan G. Komen brand, the nature and meaning of iconic brands, the unique characteristics of branding in the non-profit and/or philanthropic sector, and the cultural context within which all of this occurs.

The branding of Komen for the Cure made it the organization it is today. As an advertising executive, Nancy Brinker was well aware of the power of a brand. Ad agency founder David Ogilvy's definition of a brand is "the intangible sum of a product's attributes: its name, packaging, and price, its history, its reputation, and the way it's advertised" (quoted in Dvorak 2009: 10). A brand is a promise that a product or an organization makes to its constituency; it succeeds by making an emotional connection to a target audience (Dahlén et al. 2010: 195). The genius of the Susan G. Komen brand is that it taps into highly emotional issues. Founder Nancy Brinker used the name and memory of her dead sister to start the organization and launch its activism. The power of this message is that most Americans can relate to the loss of a loved one or have lived with the fear of such a loss.
One of Komen for the Cure's attributes is its logo or trademark, the pink ribbon, which is the centerpiece of its brand. According to published reports, the pink breast cancer ribbon was originally peach. In the early 1990s, 68-year-old Charlotte Haley, whose mother, grandmother, and sister had all had breast cancer, made peach-colored loops at her dining room table. She distributed the ribbons in sets of five along with a card that said: "The National Cancer Institute annual budget is $1.8 billion, only 5 percent goes for cancer prevention. Help us wake up our legislators and America by wearing this ribbon." 5

In a truly grassroots campaign to defeat breast cancer, Haley passed out cards in her community, wrote to prominent women, and spread her message by word of mouth. Self Magazine asked Ms. Haley if it could take her peach ribbon campaign national, but she did not want her crusade to bring awareness to the cause to become too commercial. To avoid legal trouble, Self Magazine's attorney advised it to use another color, and it chose pink. In 1991, pink ribbons were handed out at the Race for the Cure in New York City. In 1992, Self Magazine, in partnership with Estee Lauder, launched its pink breast cancer ribbon campaign. Estee Lauder distributed 1.5 million ribbons along with laminated cards describing how to conduct breast self-examination. Within the year, the peach ribbons were forgotten (Fernandez 1998).

Of course, the ribbon is a symbol that dates back decades and was for much of its iconographic history associated with the return of soldiers from war. Similarly, the color pink has been associated with femininity since the 1940s, though more directly with infants and children than with adult women. Thus, the fusion of the ribbon and the color pink became one of the most potent branding symbols in modern marketing. Komen adopted a familiar advertising technique by taking an already popularized symbol, making it its own, and expanding its influence in the consumer marketplace.
When this technique is used successfully to create a symbol that resonates widely in the marketplace, it is said to have acquired brand recognition. When this recognition increases to the point where there is enough positive attitude and response to it in the culture in which it exists, it is said to have achieved brand franchise. The Pink Ribbon campaign can be said to have reached brand franchise, proven by the sheer fact that 67% of women said that breast cancer is their number one health concern when, as mentioned previously, the health statistics do not support that this should be so. As a brand's franchise grows, if its attributes are such and conditions are right, it can become an iconic brand. An iconic brand is a brand so successful that it takes on a larger meaning than simply symbolizing a product, company, or service. An iconic brand symbolizes a belief system, shared experience, or emotion widely held in a particular society (Holt 2004: 1). Examples of iconic brands include Harley Davidson Motorcycles, Coca Cola, and McDonalds. Susan G. Komen for the Cure has followed what Douglas Holt, author of How Brands Become Icons: The Principles of Cultural Branding (2004), called the cultural branding model to achieve iconic branding status (Holt 2004: 36). First, the organization began by addressing a contradiction in our society: the notion that very few dollars were being devoted to breast cancer research and yet each year 200,000 people became victims of the disease. Second, the organization's belief that the disease can and will be completely eradicated has provided a positive outlet for much of the fear and anxiety surrounding this deadly disease and has perpetuated a necessary story or myth upon which a brand develops. By using a personal tragedy to convey a need, Komen and its cause-marketing partners have helped to establish the cultural relevance of the pink ribbon specifically and the breast cancer cause more generally. Third, wearing the pink ribbon or buying a pink-ribbon-adorned product has provided society with a ritual action in which people can participate and do their part, buying into the belief that the disease will be eradicated.

Having achieved iconic brand status, the Susan G. Komen Network has been able to raise over $30 million a year since the early 2000s through an advertising and marketing technique known as cause marketing. Cause marketing is a type of marketing that involves a nonprofit organization joining forces with for-profit businesses. One of the first examples was when the March of Dimes teamed up with the Marriott Corporation in 1976 for the opening of a 200-acre family entertainment facility called Marriott's Great America. The complex was in Santa Clara, California, but the campaign was held in 67 cities throughout the Western United States. This campaign broke all fundraising records for the Western Chapters of the March of Dimes, and it provided hundreds of thousands of dollars in free publicity for the successful opening of the Marriott entertainment complex. Bruce Burtch conceived of the program and went on to coin the phrase "Do Well by Doing Good" (Burtch).
6usan G. for the Cure has based much of their donation generation on this technique.They have received over $30 million a year through corporate sponsorships.Their website lists over 185 corporate partners with almost as many programs for October 2009 alone.One can click on each program and get detailed facts on the partnership, its fiscal provisions and history, and its contribution to the Komen cause.For instance, the Energizer Family of Brands launched a Joining for the Cure platform in 2009 at the retail level.Through this combined effort Energizer will be making a contribution to Komen for the Cure for $400,000.Beginning July 1, 2009 Schick, through the Quattro for Women brand, will donate an additional $50,000 from a free music download promotion (-Corporate Partners‖). 7 Criticism: slacktivism and pinkwashing The Komen Network's significant success with cause marketing both in terms of the number of corporate sponsorships and the amount of revenue generated however, has led some to question its methods and criticize its efforts.Such critiques have come from within the ranks of consumer advocates and industry watchdog organizations and as well as from those who share Komen's goal of curing breast cancer.The organization Breast Cancer Action, for example, has responded to the use of cause marketing and corporate profiting from the pink campaign by cause, 61 percent said they'd switch retailers to support a cause, and 54 percent would pay more for a product that supported a cause they care about (McConnell 2007: 70). 7For other examples of cause related marketing see Sokol.Komen's hold on female boomers and corporations eager to reach them however has sometimes been eroded by Komen's support of controversial organizations like Planned Parenthood.Komen's support for Planned Parenthood is rooted in the broad spectrum of female health services their clinics provide including breast cancer screenings for low-income women.When Komen refused to stop funding Planned Parenthood, the pro-life owner of the Curves fitness chain withdrew his financial support for the organization.Ironically, regular exercise is and has been a proven preventative measure for breast and several other kinds of cancers, but abortions like those provided by Planned Parenthood have been known to increase the risk of breast cancer in women (Stanek 2010).creating a project called Think Before You Pink.The Think Before You Pink campaign has questioned many of the motives and tactics of organizations such as Komen for the Cure.The BCA has accused Komen and like organizations of slacktivism and pinkwashing tactics and calls for transparency and accountability in companies that participate in these efforts (-Think Before You Pink‖). The Urban Dictionary defines Slacktivism as -the act of participating in obviously pointless activities as an expedient alternative to actually expending effort to fix a problem.‖Slacktivism applies to both individual activity and collective action.The latter is large-scale industrialperpetrated slacktivism, which is highly planned, professionally coordinated and intended to advance a self-serving industrial agenda.Corporate-sponsored slacktivism is, in short, -implemented to stop social change that could, in the long run, be crucial to society's long-term well-being‖ (Landman 2008a). 
Slacktivism dates back to the mid 1980s when the tobacco industry undertook a campaign to derail efforts to ban smoking in public places by promoting segregation of smokers into smoking sections in restaurants and other like facilities.Clearly limitations on public smoking would have had adverse effects on the tobacco company's profitability, but to oppose the bans outright would have been to provoke popular backlash sustained by indignation at the obviously self-serving motives of the companies.So, in order to avoid such a backlash, the tobacco companies, led by Philip Morris, got out ahead of the issue and suggested and then supported the smoking section alternative, labeling it as progress and reform (Landman 2008a).If one thinks through the logic of smoking sections or recalls passing through a smoking section to reach a non-smoking section, the futility of attempting to confine smoke to one section of an open space is apparent.Nonetheless smoking sections are still used in some locales more than two decades later and in those intervening two decades, the cigarette companies were able to maintain the social acceptability of smoking in public and reap the profits therein. Other slacktivist campaigns followed and included the effort to recycle plastic shopping bags promoted by the companies that manufactured said bags and the American Chemistry Council in order to make an end run around environmentalists who sought to restrict the use of plastic bags altogether (Landman 2008a).Students of slacktivism add the Susan G. Komen phenomenon to this list because of the network's successful integration corporate incentive and individual philanthropy as manifested in the ubiquity of the pink ribbon. In considering slacktivism one must place blame where blame is due.Slacktivism is a product of corporate malfeasance.Its victims however are the average citizens who are duped by such campaigns.-Most slacktivist individuals are probably genuinely well-meaning people who just don't take the time to think about the value, or lack thereof, of their actions.They're looking for an easy way to feel like they're making a difference -how damaging is it to wear a rubber wristband or slap a magnetic ribbon on your car?‖ (Landman 2008a).For producer and consumer alike -donating by making a purchase is a really seductive idea‖ (Stukin 2006). Komen has also come under fire for a related practice called pinkwashing, a quasi-philanthropic marketing strategy and form of slacktivism where corporations put the Komen brand on their products and give the organization a share of proceeds from the sales of said products.Pinkwashing has become a $30 million a year moneymaker for the Komen Network and has contributed significantly to public awareness of the disease and the effort to cure it.As the name implies, however, pinkwashing is not without its critics.These critics generally fall into two camps. 
The first group points out the limited profitability of these campaigns for Komen relative to their substantial profitability for the corporate sponsors. These critics further contend that committed citizens would be better off donating directly to Komen than indirectly through third parties whose primary mandate is profit, not charity. For example, consider Yoplait's donation compared to the profit the corporation makes in the name of charity. Yoplait donates 10 cents for every pink yogurt lid mailed back to the company, guaranteeing a minimum of $500,000 and capping donations at $1.5 million. Yoplait is owned by General Mills, which did $10.1 billion in sales in 2008, fifteen percent of which came from the Yoplait brand. Therefore, even if Yoplait contributes the full $1.5 million, that represents only about 0.10 percent of the brand's net sales. Obviously, using the Komen name has been successful, since General Mills plans to expand its production capacity in 2010 with the growth of the Yoplait brand. When one considers that it would take buying over 100 yogurts to make a $10 contribution, the viability of pinkwashing for corporate America is revealed. Questions remain as to why consumers do not simply make a direct donation (Reisman 2007).

Similarly, when Campbell's Soup changed its labels from red to pink in October to mark Breast Cancer Awareness Month, its contribution to Komen was $250,000. The actual amount contributed, however, works out to 3.5 cents a can (Buchanan 2006). Barbara Brenner, executive director of Breast Cancer Action, told Newsweek: "Everyone's been guilt-tripped into buying pink things. If shopping could cure breast cancer, it would be cured by now" (quoted in Venezia 2010).

Komen's corporate partners are using support for breast cancer research to market products. Problematically, some of these products actually cause cancer and have been linked to breast cancer in particular. For example, BMW's Ultimate Drive will donate $1 per mile when people test-drive its cars. In her article "Pinkwashing: Can Shopping Cure Breast Cancer?" (2008), Anne Landman points out that "it ignores the fact that the campaign encourages more and unnecessary driving, not to mention that automobile exhaust contains polycyclic aromatic hydrocarbons, harmful chemicals known to cause cancer" (2008b). BMW is profiting from its association with the pink ribbon, and as this case reveals, "breast cancer has been transformed into a market-driven industry. It has become more about making money for corporate sponsors than funding innovative ways to treat breast cancer" (Samantha King, quoted in Adams 2007).

On BCA's Think Before You Pink website, they advocate and provide a list of ways to take action against breast cancer that do not involve shopping. Their list includes using public transportation, because pollution is one of the risk factors for breast cancer, and using non-rBGH dairy products for their role in reducing risk. Again, this highlights a possible split within the anti-breast-cancer movement between Komen's focus on cure and others' focus on prevention. BCA speaks out against pinkwashing and guides consumers to ask basic questions before buying such products: How much of the purchase price will be donated, and where is it going? What programs do the recipients fund? Is there a cap on donations? What does the company offering the pink ribbon product do to make sure that it is not adding to the problem of breast cancer ("Think Before You Pink")?
A second group of critics rejects pinkwashing on more philosophical grounds, contending that philanthropic schemes such as these not only undermine popular commitment to substantive social action but also reinforce traditional gendered power relations by targeting women as consumers. For instance, when Campbell Soup changed its label from red to pink last October to support Breast Cancer Awareness Month, its sales doubled. Campbell spokesman John Faulkner said, "We certainly think there is the possibility of greater sales since our typical soup consumers are women and breast cancer is a cause they're concerned about." He went on to say that he would "love to see the program expanded greatly next year" with other retail partners (Thompson 2006).

Interestingly, even though pinkwashing efforts seem to be targeted at consumers who are mostly women, breast cancer is personified not by the real-life women struggling to cope with the disease, but by a small pink ribbon that can be affixed to any number of products. A commodity is something that has value in exchange; to commodify something is to artificially give it value in exchange. Breast cancer, and the hardship and heartache it brings, has been given value, $30 million worth, in exchange. Komen's corporate sponsors, for all their rhetoric, would be more likely to maintain their current profitability were no cure to be found. 8

Conclusion From the outset, Komen for the Cure has been committed to finding a cure for breast cancer. While a laudable and certainly desirable goal, it stands apart from other related goals, including raising awareness (which has actually occurred as a by-product of Network activity), discovering the cause or causes of the disease, and working on prevention techniques. For Komen the entire focus is on research for the cure, and as a result other breast cancer advocacy groups have criticized the network for not putting more of its vast resources into cause and prevention research. From a personal as well as a societal perspective, preventing disease is as legitimate as, if not more legitimate than, searching for a cure. Perhaps in response to this criticism, in 2008 Komen reexamined its research focus, turning towards the translation of existing knowledge into "treatment, early detection and prevention" ("Research Grant Programs"). Regardless, the Komen Network is the big kid on the block, and no other organization, with the possible exception of the umbrella organization, the American Cancer Society, comes close to Komen in name recognition or fundraising. And of course the American Cancer Society divides its research and advocacy dollars among all types of cancers.
As mentioned previously, a slight deviation between agenda and outcome is detectable in the work of the Komen Network. Komen's agenda has been to eradicate the disease by finding a cure. The result, however, has been a huge sales boost for corporations willing to join the cause-marketing bandwagon, as well as greater public awareness of the disease and its consequences. The high-profile and impressively successful Race for the Cure campaign exemplifies an unintended consequence of Komen activism. Initially intended as a fundraising tool, thanks to widespread popular support the Races for the Cure have become that and much more. In addition to raising $4.3 million annually, with estimated participation of 45,000 people nationwide, the races have become an outlet for female activism vis-à-vis breast cancer (Kurtianyk 2009). Women with no direct connection to the disease participate out of a sense of shared female solidarity, perhaps, and with the weighty recognition that someday any one of them could be beneficiaries of the work Komen provides. Others afflicted with the disease walk as a means of instilling or buffeting hope. Survivors walk for what is essentially a victory lap. And it is in the inspiration of the survivors that the Races take on perhaps their most obvious unintended consequence: a consciousness-raising social movement alerting women to take control by getting regular checkups that could lead to life-saving early detection.

The challenge in analyzing the Susan G. Komen Network relative to the slacktivist phenomenon is to place the Network on the spectrum between the well-intentioned but uninformed individual activists and their corporate manipulators. The Komen Network is not a corporation. It is not a for-profit entity. It is an organization dedicated to a meritorious cause. It seeks to bring about a change, the cure for breast cancer, that would enhance society's overall long-term well-being.

This raises the question: is Komen complicit or co-opted, victim or victimizer, manipulator or manipulated, in its embrace of corporate modalities, including cause marketing? Does the Komen organization undertake a pragmatic calculus to determine that while a direct donation is preferable to one made through a third party via soup labels or yogurt lids, the latter is preferable to no donation at all? Further, how do we calculate into this equation the importance of raising awareness about the disease, and the credit that Komen and its pinkwashing corporate sponsors necessarily deserve for raising awareness about a disease for which early detection can make a life-or-death difference?

Problematically, few if any of the pinkwashing breast cancer organizations and their corporate benefactors make any mention of disease prevention. A cynical analysis of this reality would suggest that prevention is not promoted because to find a cure is to end the pinkwashing raison d'être.
According to the Komen website, though, the organization is making a difference. They call their members activists, advocates, and global citizens. Consider the following: nearly 75 percent of women over 40 years old now receive regular mammograms, the single most effective tool for detecting breast cancer early (in 1982, less than 30 percent received a clinical exam). The five-year survival rate for breast cancer, when caught early before it spreads beyond the breast, is now 98 percent (compared to 74 percent in 1982). The federal government now devotes more than $900 million each year to breast cancer research, treatment and prevention (compared to $30 million in 1982). America's 2.5 million breast cancer survivors, the largest group of cancer survivors in the U.S., are a living testament to the power of society and science to save lives ("Our Promise and Background"). Critics condemn Komen for pinkwashing and being complicit in slacktivism. There is as yet no universal cure for breast cancer, but the above statistics leave little doubt that the network succeeds in its goal of creating activists. Saving yogurt lids, selecting pink-ribbon-adorned products, wearing pink bracelets, and affixing pink magnetic ribbons to one's car are all examples of everyday activism. While not yet pivotal in leading to a cure, the increased awareness that comes from these actions undoubtedly leads women to be more diligent about examination and mammography. Whether born of slacktivism or more philanthropic notions of activism, the result of their diligence is the same: tangible differences being made in the lives of thousands of women yearly. That is success, "one person, one community, one state, one nation [one survivor] at a time" (Brinker 2010).

Examples of iconic brands include Harley-Davidson Motorcycles, Coca-Cola, and McDonald's. Susan G. Komen for the Cure has followed what Douglas Holt, author of How Brands Become Icons: The Principles of Cultural
Catchment scale runoff time-series generation and validation using statistical models for the Continental United States

We developed statistical models to generate runoff time-series at National Hydrography Dataset Plus Version 2 (NHDPlusV2) catchment scale for the Continental United States (CONUS). The models use Normalized Difference Vegetation Index (NDVI) based Curve Number (CN) to generate initial runoff time-series which are then corrected using statistical models to improve accuracy. We used the North American Land Data Assimilation System 2 (NLDAS-2) catchment scale runoff time-series as the reference data for model training and validation. We used 17 years of 16-day, 250-m resolution NDVI data as a proxy for hydrologic conditions during a representative year to calculate 23 NDVI-based CN (NDVI-CN) values for each of 2.65 million NHDPlusV2 catchments for the Contiguous U.S. To maximize predictive accuracy while avoiding optimistically biased model validation results, we developed a spatio-temporal cross-validation framework for estimating, selecting, and validating the statistical correction models. We found that in many of the physiographic sections comprising CONUS, even simple linear regression models were highly effective at correcting NDVI-CN runoff to achieve Nash-Sutcliffe Efficiency values above 0.5. However, all models showed poor performance in physiographic sections that experience significant snow accumulation.

Introduction

Effective management of hydrologic resources and hazards often depends on accurate simulations of runoff. For example, runoff time series can be combined with other environmental data to characterize how a system responds to various climate and land use scenarios. To facilitate the work of researchers and managers seeking to understand and manage hydrologic systems, we developed an automated Curve Number (CN) based technique for estimating catchment-level runoff that allows for the use of either simulated or historical data for precipitation and landcover. To facilitate validation of the models developed in this paper for various applications, we designed and implemented a machine learning accuracy assessment framework that withholds validation data from statistical model training both spatially and temporally to build confidence that the resulting accuracy measures truly characterize how the models generalize to other catchments and time periods. To implement the accuracy assessment, this paper presents a machine-learning framework for a state-of-the-art, approximately unbiased approach to quantifying predictive accuracy in a hydrologic spatio-temporal context. Using an existing automated technique for quantifying hydrologic condition (Muche et al., 2019a; 2019b), we generated NHDPlusV2 catchment-scale CNs for CONUS and applied the framework to estimate and evaluate a variety of relatively simple CN-generated runoff time-series correction models that often dramatically improve runoff accuracy. Additionally, because the technique we employed for automating CN generation adheres to the conventional CN approach, we expect that the correction models are likely to enhance the accuracy of runoff time series generated by any of the variants of CN, such as the recent GCN250 (Jaafar et al., 2019).

Hydrologic runoff modeling

A variety of research topics involve data modeling that requires information about how precipitation patterns translate into measures such as runoff and streamflow.
Historical runoff estimates utilizing a robust set of environmental forcing variables have been made readily available through the North American Land Data Assimilation System (NLDAS) (Xia et al., 2013) and the Global Land Data Assimilation System (GLDAS) (Rodell et al., 2004) Land Surface Model (LSM) projects, for example. Historical data are useful for assessing and training models, but these projects do not provide a means for simulating runoff in counterfactual or future conditions. A variety of approaches have been developed to estimate the relationship between hydrologically relevant environmental variables such as precipitation and runoff. Sitterson et al. (2017) provide a taxonomy of rainfall-runoff models based primarily on the correspondence of the model with physical reality and spatial resolution. At one end of the spectrum, the CN methods of runoff modeling offer analysts one of the simplest approaches to runoff modeling. At the other end of the spectrum are multi-input, multi-output LSMs such as NLDAS-2 (henceforth referred to as NLDAS), which itself is an ensemble of LSMs (Xia et al., 2012a; 2012b). Other runoff modeling approaches include the Geomorphological Instantaneous Unit Hydrograph approach, a more recent and technically sophisticated approach to rainfall-runoff modeling (Rigon et al., 2016). Recently, Oppel and Schumann (2020) applied machine learning estimators to explore transferability of geomorphological instantaneous unit hydrograph runoff models between catchments based on catchment characteristics and a basin classification scheme derived from their models. Fractal geometry has been applied to modeling surface runoff (Gires et al., 2018). Another group of runoff models is the "GR chain" (Ficchì et al., 2019), whose members vary in temporal resolution and which Ficchì et al. (2019) extended to include flux-matching criteria. The Soil Conservation Service Curve Number method, also widely referred to as the Curve Number method, was developed by the United States Department of Agriculture (USDA) in the 1950s to predict direct runoff from rainfall events, and it is a widely adopted method in surface runoff estimation (Hawkins et al., 2008; Hawkins 2014; Lian et al., 2020). The method was developed using measured rainfall and runoff data from several agricultural research watersheds primarily in the Eastern, Midwestern, and Southern U.S.; the rainfall-runoff relationship in the study watersheds was extrapolated to an empirical number (the Curve Number) using land use/cover, hydrologic soil groups, and hydrologic conditions of watersheds (Hawkins et al., 2008; Muche et al., 2019a; Rallison 1980). The CN method has been globally adapted to areas with varying land use/cover, soil properties, and climatic conditions. It has also been incorporated into various continuous hydrologic/watershed models even though the method was originally devised for event-based rainfall-runoff modeling (Garen and Moore 2005; Hawkins 1996; Kennan et al. 2007; Muche et al., 2019a). Despite wide application of the CN method, some watersheds have been found to exhibit significant differences between observed and predicted runoff using the CN method (Hawkins 2014; Muche et al., 2019a). The effects of rainfall volume, intensity, and frequency (Muche et al., 2019a; Wang and Bi 2020), in addition to the seasonality of the rainfall-runoff relationship, could be among the contributing factors to the CN method's low accuracy in those watersheds (Rodríguez-Blanco et al., 2012). To increase the accuracy of CN-generated runoff, Silveira et al.
(2000) incorporated automated estimation of antecedent moisture using five days of lagged rainfall. Recently, advancements in Geographic Information Science (GIS) created the opportunity to account for seasonality in the rainfall-runoff relationship by using remotely sensed data to flexibly approximate hydrologic condition (Muche 2016; Nasiri and Alipur 2014; Singh and Khare 2021). Moderate Resolution Imaging Spectroradiometer Normalized Difference Vegetation Index (MODIS-NDVI) data were applied to CN estimation by several authors (Gandini and Usunoff 2004; Muche et al., 2019a, 2019b; Nasiri and Alipur 2014; Singh and Khare 2021). Muche et al. (2019a) used MODIS-NDVI to estimate CN using 12 years of observed rainfall and runoff at four small watersheds in the Konza Prairie Long-Term Ecological Research site. Muche et al. (2019b) extended this work, using MODIS-NDVI for catchment-level CN development spanning CONUS as part of USEPA's Hydrologic Micro Services (HMS) computational platform, which is used in the results below.

Model validation and selection

Validation is generally regarded as an important step in modeling, though it is not clear what exactly is meant by validation and what one must do to achieve it. Schlesinger (1979) defined validation for computerized simulations only in terms of comparison to reality in the domain of applicability. Schruben (1980) discussed simulation credibility as a more practical standard to meet than strict validation, requiring simulation output to be indistinguishable from observations of reality by a human, in the same manner as a Turing Test for artificial human intelligence. Sargent (2013) discusses validation broadly for empirical studies and describes several validation techniques, including inner validation, an assessment process using data resampling, and historical data validation, a process of splitting data into building and testing sets. Klemeš (1986) offers an early and widely cited guide to validation of hydrologic models that generally corresponds well with modern machine learning approaches to model validation. Biondi et al. (2012) review and refine validation concepts in hydrology, and they discuss performance or model validation, which includes qualitative assessment of graphs and quantitative assessments of model metrics on split samples. They also discuss a distinct type of validation, scientific validation, wherein one considers the theoretical underpinnings of the model. Common machine learning terminology contrasts with the above uses of validation. Hastie et al. (2017) use the validation set for model selection and a final testing set for quantifying generalization error, which is called validation in the above contexts other than machine learning. In machine learning, a great deal of attention is paid to using validation-like metrics for both model selection and validation, typically with distinct treatments of data. Common statistical approaches to model selection that rely on error estimates from training observations can lead to substantial downward bias in error metrics, leading to overly optimistic conclusions about model accuracy (Picard and Cook 1984). Cross-validation approaches to model validation split a dataset into n folds, use n − 1 of the folds to train a model, quantify predictive accuracy on the withheld fold, and cycle through all n folds, generating an empirical distribution of model accuracy that approximates expected prediction error (Hastie et al., 2017, p254). Hawkins et al.
(2003) favor cross-validation approaches to model selection and validation rather than a single hold-out test set because cross-validation balances the desire for more data with the need for quantifying predictive accuracy. However, cross-validation when done incorrectly can still lead to biased performance measures and inferior model selection (Cawley and Talbot 2010; Hastie et al., 2017, p245). Guyon et al. (2010) provide a useful discussion of the nested relationship between the two (or more) types of parameters in the context of diverse statistical learners. In this process of "multi-level inference" that Guyon et al. (2010) describe, parameters are chosen in an inner estimation step by analytically efficient learning algorithms (e.g., linear algebra solutions for Ordinary Least Squares (OLS) linear regression) and hyper-parameters are chosen by repeatedly invoking the efficient algorithms with different hyper-parameter values. For multi-level inference approaches, cross-validation serves as the framework for each level of inference to avoid over-fitting (Cawley and Talbot 2010). Fushiki (2011) found for regression problems that n-fold cross-validation may bias prediction error upwards, while training error is a downward-biased estimate of prediction error. Hastie et al. (2017, p254) characterize 10-fold cross-validation as an "approximately unbiased" means of quantifying expected or extra-sample error. Techniques such as 5-fold and 10-fold cross-validation are computationally efficient approaches for quantifying prediction error with generally lower variance than leave-one-out cross-validation (Hastie et al., 2017, p255). In classification problems, n-fold cross-validation has exhibited reduced bias and computational complexity relative to the bootstrap (Kim 2009), an alternative to cross-validation.

Time series datasets require additional assumptions to justify cross-validation. More conservative approaches to validation of time series models typically require withholding later observations in the dataset from training for testing the model's performance with new data. This process has been called "last block validation" (Bergmeir and Benítez 2012) or "out-of-sample evaluation" (Bergmeir et al., 2018). Recent advances have opened up possibilities for efficient cross-validation with some time series estimators, which is particularly advantageous for small datasets (Bergmeir et al., 2018) because all data can be used for validation.

Data can be grouped for modeling and assessment in a variety of ways. In machine learning, the term slice (Chung et al., 2019) refers to divisions in the data by predictor variables that can be used to assess model performance with greater granularity. Validation metrics that group slices together can potentially obscure poorly performing slices (Chung et al., 2019). The idea of assessing slice performance is closely related to the idea of transportability in hydrology as discussed by Klemeš (1986), who described an early cross-validation-like testing procedure for detecting poorly performing members of a group of catchments to validate simulations in ungauged members of the group.

The model scoring or loss metric also plays an important role in model selection and validation. Gupta et al. (2009) applied and refined decompositions of the popular Nash-Sutcliffe Efficiency (NSE) model scoring metric in the context of runoff simulations, criticizing models optimized with NSE as the score for being of use only in normal conditions.
This problem can be remediated to some extent by their proposed Kling-Gupta Efficiency (KGE) metrics (Gupta et al., 2009). Knoben et al. (2019) point out interpretation issues associated with several parametric and non-parametric variants of KGE; they emphasize the lack of a clear benchmark or cutoff value with KGE metrics, while the NSE value of zero benchmarks simulation performance against the mean of the observed series.

Corrective modeling

Models that correct simulations have been developed extensively in the earth sciences. Watson (2019) discusses the tradeoffs associated with physical versus purely data-driven approaches to increasing predictive accuracy at short and long timescales in the context of climate simulations. Dinge et al. (2019) distinguish between point-to-point correction models and models that use time series characteristics to increase performance in the context of applying error correction models to wind speed prediction. Zjavka (2015) applies a polynomial neural network to correct wind speed using lagged measures of nearby environmental variables.

Regression prediction

We use a broad definition of regression from Hastie et al. (2017, p10), which encompasses any statistical learner that makes quantitative predictions. In practice, regression models have a continuous dependent variable, in contrast to classification and ordered categorical models with discrete dependent variables. Accordingly, the linear regressions, regularized linear regressions, and gradient boosting ensembles are all referred to as regressors or regression models. Regularized linear regression estimators share similarities with OLS, but with additional structure to reduce the variability (increase the stability) of the parameter estimates at the expense of increased bias (Frank and Friedman 1993). The Lasso regularized regression effectively selects predictor variables, pushing some coefficient estimates to zero, while the Ridge regularized regression tends to push regression coefficients towards equality with each other (Tibshirani 1996). The Group Lasso was developed to select groups of dummy variables for multi-category predictors, and extensions to the Group Lasso have been developed to preserve the hierarchical connection between interaction and main effects in lasso regression models (Lim and Hastie, 2015). In contrast to OLS, the basic Lasso estimator can more generally estimate flexible dummy variable specifications with overlapping categories, as discussed in Lim and Hastie (2015). The elastic-net regressor combines the strengths of the Lasso and Ridge regression estimators, with the ability to model high collinearity among variables like Ridge and the ability to do variable selection like Lasso (Zou and Hastie, 2005). The gradient boosting regressor is an extension of the gradient boosting classifier and has been described in detail by Friedman (2001) and by Hastie et al. (2017). The algorithm fits an additive sequence of simple regression tree estimators, with the gradient of the previous estimator's loss function used as the dependent variable for training the subsequent tree in an iterative procedure. According to Guyon et al. (2010), boosting methods of regression are less vulnerable to overfitting because they minimize a guaranteed risk function. Hastie et al. (2017, p340) quote others in describing the classifier version of gradient boosting as the "best off-the-shelf classifier in the world".
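As a point of reference, the estimator families named above can all be instantiated from scikit-learn, the package used later in this paper. The following minimal sketch is illustrative only: the data are synthetic and the hyper-parameter values are placeholders, not the settings used in the study.

    # Illustrative instantiation of the five regressor families; values are placeholders.
    import numpy as np
    from sklearn.linear_model import LinearRegression, Lasso, Ridge, ElasticNet
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))                      # stand-in predictors
    y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.3, size=500)

    estimators = {
        "lin-reg": LinearRegression(),
        "lasso": Lasso(alpha=0.1),                     # pushes some coefficients to zero
        "ridge": Ridge(alpha=1.0),                     # shrinks coefficients toward each other
        "elastic-net": ElasticNet(alpha=0.1, l1_ratio=0.5),  # blends both penalties
        "GBR": GradientBoostingRegressor(n_estimators=100, learning_rate=0.1,
                                         subsample=0.8, max_depth=3),
    }
    for name, est in estimators.items():
        est.fit(X, y)
        print(name, round(est.score(X, y), 3))         # training R^2 (optimistic; see text)

Note that scoring on the training data, as done here for brevity, is exactly the optimistically biased practice the cross-validation framework in this paper is designed to avoid.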
NDVI-based automated curve number development

The primary challenge in automating the generation of runoff time series using the Curve Number method is the selection of the hydrologic condition. The hydrologic condition functions as a categorical variable taking into consideration several possible influencing factors, mainly related to land-cover type at the time of the precipitation event. The customary approach to specifying hydrologic condition requires site-specific expert analysis that hinders scaling the approach to larger areas. Remote sensing data have been shown to be a viable basis for specifying hydrologic condition, facilitating automated estimation of CN values. We followed the work of Muche et al. (2019a, 2019b) and used 250-m, 16-day resolution MODIS NDVI (Didan 2015) data along with land cover and soil data to quantify hydrologic condition and create a time series of twenty-three CN values spanning an average year for each of approximately 2.65 million NHDPlusV2 catchments in CONUS. To compute these numerous CN values, we first needed to quantify the corresponding hydrologic condition. We used Google Earth Engine to obtain seventeen years (2001-2017) of MODIS NDVI satellite raster data and spatially average them over each NHDPlusV2 catchment. Next, for each catchment and each of twenty-three annual, sixteen-day timesteps, we temporally averaged the 17 observations of spatially averaged NDVI. We then used these time- and space-averaged NDVI values along with NLCD land-cover data (discussed next) for each catchment and time period to determine the hydrologic condition as Poor, Normal, or Good based on the ranges specified in Table 1. We obtained catchment-level NLCD 2011 land cover data and STATSGO-derived sand and clay soil composition percentages from the EPA StreamCat dataset (Hill et al., 2016). We used the STATSGO percentages to determine the hydrologic soil group of each catchment. Finally, for each timestep and each catchment, we used the land cover, hydrologic soil group, and hydrologic condition values along with the USDA's Soil Conservation Service curve number tables to obtain NDVI-CN values for each catchment and each of the 23 annual 16-day time periods (a schematic sketch of this lookup follows this subsection). For each catchment and each timestep, the NDVI and CN values as well as annual average CN values can be obtained at ftp://newftp.epa.gov/exposure/CurveNumberNDVI. Additionally, the spatially and temporally averaged NDVI data can be obtained for each catchment at https://qed.epa.gov/hms/rest/api/info/catchment?cn=true&comid=COMID, where COMID is replaced by a NHDPlus catchment ID (e.g., https://qed.epa.gov/hms/rest/api/info/catchment?cn=true&comid=331416).

Accuracy assessment

We used the CN values described above to develop a runoff database to investigate the accuracy of NDVI-CN generated daily runoff using NLDAS runoff as the target. For the spatial units in our database, we randomly selected 5 NHDPlus catchments in each United States Geological Survey (USGS) physiographic section (Fenneman and Johnson 1946). Next, we retrieved 17 years of NLDAS runoff data and NDVI-CN runoff data (forced by NLDAS precipitation data) for each catchment. We also retrieved the GLDAS runoff and NDVI-CN runoff (forced by GLDAS precipitation data) and present parallel, condensed results based on that data in Appendix 1.
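To make the lookup described above concrete, the following sketch maps an averaged NDVI value to a hydrologic condition and then to a curve number. The NDVI thresholds and the table fragment are hypothetical placeholders standing in for Table 1 and the USDA SCS curve number tables, which are not reproduced here.

    # Schematic NDVI -> hydrologic condition -> CN lookup; thresholds and CN
    # entries are hypothetical placeholders, not the study's Table 1 values.
    def hydrologic_condition(ndvi, poor_max=0.3, good_min=0.6):
        """Map a space/time-averaged NDVI value to a hydrologic condition."""
        if ndvi < poor_max:
            return "Poor"
        elif ndvi < good_min:
            return "Normal"
        return "Good"

    # Hypothetical fragment of a CN lookup keyed by (land cover, soil group, condition).
    CN_TABLE = {
        ("pasture", "B", "Poor"): 79,
        ("pasture", "B", "Normal"): 69,
        ("pasture", "B", "Good"): 61,
    }

    def ndvi_cn(land_cover, soil_group, ndvi):
        return CN_TABLE[(land_cover, soil_group, hydrologic_condition(ndvi))]

    # One CN per 16-day timestep: 23 values spanning an average year.
    ndvi_series = [0.25, 0.45, 0.70]   # three of the 23 timesteps, for illustration
    print([ndvi_cn("pasture", "B", v) for v in ndvi_series])  # [79, 69, 61]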
In the discussion of model selection and validation below, we follow the bulk of the empirical literature and reserve the term validation to describe the final split of data that is not used for any kind of model selection (in this paper), but only for reporting a final estimate of predictive accuracy. This decision contrasts with recent trends in machine learning research, where validation data are used for model selection and testing data are used for quantifying predictive accuracy of the selected model (e.g., Hastie et al., 2017). We use the term validate or validation to characterize the final chronological split of data as well as the subsequent accuracy analysis. This validation stage provides information not used for model selection in this paper but that is developed for use by end-users who require information about expected predictive accuracy. We use the term test or testing more generically to refer to any accuracy assessment, such as those performed during model selection. Validation is thus a special kind of model testing deliberately designed to avoid data leakage that can optimistically bias results. We adopt this terminology for consistency with the hydrologic literature (and a number of other empirical sciences as well).

The purpose of assessing the accuracy of simulated runoff relative to the target runoff is to provide information to users about the likely quality of future simulations that may include times and locations not present in our database. To achieve this, we develop a validation approach with a resampling design based on holding out observations for testing based on both time and space. First, because some of our runoff simulation models use lengthy time series of runoff data for training, we employ a traditional three-part temporal splitting of each time series. As can be seen in Fig. 1, the first half of a runoff time series is reserved for training the models, and the second half is split into a testing series used for final model selection and a validation series used for quantifying predictive accuracy. This approach helps ensure that the validation accuracy assessment of the models generalizes to other time periods, particularly the near future. This type of validation is an example of what Klemeš (1986) called a split-sample test. Second, for the runoff correction models that we develop below, there is also a potential concern about the ability of the accuracy metrics to generalize to catchments excluded from model training, such as those not in the runoff database. Accordingly, we evaluate predictive accuracy of each correction model using the average of a leave-one-catchment-out-of-each-section repeated cross-validation approach. As shown in Fig. 2, the splitting algorithm we developed takes each physiographic section and places the five sampled catchments in a list. For a single repetition of cross-validation, each list is shuffled and then the first catchment in each list is excluded from the training data of a sub-model and reserved for testing that model. The next training/test split comes from excluding the second catchment in each list, and so on, until each of the 5 catchments in each section has been reserved from training a model and used for testing and validation of that model. This procedure is repeated to ensure the accuracy metric does not depend on any patterns in the test data from a single shuffling of the catchments in a physiographic section.
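The shuffling-and-cycling logic just described can be expressed compactly. The sketch below is a simplified stand-in for the splitting algorithm, assuming five sampled catchments per section; the section names and catchment IDs are placeholders.

    # Simplified leave-one-catchment-out-of-each-section repeated splitting.
    import random

    def loco_splits(sections, n_repeats=3, seed=0):
        """Yield (train, test) catchment-ID splits; each split withholds one
        catchment from every section, cycling so each catchment is held out
        once per repetition."""
        rng = random.Random(seed)
        for _ in range(n_repeats):
            shuffled = {s: rng.sample(ids, len(ids)) for s, ids in sections.items()}
            n = len(next(iter(sections.values())))     # catchments per section (5)
            for i in range(n):
                test = [ids[i] for ids in shuffled.values()]
                train = [c for ids in shuffled.values() for c in ids if c not in test]
                yield train, test

    sections = {"sec_A": [1, 2, 3, 4, 5], "sec_B": [6, 7, 8, 9, 10]}
    for train, test in loco_splits(sections, n_repeats=1):
        print("held out:", test)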
This approach to geographic data splitting is in addition to the single chronological split discussed above and is an example of what Klemeš (1986) referred to as a proxy-basin test. The result is that no catchment or time period is ever present in both training and test or validation data for any of the prediction accuracy measures that we report.

The simulated runoff series, y_sim, and the target or observed runoff series, y_obs, that we are comparing are continuous and non-negative, allowing for a wide range of accuracy measures based on differences or similarities between the two series. We select Pearson's correlation coefficient (r_p) and Nash-Sutcliffe Efficiency (NSE) because these two measures are popular in both machine learning and hydrology applications and because the two measures inform us about distinct aspects of model fit. Nash-Sutcliffe Efficiency is identical to the familiar coefficient of determination, or R^2, from a linear regression where y_obs is the dependent variable and y_sim is treated as the regression prediction. NSE can be applied to non-linear models with a potential range from minus infinity to positive one. Negative values for NSE indicate that the mean of the target is a better predictor of the target than the simulated series. The magnitude of r_p can range from zero to one, measuring how close a linear transformation of the simulation is to the target. Comparing NSE to r_p is helpful for illustrating the contrasting properties of these two measures. NSE effectively benchmarks the simulation against the target's mean, and importantly the mean of the target is not known at the time of the simulation. Further, the NSE does not center or rescale the simulation series to help it match the target series. In contrast, r_p benefits from a linear transformation of the data (i.e., standardization of both y_sim and y_obs) that uses information about the target series. Thus, the NSE conservatively uses information about the target's mean to penalize the measure of a simulation's performance, while r_p optimistically uses similar information to effectively augment the simulation when assessing its performance (a short numerical illustration follows at the end of this subsection).

CN runoff correction modeling

As shown in the results below, the high values of r_p for the average catchment in most sections, along with the low NSE values for the same catchments, could indicate that the NDVI-CN runoff series were not very close to the NLDAS runoff series, but that nonetheless the two series contained much of the same information. This situation is analogous to comparing measurements taken in the wrong units (e.g., Celsius vs Fahrenheit). Accordingly, we developed correction models to investigate if we could reliably correct the NDVI-CN runoff time series using NLDAS runoff as the target. Catchments with fewer than 100 NDVI-CN event days were excluded from the analysis to ensure sufficient data across all three temporal splits. We use the Python (version 3.8) programming language to develop software implementing a modeling framework that allows for a flexible and repeatable analysis of a variety of approaches for generating a set of one or more NDVI-CN correction models. Because our database includes runoff time series for numerous locations characterized as catchments, sections, provinces, and divisions, we allow for distinct models to be estimated for each item in a geographic grouping or level (e.g., one CONUS model or eight physiographic division models, etc.); we refer to this as the geographic modeling scope.
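The contrast between NSE and r_p described above can be reproduced in a few lines. This sketch uses toy arrays to show how a simulation that is perfectly correlated with, but badly scaled against, the observations scores on each measure.

    # NSE vs r_p on a perfectly correlated but badly scaled simulation.
    import numpy as np

    def nse(y_obs, y_sim):
        """Nash-Sutcliffe Efficiency: 1 minus the ratio of simulation error
        variance to the variance of the observations about their own mean."""
        y_obs, y_sim = np.asarray(y_obs, float), np.asarray(y_sim, float)
        return 1.0 - np.sum((y_obs - y_sim) ** 2) / np.sum((y_obs - y_obs.mean()) ** 2)

    def r_p(y_obs, y_sim):
        """Pearson's correlation coefficient."""
        return np.corrcoef(y_obs, y_sim)[0, 1]

    obs = np.array([0.0, 1.0, 2.0, 4.0, 1.0])
    sim_scaled = 10 * obs       # like NDVI-CN: right pattern, wrong "units"
    print(r_p(obs, sim_scaled))   # 1.0: r_p forgives linear scaling problems
    print(nse(obs, sim_scaled))   # strongly negative: NSE does not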
Additionally, we allow for models to estimate runoff corrections conditional on a different, finer geographic level (e.g., physiographic section within a model of a single physiographic province, or a physiographic province within a model of a single physiographic division, etc.); we refer to this as the geographic modeling level. In the results below, we set the geographic modeling scope to physiographic division and we set the geographic modeling level to physiographic section. We develop correction models of NDVI-CN runoff, y_CN, to predict the NLDAS target runoff values, y_NLDAS, according to

y_NLDAS,t,g = f_G(y_CN,t,g, g) + e_CN,t,g

where t indexes time in days, G indexes the geographic modeling scope of the transformation f, g indexes the geographic modeling level (i.e., physiographic section) associated with each runoff value, and e_CN,t,g is the error. For the correction function, f, we develop the option of employing several statistical regression techniques utilizing pipelines, transformers, and estimators from Python's Scikit-Learn (version 0.24.1) machine learning package (Pedregosa et al., 2011). Pipelines are a sequence of data transformers and statistical estimators that can be fit to training data to estimate a model; that model can then be used to predict with potentially different data. Pipelines always end with an estimator (e.g., linear regression) and may include data transformation steps such as for standardization and creation of interaction and polynomial terms. Conveniently, transformations such as standardization that are estimated during model training are stored for use with future predictions. For this paper, we developed pipelines implementing the following statistical estimators: OLS linear regression (lin-reg), lasso regularized linear regression (lasso), ridge regularized linear regression (ridge), elastic-net regularized linear regression (elastic-net), and a gradient boosting regression tree ensemble (GBR). Each of the pipelines is preceded by a global step where the geographic modeling level is used to create a set of binary dummy variables identifying the geographic membership of each runoff value. When creating dummy variables, no values were dropped from each level, so no constant term was included as a regressor. This approach avoids perfect multicollinearity and allows for interpretation of coefficients on dummy variables that does not rely on comparison to an excluded category. Each of the four linear regression pipelines includes polynomial terms for the continuous NDVI-CN runoff series interacted with each of the geographic modeling level dummy variables (a simplified construction is sketched below). To avoid perfect multicollinearity, these polynomial terms are not included as non-interacted standalone variables. We also developed the option to use nested cross-validation to choose the optimal polynomial degree for each model, though we did not use that setting in the results presented in this paper in favor of a more geographically specific model selection approach discussed below. To maintain the integrity of pipelines, we used Python objects to create wrappers that use multi-indexed Pandas dataframes (McKinney 2011) to retain information about the ComID associated with each corrected or uncorrected runoff value. This extra programming step was helpful for maintaining data integrity because the scikit-learn estimators and transformers used in the pipelines utilize Numpy array objects (Pedregosa et al., 2011; Harris et al., 2020).
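The dummy-variable and interaction structure described above can be sketched as follows. This is a simplified stand-in, assuming a single "section" column and toy runoff values; the real implementation wraps these steps in scikit-learn pipelines with ComID-indexed dataframes.

    # Section dummies (no dropped category, no intercept) plus polynomial
    # terms of uncorrected runoff interacted with each dummy, fit by OLS.
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    def build_design(df, degree=2):
        dummies = pd.get_dummies(df["section"], prefix="sec", dtype=float)
        X = dummies.copy()
        for d in range(1, degree + 1):
            for col in dummies:
                X[f"{col}_cn^{d}"] = dummies[col] * df["y_cn"] ** d
        return X

    df = pd.DataFrame({
        "section": ["A"] * 4 + ["B"] * 4,
        "y_cn":    [0.5, 2.0, 1.0, 3.0, 0.5, 2.0, 1.0, 3.0],  # uncorrected runoff
        "y_nldas": [0.7, 2.6, 1.3, 3.9, 0.2, 0.9, 0.5, 1.4],  # target runoff
    })
    X = build_design(df, degree=2)
    model = LinearRegression(fit_intercept=False).fit(X, df["y_nldas"])
    print(dict(zip(X.columns, model.coef_.round(2))))

Because no dummy category is dropped and no intercept is fit, each section's dummy coefficient plays the role of that section's own constant term, matching the interpretation described in the text.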
When used for cross-validation, each pipeline wrapper also automatically checks for, logs, and removes any catchment IDs (ComIDs) used in training, to guard against data leakage, which could lead to downward-biased prediction error estimates and incorrect inference in the model selection process.

When creating a correction model, we have the opportunity to divide the data into different models for different values of uncorrected runoff. For example, because the CN method tends to predict no runoff for days with low values of precipitation, the optimal correction model for those values is likely quite different than for days when precipitation is high. For this paper we only develop the capability of splitting uncorrected NDVI-CN runoff values based on whether they are equal to zero, but it would be simple to add other approaches, such as a quantile-based split. The split we chose makes sense particularly because zero and nonzero runoff values have distinct data generating processes due to the event-based nature of the CN method. In our simple point-to-point correction method, there are two straightforward approaches to correcting zero runoff predictions from the NDVI-CN model. First, we consider the mean of observed runoff, conditional on the geographic modeling level, and use that value as the correction. Second, we consider an otherwise identical model that retains the uncorrected value of zero, labeled as flat0 in the results below. The positive NDVI-CN values are used to train the pipeline, creating a model that can be used for prediction. At the time of prediction, NDVI-CN values are divided into zero and nonzero rows, fed into the appropriate model, and reassembled into a dataframe with each row indexed by ComID and date (a bare-bones sketch of this routing appears below).

Each of the pipelines we created presents several opportunities for hyper-parameter tuning. We develop the option to use nested cross-validation for assessing the predictive performance of all hyper-parameter combinations using Scikit-Learn's optimized nested cross-validation estimators (e.g., LassoCV instead of Lasso) when available and the Scikit-Learn nested GridSearchCV tool otherwise. These nested cross-validation estimators utilize repeated k-fold cross-validation on the training data passed when fitting a pipeline (which itself may be part of a broader cross-validation assessment). The tool chooses the combination of hyper-parameters that has the highest average test R^2 (i.e., NSE) across inner cross-validation folds, and refits the pipeline using those values on the training data passed to the estimator. Fig. 2 illustrates the nested cross-validation process for a single leave-one-catchment-out split for a single physiographic section. The four ComIDs in each split of nested cross-validation in Fig. 2 correspond to the 4 of 5 catchments used for training in Fig. 1. The nested cross-validation approach to hyper-parameter tuning is computationally intensive at the time of fitting a model. Because the geographic modeling scope can be broader than the geographic modeling level, it may be the case that the best combination of hyper-parameters varies from one location/level to the next. To allow for flexible hyper-parameter selection across levels, we developed the option of running and selecting from multiple pipelines with varying hyper-parameter values.
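The zero/nonzero split described above amounts to routing each day's value to one of two simple sub-models. The following sketch is a bare-bones stand-in for the paper's pipeline wrappers; the class name and interface are illustrative, not the actual implementation.

    # Route zero NDVI-CN days to a constant correction (training mean of
    # observed runoff on zero days, or zero under "flat0") and nonzero days
    # to a fitted correction pipeline.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    class SplitCorrector:
        def __init__(self, pipeline, flat0=False):
            self.pipeline, self.flat0 = pipeline, flat0

        def fit(self, y_cn, y_obs, X):
            nz = y_cn > 0
            self.zero_mean_ = 0.0 if self.flat0 else y_obs[~nz].mean()
            self.pipeline.fit(X[nz], y_obs[nz])
            return self

        def predict(self, y_cn, X):
            out = np.full(len(y_cn), self.zero_mean_)
            nz = y_cn > 0
            out[nz] = self.pipeline.predict(X[nz])
            return out

    y_cn = np.array([0.0, 0.0, 1.0, 2.0, 3.0])   # uncorrected runoff (toy values)
    y_obs = np.array([0.1, 0.3, 0.9, 2.2, 2.8])  # target runoff (toy values)
    X = y_cn.reshape(-1, 1)
    m = SplitCorrector(LinearRegression()).fit(y_cn, y_obs, X)
    print(m.predict(y_cn, X).round(2))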
This alternative approach to handling hyper-parameter values leads to a much longer list of models to select from. In comparison to hyper-parameter tuning with nested cross-validation, this approach is less computationally intensive at the time of fitting the models and more computationally intensive at the time of testing the models. In the OLS linear regression and regularized linear regression models presented in the results below, we created separate pipelines for each maximum polynomial degree from one to five. We used nested cross-validation to choose hyper-parameters for regularization strength in each pipeline, and this particular division balances the computational efficiency of the Scikit-Learn cross-validated regularization estimators (e.g., LassoCV) and the flexibility of separate models for each maximum polynomial degree.

While double cross-validation can produce an approximately unbiased estimate of prediction error, choosing from many models the one that seems to have the best prediction error can potentially lead to an optimistic estimate of prediction error. Because we have eighteen years of data in our runoff database, we developed and used in the results below the option of using a separate validation set for reporting the final model accuracy. As discussed in the Introduction, we use the term validation to describe the final assessment of the accuracy of the selected model. Once the cross-validation assessment is complete for each pipeline, all results are compared and the software selects, for each location in the geographic modeling level, the pipeline with the best average leave-one-catchment-out cross-validation NSE over the test data (a minimal sketch of this selection rule follows below). The uncorrected NDVI-CN runoff series is also considered as a candidate in the model selection process. There is no temporal or spatial cross-validation necessary for the NDVI-CN series due to the lack of a correction model, but to ensure comparability, accuracy is assessed on the same chronological split of test data. After model selection and model validation are complete, the statistical pipelines associated with each selected model are refit using all of the data in the runoff database to obtain a final model for production use. The resulting model has not been validated in a strict sense, but the modeling approach, as implemented in the statistical pipeline, has been validated. Additionally, due to the larger training dataset, the refit model is likely to have less bias and variance than the sub-models estimated during the cross-validation experiment that informs our accuracy assessment presented below.

NDVI-CN runoff

To better understand geographic variability in the accuracy of the uncorrected NDVI-CN runoff time series relative to the NLDAS runoff time series, we calculated NSE and r_p for each catchment. Then we grouped the sampled catchments by physiographic section and averaged each accuracy measure within each section. The resulting maps can be seen in Fig. 3. Because we are not interested in physiographic sections where the model performs worse than a simple average, we censored locations with negative average accuracy metrics when shading the maps. This leaves more room in the map's polygon fill gradient to facilitate interpretation for physiographic sections where the simulation has some credibility (i.e., NSE > 0). We also use identical scaling of the fill gradient to the accuracy metrics across all maps in this paper to facilitate comparisons among figures.
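Before turning to the correction-model results, the selection rule referenced above reduces to an argmax over candidate pipelines within each section. A minimal sketch with placeholder scores, where the uncorrected series competes as a candidate on the same test split:

    # Per-section selection by best average leave-one-catchment-out test NSE;
    # the score values here are placeholders, not results from the paper.
    mean_test_nse = {
        "sec_A": {"uncorrected": -0.40, "lin-reg-1": 0.62, "GBR": 0.58},
        "sec_B": {"uncorrected": 0.31, "lin-reg-3": 0.29, "GBR": 0.27},
    }
    selected = {sec: max(scores, key=scores.get)
                for sec, scores in mean_test_nse.items()}
    print(selected)  # {'sec_A': 'lin-reg-1', 'sec_B': 'uncorrected'}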
Notably, the GLDAS accuracy maps in Appendix 1 share their own, distinct scaling. Fig. 3 shows overwhelmingly higher values for the averaged r_p relative to the averaged NSE. The juxtaposition of high r_p and low NSE for the same simulated and observed runoff values can be explained by at least two possibilities:

1. Physiographic sections contain catchments that perform very differently across the models estimated during the leave-one-catchment-out cross-validation experiment. One large, negative NSE value for a single catchment in a section can dominate the average value of NSE. For r_p, a poor prediction cannot be below minus one, so a single poorly performing catchment cannot dominate the average r_p.

2. The NDVI-CN runoff simulations correlate with the NLDAS runoff values similarly across catchments in a physiographic section, but the NDVI-CN values suffer from a scaling problem. This explanation hints at the possibility of correcting this scaling problem to obtain precise automated CN-generated runoff predictions.

CN runoff correction modeling

For this paper we estimate correction models using physiographic division as the geographic modeling scope and physiographic section as the geographic modeling level. A visual comparison of the validation average NSE scores of the leave-one-catchment-out cross-validation assessment for each pipeline is presented in Fig. 4. In this figure, physiographic divisions are arranged in order of decreasing NSE (averaged across sections) and sections are arranged in order of decreasing NSE (averaged across catchments). Each point for each statistical estimator is the validation NSE of the best performing model of that type, where selection is based on the test data, not the validation data. The numbering of the sorted physiographic divisions can be found in Table 2, along with the validation NSE and model details for the model that scored highest on the test data. In Fig. 4, one of the more remarkable patterns is that there appears to be no relationship between the uncorrected NSE and the corrected NSE. This indicates that NDVI-CN generated runoff time series diverge from the NLDAS runoff time series quite differently across sections, even in the same division. Fig. 4 is also useful for assessing the relative strengths of the different statistical techniques. From a machine learning perspective, it is interesting that the regularized regression methods are frequently dominated by the OLS linear regression models. A finer-spaced grid of regularization hyper-parameters that includes smaller values for regularization strength may lead to improved performance. However, regularization is intended to reduce over-fitting, and the large number of observations and relatively few parameters in the models may prevent over-fitting without regularization penalties. Fig. 5 shows the validation scores for the selected estimator for each physiographic section. The NSE results provide the best indication of the likely accuracy of future runoff simulations from the models considered in this analysis. In this map, r_p has visibly increased relative to the same measure for uncorrected NDVI-CN runoff from Fig. 3, suggesting accuracy improvements come partially from the estimator learning the NLDAS series beyond just learning how to rescale the uncorrected NDVI-CN runoff values.
The r_p values can rise when a model with a non-linear transformation (including higher-than-first-degree polynomials) is selected, or because of the improvement in correlation from the correction for NDVI-CN non-event predictions of zero. By considering the patterns in r_p in Fig. 6 relative to Fig. 3 (both of which benefit from the rescaling inherent to r_p), the improvement from the transformation of nonzero runoff values can be distinguished from improvements that come from correcting the non-event values from zero to the mean of the zero-runoff days in the training data. The similarity in r_p across the correction models suggests a substantial bulk of the improvement in r_p is due to the non-event, zero runoff corrections. By comparing the NSE scores for the best correction models in Fig. 5 with the NSE scores from the first-order linear regression models in Fig. 6 and the original NSE scores for the uncorrected NDVI-CN runoff series in Fig. 3, we can see that the bulk of the improvement in accuracy in the correction models is attainable with a simple linear rescaling of the nonzero, event values and a simple shift of the non-event, zero values.

It is useful to compare the values of r_p in Fig. 3 to the corrected NSE values shown in Fig. 6. The r_p benefits from an in-sample transformation to account for differences in the mean of each series, an inherent part of the r_p metric. In contrast, the coefficients from the first-order linear regression correction models also implement a linear transformation, but from out-of-sample training data (relative to the testing and validation splits over which the metrics are calculated). The r_p is also not squared like NSE, but otherwise the measures are similar, and comparing them indicates how well a simple linear correction can generalize to leverage the available information to correct linear scaling problems for future predictions.

From visual inspection of the accuracy maps, it appears physiographic sections with more snowfall or less precipitation tend to perform poorly across the various runoff simulations we conducted. Because none of the models we developed for this paper use information about time for training or prediction, simple augmentations like monthly or bi-weekly time dummy variables may increase runoff accuracy, particularly in snowy areas with consistent annual runoff patterns.

Fig. 7 presents one year of runoff for the physiographic section in each division with the highest corrected validation NSE. Each of these sections is the first section in each division in Table 2 and in Fig. 4. Each pane includes the following three runoff series: uncorrected NDVI-CN generated, best correction model generated, and NLDAS generated. For all runoff time series plots in this paper, the runoff values on the vertical axes are transformed by taking the natural logarithm of one plus runoff. The logarithmic transformation makes it easier to see patterns at both low and high values of runoff, and adding 1 prior to the logarithmic transformation keeps runoff values of zero at zero. Fig. 7 is useful for developing an understanding of the overall behavior of the runoff series across the diverse physiographic divisions. The impact of poor event detection is particularly visible in the bottom two series, where the nonzero correction model predictions are markedly above zero. It is also readily apparent from several panes that the correction models reduce the frequency of dramatic runoff over-predictions generated by NDVI-CN.
To better distinguish runoff performance during NDVI-CN nonzero runoff events, we created separate runoff plots spanning the validation temporal split for NDVI-CN events with nonzero uncorrected simulated runoff and NDVI-CN non-events with zero uncorrected simulated runoff. Fig. 8 shows only the event days when NDVI-CN predicts positive runoff. Across the top of each pane in the figure is an index that numbers the days of nonzero NDVI-CN runoff in each physiographic division. It is helpful to consider these same nonzero runoff values, but ordered by ascending observed runoff (i.e., NLDAS runoff), as illustrated in Fig. 9. Here the reader can see patterns of under- or over-prediction of the final corrected runoff models, which can be used to validate or invalidate the use of these models for various real-world decision-making applications. It is interesting to compare the runoff predictions for the Mississippi Alluvial Plain (MAP) and the Arkansas Valley (AV) (the top two panes in Fig. 9). As can be seen in Table 2, both sections have high NSE values, and the selected model for the MAP is a first-order linear regression, while the AV correction is a third-order linear regression. For the days in the AV with the highest runoff, the polynomial correction appears to help achieve a close fit, while for the MAP the corrected runoff values seem to have a downward bias when observed runoff is highest. The AV has nearly twice as many observations, which may be important for estimating higher-order polynomial terms with sufficient precision to enhance predictive accuracy over simpler models. As indicated by Table 2, the Plains Border section uses GBR as the selected correction estimator. A close examination of the differences between the uncorrected runoff values and corrected runoff values in Fig. 9 reveals several instances of non-monotonic transformations, where NDVI-CN runoff falls and corrected runoff falls but then rises. This can be seen around day 80, for example. In the same figure, the Lower Californian section, with relatively few NDVI-CN nonzero event days, shows a marked improvement in accuracy at high runoff values while reducing the variability of NDVI-CN runoff. This last pattern, a reduction in corrected runoff variability relative to NDVI-CN variability, is the most discernible feature of the first five panes in Fig. 9.

To better understand the days when NDVI-CN predicts zero runoff, we also developed Fig. 10, showing the NDVI-CN zero runoff days in the validation split. It is important to note that the vertical scale on these graphs varies widely. The NDVI-CN method fails to detect substantial runoff events in a seasonal pattern in NLDAS runoff in the Superior Upland and Northern Rocky Mountains physiographic sections. There also are likely statistically significant seasonal patterns in the NDVI-CN zero runoff days that could be addressed by adding additional complexity to the zero-runoff correction models.

While the validation and estimation framework we used for developing the NDVI-CN correction models is complex, the underlying models are relatively simple because they lack an awareness of time. Broadening the information set available for prediction in both the zero-runoff and nonzero-runoff models to include past time periods would potentially overcome limitations associated with the event-based nature of NDVI-CN. Including precipitation and lagged precipitation similarly would likely provide opportunities for increasing model skill.
Variables for seasonality or more sophisticated time series approaches such as wavelets would also likely increase model accuracy, particularly for locations with substantial snow melt and accumulation. However, in the context of the curve number methodology, a simple and effective linear correction is particularly appealing. In these results we considered only a narrow range of the possibilities for grouping the data, by selecting a broad geographic modeling scope (physiographic division) and a narrow geographic modeling level (physiographic section). At the same time, we used a short list of explanatory variables, so the structure of the matrix of regressors is block diagonal, and thus relatively little information is shared between physiographic sections in each physiographic division for use by the statistical estimators. More complex structures such as overlapping dummy variables (Lim and Hastie 2015) and geographic regressors such as those in StreamCat may help identify stronger patterns in the data. A finer selection for the geographic modeling scope would also potentially improve model accuracy by reducing the tendency to lump together physiographic sections, or more generally slices, with contrasting snowfall/melt patterns.

Conclusions

By developing and applying a carefully designed model estimation, selection, and accuracy assessment framework, we have developed correction models to enhance the NDVI-CN rainfall-runoff model. The result of our accuracy assessment is a set of validation NSE values that can be used by practitioners who need runoff time series estimates to appropriately curate their data sources and quantify sources of error in downstream modeling applications. Because the curve number approach to runoff modeling is one of the simplest and least data-intensive approaches, it is fascinating that runoff estimated using a somewhat inflexible, automated approach to quantifying hydrologic condition (i.e., NDVI-CN) has such high linear correlation with an ensemble of state-of-the-art LSMs. Further, for much of the country, this relationship is stable and can be leveraged into simple first-order linear regression correction models with skillful predictions, as judged by NSE and illustrated in Fig. 6.

During this research, we became aware of the importance of including a simple metric like r_p to contrast with more stringent accuracy measures like NSE or KGE. While time series plots of simulated and observed runoff can help the analyst spot information patterns that can be leveraged to build correction models, it is useful to have a metric that quantifies these patterns. Because r_p, R^2, and NSE have so much in common, there may be a need for a generalized version of r_p in the same manner that KGE generalizes NSE. Non-linear correlation measures like Spearman's rank order correlation coefficient may also be useful for spotting non-linear patterns in the data.

Supplementary Material

Refer to Web version on PubMed Central for supplementary material.

Fig. 1. A diagram of the chronological splitting of data for model selection and validation.
Fig. 2. A diagram of the internal nested leave-one-catchment-out cross-validation model assessment.
Fig. 7. One year of NLDAS, NDVI-CN, and best correction generated runoff for the physiographic section in each division with the highest corrected validation NSE.
Fig. 8. Nonzero CN event days during the validation time split for NLDAS, NDVI-CN, and best correction generated runoff for the physiographic section in each division with the highest corrected validation NSE.
Fig. 9. Nonzero CN event days during the validation time split for NLDAS, NDVI-CN, and best correction generated runoff, sorted by NLDAS runoff, for the physiographic section in each division with the highest corrected validation NSE.
Fig. 10. Zero runoff, CN non-event days during the validation time split for NLDAS, NDVI-CN, and best correction generated runoff for the physiographic section in each division with the highest corrected validation NSE.

Table 2. Estimator selection and validation score by physiographic section.
a For linear models, the number following the estimator's name is the maximum polynomial degree used for transforming uncorrected runoff for training and prediction.
b For GBR, the trailing numbers indicate the following hyper-parameter values: the number of estimators in the ensemble, the boosting learning rate, the fraction of the training sample to use for stochastic gradient descent, and the maximum tree depth.
c The suffix FLAT0 indicates that a model predicts zero runoff for NDVI-CN zero runoff days instead of using the mean of observed runoff from zero runoff days in the training data.
E-cardiac patch to sense and repair infarcted myocardium

Conductive cardiac patches can rebuild the electroactive microenvironment for the infarcted myocardium, but their repair effects benefit from carried seed cells or drugs. The key to success is the effective integration of electrical stimulation with the microenvironment created by conductive cardiac patches. Besides, given concerns over the high re-admission ratio of heart patients, a remote medicine device will underpin successful repair. Herein, we report a miniature self-powered biomimetic trinity triboelectric nanogenerator with a unique double-spacer structure that unifies energy harvesting, therapeutics, and diagnosis in one cardiac patch. Trinity triboelectric nanogenerator conductive cardiac patches improve the electroactivity of the infarcted heart and can also wirelessly transmit electrocardiosignals to a mobile device for diagnosis. RNA sequencing analysis from rat hearts reveals that this trinity cardiac patch mainly regulates cardiac muscle contraction-, energy metabolism-, and vascular regulation-related mRNA expression in vivo. The research is spawning a device that truly integrates the electrical stimulation of a functional heart patch and a self-powered e-care remote diagnostic sensor.

Two challenges remain regarding the extensive application of CCPs: (1) bioagent-free CCPs are inefficient at enhancing electroactivity and restoring cardiac function, as current CCPs are considered solely as auxiliary carriers for cells and drugs 7; (2) real-time feedback from CCP-treated MI is desired, whereas integrating electrocardiosignal monitoring with effective CCP treatment remains highly demanding.

Inspired by the electroactive property of cardiac tissue, electrical stimulation (ES) is an agreeable approach to induce CM maturation 8,9, and it has even been used to reduce the cardiac ischemic size after ischemia-reperfusion injury via an invasive electrode in the rat ventricular wall 10. Furthermore, the synergy of ES and conductive scaffolds can significantly improve cell-cell coupling and synchronous contraction of CMs, superior to the conductive scaffold itself 7. Recently, avoiding the traditional battery-powered stimulation device, the complicated operation, and the invasive implanted electrode, an innovative electrical generator called a triboelectric nanogenerator (TENG) has been deployed in cardiovascular system health care (Supplementary Data 1). TENG, serving as a green and infinite source of electricity, provides either the power supply for biomedical devices or therapeutic electrodes that generate electrical stimuli. TENG-powered interdigital electrodes can promote the maturation of neonatal CMs, as well as increase and unify the beating rate of CMs 11,12. Implantable TENG (I-TENG)-powered cardiac pacemakers have successfully corrected arrhythmia in large animal models 13-15. On the other hand, the electrical output signals of TENGs, including open-circuit voltage, short-circuit current, and frequencies, are highly sensitive to mechanical motions and other stimuli, making them excellent candidates for a miniaturized cardiac monitoring system. I-TENGs have thus been transplanted into hearts for the detection of heart rate 16,17 and endocardial pressure 18. Though I-TENGs hold great promise in the therapeutics and diagnosis of cardiac systems, I-TENGs that achieve MI repair and diagnosis simultaneously have yet to be developed. In addition, the therapeutic electrodes for TENG-involved cardiac healing systems are
typically constructed of inert metals such as gold. These electrode materials are stiffer than the myocardium by several orders of magnitude, resulting in significant stiffness mismatch 19 . Moreover, sophisticated surface modification strategies are necessary to improve the effective contact area of the dielectric layers and the electrodes of the TENG 20 , which is not scalable. Accordingly, as a patch treatment and diagnosis sensor for MI, a scalable miniature trinity (3 functions in 1 device) TENG (TRI-TENG) CCP, encompassing the functions of a CCP, self-power generation for non-invasive in situ electrical stimulation therapy, and real-time electrocardio monitoring, is called for but absent.

For MI treatment and diagnosis, in the design of our I-TENG CCP, a polydopamine (PDA)-modified reduced graphene oxide (rGO) membrane is employed as a substitute for the metallic electrode. Our TRI-TENG CCP (TCP) adopts a unique double-spacer design with two spacers symmetric about a PDA-rGO membrane electrode. The first spacer sits on the myocardium, incorporating it as one component in the TRI-TENG. Thus, the PDA-rGO electrode raised by the first spacer works as a triboelectric electrode that generates triboelectric charges and simultaneously as the therapeutic electrode that builds an electric field on the myocardium, obviating the requirement for an additional therapeutic electrode (Fig. 1A). The second spacer, positioned on top of the PDA-rGO electrode, facilitates the contact and separation of the PDA-rGO with another triboelectric layer that exhibits a higher ability to generate triboelectric charges. Owing to electrostatic induction, the electrical potential built between the myocardium and the PDA-rGO electrode is dictated by the higher electrical potential built between the PDA-rGO electrode and the triboelectric layer. We utilize mold casting to bestow the polyvinylidene fluoride (PVDF) triboelectric layer with a biomimetic leaf vein structure (Fig. 1A). The leaf vein structure and the PDA coating on the rGO electrode are both nature-inspired surface structures that can enhance the triboelectric effect by cost-effectively increasing the roughness and effective contact areas. The unity of the cardio patch, TENG-powered electrode, and sensor as three facets in one device is exhibited by our TCP. In detail, the TCP has three functions: (i) it serves as the therapeutic electrode, which conveys electrical stimuli to infarcted tissues and facilitates electric signal transport between normal and infarcted tissues, (ii) it converts biomechanical energy into electric energy, and (iii) it serves as a potential wireless diagnosis device (Fig. 1B). With the combined properties of conductivity and electrical generation, we hypothesize that the TCP produces a remarkable reparative effect on the infarcted heart in minipig MI models through strengthened electroactivity reconstruction (Fig. 1C), surpassing the therapeutic efficacy of most recently reported approaches for MI treatment in minipigs (Supplementary Data 2).

Assembly and characterization of TRI-TENG

As is illustrated in Fig.
1A, our TRI-TENG mainly comprised an elastomer bottom package, an rGO electrode, a PVDF triboelectric layer with leaf vein structure, an Ecoflex 00-50 spacer, and a PDA-rGO electrode. The biocompatible elastomer Ecoflex 00-50 was used as the spacer and package to further augment the triboelectric effect and avoid leakage. According to literature reports, the weight loss of Ecoflex is less than 5% within 60 days and less than 10% within 200 days 21,22 . Ecoflex demonstrates minimal degradation over ~12 weeks, thereby ensuring the stability of TENG functionality. The TRI-TENG (8 mm in diameter, Supplementary Fig. 1A) underwent cyclic contact and separation with the contraction and relaxation of the heart, resulting in charges with opposite signs on the PDA-rGO electrode and the surface of the epicardium. Thus, the PDA-rGO patch membrane served a triple purpose, functioning as a conductive patch, as a triboelectric electrode for energy conversion, and as a therapeutic electrode for the application of electrical stimuli to the epicardium. Meanwhile, wireless sensing of the cardiac condition was achieved by connecting the rGO electrode to a Bluetooth-enabled device, facilitating communication with a smartphone application (Fig. 1B). To ensure compatibility with hearts possessing large surface areas, the TRI-TENG can be assembled in series; this array design was employed to optimize the fit to the heart and to scale the device up (Supplementary Fig. 1B).

The bottom package (indicated by "1"), the rGO electrode (indicated by "2"), the leaf vein-structured PVDF triboelectric layer (indicated by "3"), and the PDA-rGO electrode (indicated by "4") are well demonstrated in the cross-section SEM image of our TRI-TENG (Fig. 1A). The bottom package was fabricated through spin-coating and the following curing process. To prepare rGO electrodes and PDA-rGO electrodes, graphene oxide (GO) films were prepared in advance by drop casting GO aqueous solution on templates. During the evaporation process, GO sheets underwent self-assembly at the air/liquid interface and eventually formed a uniform GO film. The surface morphology of the formed GO film is shown in Supplementary Fig. 2, where each GO sheet can be identified clearly. The GO sheets were bumpy and loosely packed in the GO film, which is due to the distortion of the GO sheets caused by the presence of oxygen-containing functional groups (OCG) 23,24 . Energy dispersive spectroscopy (EDS) analysis was performed to obtain insight into the elemental composition. The C/O ratio of the GO film is 2.1 (Supplementary Fig. 3A). A Fourier-transform infrared (FTIR) spectrum was used to study the OCG of the GO film (Supplementary Fig. 4A). The OCG-related peaks found in GO include the peak at 3143 cm⁻¹ corresponding to the stretching vibration of C-OH in the hydroxyl group, the peak at 1718 cm⁻¹ assigned to the stretching vibration of C=O in COOH, and the peak at 1030 cm⁻¹ attributed to the stretching vibration of C-O-C in epoxide. The peak at 1617 cm⁻¹ is due to in-plane vibrations of sp²-hybridized C=C. These results are consistent with previous studies 25,26 . To prepare the rGO electrode, the GO film was subjected to thermal annealing at 300 °C. The rGO sheets were densely packed in the rGO electrode (Supplementary Fig. 2B), indicating the removal of OCG and restoration of the sp²-hybridized lattice structure 27,28 . The C/O ratio of the rGO electrode increased to 4.2 (Supplementary Fig.
3B), which is another piece of evidence for the removal of OCGs. In addition, all the OCG-related peaks disappeared in the FTIR spectrum of rGO, suggesting the successful reduction of GO. The sheet resistance of the rGO electrode was reduced from 756.93 ± 44.471 kΩ/sq to 0.420 ± 0.047 kΩ/sq (Supplementary Fig. 4B), further proving the reduction of GO. The rGO electrode was then subjected to PDA coating deposition. After the treatment, PDA granules aggregated and anchored on the surface of the rGO electrode (Supplementary Fig. 2C). The PDA coating increased the atomic percentage of oxygen and introduced nitrogen to the treated electrode (Supplementary Fig. 3C). In the FTIR spectrum of the PDA-rGO electrode, a broad band appeared at 3220 cm⁻¹, associated with the stretching vibration of N-H and O-H, and two new peaks appeared at 1508 cm⁻¹ and 1050 cm⁻¹, associated with the stretching vibration of C=N and C-N 29 , further proving the successful anchoring of polydopamine. The sheet resistance of the PDA-rGO electrode (0.885 ± 0.036 kΩ/sq) was slightly larger than that of the rGO electrode (Supplementary Fig. 4B). PVDF dissolved in DMF/acetone was drop cast on an Ecoflex mold to prepare the PVDF layer with leaf vein structure. The surface of the as-prepared PVDF layer possessed a delicate leaf vein structure, as shown in Supplementary Fig. 2D.

Fig. 1C | Schematic of TRI-TENG array assembly for matching the infarct size of the porcine heart and its application in a minipig MI model. The preparation of each layer structure of the TRI-TENG array adopts the same configuration as the TRI-TENG using the polylactic acid (PLA) mold, and every two neighboring rGO electrodes as well as PDA-rGO electrodes were electrically connected by an air-dried poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate)/methacrylated gelatin (PEDOT:PSS/GelMA) hydrogel. After 28 days of transplanting the TRI-TENG array into the infarcted heart, there was a significant improvement in cardiac function by approximately 14.7%, surpassing the therapeutic efficacy of most recently reported approaches for MI treatment in minipigs. We hypothesize that TRI-TENG conductive cardiac patches (CCPs) exert their therapeutic effects on infarcted hearts primarily by modulating the expression of mRNA related to cardiac muscle contraction, energy metabolism, and vascular regulation in vivo, as revealed by RNA sequencing analysis.

We detected the cell viability of neonatal rat CMs cultured on different substrates. As shown in Supplementary Fig. 5, the cell viabilities of CMs cultured on the non-conductive Ecoflex, on the conductive PDA-rGO electrode substrate in the PDA-rGO/Ecoflex (PDE) group, and on the self-powered triboelectric substrate (TRI-TENG) remained high during 7 days of culture. The CMs' survival rates in the PDE group and the TRI-TENG group were higher than 80% on days 3 and 7 of culture. The biocompatibility of these two biomaterials is similar to that of other existing implantable self-powered materials (Supplementary Table 1). In addition, the Young's modulus of the TRI-TENG was calculated to be 640.70 ± 71.07 kPa, aligning with most of the scaffolds used in MI management (Supplementary Fig. 6). These results suggest that the composition of TRI-TENG exhibits good biocompatibility, and its mechanical properties are well-suited for CMs and cardiac tissues.
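As a side note on the mechanical characterization above, the following is a minimal sketch (our own illustration, not the authors' analysis code) of how a modulus of the order of the reported ~640 kPa can be extracted from a uniaxial stress-strain record by fitting the slope of the low-strain quasi-linear region; the curve here is synthetic.

```python
# Illustrative sketch (not from the paper): estimate Young's modulus from a
# stress-strain curve by fitting the slope of the low-strain region.
import numpy as np

def youngs_modulus(strain: np.ndarray, stress_kpa: np.ndarray,
                   max_strain: float = 0.1) -> float:
    """Fit stress = E * strain + b over strain <= max_strain; return E in kPa."""
    mask = strain <= max_strain
    slope, _ = np.polyfit(strain[mask], stress_kpa[mask], deg=1)
    return slope

# Synthetic example: a soft elastomer-like response with mild stiffening.
strain = np.linspace(0.0, 0.3, 61)
stress = 640.0 * strain + 2000.0 * strain**3   # kPa, hypothetical data
print(f"Estimated modulus: {youngs_modulus(strain, stress):.1f} kPa")
```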
The performance of TRI-TENG as the energy convertor in vitro and in vivo

To harvest the mechanical energy produced by heart contraction and relaxation for electrical stimulus generation, the TRI-TENG adopted a contact-separation working mode. The working mechanism of the TRI-TENG is depicted in Fig. 1A. At the initial state, when the heart was contracted, there was no contact between the PDA-rGO electrode and the PVDF layer because of the Ecoflex spacer. As the heart started to relax, the TRI-TENG was stretched out, forcing simultaneous contact between the upper surface of the PDA-rGO electrode and the PVDF layer, as well as between the lower surface of the PDA-rGO electrode and the epicardium. According to Supplementary Fig. 7, the PVDF layer tends to gain electrons compared with the PDA-rGO electrode, and the PDA-rGO electrode tends to gain electrons compared with the myocardium. Thus, electrons can transfer from the upper surface of the PDA-rGO electrode to the PVDF layer during contact, leaving positive charges on the upper surface of the PDA-rGO electrode. Electrons are also injected from the myocardium into the bottom layer of the PDA-rGO electrode, resulting in negative charges at the bottom surface of the PDA-rGO electrode and positive charges in the epicardium. As a result of electrostatic induction, the triboelectric charges generated on the upper surface of the PDA-rGO electrode were equal to those generated on the bottom surface of the PDA-rGO electrode. The three layers were in immediate contact when the heart was fully relaxed. There was barely any distance between charges with opposite signs, resulting in no potential difference between any two oppositely charged surfaces. As the heart started to contract, the distance between the two oppositely charged surfaces gradually increased, and the electric potentials between the PDA-rGO electrode and the epicardium, as well as between the PDA-rGO electrode and the PVDF layer, started to establish. The potential differences reached their maximum values when the heart was fully contracted. The first spacer enables the PDA-rGO electrode not only to take part in the generation of triboelectric charges but also to work as a therapeutic electrode that builds an electric field on the myocardium. The second spacer amplifies the built electric field to the same strength as the one built between the PDA-rGO and the PVDF.

To function as a sensor, the output voltage between the rGO electrode of the TCP and the ground was tested by an electrometer or wireless sensing module (Supplementary Fig.
8). During heart contraction, positive charges are driven from the ground to the rGO electrode to balance the gradually increasing potential difference built, across the second spacer, between the top surface of the PDA-rGO electrode and the PVDF layer. The flow of positive charges ends when the heart is completely contracted. Once the layers are forced to approach each other during heart relaxation, the positive charges are repelled back to the ground. Thus, heart contraction and relaxation lead to the generation of an alternating-current voltage between the rGO electrode and the ground. Moreover, the electrical potential built, across the first spacer, between the bottom surface of the PDA-rGO electrode and the myocardium remains unaffected by the measuring event involving the second spacer. When the rGO electrode is connected to a wireless module, the TCP can operate as a wireless sensor, self-powered electrical stimulus generator, and conductor simultaneously. We designed a model study to prove that the sensing unit and the stimulation do not interfere with each other, regardless of the complexity of in vivo voltage measurement. As shown in Supplementary Fig. 9 and Supplementary Movie 1, the field potential on the sodium alginate (SA) hydrogel and the voltage output from the rGO electrode can be simultaneously measured by the electrocardiography (ECG) electrode and the multimeter when the SA hydrogel is subjected to cyclic compression. Moreover, the peak values of both the field potential and the voltage output remain stable. Thus, the sensing unit and the stimulation unit do not interfere with each other, regardless of the complexity of measuring the voltage output in vivo.

Nature-inspired surface structures were employed to improve the electrical output performance of the TCP. An electrometer was connected between the PDA-rGO electrode and the rGO electrode to test the performance of TENGs with different components at controlled pressure and loading rate. The leaf vein structure on the patterned PVDF layer (P-PVDF) resulted in a significant improvement in the open-circuit voltage, short-circuit current, and transferred charges (Fig. 2A). As shown in Supplementary Fig. 2C and Supplementary Fig. 10, the PDA coating aggregated on the rGO surface as granules, which significantly increases the roughness of the PDA-rGO electrode. The charge density of the triboelectric charges generated on the surface of the PDA-rGO electrode is higher than that on the rGO electrode. Consequently, the PDA coating on the PDA-rGO electrode further increased the open-circuit voltage, short-circuit current, and transferred charges to 21.98 mV, 2.23 nA, and 0.22 nC, respectively. To investigate the output power of the TENGs, resistors with resistance spanning from 1 kΩ to 1 GΩ were connected as external loads. The output voltages and currents of all the TENGs remained stable when the external load was less than 1 MΩ (Fig. 2B). When the external load exceeded 1 MΩ, the voltages of all TENGs rose dramatically with increasing resistance of the external load, while the currents of all TENGs dropped noticeably due to Ohmic loss. Consequently, the instantaneous output powers of all the TENGs reached their maximum values at 10 MΩ. The instantaneous output power of the TENGs was also increased by the nature-inspired leaf vein structure and the PDA coating. The TENG with both the P-PVDF and the PDA-rGO electrode reached the highest maximum instantaneous output power of 0.16 µW/m².
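The load-sweep behavior above follows from simple impedance matching. Below is a minimal sketch, assuming the TENG can be approximated as a Thevenin source (open-circuit voltage in series with a large internal impedance R_int, set here to a hypothetical 10 MΩ); it reproduces the qualitative trend of Fig. 2B, with the voltage rising and the current falling as the load grows, and the instantaneous power peaking where the external load equals the internal impedance.

```python
# Minimal sketch under a Thevenin-source assumption (not the authors' model):
# sweep the external load and locate the maximum instantaneous power.
import numpy as np

V_OC = 22e-3          # V, of the order of the reported 21.98 mV
R_INT = 10e6          # Ohm, hypothetical internal impedance

r_load = np.logspace(3, 9, 200)        # 1 kOhm .. 1 GOhm sweep
i = V_OC / (R_INT + r_load)            # series-circuit current
v_load = i * r_load                    # voltage across the external load
p = v_load * i                         # instantaneous power delivered to the load

best = np.argmax(p)
print(f"Power peaks at R_load = {r_load[best]:.2e} Ohm, P = {p[best]:.2e} W")
```

This is the standard maximum-power-transfer result: the power peaks at R_load = R_int, which is consistent with the reported optimum near 10 MΩ.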
The contraction and relaxation of the heart occasionally exhibit irregularity in strength and frequency. The amplitude, frequency, and waveform of the TENG output signals are highly dependent on external mechanical stimuli, a phenomenon that has been reported for other TENG sensors 30 . Thus, the voltage output of the TCP was evaluated under different strains, compression frequencies, and compression strengths. The amplitude of the output voltage increased as the compression frequency increased from 1 to 5 Hz (Fig. 2C). In addition, the frequency of the output voltage can serve as a representative measure of the frequency of the applied compression. The amplitude of the output voltage also increased with increasing applied pressure. The relationship between the relative change in the amplitude of the output voltage and pressure is demonstrated in Supplementary Fig. 11A; the response of the amplitude to pressure can be divided into two regions, namely the high-sensitivity Region I and the low-sensitivity Region II. In Region I, where the pressure is less than 5.9 kPa, the sensitivity of the TRI-TENG sensor was 6.74 mV/kPa (R² = 0.979). The sensitivity of the TRI-TENG sensor dropped to 2.54 mV/kPa (R² = 0.991) in the high-pressure region (Region II). The voltage amplitude of the TRI-TENG sensor depends on the change in the spatial distance between the P-PVDF layer and the PDA-rGO electrode, the speed of that change, and the triboelectric charge density 31 . In Region I, a pressure increase caused a substantial increase in the change of spatial distance; as a result, the voltage amplitude changed dramatically. At the turning point between Region I and Region II, the spatial distance was probably close to zero, and an increase in pressure might merely increase the contact area between the two layers, exerting little effect on the output voltage. The output voltage of the TRI-TENG sensor decreased with increasing applied strain (Supplementary Fig. 11B). Due to the positive Poisson's ratio of the elastomer spacer, the spatial distance between the two layers decreases proportionally with increasing strain. The output voltage of the TRI-TENG sensor exhibited a decrease in response to the applied strain, with a sensitivity of −2.65 mV/1% (R² = 0.970) (Supplementary Fig. 11B).

A single variation in the strain or pressure of mechanical stimuli results in a change in the output voltage, and the frequency of the output voltage represents the frequency of the mechanical stimuli. The TCP's potential as an activity monitoring sensor was therefore assessed. The TCP sensor was attached to different sites of the human body with tape for the measurement of different human activities. The motion of the throat involves small-strain vibrations occurring at a specific frequency. Thus, a particular movement of the throat produced a distinct change in spatial distance, resulting in a unique waveform of the output voltage (Fig. 2D and Supplementary Movie 2). Large-strain mechanical deformation, such as bending the index finger, can also be similarly monitored by the TCP sensor (Fig.
2D and Supplementary Movie 3). The peak output voltage of the TENG increased with increasing bending angle. The increment in bending angle from 30° to 60° resulted in a more significant increase in peak output voltage than the increase from 60° to 90°. The change in the spatial distance may reach saturation after the bending angle exceeds 60°. Despite the variation in bending angles, finger bending followed the same movement regime. Thus, the output voltage exhibited a consistent waveform as the bending angle increased. The TCP fixed on the ankle can measure the level of human activity (Fig. 2D and Supplementary Movie 4). As the intensity of human activity increased, there was an increase in both the amplitude and the frequency of the output voltage. To further investigate the stability and repeatability of the TCP over long periods of cycling, we tested 1000 cycles. As shown in Supplementary Fig. 12, the amplitude of the output voltage remains relatively stable even after a large number of compression and relaxation cycles. This evidence showed that our TCP retains good performance under long periodic action, which suits the periodic movement of the heart.

To evaluate the performance of the TRI-TENG as a sensing device in vivo, a TRI-TENG array was transplanted between the apex cordis and the pericardium in a minipig, and the TRI-TENG's electrical output and the ECG signals were simultaneously recorded (Fig. 3A, B). As shown in Fig. 3C, the fluctuating frequency of the open-circuit voltage (V OC ) output from the TRI-TENG array was fully consistent with the heartbeat frequency derived from the ECG, and the R-R intervals in the ECG were equivalent to the peak-peak phases in the V OC wave. This result indicates that the voltage generated by the TRI-TENG is triggered by cardiac movement. The periodic contraction and relaxation of the heart prompt the friction layers of the TRI-TENG into contact and separation, which finally produces an electrical output through the TRI-TENG. Driven by the porcine beating activities, the mean maximum open-circuit voltage (V OC,max ) and the mean minimum open-circuit voltage (V OC,min ) produced from the TRI-TENG's output signals were 5.975 ± 1.438 mV and −6.512 ± 1.665 mV (Fig. 3C, H). Furthermore, the TRI-TENG was transplanted onto the Langendorff-perfused isolated rat heart to directly observe the corresponding V OC changes output from the TRI-TENG along with the change in the heart's contraction force (Fig. 3D, E and Supplementary Fig. 13A, B). In sinus rhythm, the mean V OC,max , V OC,min , and open-circuit voltage difference (ΔV OC ) from the Langendorff-perfused rat heart were 0.664 ± 0.109 mV, −0.546 ± 0.140 mV, and 1.210 ± 0.233 mV, respectively, which are just in the range of the non-excitatory electrical stimulation applied to the rat heart (0.1-1 mV) (Supplementary Fig. 13A) 10 . When the heart rate was elevated to 420 b.p.m. by 7 Hz pacing, the V OC,max , V OC,min , and ΔV OC produced by the TRI-TENG all increased correspondingly compared with those in sinus rhythm (Supplementary Fig. 13A). However, when the left ventricle (LV) was ischemically injured by ligation of the left anterior descending coronary artery, the related V OC values decreased, whether in sinus rhythm or under the loaded 7 Hz pulse (Supplementary Fig. 13B). In addition, we investigated the effect of reduced cardiac contractile motion in the ischemic region on the TRI-TENG's function. As shown in Supplementary Fig.
14, a significant improvement in the field potential amplitudes of the heart after TRI-TENG transplantation under ischemic conditions and the ΔV OC values of the TRI-TENG driven by the ischemic myocardium can be detected, indicating that the reduced cardiac contractile motion can still induce electrical energy generation by the TRI-TENG.

Wireless transmission is essential for implantable sensors for post-treatment monitoring. Given that the TRI-TENG can sensitively detect the voltage changes of normal and ischemic hearts, we utilized an external device (Pokit meter) to detect and wirelessly transmit the in vivo output voltage signal from TRI-TENGs driven by rat hearts under different states, and the wirelessly transmitted signals were received by a mobile phone for real-time analysis (Fig. 3F and Supplementary Movie 5). The output signals obtained by wireless transmission displayed the same variation trend as those obtained by wired measurement, and the output voltage values were significantly decreased in the ischemic heart state compared with the normal heart state (Fig. 3G, I, J and Supplementary Fig. 13C). We also investigated the influence of breathing movement on the sensing function of the TRI-TENG. As shown in Supplementary Fig. 15, the electrical output capacity of the TRI-TENG is stable under both open-chest and closed-chest conditions. Collectively, these results suggest that this TRI-TENG was an effective energy convertor that can output electrical signals synchronized with the ECG by harvesting the heart's biomechanical energy. The cardiac contraction and relaxation activities result in the periodic separation and contact of the two triboelectric layers in the TRI-TENG 16 , which makes it act as a heart activity monitoring sensor. In addition, this TRI-TENG was sensitive to changes in cardiac contractility. During myocardial ischemia, the contractile force weakens, leading to a significant decrease in the output voltage value of the TRI-TENG, which allows precise monitoring of cardiac signals under pathological states. Here, the TRI-TENG can also connect to wireless devices and transmit signals to a mobile phone to enable real-time monitoring of cardiac electrical signals, which holds significant implications for the timely diagnosis of pathological cardiac conditions.

Effects of TRI-TENG on CM structure and maturation

Our previous studies suggest that the surface topography and conductivity of cardiac patches can remodel the cell phenotype and function of CMs in vitro, which may be an important element for CPs to activate endogenous repair after transplantation. In this study, the cell shape and the cardiac-specific protein expression of neonatal rat CMs cultured on Ecoflex, PDE, and TRI-TENG were examined, respectively. On day 3 of culture, more CMs and a larger CM spreading area were observed in the PDE and TRI-TENG groups through F-actin staining, compared with the Ecoflex group (Supplementary Fig. 16A, D, E). On day 7 of culture, denser and more elongated myofibrils dotted with massive parallel-aligned actin, which are closely related to CM differentiation and maturation, were detected in PDE and TRI-TENG compared with the Ecoflex group (Supplementary Fig. 16A, D, E). SEM images also showed that confluent CM-formed myocardial-like structures were present on the PDE and TRI-TENG, and direct contact and intercellular communication were clearly visible (Supplementary Fig.
16B). Immunostaining of the cardiac-specific markers sarcomeric α-actinin and CX43 in CMs in different groups displayed that more mature sarcomeric structures and higher CX43 expression were located in PDE and TRI-TENG compared with Ecoflex on days 3 and 7 of culture (Supplementary Fig. 16C). Quantitative analyses showed that the highest α-actinin and CX43 coverage areas were presented in the TRI-TENG group (Supplementary Fig. 16F, G). CX43, the main component of the gap junction, plays a vital role in transmitting electrical excitation signals among CMs 32 . Accordingly, these results indicate that the PDE CCP benefits CMs' maturation and synchronous electrical excitation. PDA-rGO had excellent water stability and biocompatibility; thus, the PDA-rGO electrode absorbed more proteins through hydrophobic interactions, electrostatic attraction, or π-π stacking, leading to a high CM density on it 33 . In addition, PDA-rGO can reduce the CMs' excitation threshold and accelerate electrical signal transmission among CMs 34 . Furthermore, the additional electrical stimulation in the TRI-TENG can facilitate fast maturation and functionalization of CMs.

TRI-TENG enhances the electroactivity of the infarcted heart in rat MI models

The effects of the TRI-TENG on the electrical properties of the injured myocardium were further studied. The infarcted myocardium had an inefficient movement ability because of the uncoupling of excitation-contraction, leading to decompensated hypertrophy and deteriorated cardiac function 35 . Enhanced electrical sensitivity and contractility enable the injured myocardium to regain movement vitality and avoid cardiac function deterioration 36 . As illustrated in Fig. 4A-C, ligation of the left anterior descending coronary artery and the resulting myocardial injury deadened the electrical sensitivity of the Langendorff-perfused rat heart, demonstrated by the increased pacing thresholds. Interestingly, the TRI-TENG's transplantation reduced the pacing thresholds of the rat heart; that is, lower stimulus voltage pulses were enough to inspire the whole heart's synchronous pacing, either on the TRI-TENG-transplanted normal heart or on the injured heart (Supplementary Tables 2 and 3). Next, we investigated whether the TRI-TENG's enhanced electrical sensitivity of the heart after transplantation is coupled with the heart's contractility (Fig. 4D-I). Stimulated by the same electrical pulse, LV contractility was elevated approximately twofold once the TRI-TENG was transplanted onto the rat LV compared with that before transplantation (Fig. 4F, G). After the TRI-TENG's transplantation on the infarcted heart in rat MI models for 4 weeks, the contractility of the infarcted LV was significantly increased compared with that in the MI group (Fig. 4H, I). Given the poorly conductive property of Ecoflex, its application on the normal heart or the injured heart had no influence on the heart's pacing thresholds and contractility (Fig. 4D-I and Supplementary Tables 2 and 3). As for the conductive PDE patch, its transplantation had no obvious effect on the electrical sensitivity of the normal heart or the ischemic heart and no obvious effect on the contractility of the normal myocardium (Fig. 4D-I and Supplementary Tables 2 and 3). After being transplanted on the rat's infarcted heart for 4 weeks, the PDE patch-treated LV contractility was higher than that in the MI group (Fig.
4H, I). These results indicated that the microcurrent produced by the TRI-TENG had an instant and sustained effect on the electrical excitation of the injured heart, which can be further coupled to the contraction of the myocardium, leading to increased contractility of the infarcted heart. The conductive patch, however, seemed to have a minimal impact on the instant electrical excitation and contractility of the heart. We suppose that the conductive patch transplantation elevated the contractility of the infarcted heart in an indirect way; that is, the structural and functional recovery of the infarcted heart elicited by the conductive patch's transplantation for 4 weeks results in the contractility promotion 37 .

In addition, electrical mapping and optical mapping were performed to probe the electrophysiological reconstruction of the infarcted heart after the patches' transplantation for 4 weeks. Epicardial electrical mapping, through collecting and analyzing the electrical activity of the LV free wall, generated electrical conduction velocity (CV) maps from the non-infarcted myocardium to the infarcted myocardium 38 . As shown in Fig. 4F, the maps of the MI group and the Ecoflex group were characterized by inhomogeneous conduction, and their electrical propagation was delayed. The PDE or TRI-TENG transplantation significantly accelerated the excitation propagation between healthy and infarcted myocardium, and the TRI-TENG transplantation achieved the highest CV among the transplantation groups (Fig. 5A, D). The surface ECG traces in different groups were recorded simultaneously with the electrical mapping operation (Fig. 5B). In agreement with the CV maps, the broadened (prolonged) QRS durations in the ECG recordings reflected the abnormal ventricular conduction in the MI and Ecoflex groups (Fig. 5B, E) 39 . The PDE or TRI-TENG transplantation recovered QRS durations, and it is encouraging that the QRS durations produced by the TRI-TENG transplantation were close to those in the sham group (Fig. 5B, E). The field potentials in the remote, border, and scar regions of the heart in different groups were also measured by the electrical mapping system. According to Fig. 5C, the Ecoflex transplantation had no obvious effect on the global field potential of the infarcted heart. Both the border/remote and the scar/remote field potential amplitude ratios in the PDE-transplanted heart were increased compared with those in the MI heart, though the difference in the border/remote field potential amplitude ratio between the PDE group and the MI group was not significant (Fig. 5C, F, G). The TRI-TENG-treated infarcted hearts had the highest border/remote and scar/remote field potential amplitude ratios among the three transplantation groups (Fig. 5C, F, G). These analytical results suggest that, after transplantation for 4 weeks, both conductivity alone and conductivity plus the self-powered nanogenerator can improve the regional electrical activity and strengthen the global electrical impulse propagation of the infarcted heart, and the TRI-TENG transplantation achieved the optimum effects.
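For readers unfamiliar with CV maps, the sketch below illustrates one standard way (not necessarily the authors' pipeline) to estimate conduction velocity from an epicardial activation-time map recorded by a regular electrode array: the CV magnitude is the reciprocal of the activation-time gradient. The electrode pitch and the activation map here are synthetic.

```python
# Hedged sketch: gradient-based conduction velocity estimate from an
# activation-time map T(x, y) on a regular grid; CV magnitude = 1/|grad T|.
import numpy as np

def conduction_velocity(act_ms: np.ndarray, pitch_mm: float) -> np.ndarray:
    """Return a per-pixel CV map (mm/ms) from activation times (ms)."""
    dt_dy, dt_dx = np.gradient(act_ms, pitch_mm)   # ms/mm along each axis
    grad_mag = np.hypot(dt_dx, dt_dy)              # |grad T|
    return 1.0 / np.maximum(grad_mag, 1e-9)        # guard against division by zero

# Synthetic 8x8 map: a planar wave crossing the array at 0.5 mm/ms.
pitch = 1.0                                        # mm between electrodes, assumed
x = np.arange(8) * pitch
act = np.tile(x / 0.5, (8, 1))                     # activation time in ms
cv = conduction_velocity(act, pitch)
print(f"Median CV: {np.median(cv):.2f} mm/ms")     # ~0.50 mm/ms, as constructed
```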
Subsequently, optical mapping was conducted to assess the transmembrane action potential (V m ) and the Ca²⁺ handling dynamics in Langendorff-perfused isolated hearts in different groups. The electrical propagation and calcium transient fluorescent signals were captured from hearts stained with RH237 (a voltage-sensitive dye) and Rhod-2 AM (a Ca²⁺ dye) during a 5 Hz point stimulation (Fig. 5H). As predicted, reflected by the representative maps of action potential (AP) propagation and Ca²⁺ activity, MI perturbed the AP propagation and caused Ca²⁺ mishandling in the left ventricular myocardium (Fig. 5I, J). However, TRI-TENG transplantation greatly benefited the normalization of AP propagation and Ca²⁺ dynamics in the infarcted left ventricle (Fig. 5I, J). Disordered electrical propagation and calcium transient patterns usually result in repolarization variations in the myocardium 4 . As indicated in Fig. 5K, L, MI prolonged the action potential duration at 90% repolarization (APD 90 ) and the calcium transient duration at 90% recovery (CaD 90 ). After TRI-TENG treatment of the infarcted heart for 4 weeks, the disorganized electrophysiological function of the infarcted heart recovered, and the prolonged APD 90 and CaD 90 were significantly decreased. Accordingly, these results suggest that TRI-TENG transplantation can accelerate the repolarization of the injured ventricular myocardium and reduce the risk of calcium-dependent gap junction uncoupling and malignant arrhythmia in the infarcted heart.

Fig. 3 | The application potential of the TRI-TENG as a real-time electrocardio sensor in vivo. A Schematic of the electrical signals output from the TRI-TENG in swine and the electrocardiography (ECG) signals recorded simultaneously by the signal acquisition system. B Representative macroscopic images of the TRI-TENG array placed between the apex cordis and pericardium in a minipig, driven by the beating activities of the heart. The yellow line represents the diastolic cardiac contour and the green line represents the systolic cardiac contour. C V OC from the TRI-TENG and 2-lead ECG in minipigs. The upper wave represents V OC and the lower wave represents ECG. The number of marked peaks in V OC was in accordance with the marked QRS peaks in the ECG from the same 5-s period, and the marked peak-peak interval in V OC matched the R-R interval in the ECG, which showed the correlation between the voltage and ECG signals. D, E The V OC and the ECG simultaneously recorded from the TRI-TENG-transplanted Langendorff-perfused rat normal heart (D) and ischemically injured heart (E). The signals were displayed under sinus rhythm and 7 Hz stimulation pacing, respectively. The number of marked peaks was consistent in both the sinus rhythm and the 7 Hz-stimulated heart. F Schematic diagram illustrating the electrical output assessment of the TRI-TENG in rats as a self-sustaining wireless sensor. G Simultaneous recording of V OC and ECG from a TRI-TENG-transplanted rat heart under normal and ischemic states. H-J Statistical analyses of the maximum open-circuit voltage (V OC,max ), minimum open-circuit voltage (V OC,min ), and open-circuit voltage difference (ΔV OC ), respectively, from the minipig hearts (n = 3 independent experiments) (H), and from the in vivo hearts under the normal state (n = 5 independent experiments) (I) and under the ischemic state (n = 5 independent experiments) (J). The data were presented as mean ± SD.

TRI-TENG therapy for infarcted heart in rat and porcine MI models

The commendable performance of the TCP in enhancing electroactivity suggests a promising reparative effect for injured myocardial tissue. Accordingly, the TCP's repair effect for the infarcted heart was assessed in rat and minipig MI models at 4 weeks post-transplantation. We first verified that there was little effect on normal heart structure and function at 4 weeks after TRI-TENG transplantation (Supplementary Fig. 17). Masson's trichrome staining of cardiac sections in the rat experiments exhibited that the infarcted myocardium was almost completely replaced by scar tissue in the MI group, along with the most extensive infarct area and the thinnest left ventricular wall. The Ecoflex group exhibited negligible differences from the MI group in terms of infarct size and LV wall thickness, while the PDE and TRI-TENG groups demonstrated a pronounced effect on reducing fibrosis in the infarct region. Specifically, these two groups showed significantly decreased infarct size and increased LV wall thickness (Fig. 6A-C). Consistent with the Masson's staining results, significant expression of α-actinin and CX43 proteins was observed in the infarct region of the hearts in the PDE and TRI-TENG groups, while minimal expression was detected in the MI and Ecoflex groups (Fig. 6D and Supplementary Fig. 18B, C). Furthermore, dual immunostaining for vWF and α-SMA proteins in the infarct region from different groups revealed that the PDE or TRI-TENG transplantation elevated vWF + microvessels and vWF + /α-SMA + arterioles in the infarct regions, whereas Ecoflex transplantation had no effect on revascularization (Supplementary Fig. 18A, D, E). Notably, a higher positive expression density of α-actinin/CX43 and vWF/α-SMA proteins in the infarct region was observed in the TRI-TENG group compared with the other groups (Fig. 6D and Supplementary Fig. 18). Cardiac function assessments were carried out using echocardiography at 2 and 4 weeks after transplantation. The echocardiographic images revealed severe ventricular dilatation and stiff LV anterior wall activity in both the MI and Ecoflex groups. Conversely, the PDE group exhibited weakened pumping action, and the TRI-TENG-implanted heart demonstrated apparent wall motion (Fig.
6E and Supplementary Movie 6). The quantitative echocardiographic data showed an increase in the left ventricular internal diameter in diastole (LVIDd) and in systole (LVIDs) and a decrease in the fractional shortening (FS) and ejection fraction (EF), indicating cardiac function deterioration in the MI and Ecoflex groups (Fig. 6F). However, PDE and TRI-TENG transplantation improved cardiac function, as evidenced by the decrease in LVIDs and the increases in ΔFS% and ΔEF% (Fig. 6G), indicating an improvement in cardiac pumping ability and a reduction in compensatory cardiac chamber enlargement. Compared with the other groups, the TRI-TENG group exhibited the highest ΔFS% and ΔEF%, resulting in the best cardiac function recovery. Collectively, we have demonstrated that the TCP, which incorporates electrical stimulation and conductivity in one CCP, is superior to the traditional CCP with the single property of conductivity in repairing damaged myocardium and improving cardiac function.

Given that the structure and physiological function of porcine hearts are similar to those of humans 5 , we examined the TRI-TENG's treatment in minipig MI models after its transplantation for 4 weeks (Fig. 7A). The TRI-TENG array was developed to match the clinically relevant size of the Bama minipig heart (3 cm × 3 cm) 40 . At 4 weeks post-transplantation, the hearts were harvested for subsequent morphometric and histological analysis. The gross results indicated that the TRI-TENG array-transplanted hearts exhibited a smaller infarct size and a thicker anterior wall in comparison with the MI hearts (Fig. 7B and Supplementary Fig. 19). Masson's trichrome staining revealed that the MI group formed a significant amount of fibrous tissue in the infarct region, whereas transplantation with TRI-TENG arrays reduced myocardial fibrosis and promoted neonatal myocardium formation (Fig. 7C). Consistently, the immunofluorescence staining results demonstrated that hearts transplanted with the TRI-TENG featured more mature myocardial tissue, more abundant CX43 proteins, and higher densities of microvessels (vWF + cells) and arterioles (vWF + /α-SMA + cells) in the infarct region (Fig. 7D and Supplementary Fig. 20). Furthermore, ex vivo electrical signal propagation in fresh porcine hearts of the sham, MI, and TRI-TENG array groups, which were quickly removed and immediately immersed in Krebs-Henseleit solution (KH solution) to maintain activity, was evaluated by measuring the ECG using a signal acquisition system (Fig. 7E). Under the same electrical signal stimulation, the local field potential amplitude of the MI group was obviously attenuated. However, the TRI-TENG array group produced about a fivefold increase in local field potential amplitude compared with the MI group (Fig. 7F, G). The enhanced electroactivity can augment the motility of the injured myocardium, thereby contributing to the amelioration of cardiac function. Therefore, echocardiography was conducted prior to MI induction and again at 4 weeks post-operation in order to evaluate the cardiac function variation in different groups. Stiff LV anterior wall activity appeared in the MI group, whereas the TRI-TENG array-implanted heart displayed obvious LV anterior wall activities (Fig. 7H and Supplementary Movie 7). The results revealed that the FS and EF values derived from the short axis of the left ventricle were reduced in the MI group at 4 weeks after the operation, while they increased significantly in the TRI-TENG array group compared with the MI group (Fig.
7I-L). We conducted a comparison of studies implementing intervention measures, including cardiac patch transplantation, gene engineering techniques, and delivery of bioactive factors, in porcine cardiac repair over the past five years, and discovered that the improvement in cardiac function achieved by TRI-TENG array transplantation (an FS improvement of about 14.7%) surpassed the efficacy of most other interventions (Supplementary Data 2).

To evaluate the potential toxicity and inflammatory response of TRI-TENG array transplantation in minipigs, blood samples from the different groups were collected prior to MI induction, as well as at 2 and 4 weeks post-operation. In addition, vital organs including the lungs, spleens, livers, and kidneys from the different groups were harvested at 4 weeks post-operation. The results of the routine blood tests indicated that there was no obvious difference among the groups (Supplementary Data 3). In addition, the blood biochemistry results showed that the liver function- and kidney function-related indicators in the TRI-TENG array transplantation group had no significant difference compared with those in the sham group (Supplementary Data 4). Furthermore, the histological morphology of the vital organs showed no significant alterations at 4 weeks after transplantation with the TRI-TENG array, as evidenced by H&E staining (Supplementary Fig. 21). The in vivo inflammatory response to TRI-TENG array transplantation was detected by tracking the variation in levels of IL-10, IL-1β, IL-6, and TNF-α cytokines at different time points using ELISA. Research has demonstrated that the levels of IL-1β, IL-6, and TNF-α in the heart are persistently elevated after MI, and these elevated cytokine levels may exacerbate myocardial inflammation during the acute phase of MI 41,42 . Our findings indicate that the levels of IL-1β, IL-6, and TNF-α in the MI group were significantly elevated at 2 and 4 weeks post-operation compared with pre-operation. However, TRI-TENG array transplantation alleviated the MI-induced upregulation of these cytokines at 2 and 4 weeks. In addition, a remarkable increase in IL-10, which can relieve inflammation, was observed in TRI-TENG array-transplanted hearts at 4 weeks, whereas there was no significant variation in the IL-10 level in the MI group (Supplementary Fig. 22). Together, these results indicate that TRI-TENG array transplantation exhibits no apparent toxicity toward vital organs and can effectively suppress proinflammatory cytokines following MI.

Fig. 5 | Electrical mapping and optical mapping of the Langendorff-perfused hearts at week 4 after patch transplantation. A Schematic diagram of electrical mapping of Langendorff-perfused hearts in different groups at week 4 post-transplantation. The stimulating electrode was positioned inferior to the right atrial appendage, and the 64-channel electrode was placed at the border region. Representative epicardial activation maps in all groups are displayed. The dark lines demarcate the boundary between the non-infarcted and infarcted myocardium. IR means infarcted region. Red indicates the earliest activation, while blue represents the latest activation. The numbers on the heatmap scale correspond to the time of activation in milliseconds. B Representative ECG traces of different groups. C The field potential amplitude of the remote region, border region, and scar region in different groups at week 4 post-transplantation. D Conduction velocity of different groups calculated based on epicardial activation maps (n = 5 independent rats). E The QRS durations of different groups calculated based on the ECG (n = 5 independent rats). F, G Statistical analysis of the border/remote (F) and scar/remote (G) field potential amplitude ratios in different groups (n = 5 independent rats). H Schematic of a setup for dual optical mapping of Rhod-2 AM-reported Ca²⁺ transients and RH237-reported transmembrane voltage in Langendorff-perfused rat hearts. I, J Optical mapping images of action potential (AP) (I) and ventricular Ca²⁺ transient initiation (J) from the Langendorff-perfused hearts in different groups at week 4 post-transplantation. RV, right ventricle. LV, left ventricle. K, L Comparison of AP durations at 90% repolarization (APD 90 ) and calcium transient durations at 90% recovery (CaD 90 ) averaged over the optical mapping field of view in different groups (n = 3 independent rats). The data were presented as mean ± SD. Statistical significance in (D) was calculated using two-sided one-way ANOVA with Dunnett's post hoc test, and statistical significance in (E-G, K, L) was calculated using two-sided one-way ANOVA with LSD post hoc test.

Whole-transcriptome RNA sequencing analysis of gene-level changes in different regions of the infarcted heart

The TRI-TENG exhibits remarkable reparative effects on the infarcted hearts of rats and pigs, primarily attributed to its capacity for enhancing the electroactivity of the infarcted myocardium. However, further investigation is required to elucidate the potential mechanisms at the genetic level. After transplantation for 4 weeks, the rats' hearts from the sham, MI, Ecoflex, PDE, and TRI-TENG groups were harvested, and the tissues from the infarct region (IR) and border region (BR) were dissected from the same heart for RNA sequencing (RNA-seq) to evaluate the changes in gene expression level. Principal component analysis (PCA) revealed that samples from the IR or BR of both the Ecoflex and MI groups were tightly clustered, indicating minimal gene expression differences between Ecoflex and MI (Fig. 8A). In contrast, clear distinctions were observed in the same regions between PDE and MI or between TRI-TENG and MI, which points to their obvious gene expression differences. Besides, the pyramid diagrams and volcano plots depicting the number of differentially expressed genes (DEGs) in pairwise comparisons indicated that the TRI-TENG and PDE groups exhibited a high degree of difference from the MI group, while the Ecoflex group and the MI group shared more similar genetic perturbation trends (Fig. 8B, C, left in Supplementary Fig. 23A-F, and left in Supplementary Fig. 24A-F). Furthermore, the TCseq package was utilized to perform cluster analysis of the perturbed genes, which revealed two distinct clusters (cluster 1 and cluster 2), and hierarchical clustering heatmaps of DEGs were constructed based on the gene expression patterns of cluster 1 and cluster 2 (Fig.
8D-F). Hierarchical clustering heatmaps and trend diagrams revealed that genes in cluster 1 exhibited low expression in the sham group, while these genes were upregulated in the MI group compared with the sham group. With the different therapeutic measures, the expression of these genes showed a decreasing trend. Among them, TRI-TENG showed the largest downregulation in gene expression, converging toward the sham group. The genes in cluster 2, on the other hand, exhibited high expression levels in the sham group and were downregulated in the MI group compared with the sham group. With the different therapeutic methods, these genes displayed an increasing trend, and TRI-TENG showed the highest increase compared with the MI group, converging toward the sham group.

To further investigate the mechanism by which the TRI-TENG mediates the repair of the infarcted myocardium, gene enrichment pathway analysis of both the IR and the BR in different groups was performed. Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis of DEGs indicated that biological processes in the comparison of TRI-TENG vs MI were mainly enriched in energy metabolism-related pathways (citrate cycle, oxidative phosphorylation), Ca²⁺ regulation-related pathways (calcium signaling pathways, MAPK signaling pathways), cardiac contraction-related pathways (cardiac muscle contraction, cAMP signaling pathway), cellular adhesion and spreading-related pathways (gap junction), vascular regulation-related pathways (vascular smooth muscle contraction), inflammatory regulation-related pathways (inflammatory mediator regulation of TRP channels), and cell cycle regulation-related pathways (TGF-beta signaling pathway) (middle in Supplementary Fig. 23C and middle in Supplementary Fig. 24C). In the comparison of PDE vs MI, DEGs mainly appeared in Ca²⁺ regulation-related pathways (PI3K-Akt signaling pathway), cellular adhesion and spreading-related pathways (ECM-receptor interaction, focal adhesion), energy metabolism-related pathways (PPAR signaling pathway, fatty acid degradation, citrate cycle, protein digestion and absorption), amino acid biosynthesis-related pathways (propanoate metabolism), and cardiac contraction-related pathways (cardiac muscle contraction) (middle in Supplementary Fig. 23B and middle in Supplementary Fig. 24B). In the comparison of TRI-TENG vs PDE, DEGs were found to be enriched in pathways associated with inflammatory regulation, energy metabolism, cell cycle, and vascular regulation (middle in Supplementary Fig. 23F and middle in Supplementary Fig. 24F). Furthermore, gene ontology analysis revealed the enrichment of terms related to cardiac muscle contraction, wound healing, response to hypoxia, energy metabolism, cell cycle, angiogenesis, and ECM organization in the comparison of TRI-TENG vs MI (right in Supplementary Fig. 23C and right in Supplementary Fig. 24C). When comparing the TRI-TENG group with the PDE group, DEGs in the IR mainly occurred in wound healing, cell adhesion and organization, response to decreased oxygen levels, regulation of T cell activation, and cell contraction (right in Supplementary Fig. 23F), and in the BR they were mainly enriched in cell cycle, ECM organization, regeneration, and cell proliferation (right in Supplementary Fig.
24F). In addition, the gene expression heatmap revealed a marked upregulation in the expression of genes associated with cardiac conduction (Dsc2, Hrc, Kcna5, Scn4b, Tnni3k), calcium handling (Cxcl11, Dhrs7c, Fhl2, Gbp1, Pdk2, Stc2), cardiac muscle contraction (Asb15, Atp2a2, Fgf13, Myh6, Scn5a, Tmem38a), angiogenesis (Angpt1, Igf2, Il1a, Rgcc, Vegfb), and wound healing (Celsr1, Gp1ba, Habp2, Klkb1, Ppara, Prkce) in the IR in the TRI-TENG and PDE groups compared with those in the MI and Ecoflex groups (Fig. 8G), and the TRI-TENG group displayed the highest gene expression levels among all groups. The results revealed significant alterations in genes associated with signaling pathways in the hearts transplanted with TRI-TENG or PDE, as compared with the MI hearts. These changes have a more effective regulatory impact on cardiac contraction, energy metabolism, ECM-receptor interaction, and inflammatory response, which reflects the molecular-level responses of PDE and TRI-TENG to the ischemic/hypoxic microenvironment and demonstrates their reparative effects on MI. Furthermore, the superior reparative effect of TRI-TENG compared with PDE is primarily attributed to the inherent self-powering characteristic of the TRI-TENG: it can convert cardiac contractility into electrical signals to activate infarcted tissue. Specifically, the TRI-TENG plays a pivotal role in enhancing cardiac conduction, calcium handling, energy metabolism, myocardial contractility, and angiogenesis, ultimately contributing to the repair and regeneration of the infarcted myocardium.

Materials

GO was purchased from Hengqiu Tech., Inc. Hexane was purchased from Anachemia Canada, Inc. Polylactic acid (PLA) was purchased from XYZ printing. Ecoflex 00-50 was purchased from Smooth-On, Inc. The 184 silicone elastomer kit was purchased from SYLGARD. DOPA was purchased from Sigma-Aldrich. PVDF was purchased from Tullagreen, Carrigtwohill, Co. Cork, Ireland. Tris-HCl and DMF were purchased from Sigma-Aldrich. Acetone was purchased from Thermo Scientific (USA). PVP was purchased from Shanghai Aladdin Bio-Chem Technology Co., Ltd. The silver epoxy kit H20E was purchased from Epoxy Technology, Inc. The live/dead cell staining kit was purchased from Shanghai Bioscience Technology Co., Ltd. The primary antibodies against α-actinin, CX43, and vWF were purchased from Abcam (Britain). The α-SMA antibody was purchased from Bosterbio (USA). Alexa Fluor 568 donkey anti-rabbit IgG (H&L) and Alexa Fluor 488 donkey anti-mouse IgG (H&L) were acquired from Life Technologies (USA). The Masson's Trichrome Stain Kit was purchased from Beijing Solarbio Science & Technology Co., Ltd.

Preparation of leaf vein template

Leaves were harvested from the boldo plant (Peumus boldus). The cuticle and mesophyll cells of the leaves were removed by immersing the leaves in hexane for 1 day to obtain leaf vein templates.

Preparation of molds

PLA round molds (diameter = 8 mm, height = 0.5 mm) were built for the preparation of rGO electrodes, PDA-rGO electrodes, and the bottom package layer. PLA hollow cylinder molds (inner diameter = 6.5 mm, outer diameter = 8 mm, height = 5 mm) were built for the preparation of Ecoflex spacers. All the molds were built using a da Vinci Junior 1.0 3D printer (XYZ printing, Inc.). To prepare the leaf vein molds, the two parts of the 184 silicone elastomer kit were mixed at a 10:1 ratio, and the mixture was spin-coated on a cover slide at 3500 rpm for 30 s. Then, the mixture was cured at 120 °C for 30 min to obtain a layer of cured elastomer.
Another layer of the mixture was spin-coated on top of the cured elastomer at 3500 rpm for 30 s after the cured elastomer had cooled down. After that, a piece of leaf vein template was placed in the uncured mixture. The uncured mixture was then cured at 120 °C for 30 min, and the piece of leaf vein template was torn off.

Fabrication of rGO and PDA-rGO electrodes

GO powder was added to DI water and sonicated using a probe sonicator (SK92-IIN) for 60 min to obtain an 8 mg/mL uniform GO solution. In total, 90 μL of the GO solution was drop cast on 8 mm PLA round molds, followed by evaporation at room temperature to get GO thin films. For the 5 mm TRI-TENG, 18 μL of GO solution was drop cast on a 5 mm mold. GO thin films were heated at 300 °C for 12 h to get rGO electrodes. DOPA was dissolved in Tris buffer at pH 8.5 to make a 2 mg/mL DOPA solution. The rGO electrodes were immersed in the DOPA solution for 8 h for the formation of the PDA coating on the rGO electrodes. The 6 mm rGO and PDA-rGO electrodes were prepared in a similar manner using a 6 mm PLA mold.

Preparation of PVDF membrane with leaf vein structure

PVDF was dissolved in DMF/acetone (2:1 ratio) under vigorous stirring for 30 min to obtain a homogeneous solution at a concentration of 10% (w/v). 150 µL of the PVDF/DMF solution was cast on a leaf vein mold. A piece of PVDF membrane with a leaf vein structure was obtained after the solvent was vacuum-dried.

Assembly of TRI-TENG and TRI-TENG array

PVP was dissolved in ethanol at 90 °C under vigorous stirring to make a 10% (w/v) ethanolic solution of PVP. The PVP/ethanol solution was spin-coated on a PLA round mold at 2000 rpm for 60 s. After the ethanol was evaporated, a PVP sacrificial layer was formed. Part A and part B of Ecoflex 00-50 were mixed at a ratio of 1:1. The mixture was spin-coated on top of the PVP sacrificial layer at a speed of 3500 rpm for 60 s. Curing of the Ecoflex was performed at 60 °C for 30 min. Another Ecoflex mixture was spin-coated on top of the previous Ecoflex layer at a speed of 3500 rpm for 60 s. The rGO electrodes were placed on the uncured Ecoflex mixture, followed by curing of the mixture. The PVDF membrane was cut into a circular shape with a diameter of 7 mm and was then placed on the rGO electrode. After 15 mg of the Ecoflex mixture was drop cast onto the PLA hollow cylinder mold, the mold was placed on the PVDF layer with the uncured Ecoflex mixture facing the PVDF layer. The Ecoflex mixture was then cured at 60 °C for 30 min to make the Ecoflex spacer. The hollow cylinder mold was removed, and 5 mg of Ecoflex mixture was drop cast on the Ecoflex spacer. The PDA-rGO electrode was placed on top, and the Ecoflex mixture was cured. The whole device was immersed in DI water and subjected to a water bath for 2 h for the detachment of the TRI-TENG.
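As a convenience for scaling the recipes above to other batch sizes, the tiny helper below (ours, not part of the published protocol) converts the stated w/v concentrations into solute masses; volumes other than those stated in the text are hypothetical examples.

```python
# Convenience sketch (our own helper): percent w/v means grams of solute per
# 100 mL of solution; return the solute mass in mg for a given volume.
def solute_mass_mg(percent_wv: float, volume_ml: float) -> float:
    return percent_wv / 100.0 * volume_ml * 1000.0

# Concentrations from the protocol text; volumes are example values.
print(solute_mass_mg(0.8, 1.0))    # GO: 8 mg/mL == 0.8% w/v -> 8 mg per 1 mL
print(solute_mass_mg(10.0, 2.0))   # PVDF, 10% w/v -> 200 mg per 2 mL DMF/acetone
print(solute_mass_mg(10.0, 5.0))   # PVP, 10% w/v -> 500 mg per 5 mL ethanol
```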
A 4 cm × 4 cm PLA mold was used as the initial spin-coating substrate for the TRI-TENG array. The 4 cm × 4 cm PLA molds were used for the preparation of rGO electrodes and PDA-rGO electrodes. A 4 cm × 4 cm grid PLA mold with a line width of 1.5 mm was used for the preparation of the Ecoflex spacer. A TRI-TENG configuration was used for the TRI-TENG array. The electrical connection was made between every two neighboring rGO electrodes, as well as between every two neighboring PDA-rGO electrodes, using air-dried PEDOT:PSS/GelMA hydrogel as solder. The GelMA was synthesized according to a previously reported technique [5]. In brief, 10 g of gelatin was dissolved in 100 mL PBS buffer at 50 °C under vigorous stirring to get a 10% (w/v) gelatin aqueous solution, and then 2 mL of methacrylic anhydride (MAA) was added into the gelatin solution. The reaction proceeded at 50 °C for 2 h, and the pH of the mixture was kept at 7.4 during the reaction using 10 M NaOH solution. The reaction product was dialyzed against DI water at 50 °C for 3 days and then lyophilized. In total, 100 mg of PEDOT:PSS, 100 mg of GelMA, and 2 mg of Irgacure® were dissolved in 2 mL DI water under vigorous stirring at 50 °C to obtain a homogeneous solution. A small amount (5-10 mg) of the precursor solution was placed on each connection point as solder. The precursor solution was then cured under UV for 30 min and air-dried. To make the TRI-TENG with a lead, rGO and PDA-rGO electrodes with a connection structure were made using a PLA round mold with an extra connection portion, and insulated wires were soldered onto the connection structure using silver epoxy.

Characterization of the TRI-TENG
The cyclic compressive mechanical input that drove the TRI-TENG, as well as the mechanical properties of the TRI-TENG, was provided by an MTS® universal testing machine. The electric signals were collected using a benchtop digital multimeter (Keithley DMM6500, Tektronix®). The TRI-TENGs, with wires connected to the benchtop multimeter, were affixed to the throat, finger, and ankle using Scotch tape to evaluate the sensing capacity. In the in vitro characterization of the TENGs, the rGO electrode of the TENG was connected to the digital multimeter. A 2 cm × 2 cm × 2 cm PDMS cube was affixed to the top compression platen of the universal testing machine. The contact and separation between the TRI-TENG and the PDMS cube were facilitated by cyclic compression performed by the testing machine.

FTIR was used to characterize the existence of specific functional groups. FTIR spectra were recorded using an infrared spectrophotometer (FTIR, Nicolet 6700, ThermoFisher Scientific Inc., USA). The sample spectra were recorded with a spectral range of 400-4000 cm−1, a resolution of 4 cm−1, and an average of 64 scans. The surface roughness of the rGO film and the PDA-rGO film was examined using an atomic force microscope (AFM).
To prove that the sensing unit and stimulation unit do not interfere with each other, regardless of the complexity of in vivo voltage measurement, the following setup was designed. An SA hydrogel that mimics the heart was placed on top of the first spacer layer of the TRI-TENG. The SA hydrogel was then subjected to cyclic compression to mimic the contact and separation of the PDA-rGO layer with the heart due to the heartbeat, and the rGO layer of the TRI-TENG was connected to a benchtop digital multimeter (Keithley DMM6514, Tektronix®) for the measurement of the voltage output of the sensing unit. Additionally, ECG electrodes were attached to the hydrogel to measure the electric potential built on the hydrogel by the stimulation unit.

Experimental animals
Sprague-Dawley (SD) rats (newborns aged 1-3 days or adults weighing 250 ± 20 g) were purchased from the Animal Center of Southern Medical University, and Bama minipigs were purchased from Longgui Xingke Animal Breeding Farm, Baiyun District, Guangzhou City.

Fig. 7 | The treatment effect of the TRI-TENG array on the infarcted heart in porcine MI models. A Representative macroscopic images of the minipig hearts pre- and post-occlusion, as well as following transplantation of the TRI-TENG array onto the heart. B Representative transverse sections of hearts from the sham, MI and TRI-TENG array transplantation groups after 4 weeks. The heart was cut into four layers from the apex to the level of ligation. The scar region is highlighted in the sections. Scale bars, 2 cm. C Representative images showing myocardial fibrosis stained with Masson's Trichrome in cardiac sections from different groups. Scale bars, 2000 µm (left) and 50 µm (right). D The expression of α-actinin (green) and CX43 (red) proteins in porcine cardiac sections from various groups. Scale bars, 5000 µm (left) and 500 µm (mid and right). E Schematic representation of the assessment of electrical responses in infarcted myocardium in ex vivo porcine hearts using a signal acquisition system. F Representative ECG traces of the stimulation signal from ex vivo hearts in the sham (F1), MI (F2), and TRI-TENG array (F3) groups. G The amplitude of the local field potential was subjected to statistical analysis across various groups (n = 3 independent minipigs). H Echocardiographic parasternal short-axis views of the minipigs' LV in different groups at the papillary-muscle level at 4 weeks after the operation. I, J Cardiac function parameters, FS (I) and EF (J), in different groups before occlusion, as well as at 2 and 4 weeks post-operation. K, L Statistical analysis of the changes in FS (K) and EF (L) in different groups at 4 weeks after transplantation (n = 3 independent minipigs). The data are presented as mean ± SD. Statistical significance was calculated using two-sided one-way ANOVA with LSD post hoc test.

All animal experiments were performed in accordance with the Regulations on the Administration of Laboratory Animals (China). The experiments in rats were approved by the Southern Medical University Animal Ethics Committee, and the experiments in minipigs were approved by the Experimental Animal Ethics Committee of Longguixingke Animal Farm.
Sensing signal detection
After intramuscular administration of preanesthetic medication (atropine sulfate 0.05 ml/kg, xylazine hydrochloride 1 mg/kg), minipigs were anesthetized by continuous intravenous infusion of propofol (10 ml/h) and inhalation of isoflurane (1-3%). A thoracotomy was performed between the fourth and fifth ribs, the fourth intercostal space was widened by a rib spreader, and the pericardium was opened; the TRI-TENG patch with wire was then placed between the ventricle and the pericardium. The voltage signal was recorded using a multimeter (DMM7510, Keithley) attached to the wire, and the ECG was recorded by a signal acquisition system (BL-420F, Chengdu Techman Software Co., Ltd). Rats were injected with heparin (3125 U/kg) for 10 min to prevent blood clotting and were humanely sacrificed using pentobarbital sodium. The heart was excised quickly and rinsed in Krebs-Henseleit (KH) buffer solution containing 128 mM NaCl, 4.7 mM KCl, 20 mM NaHCO3, 1.05 mM MgCl2, 1.19 mM NaH2PO4, 1.3 mM CaCl2 and 11.1 mM D-glucose. The heart was then perfused retrogradely in a Langendorff apparatus with KH solution bubbled with 95% O2 and 5% CO2, and stabilized for 15 min at 37 ± 0.5 °C. The flow rate and the perfusion pressure were maintained at 8-10 ml/min and 60-70 mmHg, respectively. A TRI-TENG patch (0.8 cm in diameter) with wire was placed on the left ventricle, and the multimeter was connected to the wire to record the voltage signal of the TRI-TENG driven by the heart under normal and ischemic conditions, respectively. Then an electrical stimulus at 7 Hz was applied under the left atrial appendage, and the voltage signal was recorded. Pulses were delivered by an electronic stimulator. ECG electrodes were positioned on the right atrium and the left ventricle, respectively, to continuously record the ECG.

In order to evaluate the effect of reduced cardiac contractile motion in the ischemic region on the TRI-TENG's function, a small-size TRI-TENG (0.6 cm in diameter) was prepared. ECG signals obtained from Langendorff-perfused normal and ischemic rat hearts before and after transplantation of the TRI-TENG (without wire) were recorded. In addition, the TRI-TENG with wire was placed on the left ventricle, and the multimeter was connected to the wire to record the voltage signal of the TRI-TENG driven by the normal and ischemic hearts, respectively.
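As a sanity check on the Krebs-Henseleit recipe above, the stated millimolar concentrations can be converted to weigh-out masses per litre. The following is a minimal sketch assuming anhydrous salts; hydrated forms would need adjusted molar masses:

```python
# Convert the KH perfusate recipe from mM to g/L (anhydrous salts assumed).
MOLAR_MASS = {  # g/mol, standard values
    "NaCl": 58.44, "KCl": 74.55, "NaHCO3": 84.01, "MgCl2": 95.21,
    "NaH2PO4": 119.98, "CaCl2": 110.98, "D-glucose": 180.16,
}
RECIPE_MM = {  # mM, as stated in the perfusion protocol above
    "NaCl": 128.0, "KCl": 4.7, "NaHCO3": 20.0, "MgCl2": 1.05,
    "NaH2PO4": 1.19, "CaCl2": 1.3, "D-glucose": 11.1,
}

def grams_per_litre(recipe_mm, molar_mass):
    # mass (g/L) = concentration (mol/L) * molar mass (g/mol)
    return {s: (c / 1000.0) * molar_mass[s] for s, c in recipe_mm.items()}

for salt, grams in grams_per_litre(RECIPE_MM, MOLAR_MASS).items():
    print(f"{salt}: {grams:.2f} g per litre")
```

For example, 128 mM NaCl works out to about 7.48 g per litre of perfusate.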
Male SD rats were anesthetized, ventilated, and then subjected to thoracotomy for the wireless sensing experiment. The TRI-TENG with a wire was connected to an external wireless Bluetooth device (Pokit meter). To be specific, when the TRI-TENG was placed on the heart, its wire was connected to the electrodes of the Bluetooth device, which was positioned outside the chest. The TRI-TENG was then placed on the rat heart to collect the V OC under normal and ischemic conditions. After receiving the wireless signal, the waveform of the V OC was displayed on a mobile phone. The ECG signal was collected by the signal acquisition system simultaneously. In addition, in order to evaluate the effect of the rat's breathing movement on the sensing function of the TRI-TENG, the output voltage of the TRI-TENG and the corresponding ECG signals under both open-chest and closed-chest conditions were recorded. Male SD rats were anesthetized and subjected to thoracotomy. Then, the TRI-TENG was directly transplanted onto the exposed heart to harvest biomechanical energy from the heart's beating under the open-chest and the closed-chest conditions, respectively. The ECG signal was collected by the signal acquisition system simultaneously.

Neonatal rat CMs culture
The hearts of 1-3-day-old SD rats were harvested. After the hearts were carefully dissected, the heart tissues were washed three times with PBS solution to remove the blood clots and dissociated in trypsin overnight at 4 °C, followed by dissociation into a single-cell suspension with 0.1% collagenase type II (Sigma). The isolated cells were pre-plated for 2 h to remove cardiac fibroblasts, and the nonadherent cells were seeded on the different scaffolds (5 × 10^6 cells/cm2). CMs were cultured in high-glucose Dulbecco's modified Eagle's medium (DMEM, GIBCO) supplemented with 15% fetal bovine serum (FBS, GIBCO), 100 U/ml penicillin, and 100 µg/ml streptomycin. All cells were maintained in an incubator at 37 °C under 5% CO2, and the culture medium was exchanged every 2 days.

Biocompatibility evaluation
The biocompatibility of the TRI-TENG was tested using live/dead staining. After CMs were cultured on Ecoflex, PDE and TRI-TENG for 1 day, 3 days, and 7 days, respectively, the cells were washed three times with PBS and stained with the live/dead working solution at 37 °C for 5 min in the dark. The images of the stained samples were acquired with a laser scanning confocal microscope (LSM 880, Zeiss). Cell viability was defined as the ratio of living cells to total cells. The number of cells was quantified using Image J software (v2.0.0) at two independent sites of each sample, and each sample was prepared three times.

Scanning electron microscopy (SEM)
The microstructure of the different scaffolds and the micromorphology of CMs seeded on the scaffolds were observed by SEM. After being cultured for 7 days, the CM-loaded scaffolds were rinsed with PBS, fixed with glutaraldehyde at 4 °C overnight, dehydrated with ethanol and dried by the critical point method. Then they were photographed under a SEM (ULTRA55, Zeiss). All samples were sputter-coated with gold.
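The live/dead quantification in the biocompatibility evaluation above reduces to a simple ratio per imaging field, averaged per sample. A minimal sketch with hypothetical Image J counts (the numbers are placeholders, not data from the study):

```python
import statistics

def viability(live_counts, total_counts):
    """Per-field viability = live / total, as defined in the protocol."""
    return [l / t for l, t in zip(live_counts, total_counts)]

# Hypothetical counts from two fields per sample, three replicate samples:
samples = [([412, 388], [430, 405]), ([395, 401], [420, 418]), ([405, 390], [425, 410])]
per_sample = [statistics.mean(viability(live, total)) for live, total in samples]
print(f"viability = {statistics.mean(per_sample):.3f} "
      f"± {statistics.stdev(per_sample):.3f}")
```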
Immunofluorescence staining for the cultured cells
For immunofluorescence staining, CMs were co-cultured with the different matrixes for 3 days and 7 days, respectively. The samples were fixed with 4% paraformaldehyde (PFA) at 4 °C overnight and washed three times with PBS. The samples were then permeabilized with 0.2% Triton X-100 solution for 15 min at room temperature and blocked with 2% bovine serum albumin (BSA)/PBS for 30 min at room temperature. The samples were incubated with the primary antibodies mouse anti-α-actinin (1:250) plus rabbit anti-CX43 (1:200) diluted in 2% BSA/PBS at 4 °C overnight. The primary antibodies were removed, and the samples were incubated in secondary antibodies for 2 h at room temperature. The secondary antibodies applied were as follows: Alexa Fluor 488 donkey anti-mouse IgG (H&L) (1:500) plus Alexa Fluor 568 donkey anti-rabbit IgG (H&L) (1:500). The F-actin (Yeasen Biotechnology (Shanghai) Co., Ltd.) status was analyzed by phalloidin staining with fluorescein-isothiocyanate-conjugated phalloidin (1:500) for 1 h at room temperature. All samples were stained with DAPI (Santa Cruz) and imaged under a fluorescence microscope (BX53, Olympus).

Implantation of different cardiac patches into the rat MI model
Male SD rats served as the MI model and were divided into the sham group, MI group, Ecoflex group, PDE group and TRI-TENG group. In brief, all rats were anesthetized, ventilated and then subjected to thoracotomy. Rats in the sham group underwent thoracotomy only, and the other rats underwent ligation of the left anterior descending artery (LAD) after thoracotomy. Electrocardiograph monitoring showed a remarkable elevation of the ST segment, indicating a successful rat MI model. Fifteen minutes after ligation, Ecoflex, PDE and TRI-TENG were implanted respectively on the surface of the infarcted myocardium, and the edges of the patches were sutured onto the epicardium at the border of the infarcted myocardium with 7-0 polypropylene sutures. To further validate the safety and impact of the TRI-TENG on the function and structure of the normal heart, the TRI-TENG patch was also implanted on the normal heart.

Fig. 8 | Gene expression pattern analysis for the infarct region (IR) and border region (BR) of rats in different groups at 4 weeks after transplantation by RNA-Seq. A Principal component analysis (PCA) of the IR samples (left), the BR samples (middle) and all samples (right). B, C Differentially expressed genes (DEGs) in each pairwise comparison in the IR (B) and the BR (C). Upper: pyramid diagrams of the number of DEGs. Red represents the number of upregulated genes, blue represents the number of downregulated genes. Lower: tables of the number of downregulated genes, upregulated genes and total DEGs. D Clustering trend analysis based on the union of differential genes comparing each group (sham, Ecoflex, PDE and TRI-TENG) vs the MI group in the IR (left) and the BR (right). E Clustering trend analysis based on the union of differential genes comparing each group (MI, Ecoflex, PDE and TRI-TENG) vs the sham group in the IR and the BR. F Clustering trend analysis based on the union of differential genes comparing TRI-TENG vs Ecoflex, TRI-TENG vs PDE and PDE vs Ecoflex in the IR and the BR. Left: hierarchical clustering heatmap; right: gene expression trend diagram in (D-F). G Gene expression heatmap of the specified function-related genes in different groups. Green, low expression. Red, high expression.
Cardiac stimulus threshold and contraction force of ventricular tissue
Rats were given heparin to prevent clotting and humanely killed by cervical dislocation, and the hearts were quickly removed. The healthy heart was then stabilized by perfusion for 15 min to discharge the residual blood until cardiac activity resumed a normal rhythm, and the ECG was monitored. Pulses were delivered by an electronic stimulator under the left auricle, and the pacing voltage was started at 1 V and increased in 0.5 V increments until ECG capture was achieved. The ECG was recorded as the stimulus threshold of the normal heart. After applying Ecoflex, PDE and TRI-TENG to the left ventricle, respectively, the ECG signals were recorded as the threshold of the transplanted heart. To further examine the effects of the different patches on the stimulus threshold after cardiac ischemia, cardiac ischemia was produced by ligation of the LAD, which caused ST-segment elevation in the ECG. Then Ecoflex, PDE and TRI-TENG were applied. ECG signals of ischemic and transplanted hearts were recorded.

The heart was quickly excised, and the intact ventricular tissue was isolated in KH buffer, secured between two vascular clamps, and stimulated to contract by 1 Hz pulses at the ventricular apex. First, the contractility of the ventricular tissue was recorded using the mechanical sensing module of the signal acquisition system. Then, Ecoflex, PDE and TRI-TENG were attached to the ventricular tissue in sequence, and the contraction force was recorded for each. Four weeks after patch (Ecoflex, PDE and TRI-TENG) transplantation in rats, the ventricular tissues in the different groups were collected and tested for contraction force by the same method.

Electrical mapping and optical mapping
Rats treated with heparin were sacrificed. The lungs were clamped when the hearts were exposed. After that, the hearts were lifted and cut out quickly along the back of the lungs. Hearts were then Langendorff-perfused with KH buffer solution at a flow rate of 10 ml/min at 37 ± 0.5 °C. After the residual blood was discharged, hearts were perfused and monitored for stability for 15 min before the experimental procedures commenced. The 64 separate electrodes (8 × 8 grid, 0.55 mm spacing) were placed at the border region between the healthy and infarcted myocardium. A 2 mV, 5 Hz pulsed electrical stimulus was applied to the epicardium right under the left atrial appendage. ECG electrodes were positioned on the right atrium and left ventricle, respectively, pacing from the right atrium and continuously recording the ECG.

Hearts were rapidly collected and perfused in KH solution containing 10 μM blebbistatin, used to stop contractions and avoid movement artefacts. Dye loading was aided by pre-perfusion with pluronic F-127 (20% w/v in DMSO). The calcium dye Rhod-2-AM (1 mg/ml) and the voltage-sensitive dye RH237 (1 mg/ml) were added to the perfusion solution in sequence to measure membrane potential and Ca2+. The heart was illuminated with 530 nm fluorescence light using LEDs (MGL-III-532-100mW) and imaged by a 50-mm camera to record voltage signals. Emission light was collected at >700 nm for RH237 and 590 ± 15 nm for Rhod-2-AM. The emitted fluorescence signals were recorded using two CMOS cameras (01-KINETIX-M-C, Teledyne Photometrics) at a spatial resolution of 350 × 350 pixels and a sampling rate of 100 Hz.
Echocardiography evaluation of rat cardiac function
The left heart function of all animal groups was evaluated by IE33 echocardiography (Vevo2100, Visual Sonics). At two weeks and four weeks after patch transplantation, the rats were anesthetized, and transthoracic echocardiography was performed. M-mode echocardiography was recorded with a 40-MHz transducer, and short-axis views were obtained to measure the cardiac parameters. The cardiac functional parameters, including the left ventricular internal diameter in diastole (LVIDd), the left ventricular internal diameter in systole (LVIDs), fractional shortening (FS), and ejection fraction (EF), were measured.

Morphology, histology, and immunofluorescence assay for cardiac sections
At week 4 after implantation, the rats were sacrificed, and the hearts were collected. The harvested hearts were sliced into three sections and immediately fixed in 4% paraformaldehyde overnight at 4 °C. Images of the heart cross-sectional gross morphology were captured. The slices were then dehydrated in 15% and 30% sucrose, respectively, and frozen in OCT at −20 °C. Cryosections of 6 µm in thickness were prepared and placed onto slides for histological analysis. The infarct area and wall thicknesses were defined by Masson's Trichrome staining. The infarct size was determined by the ratio of the inner perimeter of the scar region to the entire inner perimeter of the left ventricular wall. The thicknesses of the scar region were measured three times and averaged. The immunostaining for cryosections used the same method as above. The primary antibodies rabbit anti-vWF (1:200) plus mouse anti-α-SMA (1:100), and mouse anti-α-actinin (1:250) plus rabbit anti-CX43 (1:200), were used. The secondary antibodies applied were as follows: Alexa Fluor 488 donkey anti-mouse IgG (H&L) (1:500) plus Alexa Fluor 568 donkey anti-rabbit IgG (H&L) (1:500). All samples were imaged with a fluorescence microscope, and the images were analyzed using Image J software.

Minipig model of MI and TRI-TENG array implantation
Minipigs were kept at a constant temperature of 22-25 °C for 1 week to adapt to the environment and were randomly divided into the sham group, MI group and TRI-TENG group. The pigs were anesthetized with propofol (10 ml/h) and isoflurane (1-3%), and then subjected to thoracotomy. The fourth intercostal space was widened to visualize the LAD, and the LAD was ligated with a 5-0 polypropylene suture (Prolene, Ethicon) for 10 min, reperfused twice and then permanently tied. MI models were confirmed by ST-segment elevation on the electrocardiogram and cyanosis of the myocardial surface during the operation. In the sham group, the same thoracotomy was performed without the LAD ligation. The chest of the MI group was then sutured, and the infarcted myocardium of the TRI-TENG group was covered with the TRI-TENG array. Electrocardiogram, body temperature, blood pressure and arterial oxygen saturation were monitored throughout the operation. Body weight and physiological status were monitored daily.

After 4 weeks, ex vivo electrical signal propagation in the infarcted area was evaluated by measuring the ECG using a signal acquisition system. After euthanizing the pig, the hearts were removed and immersed in KH solution. One end of the infarcted area was connected to stimulating electrodes, and ECG electrodes were used to detect electrical signals by the two-lead method. The output was set to deliver 2 Hz pulses.
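Both the rat and minipig protocols derive FS and EF from the M-mode diameters (LVIDd, LVIDs). A minimal sketch of that arithmetic, assuming the Teichholz volume formula for EF (the echocardiography software may use its own method), with placeholder diameters rather than study data:

```python
def fractional_shortening(lvid_d, lvid_s):
    """FS (%) = (LVIDd - LVIDs) / LVIDd * 100, straight from M-mode diameters."""
    return 100.0 * (lvid_d - lvid_s) / lvid_d

def teichholz_volume(d_cm):
    """LV volume (mL) from a single diameter via the Teichholz correction."""
    return (7.0 / (2.4 + d_cm)) * d_cm ** 3

def ejection_fraction(lvid_d, lvid_s):
    """EF (%) from end-diastolic and end-systolic Teichholz volumes."""
    edv, esv = teichholz_volume(lvid_d), teichholz_volume(lvid_s)
    return 100.0 * (edv - esv) / edv

# Hypothetical rat M-mode diameters in cm (placeholders, not study data)
print(f"FS = {fractional_shortening(0.72, 0.48):.1f}%")
print(f"EF = {ejection_fraction(0.72, 0.48):.1f}%")
```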
Echocardiography of minipigs
Echocardiography of the minipigs was performed using a portable echocardiograph before occlusion and at 2 and 4 weeks after the operation. The anesthetized minipigs were placed in the left lateral decubitus position, and the chest was covered with warmed ultrasound gel. Echocardiography was acquired to measure LVIDs, LVIDd, FS and EF from M-mode tracings at the mid-papillary level.

Histology and immunofluorescence of tissue sections of minipigs
The pigs were euthanized at 4 weeks postoperatively. A small part of each organ (lung, liver, spleen and kidney) was removed, washed with PBS and fixed with 10% formalin. The whole hearts were quickly excised, and balloons with 20 ml of formalin were placed into the hearts to prevent chamber collapse of the left ventricle. The hearts were then cut into four 1 cm-thick slices from the apex to the atrium and fixed with 10% formalin. After fixation for 48 h, the hearts were completely dehydrated in 20% sucrose solution for 24 h and 30% sucrose solution for 24 h in a 4 °C refrigerator. For Masson's trichrome staining and immunostaining analysis, the procedure was the same as described above. The tissue sections of the lung, liver, spleen and kidney were embedded in paraffin and stained with hematoxylin and eosin (H&E) for pathological examination.

Collection and testing of blood samples
To assess the potential toxicity and inflammation in the porcine infarcted hearts resulting from TRI-TENG array transplantation, blood samples were collected before MI induction, 2 weeks after the operation and 4 weeks after the operation. Routine blood parameters were tested using anticoagulated blood, and blood biochemistry and ELISA were tested using serum. Blood samples were tested by Guangzhou Huayin Medical Laboratory Center Co., Ltd.

RNA sequencing of heart tissues and bioinformatics analysis
The rats were euthanized at 4 weeks after MI, and heart tissues were rapidly collected from the infarct region and the border region. RNA extraction and quality examination were performed by Annoroad Gene Technology Co., Ltd. (Beijing, China). RNA degradation and contamination were monitored on 1% agarose gels. RNA purity was checked using the NanoPhotometer® spectrophotometer (IMPLEN, CA, USA). The integrity of the RNA samples was assessed using the RNA Nano 6000 Assay Kit of the Bioanalyzer 2100 system (Agilent Technologies, CA, USA). Each 2 µg RNA sample was used as input material for the RNA sample preparations. Sequencing libraries were generated using the NEBNext® Ultra™ RNA Library Prep Kit for Illumina® (NEB, USA) following the manufacturer's recommendations. The clustering of the index-coded samples was performed on a cBot Cluster Generation System using the TruSeq PE Cluster Kit v3-cBot-HS (Illumina) according to the manufacturer's instructions. After cluster generation, the library preparations were sequenced on an Illumina NovaSeq platform.

When analyzing the data, reference genome and gene model annotation files were downloaded directly from the genome website. Gene Ontology (GO) enrichment analysis of differentially expressed genes was implemented with the clusterProfiler R package, and the statistical enrichment of differentially expressed genes in KEGG pathways was also tested with the clusterProfiler R package.
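The GO/KEGG over-representation test behind this kind of enrichment analysis is essentially a one-sided hypergeometric test per term. A minimal sketch of that test in Python, with hypothetical gene counts for illustration (the study itself used clusterProfiler in R):

```python
from scipy.stats import hypergeom

def term_enrichment(n_genome, n_term, n_degs, n_overlap):
    """One-sided over-representation P value for a single GO term or KEGG pathway.
    n_genome: annotated background genes; n_term: genes annotated to the term;
    n_degs: differentially expressed genes; n_overlap: DEGs inside the term."""
    # P(X >= n_overlap) for X ~ Hypergeometric(n_genome, n_term, n_degs)
    return hypergeom.sf(n_overlap - 1, n_genome, n_term, n_degs)

# Hypothetical counts: 20,000 background genes, 150 in "cardiac muscle
# contraction", 800 DEGs, 18 of which fall in the term.
print(term_enrichment(20000, 150, 800, 18))  # small P value -> term enriched
```

With these placeholder numbers the expected overlap by chance is about 6 genes, so observing 18 yields a very small P value; in practice the per-term P values would then be adjusted for multiple testing.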
Statistical analysis
The data were expressed as mean ± SD of at least three independent experiments. Comparisons between two groups were performed using the independent-samples t test, and comparisons among multiple groups were performed using one-way ANOVA with the LSD post hoc test to determine the significance of differences. A P value of less than 0.05 was considered statistically significant. Statistical analyses were performed using SPSS 23.0 software. The average value and s.d. for all graphs were plotted using GraphPad Prism v.8.0.

Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Fig. 1 | The preparation, characterization, working mechanism, and potential application of the TRI-TENG. A The first column shows a schematic illustration of the preparation of the polyvinylidene fluoride (PVDF) triboelectric layer with leaf vein structure using mold casting and the detailed composition of the trinity triboelectric nanogenerator (TRI-TENG). The second column shows a schematic diagram (upper), an actual photograph (middle) and a scanning electron microscopy (SEM) image of the cross-section of the TRI-TENG (lower), showing the multilayered structure: ①, Ecoflex film; ②, reduced graphene oxide (rGO) electrode; ③, polyvinylidene fluoride (PVDF) film with leaf vein structure; ④, polydopamine (PDA) modified rGO (PDA-rGO) electrode. The third column shows the healing mechanism: an electrical potential is built between the PDA-rGO electrode and the myocardium upon the contraction and relaxation of the myocardium. The as-built electrical potential is equal to the electrical potential between the leaf vein patterned PVDF layer and the PDA-rGO electrode due to electric induction. B An electric system that acquires the open-circuit voltage (V OC) between the rGO electrode and the ground resulting from heart activity and transmits it to a smartphone wirelessly. C Schematic of TRI-TENG array assembly for matching the infarct size of the porcine heart and its application in a minipig MI model. The preparation of each layer of the TRI-TENG array adopts the same configuration as the TRI-TENG using the polylactic acid (PLA) mold, and every two neighboring rGO electrodes, as well as PDA-rGO electrodes, were electrically connected by an air-dried poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate)/methacrylated gelatin (PEDOT:PSS/GelMA) hydrogel. After 28 days of transplanting the TRI-TENG array onto the infarcted heart, there was a significant improvement in cardiac function of approximately 14.7%, surpassing the therapeutic efficacy of most recently reported approaches for MI treatment in minipigs. We hypothesize that TRI-TENG conductive cardiac patches (CCPs) exert their therapeutic effects on infarcted hearts primarily by modulating the expression of mRNA related to cardiac muscle contraction, energy metabolism, and vascular regulation in vivo, as revealed by RNA sequencing analysis.
Fig. 2 | The in vitro output performance of different TENGs and the response of the TRI-TENG to different stimuli. A The performance of TENGs with different components in terms of open-circuit voltage (V OC), short-circuit current (I SC), and transferred charges (n = 5 independent samples). B Output voltage, power density, and current of different TENGs at different load resistances. C V OC of the TRI-TENG at different frequencies, pressures, and strains. D Demonstration of the TRI-TENG detecting different mechanical motions. The TRI-TENG is attached to the throat for detecting speaking, coughing, and swallowing; attached to the fingers for detecting finger joint movements; and attached to the ankle for detecting walking, running, and jumping. Statistical significance for V OC was calculated using two-sided one-way ANOVA with Dunnett's post hoc test, and statistical significance for I SC and transferred charges was calculated using two-sided one-way ANOVA with LSD post hoc test.

Fig. 4 | The impact of TRI-TENG transplantation on the excitation-contraction coupling of rat hearts. A Schematic illustrating the augmented effects of TRI-TENG transplantation on excitation-contraction coupling in the infarcted rat heart. B, C Pacing thresholds of the Langendorff-perfused whole heart in different groups under normal conditions (B) and ischemic conditions (C). D, E Schematic representation (D) and profile display (E) of contractility measurements in ventricular tissue. F-I Contraction waves (F, H) and contractility analyses (G, I) of the ventricular tissues assessed under 1 Hz pulse stimulation in different groups following transient transplantation of the patches on the normal heart (F, G) or transplantation of the patches on the infarcted heart for a duration of 4 weeks (H, I). Each symbol denotes an independent rat (G: n = 7 in each group. I: sham n = 8, and n = 10 in the other groups). The data are presented as mean ± SD. Statistical significance in (G) was calculated using two-sided one-way ANOVA with LSD post hoc test, and statistical significance in (I) was calculated using two-sided one-way ANOVA with Dunnett's post hoc test.

Fig. 6 | The histological examination and assessment of cardiac function in rat hearts from different groups at week 4 after transplantation. A Gross observations of the whole heart and three cross sections of the heart in different groups. Scale bars: 1 cm. B Masson's Trichrome staining of cardiac sections in different groups. Blue: fibrotic tissue; red: myocardium. Scale bars: 1 mm. C Quantitative comparisons of the infarct area and infarct wall thickness among different groups. Each symbol denotes an independent rat (MI n = 10, Ecoflex n = 9, PDE n = 10, TRI-TENG n = 10). D Immunofluorescent staining of CX43 (red) and α-actinin (green) proteins in the infarct regions in different groups. Scale bars: 10 µm.
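The group comparisons reported above and in the figure legends rely on one-way ANOVA followed by Fisher's LSD post hoc comparisons. A minimal sketch of that procedure, using hypothetical week-4 EF values as placeholders rather than study data:

```python
import numpy as np
from itertools import combinations
from scipy import stats

def anova_lsd(groups):
    """One-way ANOVA followed by Fisher's LSD post hoc comparisons, which use
    the pooled within-group mean square error (MSE) and its error df."""
    names = list(groups)
    data = [np.asarray(groups[n], dtype=float) for n in names]
    f_stat, p_anova = stats.f_oneway(*data)
    df_err = sum(len(d) for d in data) - len(data)
    mse = sum(((d - d.mean()) ** 2).sum() for d in data) / df_err
    pairwise = {}
    for i, j in combinations(range(len(data)), 2):
        se = np.sqrt(mse * (1.0 / len(data[i]) + 1.0 / len(data[j])))
        t = (data[i].mean() - data[j].mean()) / se
        # Two-sided, unadjusted P value -- this is what makes it an LSD test
        pairwise[(names[i], names[j])] = 2.0 * stats.t.sf(abs(t), df_err)
    return f_stat, p_anova, pairwise

# Hypothetical EF values (%) for four groups (placeholders, not study data)
groups = {"MI": [32, 35, 30, 33], "Ecoflex": [36, 38, 35, 37],
          "PDE": [44, 46, 43, 45], "TRI-TENG": [55, 57, 54, 56]}
f_stat, p_overall, pairs = anova_lsd(groups)
print(p_overall, pairs)
```

Because the pairwise P values are unadjusted, the LSD comparisons are conventionally interpreted only when the overall ANOVA F test is significant.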
Environmental Lead Exposure Accelerates Progressive Diabetic Nephropathy in Type II Diabetic Patients

Whether environmental lead exposure has a long-term effect on progressive diabetic nephropathy in type II diabetic patients remains unclear. A total of 107 type II diabetic patients with stage 3 diabetic nephropathy (estimated glomerular filtration rate (eGFR) range, 30–60 mL/min/1.73 m2) with normal body lead burden (BLB) (<600 μg/72 hr in EDTA mobilization tests) and no history of exposure to lead were prospectively followed for 2 years. Patients were divided into high-normal BLB (>80 μg) and low-normal BLB (<80 μg) groups. The primary outcome was a 2-fold increase in the initial creatinine levels, long-term dialysis, or death. The secondary outcome was the change in eGFR over time. Forty-five patients reached the primary outcome within 2 years. Although there were no differences in baseline data and renal function, progressive nephropathy was slower in the low-normal BLB group than in the high-normal BLB group. During the study period, we demonstrated that each 100 μg increment in BLB and each 10 μg increment in blood lead levels could decrease GFR by 2.2 mL/min/1.73 m2 and 3.0 mL/min/1.73 m2 (P = 0.005), respectively, as estimated by generalized equations. Moreover, BLB was associated with an increased risk of achieving the primary outcome. Environmental exposure to lead may have a long-term effect on progressive diabetic nephropathy in type II diabetic patients.

Introduction
Over the past 25 years, the prevalence of type II diabetes in the USA has almost doubled, with 3- to 5-fold increases in developing countries [1]. Diabetes is now the major cause of end-stage renal disease and the primary diagnosis causing kidney disease in 20-40% of patients starting treatment for end-stage renal disease worldwide [2,3]. However, few studies have investigated the relationship between environmental exposure to lead and diabetic nephropathy. Previous epidemiological studies [4-6] showed that blood lead levels (BLL) are related to renal function [4,5] and exacerbate age-related decreases in renal function [6] in the general population, suggesting that environmental exposure to lead influences renal function in healthy individuals. Because BLL only indicates recent lead exposure [4,7], body lead burden (BLB) is usually assessed by X-ray fluorescence to determine bone lead content and by calcium disodium ethylenediaminetetraacetic acid (EDTA) mobilization tests [7]. A BLB greater than 600 μg, as determined by EDTA mobilization tests, is considered lead poisoning. Previous investigations that used EDTA mobilization tests to assess BLB in nondiabetic chronic kidney disease (CKD) patients with normal BLB [8-12] suggested that environmental lead exposure is associated with progressive CKD. A 6-year study [13] indicated that bone lead content is related to progressive elevation of serum creatinine in persons with diabetes. However, these values were not adjusted for daily urinary protein excretion or daily protein intake. A short-term 1-year observational study [14] of type II diabetic patients with diabetic nephropathy suggested that environmental lead exposure might influence progressive diabetic nephropathy.
However, the observation period was too short to demonstrate the long-term toxic effect of environmental lead exposure; moreover, the estimated glomerular filtration rate (eGFR) was calculated from the American Modification of Diet in Renal Disease (MDRD) formula for CKD patients [15] rather than the Chinese formula for type II diabetic patients [16]. Hence, the relationship between low-level environmental exposure to lead and progressive diabetic nephropathy remains unclear. This 2-year prospective study was performed to clarify the relationship in type II diabetic patients.

Subjects. The Institutional Review Board Committee of Chang Gung Memorial Hospital approved the study protocol. Each patient provided written informed consent. Patients aged from 30 to 83 years who had type II diabetes mellitus with nephropathy and who received followup care at Chang Gung Memorial Hospital for more than 1 year were eligible for inclusion in this study if they met all the following criteria [14]: abnormal serum creatinine (>1.4 mg/dL); stage 3 CKD (eGFR between 30 mL/min/1.73 m2 and 60 mL/min/1.73 m2); diabetic retinopathy treated with or without laser therapy; daily urinary protein excretion of more than 0.5 g/day; no microhematuria in urine tests; normal-sized kidneys as determined by echograms; history of diabetes for more than 5 years; no known history of exposure to lead or other heavy metals; and a BLB of less than 600 μg as measured by EDTA mobilization testing and 72-hour urine collection. Diabetic nephropathy diagnoses were also based on renal histological examination findings in cases where renal biopsies were performed. The exclusion criteria were as follows: type I diabetes; renal insufficiency with a potentially reversible cause such as malignant hypertension, urinary tract infection, hypercalcemia, or drug-induced nephrotoxic effects; presence of other systemic diseases such as connective tissue diseases; use of drugs that might alter the course of renal disease such as nonsteroidal anti-inflammatory agents, steroids, immunosuppressive drugs, or Chinese herbal drugs; having joined a previous study [14]; drug allergies; and the absence of informed consent. The blood pressure of each patient was maintained at less than 140/90 mm Hg with diuretics and angiotensin-converting-enzyme inhibitors (ACEI) or angiotensin II receptor antagonists (ARA), with or without calcium-blocking agents and/or vasodilators [17]. Calcium carbonate was employed to maintain patients' phosphate levels. No patients received vitamin D3 supplements because their parathyroid hormone was below 200 pg/mL. Each patient received dietary consultation. A diabetic diet (35 Kcal/kg of body weight per day) with normal protein intake (0.8-1.0 g of high biological value protein per kilogram of body weight per day) was recommended to each patient. A nutritionist reviewed the dietary intake of each patient every 3 to 6 months. A 24-hour urea excretion analysis was performed every 3 months to determine nitrogen balance and dietary compliance [18].

Measurements of Blood Lead Levels and Body Lead Burdens. BLL and BLB were measured as described previously [7-12]. BLB was measured using EDTA mobilization tests as modified by Behringer et al. [19]. Urinary excretion measured 72 hours after the intravenous infusion of 1 g of calcium disodium EDTA (Abbott Laboratories, North Chicago, IL, USA) was used to measure BLB.
Blood and urine lead levels were determined by electrothermal atomic-absorption spectrometry (SpectrAA-200Z; Varian, CA, USA) with Zeeman background correction and a L'vov platform. Both internal and external quality-control procedures were applied throughout this study and achieved consistently satisfactory results. A certified commercially prepared product (Seronorm Trace Elements, Sero AS, Billingstad, Norway) was utilized to monitor intrabatch accuracy and ensure interbatch standardization. The coefficient of variation for lead measurement was <5.3%. The detection limit was 0.01 μg/dL. External quality control was maintained via participation in the governmental National Quality-Control Program. Low-normal BLB was defined as <80 μg, and high-normal BLB was defined as >80 μg and <600 μg [9-12, 14].

Study Protocol. Serum creatinine, glycosylated hemoglobin (HbA1c), daily urine protein excretion, daily protein intake, mean arterial pressure, cholesterol, and triglyceride levels were measured with an autoanalyzer system (model 736; Hitachi, Tokyo, Japan) at the beginning and end of the study and every 3 months during the 24-month clinical observation period. Blood pressure and body mass index were also measured at 3-month intervals. At the end of this period, we compared the changes in renal function between the 2 groups and assessed the relationship between BLB and the progressive decline of diabetic nephropathy. Renal function was assessed by creatinine clearance and eGFR (both in mL/min/1.73 m2 of body surface area). A modified eGFR equation for Chinese patients with type II diabetes was employed [16].

Outcome Measures. The primary endpoint was a 2-fold elevation in serum creatinine (measured twice, 1 month apart) from baseline values, the need for long-term dialysis, or death during the 24-month observation period. The secondary endpoint was temporal changes in renal function during the study period.

Statistical Analysis. The differences in variables and renal function between the 2 groups were analyzed by the Chi-square test and Student's t-test. All tests were two-tailed, and all results are presented as means ± SD. The Mann-Whitney test was employed for data not normally distributed. We performed a sensitivity analysis that assigned the mean eGFR value of the treatment group to controls lost to followup and assigned the mean eGFR value of the control group to treated patients lost to followup. Generalized estimating equations (GEE) with linear analysis were employed in longitudinal multivariate analyses using SAS statistical software (version 6.12) to further assess the temporal changes in variables and associations with progressive renal function (eGFR) during the observation period.

Figure 1 (participant flow): Type II diabetic patients (n = 198) with abnormal serum creatinine levels, eGFR between 30 and 60 mL/min/1.73 m2, and no previous lead exposure were screened. The following patients were excluded: 21 who had participated in a previous study, 47 who did not receive laser therapy for diabetic retinopathy, 12 with microhematuria, 10 with urine protein excretion <0.5 g/day, 10 with small-sized kidneys, 2 with BLB >600 μg, and 7 without informed consent. This left 89 patients with diabetic nephropathy and normal BLB for inclusion in the 24-month observation period. Eventually, 85 patients completed the 24-month observation study; two patients were lost to follow-up, and 2 died of acute myocardial infarction.
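The renal-function metric and exposure grouping defined above can be made concrete in code. The sketch below is illustrative only: it uses the widely cited abbreviated 4-variable MDRD equation, since the Chinese-modified coefficients of reference [16] are not reproduced in the text, together with the study's BLB cut-offs.

```python
def egfr_mdrd(scr_mg_dl, age_years, female):
    """Abbreviated 4-variable MDRD estimate (mL/min/1.73 m2). Generic
    coefficients shown for illustration; the study used a Chinese-modified
    equation for type II diabetes whose coefficients differ."""
    egfr = 186.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    return egfr * (0.742 if female else 1.0)

def classify_blb(blb_ug):
    """Body lead burden groups used in the study (72-h EDTA mobilization)."""
    if blb_ug >= 600:
        return "lead poisoning (excluded)"
    return "high-normal BLB" if blb_ug > 80 else "low-normal BLB"

# Cohort baseline means: serum creatinine 1.9 mg/dL, age 60 years, BLB ~110 ug
print(egfr_mdrd(1.9, 60, female=False))  # ~39, consistent with stage 3 CKD
print(classify_blb(110))                 # high-normal BLB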
Moreover, multivariate Cox analyses were used to determine the significance of the baseline variables for predicting the primary endpoint during the study period. These models included all variables identified in the literature as related to the progression of diabetic nephropathy [12-16]. A P value of <0.05 was considered statistically significant. Data were analyzed using SPSS, version 18.0 for Windows 95 (SPSS Inc., Chicago, IL).

Study Subjects. A total of 89 patients participated in the study, and 85 completed the 24-month observation period (58 men and 31 women) (Figure 1). The following baseline data were obtained: patient mean age, 60.1 ± 9.5 years (range, 33-83); body-mass index (weight in kilograms divided by the square of height in meters), 24.9 ± 3.3 (range, 14.9-33.4); serum creatinine level, 1.9 ± 0.3 mg/dL (range, 1.5-2.8 mg/dL); eGFR, 41.3 ± 6.9 mL/min/1.73 m2 of body surface area (range, 30.3-59.9 mL/min/1.73 m2 of body surface area); daily protein excretion, 3.0 ± 2.5 g (range, 0.5-12.2 g); daily protein intake, 0.97 ± 0.18 g/kg (range, 0.58-1.63 g/kg); HbA1c, 8.3 ± 1.9% (range, 5.7-14.7%); BLL, 4.3 ± 1.1 μg/dL (range, 0.8-10.4 μg/dL); and BLB, 109.9 ± 52.3 μg (range, 14.4-316.8 μg). Sixty-two patients (70.0%) had hyperlipidemia. Eighty-four patients (95.5%) had hypertension, and they were treated with ACEI or ARA. Fourteen patients (15.7%) smoked. Seventy-six patients (85.4%) had retinopathy, which was treated with laser therapy. Among all the study patients, 29 (32.6%) had a history of cardiovascular diseases, including myocardial infarction, congestive heart failure, stroke, and diabetic foot. BLL was associated with BLB in all study patients (r = 0.274, P = 0.009). Table 1 summarizes the demographic data, baseline chronic disease conditions, use of ACEI or ARA, daily urinary urea and protein levels, and body lead burden for participants in each group. No significant differences in these baseline values were noted between the 2 groups on initial assessment or during the observation period. Table 2 compares the progression of diabetic nephropathy between the high-normal BLB and low-normal BLB groups during the observation period. Creatinine clearance and eGFR were higher in the low-normal BLB group than in the high-normal BLB group during months 18 to 24 of the observation period. Similar results were obtained in the sensitivity test (Table 3).

Outcome Measures. Thirty-nine patients had a 2-fold elevation in serum creatinine from the baseline values during the 24-month observation period; 5 patients in the high-normal BLB group required hemodialysis; 1 patient with high-normal and 1 with low-normal BLB died of acute myocardial infarction; and 2 patients with high-normal BLB were lost to followup. A total of 45 (50.6%) patients reached the primary endpoint. Only 9 of 27 (33.3%) patients with a body lead burden <80 μg reached the endpoint, compared with 36 of 62 (58.1%) subjects with body lead burdens >80 μg (log-rank test, P = 0.023) (Figure 2). In addition, GEE with linear analysis showed that BLB and BLL were significant variables for predicting the progression of eGFR, after adjusting for other variables (Tables 4 and 5). Each 1 μg increase in BLB led to a decrease of 0.022 mL/min/1.73 m2 in eGFR (P = 0.009), and each 1 μg/dL increase in BLL led to a 0.298 mL/min/1.73 m2 decrease in eGFR (P = 0.010) during the 2-year study period.
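A linear GEE of this kind can be sketched with statsmodels; the synthetic data below (patient count, visit schedule, and coefficient values loosely echoing the cohort) are placeholders, not the study dataset, and the covariate list is abbreviated:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic long-format data: one row per patient per 3-month visit.
rng = np.random.default_rng(0)
n_pat, n_visit = 89, 9
df = pd.DataFrame({
    "patient_id": np.repeat(np.arange(n_pat), n_visit),
    "month": np.tile(np.arange(n_visit) * 3, n_pat),
    "blb": np.repeat(rng.normal(110, 52, n_pat), n_visit),
    "hba1c": rng.normal(8.3, 1.9, n_pat * n_visit),
})
df["egfr"] = (41 - 0.022 * df["blb"] - 0.3 * df["month"]
              + rng.normal(0, 3, len(df)))

# Linear GEE with an exchangeable working correlation to handle the repeated
# eGFR measurements within each patient, mirroring the analysis above.
model = smf.gee("egfr ~ blb + hba1c + month", groups="patient_id", data=df,
                cov_struct=sm.cov_struct.Exchangeable(),
                family=sm.families.Gaussian())
print(model.fit().summary())  # the blb coefficient ~ eGFR change per 1 ug BLB
```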
Moreover, multivariate Cox regression analysis demonstrated that BLB was a significant risk factor (hazard ratio [HR] = 1.01, 95% confidence interval [CI]: 1.01-1.02; P < 0.001) for achieving the primary outcome in type II diabetic patients, even after adjustment for other factors (Table 6). Similarly, multivariate Cox regression analysis demonstrated that BLB >80 μg was a significant risk factor (HR = 2.79, 95% CI: 1.25-6.25; P = 0.012) for achieving the primary outcome in these patients.

Discussion
The results of the present study indicate that BLB and BLL, even at low levels, are important risk factors for progressive diabetic nephropathy. These associations were strong, dose dependent, and consistent, even after comprehensive adjustments for other covariates. Our results are similar to those of previous reports showing that increased BLL is associated with a progressive decline in renal function in the general population [4,5]. In comparison with our previous work [14], this study enrolled a different study cohort and showed several novel findings. First, patients with a high-normal BLB showed a higher incidence of progressive diabetic nephropathy than those with low-normal BLB, although the corresponding variables were not different between the 2 groups during the 2-year followup period. Moreover, similar results were obtained in the sensitivity test. Second, each increment of 10 μg/dL of BLL was determined to potentially decrease GFR by 3.0 mL/min/1.73 m2 after adjustment for covariates. In addition to BLB, BLL is a strong predictor of progressive diabetic nephropathy and can be easily monitored in clinical practice. Importantly, no safe limits of the lead indices were found in our study. Consistent with our results, previous studies of healthy populations indicated a high correlation between measured BLL and BLB [20,21]. Therefore, one can assume that under conditions of constant environmental lead exposure, a higher BLL should correspond to a higher BLB. Third, the present study included a more homogenous population than our previous study [14]. Only patients with stage 3 CKD were included in the present study, whereas patients with stages 2, 3, and 4 CKD were included in our previous work [14]. Achieving the primary outcome in patients with different stages of CKD is associated with confounding effects. Moreover, patients with stage 4 CKD may have hyperparathyroidism, which can cause osteopathy; increase BLB, as measured by EDTA tests [22]; and result in selection bias in the classification of high-normal or low-normal BLB groups. Fourth, the eGFR was calculated from the Chinese-modified MDRD formula for CKD patients with type II diabetes rather than the formula used for American CKD patients. Lastly, because the present study used stricter definitions of the primary outcome (a 2-fold versus a 1.5-fold increase in serum creatinine level from that of the baseline) and a longer followup period (24 months versus 12 months) than previous studies [13,14], a more definitive conclusion regarding the long-term effect of environmental exposure to lead on progressive diabetic nephropathy can be drawn. The mean BLL of our patients was only 4.3 μg/dL, which is lower than that observed in our previous study [14] and slightly higher than that reported by nationwide surveys in Taiwan (3.0 μg/dL) [23], Europe (2.57 μg/dL) [24], and the USA (3.5 μg/dL) [25]. This difference could be the result of the older age (mean, 60.1 years) of our study patients.
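The time-to-event analyses reported above (Cox hazard ratios and the log-rank comparison of Kaplan-Meier curves) can be sketched with the lifelines package; the synthetic per-patient table below is a placeholder standing in for the study data, and only BLB is used as a covariate for brevity:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

# Synthetic table: months to the composite endpoint (2-fold creatinine rise,
# dialysis, or death), an event flag, and baseline BLB (all placeholders).
rng = np.random.default_rng(1)
n = 89
blb = rng.normal(110, 52, n).clip(10, 590)
time = rng.exponential(30 / (1 + 0.01 * (blb - 110)), n).clip(1, 24)
event = (time < 24).astype(int)  # crude right-censoring at 24 months
df = pd.DataFrame({"months": time, "event": event, "blb": blb})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()  # exp(coef) for blb ~ hazard ratio per 1 ug of BLB

# Log-rank comparison of the two BLB groups, as in the Kaplan-Meier analysis.
high = df["blb"] > 80
res = logrank_test(df.loc[high, "months"], df.loc[~high, "months"],
                   df.loc[high, "event"], df.loc[~high, "event"])
print(res.p_value)
```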
The mean BLB of our patients was only 109.9 μg, which is much lower than the level of subtle lead poisoning (>600 μg) [7]. Although there were no differences in baseline data, Kaplan-Meier analysis showed that patients with high-normal BLB were more likely (58.1%) to achieve the primary outcome than those (33.3%) with low-normal BLB during the 24-month followup. Multivariate Cox analysis indicated that each 100-μg increase in BLB could lead to a 100% increase in the risk of achieving the primary outcome. Consistent with this result, EDTA chelation therapy has shown benefits in retarding progressive diabetic nephropathy in type II diabetic patients with high-normal BLB [14,26]. Hence, environmental exposure to lead may accelerate progressive diabetic nephropathy in these patients, and it is reasonable to suggest chelation therapy for patients with high-normal BLB, who accounted for 70% (62/89) of the current study patients.

The mechanism underlying the effect of environmental exposure to low levels of lead on accelerating the development of progressive diabetic nephropathy remains unclear. Low-level lead exposure in a rat CKD model was found to hasten progressive CKD by accelerating microvascular and tubulointerstitial injury [27]. Studies performed on animals [28,29] have shown that chronic exposure to low-dose lead results in the generation of reactive oxygen species, reduces nitric oxide availability and the expression of angiotensin II, and increases blood pressure [29]. It also promotes hydroxyl radical generation and lipid peroxidation [30], enhances vascular reactivity to sympathetic stimulation, and decreases DNA repair capacity, which might be relevant for rapidly dividing cells in the inflamed arterial wall [31]. Moreover, chronic low-level lead-induced oxidative stress and reduced nitric oxide availability were successfully treated with a lead chelating agent or antioxidants in rats [29,32]. These findings support the idea that chronic exposure to low levels of lead may have a negative effect on diabetic nephropathy. Several recent nationwide epidemiological studies also indicated that environmental exposure to lead, even at low levels, is associated with CKD in the general population [33,34]. Moreover, higher BLL in the range below 10 μg/dL was shown to be related to lower cystatin-estimated GFR [35] in adolescents. These previous studies support the current study results. However, much remains to be explored regarding the mechanisms of lead-induced progressive diabetic nephropathy.

The use of eGFR to assess altered renal function is one of the limitations of the present study. However, a study on eGFR in Chinese patients with type II diabetes conducted by Barbosa et al. [36] demonstrated a strong correlation between eGFR and isotopic GFR (R2 = 0.95) in the Chinese population. Another limitation of this study was that BLB was not assessed using X-ray fluorescence methods. However, there are several important limitations associated with X-ray fluorescence-based methods [36], such as a lack of precision, nonhomogeneous lead distribution in cortical bone, and a low turnover rate with low biological activity of lead in cortical bone. By contrast, lead that can be chelated by EDTA predominantly reflects lead concentrations in the blood and soft tissues. Because the kidneys are included among the soft tissues, EDTA mobilization may reflect the lead content of the kidney [37], which may influence progressive CKD.
Conclusion
The results of this prospective study indicate that environmental exposure to lead may accelerate progressive diabetic nephropathy in type II diabetic patients despite the control of treatable factors during long-term followup. These results suggest that avoiding exposure to any sources of lead in the environment and chelation therapy are important in patients with BLB >80 μg. The findings of the current study are important because diabetic nephropathy is a major cause of end-stage renal disease worldwide.
Analysis of the Effects of Blade Installation Angle and Blade Number on Radial-Inflow Turbine Stator Flow Performance

Organic Rankine cycle (ORC) is a reliable technology for recovering low-grade heat sources. The radial-inflow turbine is a critical component, which has a significant influence on the overall efficiency of the ORC system. This study investigates the effects of the blade installation angle and blade number on the flow performance of the radial-inflow turbine stator. R245fa and toluene were selected as the working fluids in the low and high temperature ranges, respectively. Two-dimensional stator blade models for the two working fluids were established, and numerical simulations were conducted with Computational Fluid Dynamics (CFD) software. The results show that for the low temperature working fluid R245fa, when the installation angle is 32° and the blade number is 22, the distribution of static pressure along the stator blade shows no obvious pressure fluctuation, and the flow loss is least; the stator blade thus attains its optimal performance. For the high temperature working fluid toluene, when the installation angle is 28° and the blade number is 32, the average outlet temperature is the lowest, while the average outlet velocity is the largest. The flow state is smooth, and no remarkable flow separation or shock wave is present. Moreover, the stator blade for R245fa has a larger chord length, cascade inlet diameter, and cascade outside diameter but a lower blade number compared with that for toluene.

Introduction
Due to the massive consumption of primary energy, the problems of energy shortage and environmental deterioration are prominent. Therefore, the recovery of low-grade heat sources such as solar energy, geothermal energy, biomass energy, and low temperature waste heat is imperative [1-3]. Among all existing technologies, organic Rankine cycle (ORC) has proven to be a viable alternative for converting low-grade heat into electricity [4,5]. Additionally, it has the advantages of high reliability, small size, and low capital cost, since it has the same configuration as the conventional steam Rankine cycle [6]. ORC uses organic compounds with a low boiling point instead of water as the working fluid. Therefore, a higher inlet pressure of the turbine can be obtained even for low temperature heat sources [7]. However, the thermal efficiency of the ORC system is at a relatively low level due to the low operating temperature. The ORC expander is another factor limiting the system efficiency; thus a high-performance expander is essential for enhancing the performance of the ORC system.

In order to achieve higher thermal efficiency, the initial parameters of the organic working fluid at the turbine inlet are close to the critical point, in the so-called real-gas thermodynamic region. Therefore, the ideal gas law does not apply anymore, and an accurate thermodynamic model is required to calculate the thermodynamic properties of the organic working fluid for one-dimensional preliminary design and Computational Fluid Dynamics (CFD) simulation of ORC turbines [8,9]. Lio et al. [10] conducted an optimum design of a single-stage radial-inflow turbine based on the mean-line model. Taking R245fa as an example, the effects of design choices and working conditions on the turbine efficiency were studied using real-gas properties.
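Because the stator inlet sits in this real-gas region, property calls in a design or CFD pre-processing script must come from a real-fluid model rather than the ideal gas law. A minimal sketch using the CoolProp library; the inlet states below are illustrative assumptions, not the design values of this study:

```python
from CoolProp.CoolProp import PropsSI

# Illustrative superheated/near-critical inlet states for the two fluids.
for fluid, T_in, p_in in [("R245fa", 430.0, 3.0e6), ("Toluene", 560.0, 1.0e6)]:
    rho = PropsSI("D", "T", T_in, "P", p_in, fluid)  # density, kg/m3
    a = PropsSI("A", "T", T_in, "P", p_in, fluid)    # speed of sound, m/s
    z = PropsSI("Z", "T", T_in, "P", p_in, fluid)    # compressibility factor
    print(f"{fluid}: rho = {rho:.1f} kg/m3, a = {a:.1f} m/s, Z = {z:.3f}")
```

A compressibility factor Z well below 1 at such states is exactly why ideal-gas assumptions give misleading Mach numbers and expansion paths in ORC stator design.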
Sauret et al. [11] presented the 1D design process of a radial-inflow turbine working with R143a, and the Peng-Robinson equations of state were used to calculate the real-gas properties. Furthermore, a 3D numerical simulation of the R134a radial-inflow turbine was performed for off-design conditions. Rahbar et al. [12] proposed a new mathematical approach which integrates the mean-line modeling method coupled with a real-gas equation of state and a GA (genetic algorithm) optimization method. Turbine efficiency was selected as the objective function, and eight working fluid candidates were investigated to optimize radial-inflow turbine performance. R152a obtained the highest efficiency among all of the selected working fluid candidates. Colonna et al. [13] compared the flow fields and performance parameters obtained with different equations of state (EoS), including the simple ideal gas EoS, the Peng-Robinson-Stryjek-Vera cubic EoS, and the state-of-the-art Span-Wagner EoS. The results show that there are large deviations in the fluid dynamic results between the ideal gas EoS and the other two EoS. Afterwards, the same research team found that standard design methods based on the ideal gas law cannot produce a proper design of the turbine, and a 2D inviscid fluid dynamic numerical simulation of a turbine stator blade with an accurate thermodynamic model was conducted [14].

Generally, the scroll expander, screw expander, and radial-inflow turbine are commonly used as heat-to-power machines in a conventional ORC system. Due to its ability to handle large enthalpy drops at low peripheral speeds, its higher efficiency at off-design working conditions, and its higher single-stage expansion ratio, the radial-inflow turbine is the most common choice for commercial and large-scale ORC systems [15]. Compared with conventional working fluids, organic working fluids invariably possess large molecular weight, high density, and low critical temperature and pressure. These special physical properties lead to some distinguishing characteristics, such as a high Mach number at the stator outlet, supersonic flow in the stator, a small size, and a high rotational speed [16]. These characteristics limit the radial-inflow turbine efficiency, which further lowers the ORC system efficiency. Therefore, optimization of the turbine design and aerodynamic performance is a critical step for the ORC system. Li et al. [17] conducted the aerodynamic and profile design of a radial-inflow turbine with R123 as the working fluid, and numerical analysis was carried out based on the designed turbine. The simulation results show that there is a shock wave in the stator due to the high expansion ratio. Afterwards, the same radial-inflow turbine was aerodynamically optimized based on the NURBS (Non-Uniform Rational B-Splines) curve method and CFD software, including the nozzle, meridional path, and turbine blade [18].
[19] first developed a thermodynamic model of the ORC system, selecting R123 as the optimal working fluid among six candidates; they then performed the aerodynamic and mechanical design of three centrifugal turbines with different stage counts using R123 as the working fluid, and assessed the power and efficiency characteristics of the designed turbines through CFD simulation. In addition, some researchers [20-24] have coupled a one-dimensional model of the radial-inflow turbine with a thermodynamic model of the ORC system to investigate the influence of turbine efficiency on ORC parameter optimization and working-fluid selection.

As a critical component of the radial-inflow turbine, the stator blade's flow performance has a significant influence on turbine efficiency. Part of the peripheral work produced by the turbine is directly converted from the inertial force, so the velocity coefficients of the stator and the impeller affect turbine efficiency differently: when the velocity coefficients of the stator and the impeller are each decreased by 1%, the turbine peripheral efficiency decreases by about 1% and 0.2%, respectively [25]. It is therefore essential to optimize the stator blade design to improve the flow efficiency. Dong et al. [26] evaluated the effects of outlet blade angle, solidity, blade height, expansion ratio, and surface roughness on the stator velocity coefficient through numerical simulation, and modified the existing semi-empirical formula for the stator velocity coefficient to capture the effect of surface roughness. Harinck et al. [27] conducted a numerical study of the flow field in a high-expansion-ratio radial-inflow turbine stator; a shock-induced separation bubble was found in the stator, which affected the flow velocity and angle along the stator outlet. To reduce the effects of shock waves in the stator channel, Pasquale et al. [28] developed an optimization loop coupling a CFD solver with a genetic algorithm; the optimized stator blades produced a shock-free expansion, reducing the total pressure losses significantly. Uusitalo et al. [29] designed a highly supersonic small-scale ORC turbine stator considering the real-gas effects of organic working fluids; the flow field in the stator was predicted with CFD software at design and off-design conditions, and an oblique shock wave was found at the stator blade trailing edge. To the best of the authors' knowledge, few studies have investigated the effects of blade installation angle and blade number on radial-inflow turbine stator performance in detail, and little research has compared stator geometry parameters between low- and high-temperature working fluids. There is thus room for further work in this area.
To investigate the effects of blade installation angle and blade number on the performance of the radial-inflow turbine stator, and to compare stator geometry parameters between low- and high-temperature working fluids, R245fa and toluene were selected as working fluids in the low- and high-temperature ranges, respectively. Two-dimensional stator blade models were established for the two working fluids, and numerical simulations were conducted with CFD software. The effects of installation angle and blade number on stator blade performance were then investigated; the distribution of static pressure along the stator blade, the flow loss, and the outlet parameters were compared and analyzed, and the optimal installation angle and blade number were selected for both working fluids. Finally, the geometric parameters of the stator blades in the low- and high-temperature ranges were compared.

Numerical Analysis Model and Method

Similar to axial-flow turbines, the velocity distribution at each characteristic section of the radial-inflow turbine can be expressed by defining velocity triangles, as shown in Figure 1.
The peripheral efficiency of the radial-inflow turbine can be expressed as

η_u = Δh_u / Δh_s

where Δh_u is the peripheral work determined by the velocity triangles and Δh_s is the isentropic enthalpy drop of the organic working fluid across the entire radial-inflow turbine.

Thus, according to the velocity triangles and the equations above, the peripheral efficiency can be rewritten in dimensionless form [17]:

η_u = f(ū₁, Ω, φ, ψ, D̄₂, α₁, β₂)

From this equation it can be concluded that the peripheral efficiency of the radial-inflow turbine is a function of seven parameters: the velocity ratio ū₁, the degree of reaction Ω, the stator blade velocity coefficient φ, the rotor blade velocity coefficient ψ, the wheel diameter ratio D̄₂, the absolute velocity angle at the rotor inlet α₁, and the relative velocity angle at the rotor outlet β₂. Among these, the velocity ratio and the degree of reaction have the largest effect on the peripheral efficiency, followed by the absolute velocity angle at the rotor inlet and the stator blade velocity coefficient, both of which are related to the stator blade [25].

Owing to the high molecular complexity, the relative sound speed of an organic working fluid is lower than that of a conventional working fluid, which commonly leads to supersonic flow in the radial-inflow turbine stator blade. The transonic cascade TC-4P (National Research University Moscow Power Engineering Institute, Moscow, Russia) is derived from TC-2P (same institute) by increasing the bending degree of the back-arc outlet profile and reducing the blade installation angle. This modification extends the operating range of the TC-4P cascade to the high-Mach-number region, especially supersonic operation, and the oblique exit section of TC-4P is conducive to obtaining supersonic flow. The TC-4P cascade therefore has satisfactory aerodynamic performance under transonic conditions, and TC-4P is adopted as the blade profile for the radial-inflow turbine stator in this paper. The relative coordinates of TC-4P are given in Table 1. The absolute velocity angle at the rotor inlet α₁ is proportional to the stator blade installation angle α_b, which is easier to control in the manufacturing process of a radial-inflow turbine. Figure 2 shows the relationship between the absolute velocity angle at the rotor inlet and the stator blade installation angle for the TC-4P blade profile. Both the blade installation angle and the blade number of the stator have a significant influence on the shape of the stator flow path, and both affect the flow loss of the stator. In this paper, the effects of blade installation angle and blade number on stator performance are investigated.
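As a concrete illustration of how the peripheral efficiency follows from the velocity triangles, the following sketch evaluates the Euler work and the efficiency ratio defined above. It is a minimal sketch, not the design code used in this study: the function names and sample velocities are hypothetical, and the angle convention (absolute flow angles measured from the tangential direction) is an assumption.

```python
import math

def euler_specific_work(u1, cu1, u2, cu2):
    """Euler turbine equation: specific peripheral work from the change
    of angular momentum across the rotor (J/kg)."""
    return u1 * cu1 - u2 * cu2

def peripheral_efficiency(u1, c1, alpha1_deg, u2, c2, alpha2_deg, dh_s):
    """Peripheral efficiency = Euler work / isentropic enthalpy drop.
    alpha1 and alpha2 are absolute flow angles measured from the
    tangential direction, as in the velocity triangles of Figure 1
    (an assumed convention)."""
    cu1 = c1 * math.cos(math.radians(alpha1_deg))  # tangential component, rotor inlet
    cu2 = c2 * math.cos(math.radians(alpha2_deg))  # tangential component, rotor outlet
    return euler_specific_work(u1, cu1, u2, cu2) / dh_s

# Hypothetical sample numbers for illustration only.
eta_u = peripheral_efficiency(u1=180.0, c1=260.0, alpha1_deg=16.0,
                              u2=90.0, c2=95.0, alpha2_deg=85.0, dh_s=50e3)
print(f"peripheral efficiency: {eta_u:.3f}")  # about 0.885 with these inputs
```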
The performance of the radial-inflow turbine stator is evaluated in this paper by means of Ansys Fluent (17.0, Ansys, Pittsburgh, PA, USA). Since the organic working fluid is compressible and viscous, the relative pressure is set to zero and a density-based implicit solver is used. To improve the accuracy of the calculation, the transport equations are discretized with the AUSM (Advection Upstream Splitting Method) scheme, and the other equations are discretized with the third-order MUSCL (Monotonic Upwind Scheme for Conservation Laws) scheme. Because of the high density and low kinematic viscosity, the Reynolds number is very large and the flow is turbulent. Since the k-ω based shear stress transport (SST) model accounts for the transport of turbulent shear stress in the definition of the turbulent viscosity, it can accurately predict flow separation under an adverse pressure gradient. The SST k-ω model is therefore adopted in this paper, and the governing equations take the standard form.

The continuity equation is

∂ρ/∂t + ∂(ρuᵢ)/∂xᵢ = 0

The momentum equations are

∂(ρuᵢ)/∂t + ∂(ρuᵢuⱼ)/∂xⱼ = −∂p/∂xᵢ + ∂τᵢⱼ/∂xⱼ

The energy equation is

∂(ρE)/∂t + ∂[uᵢ(ρE + p)]/∂xᵢ = ∂/∂xⱼ(k_eff ∂T/∂xⱼ + uᵢτᵢⱼ)

and the k-ω transport equations are

∂(ρk)/∂t + ∂(ρkuᵢ)/∂xᵢ = ∂/∂xⱼ(Γ_k ∂k/∂xⱼ) + G_k − Y_k
∂(ρω)/∂t + ∂(ρωuᵢ)/∂xᵢ = ∂/∂xⱼ(Γ_ω ∂ω/∂xⱼ) + G_ω − Y_ω + D_ω

To calculate the properties of the organic working fluids in the CFD simulation, the Peng-Robinson EoS [30] is adopted, one of the most widely used cubic equation-of-state models for real gases in engineering. This EoS provides reasonable accuracy in the calculation of organic working-fluid properties, especially near the critical point and the saturated vapor line. The PR equations are

p = RT/(v − b) − a·α(T)/(v² + 2bv − b²)
a = 0.45724 R²Tc²/pc,  b = 0.07780 RTc/pc
α(T) = [1 + κ(1 − √(T/Tc))]²,  κ = 0.37464 + 1.54226ω − 0.26992ω²
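To make the property model concrete, the sketch below evaluates the Peng-Robinson pressure for R245fa at a superheated-vapour state. It is a minimal sketch under stated assumptions rather than the property routine used in the simulations: the critical constants and acentric factor are approximate literature values, and the sample state point is hypothetical.

```python
R = 8.314462618  # universal gas constant, J/(mol K)

# Approximate critical constants for R245fa (literature values; assumptions here).
TC, PC, OMEGA = 427.16, 3.651e6, 0.3776  # K, Pa, acentric factor

def pr_pressure(T, v):
    """Peng-Robinson EoS: pressure (Pa) from temperature T (K) and
    molar volume v (m^3/mol)."""
    a = 0.45724 * R**2 * TC**2 / PC
    b = 0.07780 * R * TC / PC
    kappa = 0.37464 + 1.54226 * OMEGA - 0.26992 * OMEGA**2
    alpha = (1.0 + kappa * (1.0 - (T / TC) ** 0.5)) ** 2
    return R * T / (v - b) - a * alpha / (v * v + 2.0 * b * v - b * b)

# Example: a hypothetical superheated-vapour state near turbine-inlet conditions.
T, v = 380.0, 1.6e-3  # K, m^3/mol
print(f"PR pressure at T={T} K, v={v} m^3/mol: {pr_pressure(T, v)/1e5:.1f} bar")
```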
Based on siloxane MM, Dong et al. [9] compared the fluid properties calculated by the Peng-Robinson equations with those obtained from the NIST database; the maximum difference was less than 4%, which indicates the feasibility of the PR equations in CFD simulation. To complete the description of the real-gas properties, the zero-pressure specific heat capacity c_p0 is also required; c_p0 is determined by a fourth-order polynomial whose coefficients are obtained by curve fitting to the NIST database [26].

Figure 3 shows the two-dimensional stator blade model. Unstructured adaptive meshing has the advantage of straightforwardly treating domains of arbitrarily complex geometry, so an unstructured adaptive mixed mesh was generated with the Ansys ICEM CFD software (17.0, Ansys, Pittsburgh, PA, USA). A boundary-layer mesh was generated in the region near the blade surface and the cascade trailing edge, as shown in Figure 4, and the grid quality is greater than 0.3. A mesh-independence study was conducted to minimize the error due to mesh spacing: five different unstructured meshes were generated for the stator flow-channel geometry, and the pressure at the throat of the cascade was monitored. The largest difference between the 4.0 × 10⁴-cell and 4.5 × 10⁴-cell meshes is less than 0.1%, as shown in Figure 5. This difference is acceptable, so mesh independence is reached with 4.0 × 10⁴ cells.
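The grid-convergence check described above amounts to monitoring one quantity over successively refined meshes and accepting the coarsest mesh whose value stops changing within a tolerance. The sketch below reproduces that bookkeeping; the cell counts mirror those in the text, but the throat-pressure values are hypothetical placeholders, not the study's data.

```python
# Hypothetical monitored throat pressures (Pa) for five meshes of
# increasing cell count, mimicking the grid-convergence check.
meshes = [(2.0e4, 1.3500e6), (2.5e4, 1.3433e6), (3.0e4, 1.3399e6),
          (4.0e4, 1.3383e6), (4.5e4, 1.3376e6)]

TOL = 1e-3  # accept the coarser mesh when the change is below 0.1%

def first_independent_mesh(results, tol=TOL):
    """Return the first cell count whose monitored value changes by less
    than `tol` (relative) when the mesh is refined one step further."""
    for (cells, p), (_, p_next) in zip(results, results[1:]):
        if abs(p_next - p) / abs(p) < tol:
            return cells
    return results[-1][0]  # fall back to the finest mesh

print(f"mesh-independent cell count: {first_independent_mesh(meshes):.1e}")
# prints 4.0e+04 for the sample values above
```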
The inlet boundary conditions are set as the total pressure, total temperature, and 5% turbulence intensity, with the inlet flow direction assumed normal to the boundary. The static pressure is set as the outlet boundary condition. Adiabatic, smooth, no-slip conditions are applied on all passage walls. To avoid reverse flow, two extensions of the domain were placed in front of and behind the stator passage, at a distance equivalent to approximately 25% of the stator chord. Since only one blade passage is simulated, periodic boundary conditions are applied to the stator.

Owing to the lack of aerodynamic experimental data for the transonic cascade TC-4P working with organic fluids, the present 2D numerical simulation was validated against the 1D thermal design. Taking R245fa as an example, Table 2 compares the results of the 1D thermal design and the 2D turbulent numerical simulation. The numerical results are in relatively good agreement with the 1D thermal design, and the deviation of all parameters is within 2.1%. The proposed 2D CFD simulation method is therefore feasible for the radial-inflow turbine stator.

Results and Discussion

The blade installation angle and blade number are two key geometric parameters of the radial-inflow turbine stator. Both influence the shape of the stator flow path, the pressure distribution along the stator blade, and the velocity and temperature at the outlet. In addition, the stator blade installation angle affects the absolute velocity angle at the rotor inlet, which has a significant influence on the velocity triangles and the stator velocity coefficient. In this paper, the low and high temperatures are taken as 353.15 K and 573.15 K, respectively; correspondingly, R245fa and toluene were selected as working fluids, and 2D stator blade models were established for each. The influence of installation angle and blade number on the flow performance of the stator blade was investigated, and the geometric parameters of the stator blades for R245fa and toluene were compared.
Low Temperature Working Fluid R245fa

To investigate the effect of blade installation angle, the blade number is fixed at 22. Figures 6 and 7 show the static pressure distribution around the stator blade surface and the nozzle loss at the design condition for different blade installation angles. The relative position in the static pressure curves is taken along the x-axis direction, and the study object is the blade with the coordinates given in Table 1. The nozzle loss was calculated as

ζ_N = 1 − φ²

where φ is the stator velocity coefficient.

In Figure 6, the static pressure distribution curves around the stator blade surface show a similar trend: there is a favorable pressure gradient along the pressure side, and a large pressure difference exists at the trailing edge of the stator blade; in other words, the region of relatively high blade loading is located at the trailing edge. An obvious pressure fluctuation appears around the streamwise region 0.75-0.85 on the suction side, which indicates overexpansion of the organic working fluid. Flow in an adverse pressure gradient and flow separation occur in this region, and a shock wave also appears owing to the pressure fluctuation, which would significantly deteriorate the radial-inflow turbine performance.

The presence of a shock wave causes a drastic variation in the flow parameters along the stator blade and a simultaneous reduction of the stator velocity coefficient, so the overall turbine efficiency decreases. As shown in Figure 6, the pressure fluctuation for the blade installation angles of 28° and 36° is more violent than that for 32°. When the installation angle is 32°, the adverse pressure gradient is essentially zero and the shock-wave intensity is weaker than for the other two angles. In addition, the pressure-fluctuation point for the installation angle of 32° lies further downstream, which allows the flow separation to merge into the wake more quickly. The flow condition in the flow path is therefore much better for the installation angle of 32°, and it is apparent from Figure 7 that the flow loss of the stator blade is least among the three installation angles.
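A small sketch of this loss bookkeeping is given below. It assumes the conventional relation between the stator velocity coefficient and the nozzle energy-loss coefficient stated above; the outlet velocities are hypothetical values of the kind one would extract from a CFD solution.

```python
def nozzle_loss(c1_actual, c1_isentropic):
    """Stator velocity coefficient phi = c1/c1s and the corresponding
    nozzle energy-loss coefficient zeta = 1 - phi**2 (conventional
    definition; assumed here rather than taken from the paper)."""
    phi = c1_actual / c1_isentropic
    return phi, 1.0 - phi * phi

# Hypothetical stator-outlet velocities (m/s): actual vs. isentropic.
phi, zeta = nozzle_loss(c1_actual=252.0, c1_isentropic=262.0)
print(f"phi = {phi:.3f}, nozzle loss = {zeta:.3f}")
```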
Figure 8 shows the static pressure distribution curves around the stator blade surface for different blade numbers at a blade installation angle of 32°. For all blade numbers, there is a favorable pressure gradient along the pressure side. However, the pressure difference at the trailing edge of the stator blade for blade number 18 is larger than for the other two blade numbers, which means a higher blade loading for the stator blade with 18 blades. There is an obvious pressure fluctuation around the streamwise region 0.7-0.75 on the suction side for the stator blade with 26 blades; the flow is thus overexpanded, which leads to flow in an adverse pressure gradient and flow separation. As shown in Figure 9, owing to the resulting shock wave, the flow loss of the stator blade with 26 blades is larger than with 18 or 22 blades. There is also a slight pressure fluctuation around the streamwise region 0.8-0.85 on the suction side for the stator blade with 22 blades, but the adverse pressure gradient is essentially zero, so no shock wave exists at the trailing edge. It can be seen from Figure 9 that the flow loss of the stator blade is least for blade number 22, and the flow efficiency of the organic working fluid in the stator flow path is highest.

High Temperature Working Fluid Toluene

The average outlet temperature of the stator blade reflects its expansion and acceleration capacity. The lower the average outlet temperature, the greater the enthalpy drop of the organic working fluid passing through the stator blade. A lower average outlet temperature means a stronger capacity to convert the heat energy of the working fluid into kinetic energy, indicating excellent expansion and acceleration capacity. The average outlet velocity also reflects stator blade performance: the greater the average outlet velocity, the larger the velocity coefficient, and hence the lower the flow loss in the stator flow path.
Figure 10 presents the variation of average outlet temperature and average outlet velocity with blade installation angle for the radial-inflow turbine stator blade with 32 blades. The average outlet temperature first decreases and then increases with increasing installation angle. As mentioned above, the absolute velocity angle at the rotor inlet is proportional to the stator blade installation angle. As the installation angle decreases, the absolute velocity angle at the rotor inlet decreases accordingly, while the stator velocity coefficient increases and the stator loss coefficient decreases. Thus, within a certain range, the average outlet temperature decreases as the installation angle decreases. With further decrease of the installation angle, however, the wake loss increases, which reduces the stator velocity coefficient and raises the stator loss coefficient; below a certain value of the installation angle, the average outlet temperature therefore increases as the angle decreases. As can be seen from Figure 10, the average outlet velocity first increases and then decreases with increasing installation angle, opposite to the variation of the average outlet temperature and for analogous reasons. When the blade installation angle is 28°, the average outlet temperature is the lowest while the average outlet velocity is the largest, indicating that the stator blade with an installation angle of 28° has a higher velocity coefficient and better expansion capability.

Figure 11 displays the variation of average outlet temperature and average outlet velocity with blade number for the radial-inflow turbine stator blade with an installation angle of 28°. As shown in Figure 11, the average outlet temperature first decreases and then increases with increasing blade number. The reason is that, within a certain range, the flow field in each channel becomes more uniform as the blade number increases; the flow-separation loss therefore decreases, the enthalpy drop of the working fluid through the stator blade increases, and the average outlet temperature decreases. With further increase in blade number, the frictional loss between the working fluid and the walls increases, so the enthalpy drop decreases and the average outlet temperature rises again. The average outlet velocity first increases and then decreases with increasing blade number, opposite to the variation of the average outlet temperature. When the blade number is 32, the average outlet temperature is the lowest while the average outlet velocity is the largest, which indicates
that the enthalpy drop of the working fluid through the stator blade is the largest. The stator blade with 32 blades therefore has the better performance. Figure 12 shows the streamlines in the stator blade with an installation angle of 28° and 32 blades; the flow is smooth, with no remarkable flow separation or shock waves.

Geometric Parameters Comparison of the Stator Blade between Low and High Temperature

After the optimal blade installation angle and blade number are determined, the other geometric parameters are also calculated, as listed in Table 3.
Due to the low critical temperature and inlet temperature of the working fluid R245fa, the specific enthalpy drop in the stator blade is low, as seen from the T-s plots (Figure 13). The chord length, cascade inlet diameter, and cascade outside diameter of the stator blade are therefore larger, while the blade number is lower. These characteristics help increase the working-fluid mass flow rate and the working capacity of the radial-inflow turbine. The critical temperature and inlet temperature of toluene are much higher than those of R245fa, and the specific enthalpy drop in the stator blade is correspondingly larger (Figure 13). Thus, a smaller chord length, cascade inlet diameter, and cascade outside diameter and a larger blade number are preferred. These characteristics help increase the average outlet velocity and reduce the flow loss, which makes the flow of working fluid entering the rotor more uniform.
Conclusions

This work investigates the effects of blade installation angle and blade number on the performance of the radial-inflow turbine stator. R245fa and toluene were selected as working fluids in the low- and high-temperature ranges, respectively, and 2D stator blade models were established for the two fluids. The CFD simulation method was used to analyse the performance of the stator blade: the distribution of static pressure along the stator blade, flow loss, average outlet temperature, and average outlet velocity were calculated for different blade installation angles and blade numbers. For the low-temperature working fluid R245fa, when the installation angle is 32° and the blade number is 22, the static pressure distribution shows no obvious pressure fluctuation and the flow loss is least; the stator blade attains its optimal performance. For the high-temperature working fluid toluene, when the installation angle is 28° and the blade number is 32, the average outlet temperature is the lowest while the average outlet velocity is the largest, and the flow is smooth, with no remarkable flow separation or shock waves. Owing to its low critical temperature and inlet temperature, R245fa has a lower specific enthalpy drop in the stator blade than toluene; the stator blade for R245fa therefore has a larger chord length, cascade inlet diameter, and cascade outside diameter but a lower blade number.

Figure 4. The unstructured mesh around the stator blade.
Figure 6. Static pressure distribution around the stator blade surface with blade number 22.
Figure 7. Flow loss at different blade installation angles with blade number 22.
Figure 8. Static pressure distribution along the nozzle blade surface with installation angle 32°.
Figure 9. Flow loss at different blade numbers with installation angle 32°.
Figure 10. Average outlet temperature and average outlet velocity for the stator blade with blade number 32.
Figure 11. Average outlet temperature and velocity of the stator with an installation angle of 28°.
Figure 12. Streamlines in the stator blade with an installation angle of 28° and blade number 32.
Figure 13. The T-s plots for the working fluids.
Table 1. Relative coordinates for the TC-4P blade profile.
Table 2. Comparison of the 1D thermal design and the 2D turbulent numerical simulation.
Table 3. Geometric parameters of the stator blades.
Underwater and Terrestrial Feeding in the Sri Lankan Wart-frog, Lankanectes corrugatus

The vast majority of the world's anurans feed terrestrially, with aquatic prey capture having been observed in only a handful of species. We tested the predation behaviour of the strictly aquatic 'fanged' frog Lankanectes corrugatus (Nyctibatrachidae) by providing specimens with both aquatic and terrestrial feeding opportunities. The frogs successfully captured prey both underwater and on land adjacent to water. During underwater feeding they located prey purely by tactile stimuli rather than by vision; prey were scooped into the open mouth using both hands. When feeding terrestrially, however, the frogs relied on visual cues alone when attacking prey, capturing prey items by lunging at them, grasping and scooping with the hands. Oral suction and tongue or jaw prehension were not observed in prey capture, whether underwater or on land, and the 'fangs' do not appear to play a role in prey capture or ingestion.

INTRODUCTION

Lankanectes corrugatus, a nyctibatrachid frog endemic to Sri Lanka, is widely distributed in the island's south-western quarter, from near sea level to elevations of about 1500 m (Manamendra-Arachchi and Pethiyagoda, 2006). Despite being a relatively common species made additionally conspicuous by large size (up to 70 mm snout-vent length) and a loud, distinctive call, very little is known of its natural history, even oviposition being as yet unreported. Lankanectes are obligatorily aquatic frogs, usually inhabiting shallow pools in rainforest streams where they rest on the substrate, seated on their haunches, submerged except for their eyes, which protrude above the surface (Fig. 1). Juveniles retain the lateral-line sensory system, and the species exhibits a marked sexual dimorphism: males possess a pair of prominent bony odontoid processes ('fangs') on either side of the mandibular symphysis, the processes being greatly reduced in females (Manamendra-Arachchi and Pethiyagoda, 2006).

Feeding mechanisms of anurans are diverse. Most terrestrial anurans have attached, protrusible tongues and depend heavily on lingual adhesion for capturing prey. A smaller proportion of species uses jaw prehension, while others possess highly specialized jaw-closing mechanisms to capture prey (Nishikawa, 2000). Underwater feeding, however, is rare among anurans and has hitherto not been reported in any Asian species. Feeding in water poses substantially different challenges than capturing prey on land, given the much higher density, viscosity and, in the usual habitats of L. corrugatus, also the turbidity of water in comparison to air (Carreno and Nishikawa, 2010).

In 2004, M. M. Bahir (pers. comm.) reported a chance observation of an L. corrugatus apparently preying on an aquatic invertebrate underwater at Agrapatana, in the Sri Lankan highlands. Recent field work at the same site provided us with an opportunity to further investigate its feeding behaviour.

MATERIALS AND METHODS

We placed six freshly collected L. corrugatus individuals (38-61 mm SVL; three of each sex) one at a time in a 30-cm wide glass aquarium with sufficient water for the frogs to rest in their usual posture (Fig. 1).
Two earthworms (~5-12 cm) were then dropped into the water and the succeeding sequence was recorded at 300 frames per second (10 × real time) using a Casio Exilim EX-F1 video camera. Frogs were released to the same stream after between two and four feeding attempts. We also tested terrestrial feeding by building a sand-filled 10-cm wide (i.e. greater than the frogs' SVL) embankment on one side of the aquarium, on the far side of which a grasshopper was placed.

RESULTS

All six frogs attempted to capture earthworms under water. During these predation events the frogs did not respond to the worms until one of the prey made contact. They then either dived (Fig. 2A) or sank (Fig. 2B,C) to the prey, mouth agape, using both hands (manus) to capture the prey and shove it into the mouth (Movement 1, sensu Gray et al., 1997), the hands being inserted into the oral cavity in their entirety. The mouth was agape well before scooping commenced, which suggests that oral suction plays no role in the capture of the prey used here. The frogs evidently did not use visual cues to locate prey, never attacking a worm unless a tactile stimulus was received. Of the 35 attempted underwater feeding events observed, 14 were unsuccessful, the frog initiating a dive but being unable to locate the prey despite it being within a head-length of its eyes, which suggests that visual cues are not used to locate prey underwater. As is evident from Fig. 2, when feeding on earthworms underwater, L. corrugatus does not employ oral suction or tongue or jaw prehension: it appears to rely wholly on scooping to force prey into the mouth.

During terrestrial feeding, using visual cues alone, the frogs attacked the prey by lunging, squashing them against the substrate using their hands, and then grasping and pushing the prey into the mouth using both hands. Tongue or jaw prehension was not observed during this action. The frogs retreated to the water by scrambling backwards immediately once the prey was secured, and resumed their resting posture. All four terrestrial feeding attempts observed were successful.

DISCUSSION

We report these observations because underwater feeding and/or the use of hands in feeding have been recorded in very few species of anurans. The vast majority of frogs and toads possess protrusible tongues and employ lingual adhesion as the primary means of prey capture (Nishikawa and Schwenk, 2002). Most aquatic-feeding anurans use "terrestrial" methods such as jaw, tongue or forelimb prehension for aquatic prey capture (Dean, 2003). The other method of underwater prey capture by anurans is inertial suction feeding, which occurs only in the tongue-less frogs of the family Pipidae (Carreno and Nishikawa, 2010). For frogs feeding on large prey, however, the forelimbs play a significant role in prey manipulation: the jaws are used to capture prey and the forelimbs to transport prey into the oral cavity (Gray et al., 1997). In contrast, when feeding on smaller prey, these frogs transfer the prey into the oesophagus without the involvement of the forelimbs. Gray et al. (1997), who identified five distinct forelimb-movement patterns used for prey manipulation in frogs, suggested that the 'scooping' movement is primitive, widespread and well developed among aquatic anuran taxa. This method of feeding, however, may depend on the size and type of prey.
Although anurans are able to locate prey on the basis of tactile, olfactory or even auditory cues alone, vision appears to be the dominant sensory modality that most frogs use to detect prey (Monroy and Nishikawa, 2011). Frogs frequently use alternative kinematic strategies to deal with variation in particular attributes of their prey, such as size, shape, velocity or location, as is evident also in our observations. Lankanectes corrugatus usually inhabit shallow regions of seasonally turbid streams in which vision would likely be of limited use in locating prey underwater, whereas in terrestrial feeding visual cues alone were clearly sufficient for locating prey.

The diet of L. corrugatus is poorly known, as is much of its natural history. The gut contents of a few specimens have revealed aquatic beetles, cockroaches, millipedes, centipedes and dragonflies (M. Meegaskumbura, pers. comm.), suggestive of a broad, primarily terrestrial diet. In the aquarium, Lankanectes feed readily on earthworms, grasshoppers and other arthropods that match its size (H.S., pers. obs.).

It is also noteworthy that the 'fangs' of Lankanectes (analogous structures occur also in other anuran lineages, e.g. the dicroglossid genus Limnonectes) do not appear to play a role in prey ingestion or defence: the frogs did not attempt to bite when handled. As their sexual dimorphism suggests, their function is likely to be associated with combat or threat behaviour between males, as in Limnonectes (Tsuji and Matsui, 2002).
The Explanation-Polarisation Model: Pseudoscience Spreads Through Explanatory Satisfaction and Group Polarisation

This article presents an integrative model for the endorsement of pseudoscience: the explanation-polarisation model. It is based on a combination of perceived explanatory satisfaction and group polarisation, offering a perspective different from the classical confusion-based conception, in which pseudoscientific beliefs would be accepted through a lack of distinction between science and science mimicry. First, I discuss the confusion-based account in the light of current evidence, pointing out some of its explanatory shortcomings. Second, I develop the explanation-polarisation model, showing its explanatory power in connection with recent research outcomes in cognitive and social psychology.

In this section, I will argue that methodological shortcomings, a lack of specificity, and contradictions with experimental results argue against the accuracy of the specific predictions of this confusion-based conception. Some examples of this conception can be extracted from an influential multi-author book on the topic, Pigliucci and Boudry (2013), in which several prominent philosophers of pseudoscience emphasize an exploitation of the epistemic authority of science when explaining the psychological function of science mimicry:

• "Pseudoscience can cause so much trouble in part because the public does not appreciate the difference between real science and something that masquerades as science. (…) Pseudoscience thrives because we have not fully come to grips yet with the cognitive, sociological, and epistemological roots of this phenomenon" (pp. 3-4).
• "Pseudosciences piggyback on the authority science has been endowed with in modern society. The question remains as to why it is so important for pseudosciences to seek that authority, and why they often succeed in attaining it" (p. 373).
• "Pseudoscientific beliefs are usually dressed up in scientific garb. This does not substantially alter how they interact with human cognitive systems, however. All that it may do is render pseudoscientific beliefs somewhat more attractive in the context of modern cultures that hold scientific knowledge in great regard but have limited actual understanding of it" (p. 392).
• "Pseudoscientists seek to be taken seriously for the same reason that scientists claim our attention, that the propositions of a rigorous and rational science are more worthy of belief than the common run of opinion" (p. 417).

Some research outcomes on individual differences indicate that people have difficulties distinguishing between science and pseudoscience (Gaze, 2014; Lyddy & Hughes, 2012). Nevertheless, the trappings of science do not explain these difficulties, as the same lack of discernment has also been found in relation to non-science-mimicking paranormal beliefs (Brewer, 2013; Garrett & Cutting, 2017) and astrology, a borderline doctrine between pseudoscientific and paranormal rhetoric (DeRobertis & Delaney, 2000; Sugarman, Impey, Buxner, & Antonellis, 2011).

The negative association between well-oriented scientific literacy and pseudoscientific beliefs, as predicted by the confusion-based conception, is similarly not well supported.
Even though one study found negative correlations between knowledge of scientific facts and pseudoscientific beliefs (Fasce & Picó, 2019a), the effect is again not exclusive to pseudoscience, as it has also been found for non-mimicking paranormal beliefs (Aarnio & Lindeman, 2005; Fasce & Picó, 2019a; Vilela & Álvarez, 2004). Moreover, this association seems to be mediated by trust in science, which shows the same negative correlation with paranormal beliefs and conspiracy theories (Fasce & Picó, 2019a; Irwin, Dagnall, & Drinkwater, 2016). In a recent experimental study, researchers concluded that courses directly promoting a motivational state of distrust in pseudoscience produced a reduction of those beliefs, whereas general education classes on critical thinking and research methods did not (Dyer & Hall, 2019); additional experimental pre-test/post-test studies also suggest this mediated relationship (Franz & Green, 2013; Morier & Keeports, 1994; Wilson, 2018). Accordingly, contrary to the idea of a backfire effect caused by misguided trust in science, low confidence in science and disregard for the values of scientific inquiry are good predictors of the endorsement of pseudoscience (Lewandowsky & Oberauer, 2021; Omer, Salmon, Orenstein, deHart, & Halsey, 2009).

The confusion-based account also exhibits limitations regarding experimental results on the role of scientists as judgmental shortcuts. Sloman and Rabb (2016) conducted a series of experiments to test how people behave under conditions of division of cognitive labour and epistemic dependence. Their results show that knowing that scientists understand a phenomenon gives individuals the sense that they understand it better themselves, but only when they have ostensible access to scientists' explanations and accept them. These individuals were not blinded by scientists' discourses and aesthetics; instead, they tended to use experts' information as an echo chamber for their subjective assessment of scientific contents. Brewer (2013) evaluated the effect of three versions of a news story about paranormal investigators: one in terms of traditional supernaturalism, a second with a pseudoscientific rationale, and a third presenting a discrediting scientific critique. Although a pseudoscientific rationale increased the parapsychologists' perceived credibility, it had no significant effect on the endorsement of paranormal beliefs. Garrett and Cutting (2017) conducted a similar experiment, replicating the previously observed lack of differences between the three versions regarding the perceived believability of the paranormal story. Likewise, other studies support that, although science mimicry tends to increase sources' credibility, it does not promote change in beliefs (Bromme, Scharrer, Stadtler, Hömberg, & Torspecken, 2015; Knobloch-Westerwick, Johnson, Silver, & Westerwick, 2015; Thomm & Bromme, 2012; Zaboski & Therriault, 2020), as the effect of scientific jargon is mediated by its adjustment to previous beliefs (Scurich & Shniderman, 2014) and is not persuasive by itself (Gruber & Dickerson, 2012; Hook & Farah, 2013; Michael, Newman, Vuorre, Cumming, & Garry, 2013). An analysis of the controversy concerning so-called "neuromania" (Legrenzi & Umiltà, 2011) helps to better understand these results.
Neuroscientific research is particularly fascinating to the general public, so Weisberg, Keil, Goodstein, Rawson, and Gray (2008) conducted an experiment on the seductive allure of explanations containing irrelevant neuroscientific information. As expected, participants preferred irrelevant information couched in neuroscientific jargon, regardless of the quality of the underlying logic of the explanation. Other experiments have found the same effect (Fernandez-Duque, Evans, Christian, & Hodges, 2015; Weisberg, Taylor, & Hopkins, 2015), also using neuroimaging (McCabe & Castel, 2008), although the neuroimaging studies have low reproducibility rates (Schweitzer, Baker, & Risko, 2013). Pseudoscientific neuro-jargon is particularly effective for psychological explanations in comparison with irrelevant social science and natural science jargon (Fernandez-Duque et al., 2015; Weisberg, Hopkins, & Taylor, 2018), perhaps due to the authority ascribed to neuroscience when explaining behaviour (Racine, Waldman, Rosenberg, & Illes, 2010). Nevertheless, Tabacchi and Cardaci (2016) discovered that the allure of neuroscientific jargon is mediated by the wording of the question. All previous experiments asked participants how "satisfactory" they considered the explanations using a 7-point Likert scale, which is an aesthetic judgment, while in Tabacchi and Cardaci (2016) participants had to choose the correct explanation from two alternatives using a dichotomous measure of its truthfulness. In this psychometric context, the allure of explanations with vacuous pseudoscientific jargon was not observed, and, as no additional information was given to the participants, it is not likely that pseudoscientific jargon fooled their trust in science in the prior experiments. The allure of scientific jargon seems to depend on how individuals are asked to assess evidence in different motivational contexts, i.e., focusing on how satisfactory scientific jargon is in psychological terms or on how believable it is in epistemic terms.

In sum, current evidence consistently suggests that uncritical acceptance of pseudoscientific information is mediated by perceived explanatory satisfaction and adjustment to previous beliefs, not by misguided trust in science. Nevertheless, the criticism expressed in this section regarding the confusion-based approach does not fully invalidate it, as confusion could still be a relevant variable within specific groups and contexts. More research is needed to know whether the two models can be complementary rather than antithetical.

The Explanation-Polarisation Model

In this section, I will now develop an explanatory framework for the endorsement of pseudoscience detached from confusion-based conceptions. EX-PO departs from the usual definition of pseudoscience, which is based on science mimicry, but it does not explain the endorsement of pseudoscience by means of a faulty distinction between science and pseudoscience, thus conferring another role to science mimicry.³ The EX-PO model takes pseudoscience as a set of
The exploitation of the authority of science is not integral to the conventional definition of pseudoscience: these are mutually independent issues. flawed but appealing explanations, adding a relational aspect by including psychological phenomena related to group polarisation. Rekker (2021) has proposed a relevant distinction between psychological and ideological science rejection. On one hand, psychological rejection of science takes place implicitly and arises from individuals' tendency to favor information that maintains their status in an affinity group. On the other hand, ideological rejection (religious, political, etc.) consists of explicit contestation of science through arguments derived from complex doctrines-for example, climate change countermovement organizations (McKie, 2019) 4 . EX-PO constitutes a model for psychological rejection of science, as the main unit of analysis of the model is the interaction between the individual and the pseudoscientific doctrine, consider ing both the psychological predispositions and the rhetorical devices involved in such interaction. In this regard, EX-PO explains the endorsement of pseudoscience through the supply of psychologically satisfactory explanations and the demand for profitable ideas that conform to rewarding social norms. There is a general tendency in all individuals to favour mechanisms and categorizations, and individuals also tend to hold desirable, concerning, and useful beliefs. The rhetorical devices of pseudoscience adapt to this psychological framework to gain support. EX-PO constitutes an explanatory framework for the spread of the two major forms of pseudoscience, pseudo-theory promotion and science denial (Fasce & Picó, 2019b;Hansson, 2017), although both forms show their own characteristics 5 . The explanatory satisfaction offered by pseudo-theory promotion, due to its greater doctrinal content, should be higher than that of science denialism, whose endorsement, in turn, would be more influenced by ideology-driven group polarisation and direct confrontation with scientific information (Lewandowsky, Pilditch, Madsen, Oreskes, & Risbey, 2019;Medimorec & Pennycook, 2015). For example, as climate change denialists cannot offer satisfactory alternative explanations, they need to simulate coherence by conspiracist discourse reinforced by group behaviour (Lewandowsky, Cook, & Lloyd, 2018). Explanations matter since they feel intrinsically valuable for pragmatic concerns such as prediction and control (Lombrozo, 2011). Nevertheless, criteria identified in academy publications as explanatory virtues hardly predict positive explanation assessment in naturalistic settings (Lombrozo, 2016;. For example, although scholars often consider abstract and simple explanations to be preferable, people tend to favor less generalisable explanations Khemlani, Sussman, & Oppenheimer, 2011) and to explain inconsistencies by positing additional causes rather than disputing premises, thus preferring explanations that involve complex causal structures (Khemlani & Johnson-Laird, 2011). In this regard, pseudoscience would exploit several sources of subjective explanatory satisfaction, such as flawed categorisations and mechanistic explanations 7 . Categorical language supports particularly strong inferences, leading people to form representations in more essen tialist (Gelman, Ware, & Kleinberg, 2010), categorical (Lupyan, 2012), and prototypical terms (Lupyan, 2017). 
4) There is mutual feedback between both types of science rejection: ideologues generate explicit arguments that exploit already existing group identities, whereas group identity constrains ideology's persuasiveness by determining individuals' receptivity. 5) "Pseudo-theory promotion" refers to prototypical pseudosciences, which are primarily based on the promotion of a complicated doctrine, e.g., morphic fields, German new medicine, cellular memories, and chiropractic. In contrast, science deniers deploy their rhetorical devices at the level of controversies, casting doubt on well-established scientific theories, for example, denial of climate change, GMOs, and vaccination. 6) An ontological confusion is the attribution of a specific feature of some stratum of reality, such as the psychological, to an entity belonging to a different stratum, such as the physical. For example, "the moon aims to move forward". 7) Further psychological research could integrate other strategies within EX-PO, such as self-validating belief systems (Boudry & Braeckman, 2012), ad hoc reasoning (Boudry, 2013), or less aggressive communication styles (König & Jucks, 2019). People learn named categories more quickly (Lupyan, Rakison, & McClelland, 2007), are more likely to agree with categorical statements about causes and features (Ahn, Taylor, Kato, Marsh, & Bloom, 2013; Hemmatian & Sloman, 2018), and find explanations that include sharp and easily recognisable labels for categories significantly more satisfying in psychological terms (Giffin, Wilkenfeld, & Lombrozo, 2017). There are numerous examples of insubstantial pseudoscientific labels, such as "cell memory", "energetic blockage", "vertebral subluxation", "detoxification", "qi deficiency", and "meta-model". Parapsychology is particularly interesting in this regard as it expands the categories of folk paranormal beliefs: where a folk paranormal believer experiences a "ghost", a parapsychologist may see a "poltergeist", an "apparitional experience", an "ectoplasm", a "psychophony", an "orb", etc. Of course, scientists also use a large number of complex categories, but these conceptual networks are typically guided by evidence (i.e., scientists tend to reject unfounded categories) and parsimony (i.e., scientists tend to reject unnecessary categories). So, categorisations are not problematic per se; the point is rather that both scientific and unscientific categorisations have an appeal of easy applicability at the level of the individual recipient. As outlined above, mechanistic explanations also have a relevant role within EX-PO. People have a strong preference for explanations that invoke causal mechanisms, perhaps driven by a desire to identify as many causes of an effect as possible (Mills, Sands, Rowles, & Campbell, 2019; Zemla et al., 2017). As with categories, mechanistic explanations are also widely used among scientists, although mechanisms in pseudoscience have already been refuted or are construed in a way that makes them untestable. Pseudoscientific doctrines are replete with flawed mechanistic explanations and processes, especially in comparison with other forms of unwarranted beliefs, such as paranormal and conspiracy theories.
For example, consider the five "biological laws" that rule the emotional aetiology of disease in German new medicine, kinesiology's viscerosomatic relationship, improved blood flow and oxygenation in tissues by magnetic stimulation, memories and experiences passing down through generations by DNA or morphic resonance, and homeopathic dynamization by dilution and succussion. There is research supporting that the satisfying effect of neuroscientific explanations, discussed in the previous section, is due to the perception of mechanistic explanations. Rhodes, Rodríguez, and Shah (2014) found that neuroscientific information boosts self-assessed understanding of mechanisms by providing perceived insight about causal chains and categorisations about psychological phenomena. Hopkins, Weisberg, and Taylor (2016) confirmed these findings, expanding the allure of mechanistic information to other fields, such as physics, chemistry, and biology. Flawed mechanistic explanations constitute a confirmed cause of the illusion of explanatory depth (IOED; Rozenblit & Keil, 2002), which occurs when people believe they understand a process more deeply than they actually do. This overconfidence effect has been empirically associated with other kinds of unwarranted beliefs, such as conspiracy theories (Vitriol & Marsh, 2018), and pseudoscientific rhetoric may be particularly effective in generating this effect among its supporters; indeed, Scharrer, Rupieper, Stadtler, and Bromme (2017) suggest that even legitimate, although oversimplified, science communication causes overconfidence effects among laypeople. IOED has been identified only in knowledge areas that involve complex theorization about causal networks, such as biological or physical processes (e.g., how a zipper, a toaster, tides, or the digestive system works), so it does not take place across other types of knowledge, such as declarative or narrative knowledge (e.g., how to make cookies or the names of capital cities). Several factors converge to create a strong illusion of depth for mechanistic explanations. First, individuals have less experience in representing, expressing, and testing their explanations in comparison with other kinds of knowledge (Rozenblit & Keil, 2002; Wilson & Keil, 1998). Second, because people usually rely on others' understanding when assessing mechanistic explanations, they tend to overestimate their own understanding of mechanisms in relation to others' understanding (Fisher, Goddu, & Keil, 2015). Third, explanations are layered, so there are always higher and lower levels of analysis, and people often confuse their superficial insights with a deeper understanding of how mechanisms work (Alter, Oppenheimer, & Zemla, 2010). As a result, IOED is based on a failure to construct accurate mental representations by using an appropriate level of construal, thus confusing the metacognitive experience of understanding with the capacity to offer a proper explanation. Although IOED is usually overridden by debunking information (Kowalski & Taylor, 2017) and iterative failure (Mills & Keil, 2004), there are three variables described in the current literature which explain why this effect can be so persistent: • The strength of care and general concern for a given issue predicts persistent instances of IOED (Fisher & Keil, 2014). • People tend to accept useful explanations in the short term by pragmatic assessment of the potential courses of action they entail (Vasilyeva, Wilkenfeld, & Lombrozo, 2017).
• Social desirability fosters IOED, so people tend to accept categorisations and explanations that are shared by ingroup members by reliance on community cues and social conformity (Gaviria, Corredor, & Zuluaga-Rendón, 2017; Hemmatian & Sloman, 2018). Accordingly, a successful pseudoscience should be concerning, useful in the short term, and socially desirable for its supporters. How pseudoscientific doctrines generate this specific motivational state is accounted for by EX-PO's "polarisation" aspect. The Social Dimension of Pseudoscience: Ingroup Polarisation and Intergroup Clash of Cognitions Recent research has highlighted how today's societies are fractured by partisan identities and feedback loops of accommodating information, identity narratives, and anti-expertise (Kreiss, 2019; Lewandowsky, Ecker, & Cook, 2017; Pariser, 2011). For example, previous results have shown that the information shared in social media is more radical and partisan than that of open websites (Faris et al., 2017), thus facilitating the spread of appealing falsehoods (Vosoughi, Roy, & Aral, 2018). Due to its high prevalence in the public sphere and its alarming implications, the struggle between groups of pseudoscientific believers and critical thinkers is increasingly polarised, establishing an intergroup relationship dominated by an imaginary of distrust, competition, and mutual derogation (e.g., Cano-Orón, 2019). Polarisation outbreaks occur when the empirical matter in dispute is crucial to define a social identity (Fasce, Adrián-Ventura, Sloman & Fernbach, 2017). Under these conditions, intergroup threats foster self-uncertainty, and self-uncertainty motivates radical identification with groups that provide distinctive normative beliefs and higher identification-contingent uncertainty reduction (Hogg, Meehan, & Farquharson, 2010; Hogg & Wagoner, 2017). As a consequence, self-uncertain people with group belongingness tend to increase their endorsement of anti-scientific beliefs (van Prooijen, 2016; van Prooijen & Jostmann, 2013), and the sense of community and perceived existential threats are well-documented root factors for pseudoscientific beliefs and conspiracy theories (Franks, Bangerter, Bauer, Hall, & Noort, 2017; van Prooijen, 2020). Pseudoscientific beliefs have been proven to be influenced by the tendency to accept beliefs on the basis of short-term interpersonal benefits (Fasce, Adrián-Ventura, & Avendaño, 2020), which leads individuals to assess scientific evidence depending on whether or not scientists agree with their identity and attitudes (Bromme et al., 2015; Giese, Neth, Moussaïd, Betsch, & Gaissmaier, 2020; Kahan, Jenkins-Smith, & Braman, 2011; Knobloch-Westerwick et al., 2015; Scurich & Shniderman, 2014). These ideology-driven phenomena have been explained by the presence of motivated reasoning (Kahan, 2016), although recent research has questioned the validity of this interpretation by highlighting potential confounding factors and the elusive nature of the backfire effect (Druckman & McGrath, 2019; Tappin, Pennycook, & Rand, 2021; Wood & Porter, 2019).
This legitimate dispute regarding the underlying mechanism of group polarisation over scientific information does not undermine the fact that group polarisation and perceived social consensus play a pivotal role in the spread of pseudoscience (Bromme et al., 2015; Giese et al., 2020; Kahan et al., 2011; Knobloch-Westerwick et al., 2015; Lewandowsky, Cook, Fay, & Gignac, 2019; Lewandowsky, Gignac, & Vaughan, 2013; Thomm & Bromme, 2012; van der Linden, Leiserowitz, & Maibach, 2018) 8). 8) Despite the fact that the unit of analysis of EX-PO is located at the individual level and, therefore, the model does not directly account for political or sociological variables, macrosociological polarisation can be considered a proxy for perceptions and feelings of polarisation on the individual level. Accordingly, EX-PO is compatible with any mechanism of group polarisation, including identity-related motivated reasoning, misperception of scientific consensus, and Bayesian belief update (e.g., Cook & Lewandowsky, 2016). As rejection of a pseudoscience is not the same as accepting the relevant science, critical thinking should not be defined as a lack of EX-PO. Critical thinkers also engage in social dynamics that strengthen their conceptions and have specific, but moderated, sources of bias. For example, even though critical thinking is positively correlated with analytical reasoning (Svedholm-Häkkinen & Lindeman, 2013), Pennycook et al. (2012) also found that around 40% of critical thinkers can be characterised as non-analytical reasoners. Moreover, these non-analytical critical thinkers are prone to endorse the kind of ontological confusions 6) that predispose people toward paranormal beliefs (Lindeman, Svedholm-Häkkinen, & Riekki, 2016). These results are in line with Norenzayan and Gervais (2013): critical thinking may originate not only from individual cognitive styles, but also from a lack of cultural input. Potential Interventions Although the two conceptions, the confusion-based account and EX-PO, are not contradictory, their assumptions and implications are not equivalent, particularly regarding how to face the problem of pseudoscience. Current prevention strategies deployed by organisations of critical thinkers are often based on the assumptions of the confusion-based interpretation, so they are focused on general dissemination aimed at improving scientific literacy. In view of the current upsurge of anti-scientific groups and politics, e.g., anti-scientific conspiracy ideation around COVID-19, this has proven to be insufficient. EX-PO does not emphasize direct science dissemination, suggesting that the problem of pseudoscience might be better addressed through social interventions intended to reduce the allure of subjective explanatory satisfaction and group polarisation. Accordingly, efficient interventions to reduce the endorsement of pseudoscience should take into account comprehensive motivational strategies such as inoculation messages exposing misleading argumentation techniques, as well as worldview and values affirmation (Lewandowsky, Ecker, Seifert, Schwarz, & Cook, 2012). This is important in order to boost the perception of consensus cues and decrease the false consensus effect between experts and public opinion. Another potential intervention should be focused on echo chambers; an interesting framework to deploy mechanisms of containment against these detrimental information architectures can be found in Lewandowsky et al. (2017).
Even though the degree of ideological segregation in social media usage should not be overestimated, since echo chambers and filter bubbles are not larger online than offline (Barberá, Jost, Nagler, Tucker, & Bonneau, 2015; Dubois & Blank, 2018), their influence on individuals' receptivity toward attitude-consistent misinformation is supported by evidence (Giese et al., 2020). Concluding Remarks To facilitate future research on EX-PO, I conclude this article by offering a summary of the model. Unwarranted beliefs increase their explanatory appeal by science mimicry, on the basis of spurious categorisations and flawed mechanistic explanations. This would be the basic role of the trappings of science. Therefore, the appeal of pseudoscience to isolated individuals comes through two pathways: explanatory satisfaction and more general individual cognitive predispositions. Group polarisation takes place after the aggregation and organisation of believers around pseudoscientific doctrines, and this process reinforces and promotes pseudoscientific beliefs through two pathways: reinforcement of already internalised pseudoscientific beliefs and ingroup pressure to accept new pseudoscientific beliefs resembling those already internalised. Group polarisation boosts the explanatory allure of pseudoscience by helping pseudoscientific doctrines be perceived as concerning, useful in ingroup terms, and socially desirable.
Non-Apoptotic Programmed Cell Death as Targets for Diabetic Retinal Neurodegeneration Diabetic retinopathy (DR) remains the leading cause of blindness among the global working-age population. Emerging evidence underscores the significance of diabetic retinal neurodegeneration (DRN) as a pivotal biomarker in the progression of vasculopathy. Inflammation, oxidative stress, neural cell death, and the reduction in neurotrophic factors are the key determinants in the pathophysiology of DRN. Non-apoptotic programmed cell death (PCD) plays a crucial role in regulating stress response, inflammation, and disease management. Therapeutic modalities targeting PCD have shown promising potential for mitigating DRN. In this review, we highlight recent advances in identifying the role of various PCD types in DRN, with specific emphasis on necroptosis, pyroptosis, ferroptosis, parthanatos, and the more recently characterized PANoptosis. In addition, the therapeutic agents aimed at the regulation of PCD for addressing DRN are discussed. Introduction With socioeconomic development, the incidence of diabetes mellitus continues to rise. By 2045, the global prevalence of diabetes is projected to reach 693 million [1], with the highest incidence in China and India [2]. Diabetes affects multiple organs throughout the body, with diabetic retinopathy (DR) being the most common microvascular complication and the leading cause of blindness in the global working-age population. Exploring the pathogenesis of DR and developing effective treatment strategies have been key concerns in the medical field [3]. DR is classified into two stages: non-proliferative diabetic retinopathy (NPDR), characterized primarily by microaneurysms and exudates, and proliferative diabetic retinopathy (PDR), characterized by retinal neovascularization. Assessment of the severity of DR is predominantly based on retinal vasculopathy. However, there are intricate and complex physical and biochemical connections among retinal vascular cells, neurons, and glial cells, which constitute the neurovascular unit [4]. In addition to vascular cells, high glucose levels affect retinal ganglion cells (RGCs), Müller cells, and microglia, leading to neurodegeneration and visual impairment in DR [4]. Retinal pigment epithelial (RPE) cells are also involved in neurodegeneration. High glucose decreases RPE cell viability, indirectly affecting the visual response properties of RGCs [5][6][7]. Diabetes triggers multiple pathophysiological mechanisms within the retina, involving alterations in genetic and epigenetic effects, elevated free radical formation, the accumulation of advanced glycation end-products, and the upregulation of vascular endothelial growth factor (VEGF) and inflammatory mediators [8]. In this regard, novel insights suggest that interventions for DR should address not only vascular dysfunction but also retinal neurodegeneration, highlighting the necessity for innovative therapeutic modalities.
The concept of diabetic retinal neurodegeneration (DRN) has recently gained attention, which refers to the progressive degeneration of the neuroretina under diabetic conditions [9,10]. DRN is characterized by neuronal apoptosis and glial activation accompanied by molecular mediators such as inflammatory cytokines, oxidative stress, mitochondrial dysfunction, and neurotrophic factor deficiency [11]. Diabetic patients experience progressive retinal thinning and visual dysfunction, which can be assessed through retinal image analysis, visual function tests, and translational modeling [12,13]. Importantly, even in the absence of evident signs of vasculopathy, the peripapillary retinal nerve fiber layer becomes progressively thinner in diabetic patients [14]. Animal and cellular experiments also suggest that hyperglycemia may directly affect the survival of retinal neurons, indicating that DRN likely precedes microvasculopathy and may represent a pathological process independent of microvasculopathy [9]. The cellular mechanisms underlying DRN require further investigation. Cell death is a crucial pathophysiological process during organism development and disease progression, and modulating cell death always provides a potential strategy for disease treatment. Recent studies have discovered several types of cell death occurring under certain pathological conditions, characterized by both programmed regulation and cell necrosis effects. These include pyroptosis, necroptosis, ferroptosis, lysosome-dependent cell death, autophagic cell death, and PANoptosis, among others [15][16][17]. Unlike classical apoptosis, which rarely triggers an inflammatory response, activation of these programmed cell death (PCD) pathways ultimately leads to cell lysis, the release of cellular contents, and the secretion of inflammatory mediators, thereby triggering inflammation [18]. Non-apoptotic PCD plays a critical role in maintaining tissue health and regulating disease progression [18,19] and is involved in various pathological processes such as tumors, neurodegenerative disorders, immune inflammation, and cardiovascular disease [15]. Notably, the activation of non-apoptotic PCD occurs in a programmed and modulable manner, which provides the possibility to control inflammation and intervene in diseases [20,21]. Targeting non-apoptotic PCD is expected to be a key component of future therapeutic strategies. In recent years, a growing body of research has underscored the significance of non-apoptotic PCD in the progression of DRN. This review aims to highlight recent advancements in identifying the role of distinct non-apoptotic PCD types across diverse retinal cell populations within the diabetic retina and to discuss the exploration of therapeutic approaches based on the regulation of non-apoptotic PCD pathways as a means to treat DRN (Figure 1). Overview of Ferroptosis Ferroptosis, first defined by Scott J. Dixon in 2012, is a form of non-apoptotic PCD caused by an imbalance in intracellular iron metabolism, leading to cell death through oxidative stress and lipid peroxidation damage [22]. Several mechanisms are involved in ferroptosis, including the cystine-glutathione (GSH)-glutathione peroxidase 4 (GPX4) signaling pathway, phospholipid peroxidation, iron regulation, and cellular metabolism [23].
The GPX4 pathway is a classic signaling mechanism of ferroptosis that primarily involves regulating intracellular lipid peroxidation. GPX4 is an enzyme with antioxidant properties that catalyzes the reduction of lipid peroxides by GSH, thereby inhibiting lipid peroxidation [22,24]. Under disease conditions, the activity of GPX4 can be inhibited due to the dysfunction of its upstream regulator, system Xc-, a cystine/glutamate antiporter. This inhibition results in the accumulation of lipid peroxide, leading to irreversible membrane damage and cell death [25]. Phospholipid peroxidation plays a vital role in ferroptosis [23]. Lipid peroxidation begins with the reaction of molecular oxygen to form peroxyl radicals and then disrupts membrane integrity [26]. Notably, neuronal cell membranes are rich in polyunsaturated fatty acids, making them particularly sensitive to lipid peroxidation [26]. Two enzymes, acyl-CoA synthetase long-chain family member 4 (ACSL4) and lysophosphatidylcholine acyltransferase 3 (LPCAT3), are significant drivers of ferroptosis through phospholipid peroxidation [22]. Iron also plays an essential role in ferroptosis. Ferroptosis is regulated by the iron-dependent Fenton chain reaction, which involves the reaction between iron ions and hydrogen peroxide to produce reactive oxygen species (ROS) and initiate lipid peroxidation [23,26]. In addition, abnormal cellular metabolism contributes to the formation of phospholipid peroxides. Gao et al. found that ferroptosis can be induced by cystine starvation, requiring iron-carrier transferrin and the amino acid glutamine in serum [27]. These results suggest the importance of iron regulation and metabolism in ferroptosis. There is increasing evidence that ferroptosis plays a significant role in various pathological processes including neurodegenerative diseases and cancer. Targeting ferroptosis holds promising application prospects in treating retinal neurodegenerative diseases. Ferroptosis in Diabetic Retinal Neurodegeneration Ferroptosis is believed to be a significant factor in retinal neurodegeneration. Acrolein-induced ferroptosis promotes defects in diabetic peripheral nerves, including DRN, which can be successfully reversed by anti-acrolein therapy or ferroptosis inhibitors [28]. In diabetic patients, the expression of the ferroptosis-associated biomarkers GPX4 and GSH was significantly reduced, while lipid peroxidation and ROS were increased. Moreover, these changes were more pronounced in NPDR patients compared to PDR patients, indicating a greater activation of ferroptosis in the early stages of DR [29]. Several genes associated with ferroptosis have also been shown to be associated with DRN, including TLR4, CAV1, HMOX1, TP53, IL-1B, and ATG7 [30][31][32][33][34]. These findings suggest a genetic therapeutic strategy targeting ferroptosis-mediated DRN, although its underlying mechanisms remain unclear. Several studies have demonstrated that ferroptosis may affect various retinal cells and participate in neurodegeneration through different pathophysiological mechanisms [35][36][37][38].
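To make the balance described in the overview above concrete, the following toy model is offered as a purely illustrative sketch; it is not drawn from the literature reviewed here, and every rate constant, parameter name, and threshold is a hypothetical placeholder. It caricatures lipid peroxide (LOOH) load as iron-driven Fenton production opposed by GPX4/GSH-dependent clearance, and shows how inhibiting GPX4 or raising labile iron shifts a cell toward a ferroptosis-prone regime.

# Illustrative toy model only: steady lipid peroxide (LOOH) load as the net of
# iron-driven Fenton production and GPX4/GSH-dependent clearance.
# All constants and the "ferroptosis-prone" threshold are arbitrary placeholders.

def simulate_looh(gpx4=1.0, labile_iron=1.0, gsh=1.0, steps=1000, dt=0.01):
    k_fenton, k_gpx4 = 1.0, 1.2   # hypothetical rate constants
    ros_input = 1.0               # constant basal ROS supply
    looh = 0.0
    for _ in range(steps):        # forward-Euler integration
        production = k_fenton * labile_iron * ros_input
        clearance = k_gpx4 * gpx4 * gsh * looh
        looh += dt * (production - clearance)
    return looh

for label, gpx4, iron in [("baseline", 1.0, 1.0),
                          ("GPX4 inhibited", 0.1, 1.0),
                          ("iron overload", 1.0, 3.0)]:
    level = simulate_looh(gpx4=gpx4, labile_iron=iron)
    state = "ferroptosis-prone" if level > 2.0 else "tolerated"
    print(f"{label}: LOOH ~ {level:.2f} ({state})")

Under these arbitrary parameters the baseline settles near its production/clearance equilibrium, while either perturbation pushes the peroxide load past the illustrative threshold, mirroring the qualitative logic of GPX4 inhibition and Fenton chemistry described above.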
Retinal Pigment Epithelium The RPE plays a role in antioxidant defense and barrier maintenance, and its death can impair the function of photoreceptor cells and trigger inflammation, ultimately resulting in retinal neurodegeneration. RPE has been identified as one of the main cell types undergoing ferroptosis during DRN. Increased oxidative stress-induced RPE cell dysfunction, elevated levels of ferroptosis, increased iron-mediated apoptosis, and the activation of endoplasmic reticulum stress have been observed in both in vitro and in vivo DR models [39,40]. In the early stages of diabetes, the upregulation of glial maturation factor-β, a neurodegenerative factor in the vitreous, leads to abnormal lysosomal degradation processes in RPE cells, resulting in the accumulation of ROS and ultimately inducing ferroptosis in RPE [41]. Additionally, high glucose levels induced mitochondrial dysfunction, ferritinophagy, and lysosomal instability by upregulating thioredoxin-interacting protein (TXNIP), leading to ferroptosis in RPE [42]. The dysregulation of long non-coding RNAs (lncRNAs) contributes to RPE ferroptosis in a high-glucose environment. Zhu et al. found that the downregulation of PSEN1, a circular RNA, could regulate the miR-200b-3p/cofilin-2 axis and rescue RPE ferroptosis under high-glucose stimulation [43]. Similarly, reprogramming of the miR-338-3p/SLC1A5 axis could increase the resistance of RPE to high-glucose-induced cell ferroptosis, thereby promoting RPE cell survival and function [44]. Some traditional Chinese medicines or plant extracts also showed potential in inhibiting RPE ferroptosis under diabetic conditions. Liu et al. found that the active ingredient of aromatic plant essential oils, 1,8-cineole, rescued RPE from ferroptosis by regulating the TXNIP-PPARγ signaling pathway [46]. Tang et al. discovered that Astragaloside-IV (AS-IV), a natural product extracted from Astragalus, alleviated high-glucose-induced RPE ferroptosis and oxidative stress damage by disrupting the expression of miR-138-5p/Sirt1/Nrf2 and subsequent neuronal loss [47]. Photoreceptors The degeneration and death of photoreceptors are significant contributors to the neurodegenerative changes in the retina. Iron overload plays a major role in photoreceptor cell death and retinal degeneration, leading to cell demise, mitochondrial dysfunction, ROS accumulation, and iron deposition [48]. Ferroptosis plays a crucial role in the pathogenesis of iron overload-induced retinal degeneration. Azuma et al. identified a mitochondrial isoform of glutathione peroxidase 4 (mGPx4), which promotes the development and survival of photoreceptors in mice, suggesting its significant role in preventing ferroptosis [49]. Some agents, such as salvianic acid A, have been shown to mitigate iron deposition, lipid peroxidation, and mitochondrial dysfunction, thereby inhibiting photoreceptor ferroptosis and retinal degeneration [48]. In addition, α-lipoic acid-L-carnitine, a bioavailable mitochondria-targeting prodrug of lipoic acid, was able to block ferroptosis and attenuate iron-induced mitochondrial dysfunction in photoreceptors, demonstrating its potential to protect against retinal degeneration and loss of photoreceptors in DR [50]. Currently, there is limited research on the role of photoreceptor ferroptosis in DRN.
Gao et al. observed a significant increase in the levels of ROS, lipid peroxidation, and iron-related proteins (such as GPX4) in cultured photoreceptor cells and in early-stage DR mice. They alleviated neurodegeneration through the ferroptosis inhibitor Ferrostatin-1, providing direct evidence for the regulation of photoreceptor ferroptosis in the treatment of DRN [51]. Further research is warranted to validate this conclusion. Retinal Capillary Endothelial Cells Retinal vascular endothelial cells also undergo ferroptosis under diabetic conditions [52]. Luo et al. found that the levels of Yes-associated protein (YAP) and ROS were significantly increased in diabetic mice, while the expression of GPX4 was decreased. They demonstrated that the metabolite pipecolic acid might impede the progression of DR by inhibiting the YAP-GPX4 signaling pathway [53]. In addition, tripartite motif 46 (TRIM46), a protein involved in cellular homeostasis, could interact with GPX4 to form the TRIM46-GPX4 signaling pathway and regulate the ferroptosis of high-glucose-treated human retinal capillary endothelial cells (RCECs). Inhibiting TRIM46 or maintaining GPX4 expression successfully reversed the effect of high-glucose-induced vascular hyperpermeability and inflammation [54,55]. Regulating vascular endothelial cell ferroptosis shows potential for alleviating DR. The administration of 25-hydroxyvitamin D3 significantly reduced ROS and Fe2+ levels while increasing levels of GSH, GPX4, and solute carrier family 7 member 11 (SLC7A11) protein in human RCECs under high glucose stimulation, thereby inhibiting ferroptosis and oxidative stress damage [56]. Inhibition of the lncRNA zinc finger antisense 1 (ZFAS1) also alleviated high-glucose-induced endothelial cell ferroptosis. ZFAS1 could competitively target miR-7-5p and regulate the downstream expression of ACSL4, which is considered a potential driver gene of ferroptosis [57]. Finally, some traditional medicine extracts, such as Amygdalin, the active ingredient of bitter almonds, also showed potential to inhibit ferroptosis, possibly by activating the Nrf2/ARE signaling pathway [58]. Retinal Ganglion Cells The RGC serves as a pivotal neuron in transmitting visual signals, and RGC death represents a primary hallmark of DRN. Studies have indicated a significant activation of ferroptosis in RGCs after optic nerve injury, attributed to the upregulation of 4-hydroxynonenal expression due to decreased levels of GPX4 [35]. Direct evidence for ferroptosis in RGCs during the development of DR or DRN is currently lacking. However, diabetes-related pathological mechanisms may indirectly damage RGCs and induce the occurrence of ferroptosis, as well as other forms of cell death. Ischemia-reperfusion (I/R) injury is a crucial pathological mechanism in multiple neurovascular diseases, including DRN. The inhibition of apoptosis, necroptosis, and ferroptosis all exhibited protective effects against I/R-induced RGC death, with the suppression of ferroptosis being particularly prominent [59]. Furthermore, melatonin (MT), as a promising therapeutic agent for retinal neuroprotection, inhibited RGC ferroptosis and inflammatory responses induced by retinal I/R injury, possibly by suppressing the p53 signaling pathway [60]. Other Retinal Cells
Ryan et al. found that human induced pluripotent stem cell-derived microglia were highly susceptible to iron ions and that microglial iron overload and ferroptosis were observed in a Parkinson's disease model. Removing microglia from the cell culture system significantly delayed iron-induced neurotoxicity. These findings highlight the role of microglial iron overload and ferroptosis in neurodegeneration [37]. In addition, the inhibition of the transforming growth factor beta (TGFβ) signaling pathway induced ferroptosis in retinal neurons and Müller cells and exacerbated retinal neurodegeneration [38]. These findings suggest that ferroptosis in both glial cells and neurons may contribute to neurodegeneration, albeit not specifically induced by diabetes. In summary, ferroptosis, characterized by iron overload, lipid peroxidation, oxidative stress damage, and inflammatory responses, is intricately associated with retinal neurodegeneration. Various cell types have been shown to undergo ferroptosis during the pathogenesis of DRN. Targeting ferroptosis has demonstrated neuroprotective effects, offering a potential therapeutic approach for treating DRN. However, the spatiotemporal characteristics, signaling pathways, and interactions among different cells undergoing ferroptosis remain largely unclear. This poses a critical challenge that warrants further investigation before anti-ferroptosis therapy can be clinically applied for the treatment of DRN and other retinal neurodegenerative diseases. Overview of Pyroptosis Pyroptosis is a classical PCD first defined by Cookson in 2001 [61]. It serves as a crucial host defense mechanism against pathogens; however, excessive pyroptosis can lead to cytokine storms and harmful inflammation, resulting in tissue damage and organ dysfunction [62]. The NOD-like receptor protein 3 (NLRP3) inflammasome is a key signaling pathway of pyroptosis, which can be activated by pattern recognition receptors (PRRs) on immune cells that scan for pathogen-associated molecular patterns (PAMPs) and damage-associated molecular patterns (DAMPs) [63,64]. NLRP3 can also be triggered by Toll-like receptors (TLRs) binding to pathogens or cytokines, leading to NF-κB activation and the release of interleukin (IL)-1β and IL-18 via caspase-1 cleavage [63]. The formation of the NLRP3 inflammasome involves the recruitment of NLRP3, an adaptor protein called apoptosis-associated speck-like protein containing a CARD (ASC), caspase-1, and MAPK ERK kinase 7 (MEK7) [64,65]. Once activated, the inflammasome cleaves caspase-1, which in turn cleaves gasdermin D (GSDMD), initiating pyroptosis. The effector proteins of pyroptosis constitute a class of pore-forming proteins known as gasdermins (GSDMs). These proteins separate the N-terminal pore-forming domain from the C-terminal inhibitory domain to form pores on the cytoplasmic membrane, leading to cell swelling and lytic cell death [66,67]. The GSDM family comprises several members, including GSDMA, GSDMB, GSDMC, GSDMD, GSDME (DFNA5), and GSDMF (PJVK/DFNB59) [68]. Pyroptosis is typically triggered by caspase-1 or caspase-11/4/5 cleaving GSDMD [66]. Under certain circumstances, caspase-3 activation can also mediate pyroptosis by cleaving GSDME [67]. It is noted that pore formation by GSDMD is a reversible process, and pyroptosis can be halted through regulating cytoplasmic membrane repair pathways. The truly passive step of lytic cell death relies on cytoplasmic membrane rupture mediated by a protein called Ninjurin 1 (NINJ1) [69].
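The canonical cascade just outlined is essentially two gated steps: transcriptional priming plus a second activating signal assemble the inflammasome, and cleaved caspase-1 then both matures IL-1β/IL-18 and liberates the GSDMD pore-forming fragment, with final rupture depending on NINJ1. A minimal Boolean sketch of that logic (illustrative only; the real regulation is quantitative and far richer than these gates) could read:

# Minimal Boolean sketch of canonical NLRP3-driven pyroptosis signaling.
# Each gate is a simplification of the steps described in the overview above.

def pyroptosis_state(priming_signal, activation_signal, ninj1_functional=True):
    nfkb_active = priming_signal                 # TLR ligation -> NF-kB priming
    nlrp3_primed = nfkb_active                   # NLRP3 and pro-IL-1b upregulated
    inflammasome = nlrp3_primed and activation_signal   # NLRP3 + ASC + pro-caspase-1
    caspase1_active = inflammasome               # caspase-1 cleavage
    gsdmd_pores = caspase1_active                # GSDMD-N pores (still reversible)
    cytokine_release = caspase1_active and gsdmd_pores  # mature IL-1b / IL-18 exit
    lytic_rupture = gsdmd_pores and ninj1_functional    # final rupture needs NINJ1
    return {"inflammasome": inflammasome,
            "IL-1b/IL-18 release": cytokine_release,
            "lytic pyroptosis": lytic_rupture}

print(pyroptosis_state(True, True))                          # full pyroptosis
print(pyroptosis_state(True, True, ninj1_functional=False))  # pores without rupture

The ninj1_functional flag encodes the point made above: GSDMD pore formation is reversible, and the truly passive lytic step depends on NINJ1-mediated membrane rupture.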
Neuroinflammation is a prominent feature of neurodegeneration observed in neurometabolic disorders. Blocking pyroptosis by targeting the assembly of NLRP3 inflammasomes and GSDMD emerges as a potential therapeutic approach for addressing neuroinflammation and neurodegeneration-related conditions [70], including age-related macular degeneration (AMD) and Alzheimer's disease [71]. The abnormal deposition of host proteins, such as amyloid-β, triggers neuroinflammation and neurodegeneration by activating inflammasomes as intracellular sensors of pathogens and endogenous danger signals [72]. Pyroptosis has been proposed as a potential therapeutic target for the neuroinflammation and neuronal death observed in Alzheimer's disease [73]. Meanwhile, studies have indicated that the sigma-1 receptor regulates pyroptosis and inflammation after traumatic brain injury by regulating endoplasmic reticulum stress and calcium signaling [74]. In ophthalmology, NLRP3 inflammasome-mediated pyroptosis has been implicated in neurodegenerative disorders like glaucoma, AMD, and DR [75,76]. Certain neurosteroids, acting through sigma-1 recognition sites, could influence the survival and metabolic state of neuronal and glial cells following retinal I/R injury [77]. Moreover, sigma-1 receptor ligands have demonstrated protective effects against oxidative stress-induced damage in the RPE during DRN pathology [78]. Modulating the pyroptosis signaling pathway presents a promising therapeutic strategy for retinal neurodegenerative diseases, including DRN. Pyroptosis in Diabetic Retinal Neurodegeneration Chronic inflammation plays a pivotal role in the progression of DRN. Research has demonstrated that high-glucose-induced cellular pyroptosis, particularly the release of inflammatory mediators such as IL-1β and IL-18, serves as a major source of retinal inflammation. Targeting pyroptosis and its associated inflammatory and neurodegenerative pathways provides a potential therapeutic strategy for managing DRN. In the subsequent sections, we will delineate the involvement of pyroptosis in different retinal cell types and its underlying mechanisms in the context of DRN. Retinal Pigment Epithelium Several studies have demonstrated that high glucose levels significantly reduce the viability of RPE cells and induce pyroptosis in a manner that depends on both the duration and dosage of exposure [6,7]. In models of high-glucose-induced RPE injury, there is a notable increase in the expression of pyroptosis-related proteins, such as caspase-1 and NLRP3, along with elevated levels of inflammatory factors like IL-1β and IL-18 [79]. Non-coding RNAs participate in regulating high-glucose-induced RPE pyroptosis [80].
Huang et al. found that a circular RNA, circFAT1, was downregulated in RPE treated with high glucose, while the overexpression of circFAT1 led to enhanced expression of LC3B and reduced levels of GSDMD and inflammatory mediators, suggesting that circFAT1 inhibited RPE pyroptosis and promoted protective autophagy [81]. In addition, the knockdown of circZNF532 was shown to mitigate high-glucose-induced RPE pyroptosis via regulation of the miR-20b-5p/STAT3 signaling pathway [82]. Aberrantly high expression of the lncRNA HOXD Cluster Antisense RNA 1 (HAGLR) was detected in RPE exposed to high glucose, and its knockdown alleviated RPE pyroptosis and cytotoxicity [83]. Overexpression of miR-192 in high-glucose-exposed RPE cells inhibited pyroptosis through regulation of the FTO/NLRP3 signaling pathway [6]. Furthermore, targeting the miR-25-3p/PTEN/Akt signaling pathway through methyltransferase-like protein 3 [7] and the miR-20a/TXNIP axis through DNA methyltransferase 1 [79] showed potential therapeutic effects on high-glucose-induced RPE pyroptosis. Collectively, non-coding RNAs represent promising targets for regulating signaling pathways associated with high-glucose-induced RPE pyroptosis, although further elucidation of their exact mechanisms is warranted. Furthermore, Yumnamcha et al. found that auranofin, an inhibitor of thioredoxin reductase (TrxR), induced mitochondrial dysfunction and lactate dehydrogenase release in RPE, which could be reversed by NLRP3 inflammasome inhibitors but not by inhibitors of ferroptosis or necroptosis. They suggested that the TrxR redox pathway may contribute to RPE dysfunction in DRN and other retinal neurodegenerative diseases [84]. Retinal Ganglion Cells RGC pyroptosis in DR has been reported, although research in this domain remains limited. Zhang et al. employed bioinformatics and network pharmacology approaches to identify key genes associated with RGC pyroptosis in DR. They found that salidroside significantly ameliorated RGC pyroptosis, potentially through the regulation of NLRP3, NFE2L2, and NFKB1 [85]. Another study indicated that the traditional Chinese medicine extract scutellarin (SCU) partially rescued RGCs from pyroptosis in DR by inhibiting caspase-1, GSDMD, NLRP3, IL-1β, and IL-18 [86]. These findings provide potential directions for targeting RGC pyroptosis in the treatment of DR. However, specific drugs designed to target RGC pyroptosis are currently lacking. Glia The activation and proliferation of glial cells, along with the excessive release of inflammatory factors, are key events of neuronal inflammation [87]. Recent studies have revealed that glial cells undergo cell death within specific pathological microenvironments, which further exacerbates immune inflammation and tissue damage [88]. High glucose levels have been shown to impede retinal microglial function and enhance the expression of the pyroptotic core machinery, including caspase-1, GSDMD, NLRP3, and IL-1β, in a dose-dependent manner. Inhibition of the NLRP3 signaling pathway suppressed microglial pyroptosis and the cytotoxicity it mediates [89]. Additionally, Müller cells undergo pyroptosis following retinal I/R injury, contributing to retinal neurodegeneration. This process was suppressed by blocking the NLRP3/GSDMD-N/Caspase-1/IL-1β pathway [90].
Ma et al. identified a novel EP300/H3K27ac/TRPC6 signaling pathway implicated in high-glucose-induced Müller cell pyroptosis, and knockdown of transient receptor potential channel 6 (TRPC6) significantly reduced inflammation and pyroptosis in Müller cells [91]. Glial cell pyroptosis could thus serve as a potential therapeutic target for DRN. Retinal microvascular pericyte loss is one of the earliest pathological changes associated with DR. Recent studies have suggested the involvement of pericyte pyroptosis in DR pathology. In vitro experiments have shown that high glucose induces NLRP3-caspase-1-GSDMD-dependent pericyte pyroptosis, leading to the release of the inflammatory cytokines IL-1β and IL-18 and of lactate dehydrogenase. This effect is dose- and time-dependent and can be blocked by caspase-1 or NLRP3 inhibition [98]. Similarly, in an environment mimicking DR created by advanced glycation end product-modified bovine serum albumin (AGE-BSA), retinal pericytes underwent caspase-1- and GSDMD-mediated active cleavage, along with the release of inflammatory IL-1β and IL-18. Treatment with the miR-342-3p mimic effectively inhibited pyroptosis in pericytes [99]. These findings provide new insights into early treatment strategies for DR. In summary, cell pyroptosis represents a typical form of non-apoptotic PCD. Pyroptosis in RGCs directly leads to the impairment of retinal neural function. More importantly, the presence of pyroptosis in various cell types, including glial cells, exacerbates retinal neuroinflammation and retinal neurodegeneration under high-glucose conditions. Targeting specific components of pyroptosis signaling pathways, such as NLRP3 and caspase-1, may help modulate pyroptosis and alleviate the detrimental effects of inflammation and neuronal damage. Further evidence from in vivo experiments is needed to confirm these conclusions.
Overview of Necroptosis Necroptosis, first described by Alexei Degterev in 2005, is a form of PCD characterized by necrotic cell death morphology and activation of autophagy [100]. Morphologically, necroptosis is characterized by rapid plasma membrane penetration, the leakage of cell constituents, release of damage-associated molecular patterns, cell swelling, and mitochondrial membrane permeabilization [101]. Unlike passive necrosis, necroptosis is regulated by specific molecular signaling pathways. It is currently understood that necroptosis can be initiated by the ligation and activation of death receptors, including tumor necrosis factor receptor 1 (TNFR1) [102], Fas ligand (FasL), and TNF-related apoptosis-inducing ligand (TRAIL) [103], as well as pattern recognition receptors, such as toll-like receptor 3 (TLR3) [104] and TLR4 [105]. The activation of these receptors oligomerizes receptor-interacting protein kinase 1 (RIPK1) in the cytoplasm, leading to the formation of complexes with receptor-interacting protein kinase 3 (RIPK3), which is a critical step in the necroptosis signaling pathway. The RIPK1-RIPK3 complexes trigger the phosphorylation of the downstream mixed lineage kinase domain-like protein (MLKL) to form the "necrosome". MLKL then undergoes conformational changes and forms oligomers, leading to membrane permeabilization and cell lysis [102]. Subsequent research has identified additional signaling pathways that can induce necroptosis without RIPK1 activation, such as Z-DNA binding protein 1 (ZBP1) and TIR domain-containing adapter-inducing interferon-β (TRIF) [104,106]. Furthermore, RIPK3/MLKL-mediated necroptosis can also be inhibited by caspase-8 activation [107], indicating the versatility of necroptosis regulation. Necroptosis is involved in a variety of disease conditions, including inflammation, infection, cancer, neurodegeneration, and others [108][109][110]. In ophthalmology, necroptosis is associated with the pathogenesis of diseases such as glaucoma, AMD, retinitis pigmentosa, retinal detachment, and DR. Neyra et al. showed the upregulation of pro-necroptotic genes and proteins in an in vitro model of retinal neurodegeneration [111]. They observed that MLKL expression began in the inner layers of the retina and progressed to the outer layers within 1 day, which was in accordance with photoreceptor degeneration [111]. Ma et al. demonstrated that excessive thyroid hormone signaling induced necroptosis in the retina, leading to photoreceptor degeneration [112]. Through gene expression analysis, Martin et al. showed that the secretome of human bone marrow mesenchymal stem cells could inhibit necroptosis activation in retinal neurodegeneration, suggesting necroptosis as a potential therapeutic target for retinal degenerative diseases [113]. Necroptosis in Diabetic Retinal Neurodegeneration Similar to pyroptosis, necroptosis exacerbates neural damage through its inflammatory properties. In addition, the diabetic microenvironment fosters necroptosis in retinal neurons, leading to neurodegeneration and visual impairment. In vitro experiments have confirmed that exposure to hyperglycemic conditions triggers RIPK1- and MLKL-dependent necroptosis in various cell types. This process likely hinges on feedback mechanisms involving glycolysis, advanced glycation end products, and ROS, and can be blocked by RIPK1 inhibitors or siRNA interference [114]. The following section discusses the involvement of necroptosis in different cell types and its therapeutic promise in DRN.
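As a reading aid for the cascade summarized above, the sketch below reduces necroptosis to Boolean gates: death-receptor or ZBP1/TRIF input engages RIPK3, caspase-8 acts as a brake, and phosphorylated MLKL executes membrane rupture. It is a deliberately crude illustration (not a model from the review), but it captures why either caspase-8 activity or pharmacological RIPK inhibition shuts the pathway down.

# Boolean caricature of the necroptosis pathway described above.
# The inhibitor flags stand in for pharmacological blockade (e.g., Necrostatin-1
# for RIPK1, GSK840/GSK872 for RIPK3); all of the logic is a simplification.

def necroptosis_state(death_receptor, zbp1_or_trif, caspase8_active,
                      ripk1_inhibited=False, ripk3_inhibited=False):
    ripk1_active = death_receptor and not caspase8_active and not ripk1_inhibited
    # RIPK3 can be engaged through RIPK1 or through RIPK1-independent sensors.
    ripk3_active = ((ripk1_active or zbp1_or_trif)
                    and not caspase8_active and not ripk3_inhibited)
    p_mlkl = ripk3_active              # the necrosome phosphorylates MLKL
    membrane_rupture = p_mlkl          # MLKL oligomers permeabilize the membrane
    return {"necrosome": ripk3_active, "pMLKL": p_mlkl,
            "necroptosis": membrane_rupture}

# TNFR1 engagement with caspase-8 intact: necroptosis is held in check.
print(necroptosis_state(True, False, caspase8_active=True))
# Caspase-8 blocked: necroptosis proceeds unless RIPK3 is also inhibited.
print(necroptosis_state(True, False, caspase8_active=False))
print(necroptosis_state(True, False, caspase8_active=False, ripk3_inhibited=True))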
Retinal Ganglion Cells Necroptosis has been demonstrated in cultured RGCs induced by high glucose, marked by the elevated expression of RIPK1/RIPK3. Treatment with the RIPK1 inhibitor Necrostatin-1 effectively protected RGCs from necroptotic cell death. Meanwhile, there was a significant increase in the number of Nissl bodies within the cells post-treatment, suggesting an improvement in cell function [115]. There are currently few reports confirming the presence of RGC necroptosis in DR at the animal level. Ischemia/hypoxia-induced ROS accumulation is one of the primary pathological changes in DR, and studies have found that in both oxygen-glucose deprivation (OGD) and I/R animal models, RGCs underwent RIPK3/MLKL-dependent necroptosis. Treatment with the RIPK3 inhibitor GSK840 alleviated neurodegeneration and improved retinal microstructure and visual function [116]. An in vitro study further confirmed the involvement of necroptosis under OGD, with RIPK3 assuming an important role in this process [117]. Elevated RIPK1/3 levels were also observed in the retina in an I/R mouse model [118], whereas GSK840 treatment improved electroretinogram amplitude and exerted protective effects on retinal neurons [116]. Activation of TNF-α was involved in the induction of RGC necroptosis in the I/R model, which could be blocked by Necrostatin-1 [119]. In addition, Gao et al. identified extracellular signal-regulated kinase (ERK) 1/2 as a regulator of RIPK3-related necroptosis [120], while Lee et al. pinpointed Daxx as a downstream component of RIPK3 [121]. Targeting these molecules may aid in preventing necroptosis in the early stages of I/R injury. Microglia Microglia, as pivotal mediators of neuroinflammation, are also subject to cell death during disease progression. Huang et al. highlighted the involvement of microglial necroptosis in exacerbating neuroinflammation and retinal neurodegeneration through RIPK1/RIPK3-dependent mechanisms, a process mitigated by Necrostatin-1 [105]. In a streptozotocin (STZ)-induced diabetes model, the use of the RIPK3 inhibitor GSK872 effectively inhibited microglial necroptosis and concomitant neuroinflammation, thereby rescuing the decrease in neuronal density and neuroretinal thickness in diabetic mice [122]. In addition, He et al. identified a distinct microglia subpopulation, termed sMG2, in a hypoxia-induced retinal neovascularization model. The sMG2 subpopulation demonstrated specific activation of the ripk3 and mlkl genes under hypoxic conditions, rendering these cells susceptible to necroptosis. The ensuing necroptosis induced the production of the angiogenic factor FGF2, contributing to retinal neovascularization [123]. These findings underscore the critical role of RIPK1/RIPK3-mediated necroptosis in microglia in diabetic neuroinflammation and neurodegeneration. Photoreceptors Direct evidence linking necroptosis to DRN in photoreceptors is still emerging. However, necroptosis does occur in photoreceptors under conditions mimicking DR pathology. RIPK3-dependent necroptosis was activated in H2O2-treated 661W cells and could be blocked by the mitochondria-targeted peptide SS31, leading to protection against oxidative stress and cell death [124]. In a retinal detachment model, RIPK3 activation triggered necroptosis in photoreceptor cells, while the blockade of RIPK1 or deficiency of RIPK3 markedly abolished this effect [125].
Additionally, Sato et al. demonstrated the involvement of RIPK-mediated necroptosis in photoreceptor degeneration, indicating the potential role of the TNF-RIP pathway as a candidate target for retinal neurodegeneration [126]. In summary, necroptosis is a novel form of non-apoptotic PCD primarily driven by RIPK1 and/or RIPK3. During retinal neurodegeneration associated with conditions like diabetes, necroptosis predominantly affects RGCs and microglia and may also involve photoreceptors. Activation of necroptosis not only results in direct neuronal death but also triggers significant neuroinflammation, further fueling the neurodegenerative cascade. Several studies have demonstrated that blocking the necroptotic pathway through specific RIPK1/RIPK3 inhibitors or gene deletion can alleviate retinal neuroinflammation and neurodegeneration, indicating necroptosis as a promising therapeutic target for DRN. Parthanatos Parthanatos, defined by Dawson in 2008, is a form of non-apoptotic PCD dependent on poly (ADP-ribose) polymerase-1 (PARP1). This pathway is activated by oxidative stress-induced DNA damage, leading to excessive accumulation of poly (ADP-ribose) (PAR) and subsequent induction of cell death [127]. PAR accumulation induces the release of mitochondrial apoptosis-inducing factor (AIF), which recruits macrophage inhibitory factor (MIF) to the nucleus, where genomic DNA is cleaved into segments [127,128]. Parthanatos has been implicated in various conditions, including Parkinson's disease, diabetes mellitus, and cerebral I/R injury. PARP1 plays a crucial role in parthanatos: its hyperactivation can lead to a significant decrease in NAD+ and depletion of ATP, ultimately resulting in cell death [127][128][129]. Knocking down PARP1 not only enhanced the viability of cultured human RCECs under high-glucose conditions but also prevented high-glucose-induced inflammation [130]. PARP inhibitors, which are currently undergoing clinical trials for cancer therapy, have shown potential in attenuating excitotoxicity and ischemic cell injury in neurons [131]. Diabetic peripheral neuropathy (DPN) is a common complication of diabetes characterized by neurovascular damage. Blocking high-glucose-induced oxidative stress suppressed parthanatos by reducing PAR accumulation and AIF nuclear translocation [132]. Overall, the use of PARP inhibitors holds promise as a neuroprotective therapy to reduce neuronal cell death and tissue damage. Studies have shown that increased levels of excitotoxic glutamate in the retina may lead to retinal neuronal cell dysfunction or death through activation of the PARP1-parthanatos pathway. Erythropoietin has been identified as a potential neuroprotective agent in the diabetic retina, acting by regulating glutamate levels and inhibiting parthanatos [133]. Additionally, nicotinamide, a form of vitamin B3, played a role in mitigating DRN by modulating the response to DNA damage. Treatment with nicotinamide in diabetes led to a decrease in oxidative stress and cleaved PARP1 expression. This suggests that nicotinamide may help mitigate DRN by promoting DNA repair [134]. While these findings from cellular and animal models are promising, there are still gaps in translating these neuroprotective treatments to clinical practice. Further research is needed to better understand the mechanisms underlying PARP1-mediated parthanatos in DRN and to develop effective clinical interventions.
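The bioenergetic collapse just described lends itself to a similar back-of-the-envelope sketch. In the toy model below (illustrative only; the pools are normalized and all constants and thresholds are invented), sustained DNA damage hyperactivates PARP1, which drains the NAD+ pool into PAR polymer; ATP is crudely assumed to track NAD+, and the AIF-release flag trips only when PAR accumulates while ATP collapses, echoing the threshold-like character of parthanatos.

# Toy parthanatos energetics (all values hypothetical, for illustration only):
# PARP1 activity consumes NAD+ in proportion to DNA damage; PAR accumulates;
# AIF release is triggered by high PAR together with energy collapse.

def parthanatos(dna_damage, steps=200, dt=0.1):
    nad, par = 1.0, 0.0                       # normalized NAD+ pool, PAR polymer
    k_parp, k_nad_synth, k_par_decay = 0.5, 0.05, 0.02
    for _ in range(steps):
        parp_flux = k_parp * dna_damage * nad     # PARP1 needs NAD+ as substrate
        nad += dt * (k_nad_synth * (1.0 - nad) - parp_flux)
        par += dt * (parp_flux - k_par_decay * par)
    atp = max(0.0, nad)                           # crude coupling: ATP tracks NAD+
    aif_release = par > 1.0 and atp < 0.3         # arbitrary thresholds
    return nad, par, aif_release

for damage in (0.0, 0.2, 2.0):
    nad, par, aif = parthanatos(damage)
    print(f"damage={damage}: NAD+~{nad:.2f}, PAR~{par:.2f}, AIF release={aif}")

On this caricature, a PARP inhibitor corresponds to lowering k_parp, and NAD+ supplementation (e.g., the nicotinamide strategy mentioned above) to raising k_nad_synth; both keep the model cell away from the AIF-release regime.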
PANoptosis PANoptosis, a recently characterized mode of death involving the convergence of pyroptosis, apoptosis, and necroptosis, has emerged as a novel area of research in understanding cell death mechanisms. The innate immune sensor ZBP1 and TAK1 kinases play a role in regulating the assembly of the PANoptosome complex [135]. Although the individual pathways of pyroptosis, apoptosis, and necroptosis are well studied, the interplay and regulation among them in PANoptosis remain complex and not fully understood. In the context of glaucomatous RGC damage, studies have investigated the involvement of PANoptosis. Treatment with melatonin has been shown to rescue RGC survival and reduce the loss of retinal nerve fiber layer thickness, possibly through inhibiting the expression of PANoptosis-associated proteins [136]. In addition, the inhibition of dynamin-related protein 1 (Drp1) demonstrated neuroprotective effects against high intraocular pressure-induced injury by regulating the expression of PANoptosis-associated proteins and the ERK1/2-Drp1-ROS pathway, suggesting a potential therapeutic strategy for RGC protection [137]. In retinal I/R injury models, the upregulation of PANoptosome components has been observed, accompanied by alterations in neuron morphology and protein levels indicative of PANoptosis-like cell death [138]. Notably, studies investigating the protective effects of Dickkopf-1 in DR highlighted its role in inhibiting PANoptosis by blocking core proteins of pyroptosis, apoptosis, and necroptosis, as well as angiogenesis-related molecules and endothelial cell proliferation and migration [139]. These results provide a basis for further investigation of this novel form of regulated cell death in DRN. NETosis NETosis is a form of PCD induced by neutrophil extracellular traps (NETs) [140]. NETs are released when the cell membrane is ruptured, and their formation depends on nicotinamide adenine dinucleotide phosphate (NADPH) oxidase for the production of ROS, leading to cell death and an inflammatory response [141][142][143]. NETosis plays a critical role in the development of diabetes and its related complications. Elevated NET components have been observed in DR patients and in mouse models of ocular inflammation, correlating positively with the severity of DR [143,144]. Notably, the activation of NADPH oxidase and production of ROS were implicated in high-glucose-induced NETosis, while anti-VEGF treatment was able to attenuate this process, suggesting a possible target for modulating NETosis in DR [143]. Binet et al. investigated the role of neutrophils in the remodeling of unhealthy vessels during advanced inflammation, showing that aging vasculature attracted neutrophils and induced the production of NETs [145]. These findings suggest that NETosis is primarily involved in vasculopathy rather than neurodegeneration in DR. Furthermore, NETosis has been implicated in DPN, with histone deacetylase playing a crucial role in its progression. Suppressing histone deacetylase has been shown to inhibit NETosis, reduce DRN, and alleviate pain associated with DPN [146]. The association of NETosis with neurodegeneration in the retina remains to be further investigated.
Other Non-Apoptotic PCD

Autosis, a form of PCD triggered by excessive or uncontrolled autophagy [147,148], has been studied in the context of neuronal death after cerebral hypoxia-ischemia injury [148]. While its role in some diabetes-related complications has been explored, its involvement in DR remains poorly understood. Nonetheless, given its potential role in hypoxia-ischemia-induced neurotoxicity, autosis may represent a promising neuroprotective target in DRN.

Lysosomal cell death (LCD) is a non-apoptotic PCD characterized by lysosomal rupture and by lipid metabolite and ROS production mediated by cathepsins or iron [15,19]. LCD has implications in inflammation, neurodegeneration, and aging [18]. Lysosomal dysfunction can impair neurons, and oxidative stress may contribute to neurodegenerative diseases [149]. Das et al. suggested targeting the endo-lysosomal pathway as a therapeutic approach for optic neuropathies [150]. However, the molecular mechanisms underlying this form of PCD in the context of DRN are currently unknown and require further investigation.

Other recently identified non-apoptotic forms of PCD include entotic cell death (entosis), alkaliptosis, and oxeiptosis. Entosis involves cellular "cannibalism", in which one cell engulfs and kills another [151]. Alkaliptosis is driven by intracellular alkalinization induced by downregulation of NF-κB-dependent carbonic anhydrase 9 (CA9) [152]. Oxeiptosis is a caspase-independent PCD induced by ROS and is potentially relevant to DR pathology [153], although evidence for its specific involvement in DRN is minimal.

Moreover, cuproptosis is a newly discovered type of PCD associated with an imbalance in intracellular copper metabolism; excess copper can lead to neurodegenerative disease by triggering proteotoxic stress responses [16]. Disulfidptosis is another novel PCD involved in sulfur metabolism; SLC7A11, a regulator of redox homeostasis, participates in the induction of disulfide stress and rapid cell death [17]. However, few studies have explored these modes of PCD in DR or their implications for neuroprotection. Further exploration is warranted and may open a new research direction for DR and its neuroprotective mechanisms.

Conclusions and Perspectives

We have summarized several non-apoptotic PCD mechanisms and potential pharmacological targets in the context of DRN, including ferroptosis, pyroptosis, necroptosis, and the emerging PANoptosis (Table S1). Neurodegeneration is a critical pathology in the early phases of DR, driven by inflammation, oxidative stress, neuronal death, and I/R injury. Multiple PCD types coexist in DRN, each playing a distinct pathogenic role: pyroptosis predominantly mediates inflammation, while ferroptosis induces oxidative stress damage. Future investigations should focus on identifying the key determinants of retinal cell death types under diabetic conditions and their interrelationships within DRN; this is crucial for developing PCD-targeted therapies. Furthermore, a comprehensive understanding of the mechanistic role of PANoptosis in DRN may aid in creating integrative therapeutic strategies. Pathways such as NETosis, autosis, and LCD remain insufficiently explored in diabetic neurodegenerative disease. By uncovering these molecular mechanisms and key signaling pathways, we aim to develop more precise and effective treatment strategies for DRN patients, thereby improving their quality of life.
Identification and validation of a 44-gene expression signature for the classification of renal cell carcinomas

Renal cancers account for more than 3% of all adult malignancies and cause more than 23,400 deaths per year in China alone. The four most common types of kidney tumours are clear cell, papillary and chromophobe carcinoma and benign oncocytoma. These histological subtypes vary in their clinical course and prognosis, and different clinical strategies have been developed for their management. Some kidney tumours can be very difficult to distinguish by pathological assessment of morphology and immunohistochemistry. Six renal cell carcinoma microarray data sets, comprising 106 clear cell, 66 papillary, 42 chromophobe, 46 oncocytoma and 35 adjacent normal tissue samples, were subjected to integrative analysis. These data were combined and used as a training set for the identification of a candidate gene expression signature. In addition, two independent cohorts, 1020 RNA-Seq samples from The Cancer Genome Atlas database and 129 qRT-PCR samples from Fudan University Shanghai Cancer Center (FUSCC), were analysed to validate the selected signature. A 44-gene expression signature derived from the microarray analysis was strongly associated with the histological differentiation of renal tumours and could be used for tumour subtype classification. Its performance was further validated in the 1020 RNA-Seq samples and the 129 qRT-PCR samples, with overall accuracies of 93.4% and 93.0%, respectively. A 44-gene expression signature that accurately discriminates renal tumour subtypes was thus identified in this study. Our results may prompt further development of this signature into a molecular assay amenable to routine clinical practice.

Background

According to GLOBOCAN 2012, renal cancers are the 17th most common malignancy, accounting for more than 3% of adult malignancies and causing approximately 23,400 deaths per year in China alone [1,2]. In 2011, the overall incidence of renal cancers in China rose to 3.35 cases per 100,000 people, and the estimated mortality rate was 1.12 deaths per 100,000 people [3]. According to the 2016 World Health Organization (WHO) classification, there are 16 subtypes of renal cell carcinoma (RCC), a family of carcinomas that arise from renal tubule epithelia [4]. Currently, the four most common types of kidney tumours are clear cell RCC (ccRCC), papillary RCC (pRCC), chromophobe RCC (chRCC) and benign oncocytoma [4]. These histological subtypes vary in their clinical course and outcomes, and different clinical management strategies have been developed for their treatment. Among patients with the four most common types, those with ccRCC have the worst prognosis, and the prognoses of patients with pRCC and chRCC also differ [5]. Different genetic alterations drive the development of renal tubules into RCCs of varying histological subtypes that exhibit distinct gene expression patterns or mutations, thus providing specific molecular candidates for targeted therapy (e.g., mTOR, VEGF, KIT, and checkpoint inhibitors) [6]. An improved molecular understanding of the mechanisms underlying RCC subtypes has facilitated the development of targeted therapies and of biomarkers of treatment response [6].
Distinguishing between some types of kidney tumours based on morphology and immunohistochemistry can be very difficult for pathologists, yet correct identification of these subtypes is important for making precise decisions regarding therapeutic regimens. Recent studies have focused on microarray profiling of different RCC subtypes to develop accurate diagnostic RCC biomarkers. In a microarray analysis of renal tumours, claudin-7 mRNA, a distal nephron marker, was found to be overexpressed in chRCC compared with oncocytoma, ccRCC, and pRCC [7]. Subsequent immunohistochemical analysis of two independent cohorts showed that claudin-7 expression was detected in 67 and 100% of chRCCs, 0 and 7% of ccRCCs, 28 and 90% of pRCCs, and 26 and 45% of oncocytomas [8,9]. These studies revealed the potential of claudin-7 as a biomarker for distinguishing chRCC from the remaining three RCC subtypes and indicated the accuracy of microarray technology for detecting diagnostic biomarkers. Compared with classifying disease using a single gene marker, simultaneously quantifying the expression of numerous genes may better capture the complex physiopathology underlying tumourigenesis and the development of specific RCC subtypes. Several studies have used microarray technology to identify gene expression signatures for the classification of RCCs. Chen and coworkers published a four-gene panel that could classify RCC subtypes with an estimated prediction accuracy of 96% [10]. Youssef and colleagues reported a classification system using miRNA signatures, requiring a maximum of four steps, with sensitivities of 97% for distinguishing normal tissue from RCC, 100% for the ccRCC subtype and 97% for the pRCC subtype, and with 100% accuracy in distinguishing the oncocytoma subtype from the chRCC subtype [11]. In this study, to identify novel gene biomarkers for the classification of RCC subtypes, we performed an integrative analysis of six microarray data sets (n = 295). The genes selected in the training set were validated in 1020 RNA-sequencing samples from The Cancer Genome Atlas (TCGA) database and then tested in 129 independent specimens by qRT-PCR. A 44-gene signature was identified and validated as highly sensitive and specific for the classification of RCCs.

Gene expression database curation

Gene expression data sets of 1315 renal tumours with histologically confirmed subtypes and adjacent normal tissues were collected from public data repositories (e.g., ArrayExpress, Gene Expression Omnibus (GEO), and the TCGA data portal) and curated to form a comprehensive RCC transcriptome database. Array-based gene expression profiling of 295 tissue samples obtained from six GEO data sets (GSE12090, GSE15641, GSE19949, GSE8271, GSE7023 and GSE19982) was conducted mainly on two Affymetrix oligonucleotide microarray platforms, the GeneChip Human Genome U133A Array and the U133 Plus 2.0 Array. Detailed descriptions of the specimen characteristics and clinical features are provided in the original studies [12-15]. The sequence-based gene expression profiles of 1020 tissue samples (534 ccRCC, 291 pRCC, 66 chRCC and 129 normal kidney samples) were generated on an Illumina HiSeq 2000 RNA sequencing platform and retrieved from the cBioPortal for Cancer Genomics [16].
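As a rough illustration of this curation step, the sketch below pulls some of the listed GEO series with the Python GEOparse package and assembles sample-by-probe expression matrices. The accessions are taken from the text, but the package choice and the conventional "VALUE" column name are our assumptions; the authors' actual pipeline (SCAN normalization of raw CEL files in R) is not reproduced here.

```python
# Sketch: assembling part of an RCC transcriptome database from GEO.
# Requires the GEOparse package (pip install GEOparse).
import GEOparse

ACCESSIONS = ["GSE12090", "GSE15641", "GSE19949"]  # subset of the six series

def series_expression_matrix(accession):
    """Download a GEO series and pivot it into a samples x probes matrix."""
    gse = GEOparse.get_GEO(geo=accession, destdir="./geo_cache")
    # 'VALUE' is the conventional expression column in GEO sample tables;
    # individual series may name it differently (an assumption to verify).
    matrix = gse.pivot_samples("VALUE")  # rows: probes, columns: GSM samples
    return matrix.T                       # transpose to samples x probes

if __name__ == "__main__":
    for acc in ACCESSIONS:
        df = series_expression_matrix(acc)
        print(acc, df.shape)
```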
The gene expression profiles consisted of transcriptomic data for 20,500 unique genes, and clinical information for the selected samples was retrieved from the "Clinical Biotab" section of the data matrix based on the patients' Biospecimen Core Resource IDs.

Microarray data processing and normalization

Gene expression data analysis was performed using R software and packages from the Bioconductor project [17-19]. We used the Single Channel Array Normalization (SCAN) approach from the SCAN-UPC package to process the Affymetrix microarray data [20,21]. Upon normalizing each raw CEL file, SCAN outputs probe-level expression values. We further used the custom mapping files from the BrainArray resource to summarise probe-level intensities directly to gene-level expression values [22]. In this way, probes mapping to multiple genes and other problems associated with older generations of Affymetrix probe designs were avoided. After normalization, we applied the ComBat approach to adjust for batch effects [23].

Gene signature identification and performance assessment

To identify a gene expression signature, we used the support vector machine-recursive feature elimination (SVM-RFE) algorithm for feature selection and classification modelling [24]. For multi-class classification, a one-versus-all approach was used, whereby a binary classifier was first derived for each subtype; the reported result is the subtype that classifies the test sample with the highest confidence. For each specimen, the predicted subtype was compared with the reference diagnosis, and the prediction was counted as a true positive when the two matched; when they did not match, the specimen was counted as a false positive for the predicted subtype. For each subtype on the panel, sensitivity was defined as the number of true positive results divided by the total number of positive samples analysed, while specificity was defined as 1 - (false positives)/(total tested - total positives), i.e., the fraction of truly negative samples that were not misclassified as that subtype.

Biological network and functional enrichment analysis

Enrichment analysis of Gene Ontology terms and molecular pathways was performed using the Lynx Systems Biology Tool [25]. All significance tests were two-sided, and a false discovery rate of less than 0.05 was considered significant. Biological network analysis was performed with the NetworkAnalyst software [26,27]. Protein-protein interaction information was retrieved from the IMEx Interactome Database [28]. A dense network was constructed by retaining only the seed proteins and the minimum essential non-seed proteins in order to study the key interactions.

qRT-PCR analysis

We included 121 renal tumour samples and 8 non-tumour kidney tissues for qRT-PCR analyses. Written informed consent was obtained from all participants. The study was approved by the Ethics Committee of Fudan University Shanghai Cancer Center (FUSCC), China. Of the 121 tumours, 26 were ccRCC, 40 were chRCC, 28 were pRCC, and 27 were oncocytoma. Total RNA was isolated from formalin-fixed paraffin-embedded (FFPE) tissue sections using an FFPE Total RNA Isolation Kit (Canhelp Genomics, Hangzhou, China). Briefly, the paraffin sections were placed in sterile 1.5-ml microcentrifuge tubes, deparaffinized with 100% xylene, and washed twice with 100% ethanol. The deparaffinized tissue was digested with proteinase K at 56 °C for 15 min and then incubated at 80 °C for another 15 min to partially reverse nucleic acid crosslinking. The samples were treated with DNase and eluted in 40 μl of RNase-free water.
The concentration of total RNA was determined spectrophotometrically from the absorbance at 260 nm, and purity was assessed using the A260/A280 ratio; RNA samples with A260/A280 ratios of 1.9 ± 0.2 were included in this study. For each sample, cDNA was generated from the isolated total RNA using a High-Capacity cDNA Reverse Transcription Kit with RNase Inhibitor (Applied Biosystems, Foster City, CA, United States). Primers and MGB probes for the tested candidate genes and the control gene were designed using Primer Express software (Applied Biosystems). The expression levels of the candidate genes were then analysed on an Applied Biosystems 7500 Real-Time PCR system using TaqMan Gene Expression Assays (Applied Biosystems). The PCR program was initiated at 95 °C for 10 min, followed by 40 thermal cycles of 95 °C for 15 s and 60 °C for 1 min.

Establishment of the RCC Transcriptome database

To create an RCC transcriptome database for subtype classification, we performed a systematic search of major biological data repositories (e.g., ArrayExpress, GEO, and TCGA) to collect gene expression data sets from ccRCC, pRCC, chRCC, oncocytoma and adjacent normal tissue samples. Overall, we accumulated the gene expression profiles of 1315 tissue samples to form a comprehensive RCC transcriptome database. To identify a reliable gene expression signature, we adopted a training-testing-validation approach. First, the microarray-based gene expression profiles of 295 specimens were retrieved from the database and curated to form a training set. Second, two independent sets were used to test and validate the classification performance of the signature: one comprised the sequence-based gene expression profiles of 1020 specimens (Test Set 1), and the other comprised the gene expression profiles of 129 specimens analysed by qRT-PCR (Test Set 2). Figure 1 depicts the three distinct phases of the study design, and Table 1 summarises the clinical characteristics of the samples.

Identification of a 44-gene signature in the training set

The training set consisted of 106 ccRCC, 66 pRCC, 42 chRCC, 46 oncocytoma and 35 adjacent normal tissue samples. After the data normalization and annotation steps, a matrix of 12,263 unique genes in 295 samples (approximately 3.5 million data points) was prepared for downstream bioinformatics analyses. Extracting a subset of informative genes from high-dimensional genomic data is a critical step in gene expression signature identification. Although many algorithms have been developed, the SVM-RFE approach is considered one of the best gene selection algorithms. For each subtype, we used SVM-RFE to (1) evaluate and rank the contribution of each gene to the optimal separation of that subtype from the others; (2) select the top 10 ranked genes as the most differentially expressed for that subtype; and (3) repeat the process for each subtype, obtaining five lists of top-10 genes. After removing redundant features, 44 unique genes (listed in Table 2) were obtained and used to cluster the 295 training set samples. Average linkage hierarchical clustering was performed with Pearson's correlation between the 44-gene expression profiles of the samples as the similarity metric. As shown in Fig. 2a, the samples clustered into five groups that closely followed the histological subtypes.
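To make the two steps just described concrete, here is a minimal Python sketch of one-versus-all SVM-RFE selection of ten genes per subtype followed by average-linkage clustering on correlation distance. The estimator settings (linear SVM, default regularisation, elimination step size) are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of the signature-identification steps described above (illustrative).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.feature_selection import RFE
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def select_signature(X, y, genes, top_k=10):
    """One-versus-all SVM-RFE: union of the top_k ranked genes per subtype."""
    selected = set()
    for subtype in np.unique(y):
        y_bin = (y == subtype).astype(int)            # one-versus-all labels
        rfe = RFE(LinearSVC(C=1.0, max_iter=10000),   # linear-kernel SVM
                  n_features_to_select=top_k, step=0.1)
        rfe.fit(X, y_bin)
        selected.update(np.asarray(genes)[rfe.support_])
    return sorted(selected)  # redundant genes collapse via the set union

def cluster_samples(X_sig, n_groups=5):
    """Average-linkage clustering; 'correlation' distance is 1 - Pearson r."""
    tree = linkage(pdist(X_sig, metric="correlation"), method="average")
    return fcluster(tree, t=n_groups, criterion="maxclust")

# Example: signature = select_signature(X_train, y_train, gene_names)
```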
Among the four tumour subtypes, the oncocytoma and chRCC samples clustered together, whereas the ccRCC samples were more similar to the pRCC samples.

Functional enrichment and biological network analysis

We further investigated whether the 44 candidate genes exhibited biological features relevant to renal carcinogenesis. As shown in Table 3, the most significantly enriched gene categories are involved in insulin-like growth factor binding, transmembrane transport of small molecules, and cocaine and amphetamine addiction, among others. Interestingly, seven of the 44 candidate genes (ASS1, DEFB1, IGFBP6, LCN2, SERPINA5, UMOD and VCAN) were overrepresented in the "Renal-cell cancer" gene set (p < 1.4 × 10^-5). More specifically, AQP6, CLDN8 and KRT7 were overrepresented in the "Renal oncocytoma" gene set (p < 6.1 × 10^-6). We also explored the underlying biological networks of the 44 candidate genes, using them as seeds to generate a minimum protein-protein interaction network. As shown in Fig. 3, the network includes 33 of the 44 genes and is centred on essential nodes such as APP, ASS1, ATF2, CRYAB, HNF1A, S100A2 and UBC. Enrichment analysis revealed that the most significant molecular networks included the TGF-beta signalling pathway, the androgen receptor signalling pathway and transcriptional misregulation in cancer (Table 4).

Performance assessment with 5-fold cross-validation

As an initial step, we assessed the performance of the classifier using 5-fold cross-validation within the training set. In 5-fold cross-validation, the data are split into five equally sized subsets; a single subset is treated as the testing set and the remaining data as the training set, and the procedure is run over all five splits with the estimates averaged. Given the limited sample size of the training set, we repeated the 5-fold cross-validation process 1000 times and estimated the average classification accuracy and the corresponding 95% confidence interval (95% CI). The 44-gene expression signature showed an overall accuracy of 95.7% (95% CI: 0.912 to 1.00), with notable variation between subtypes: sensitivities ranged from 88.0% (chRCC) to 98.1% (ccRCC). This internal validation of the training set provided a preliminary estimate of classification performance.

Independent validation in renal tumours profiled with next-generation sequencing

The final classification model of the 44-gene expression signature was established using the entire training set and then applied to an independent validation set comprising 534 ccRCC, 291 pRCC, 66 chRCC and 129 adjacent normal tissue specimens profiled with next-generation sequencing (Test Set 1). Hierarchical clustering of the 44 genes across the 1020 samples revealed distinct patterns between the ccRCC, pRCC, chRCC and adjacent normal samples (Fig. 2b). With the 44-gene expression signature, the overall classification accuracy in this test set reached 93.4%. The detailed sensitivities and specificities are listed in Table 5.

Clinical validation of the 44-gene signature by qRT-PCR analysis

Microarray and RNA-sequencing data provide a global assessment of transcriptomic variation, but their resolution and accuracy are limited for individual gene analyses, and they remain difficult to use in clinical practice. qRT-PCR is generally considered the standard assay for measuring individual gene expression and is often used to confirm the findings of microarray and RNA-sequencing analyses.
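Before turning to that validation, the repeated cross-validation scheme described above can be sketched in a few lines of Python. The choice of a linear SVM and the percentile-based construction of the 95% interval are our assumptions for illustration, not a reconstruction of the authors' exact procedure.

```python
# Sketch: 5-fold cross-validation repeated 1000 times, reporting the mean
# accuracy and an empirical 95% confidence interval over the repetitions.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import LinearSVC

def repeated_cv_accuracy(X, y, n_repeats=1000, n_splits=5, seed=0):
    rng = np.random.RandomState(seed)
    accs = []
    for _ in range(n_repeats):
        cv = StratifiedKFold(n_splits=n_splits, shuffle=True,
                             random_state=rng.randint(2**31 - 1))
        scores = cross_val_score(LinearSVC(max_iter=10000), X, y, cv=cv)
        accs.append(scores.mean())          # average over the five folds
    accs = np.asarray(accs)
    lo, hi = np.percentile(accs, [2.5, 97.5])
    return accs.mean(), (lo, hi)

# Example: mean_acc, ci95 = repeated_cv_accuracy(X_train, y_train)
```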
We therefore further evaluated the expression levels of the 44 genes by qRT-PCR in an independent cohort of 121 RCC tumours (26 ccRCC, 28 pRCC, 40 chRCC, and 27 oncocytoma) and 8 normal kidney tissues (Test Set 2). Figure 2c shows the hierarchical clustering of the 44 genes and 129 samples based on the qRT-PCR data; distinct patterns were again observed between the four tumour subtypes and the adjacent normal samples. With the 44-gene expression signature, 29 samples were classified as ccRCC, 25 as pRCC, 39 as chRCC, 26 as oncocytoma and 10 as normal kidney tissue. Overall, the gene expression-based assignments reached 93.0% agreement with the reference diagnoses (120 of 129; 95% CI: 0.868 to 0.966). Sensitivities ranged from 89.3% (pRCC) to 100% (normal tissue), while specificities ranged from 96.1% (ccRCC) to 100% (chRCC and pRCC). The detailed sensitivities and specificities are listed in Table 5.

Discussion

Owing to the development of high-throughput microarray and next-generation sequencing technologies, as well as the efforts of systematic cancer genomics projects, numerous genomic data sets were available for our research. In this study, we identified a 44-gene expression signature for the accurate and robust classification of RCC subtypes (ccRCC, pRCC, chRCC, and oncocytoma). The signature demonstrated an overall accuracy of 95.7% across the four subtypes in cross-validation of the microarray-profiled training set and 93.4% in an independent test set of 1020 RCC and normal kidney samples profiled by next-generation sequencing. Furthermore, we tested the signature on an independent cohort by qRT-PCR, achieving an overall accuracy of 93.0% on 129 samples spanning the four subtypes and normal specimens. This signature may serve as a reliable diagnostic tool to help pathologists address the growing unmet need for RCC classification.

Kidney tumour subtypes are characterised by different genetic mutations and chromosomal variations and thus present different gene expression profiles. Numerous molecules have been reported as capable of distinguishing kidney tumour subtypes. For example, vascular cell adhesion molecule 1 (VCAM1) was reportedly significantly upregulated in ccRCC and pRCC but downregulated in chRCC and oncocytoma [29]. Furthermore, positive immunoreactivity for the metastasis suppressor protein KAI1 was often detected in chRCC specimens but rarely in ccRCC and oncocytoma specimens [30], and GST-alpha mRNA expression was higher in most ccRCCs than in other kidney tumours [31]. However, just as RCC subtypes cannot be consistently distinguished by routine microscopic morphology, single molecules seldom have sufficient power to classify all four major renal tumour subtypes. Therefore, comprehensive analysis of multi-gene expression panels is necessary for the classification of renal tumour types. Based on the expression patterns of 44 genes, we classified the four most common renal tumour subtypes, ccRCC, pRCC, chRCC, and oncocytoma, with sensitivities ranging from 88% (chRCC) to 98% (ccRCC) in the training set, 90.9% (chRCC) to 94.6% (normal tissue) in Test Set 1, and 89.3% (pRCC) to 100% (normal tissue) in Test Set 2. In addition, the diagnostic histological classification accuracy was higher than that obtained with any of the genes used alone.
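For concreteness, the per-subtype sensitivity and specificity figures quoted here follow directly from the definitions given in the Methods; the short Python function below (ours, for illustration) computes them from reference and predicted labels.

```python
# Sketch: per-class sensitivity and specificity as defined in the Methods:
# sensitivity = TP / total positives; specificity = 1 - FP / (total - positives).
import numpy as np

def per_class_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    metrics = {}
    for cls in np.unique(y_true):
        pos = y_true == cls
        tp = np.sum(pos & (y_pred == cls))    # correct calls for this class
        fp = np.sum(~pos & (y_pred == cls))   # other classes called as cls
        metrics[cls] = {
            "sensitivity": tp / pos.sum(),
            "specificity": 1.0 - fp / (~pos).sum(),
        }
    return metrics

# Example: per_class_metrics(reference_diagnoses, predicted_subtypes)
```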
The chRCC and oncocytoma samples displayed almost identical gene expression profiles for MAL, TMEM255A, RHCG, ATP6V0A4, STAP1, and DEFB1 in both the RNA microarray and RNA-sequencing data, in agreement with the known fact that chRCC and oncocytoma are related neoplasms [32]. However, because chRCC is potentially malignant whereas oncocytoma appears to be a benign mimic of RCC [4,33], subtle differences in gene expression are expected, and the distinction between the two subtypes has important clinical significance. We therefore propose that biomarkers identified from gene expression profiles accumulated over large cohorts can indeed help resolve important and difficult differential diagnoses.

Several studies have reported the promise of gene or protein expression-based signatures for the classification of RCC subtypes. Unlike many studies in which samples were collected from a single centre or ethnic cohort, our approach exploited tumour samples from two large databases: samples from the GEO database were used to construct the classification panel, and samples from the TCGA database were used to test the 44-gene expression signature. In addition, we validated the signature in an independent Chinese cohort using qRT-PCR. In a clinical scenario, the use of multi-centre, multi-ethnic data should greatly increase the reliability and general applicability of the 44-gene expression signature.

In this study, the 44-gene expression signature reliably identified the tumour subtype in 95.7% of the 295 samples tested. This accuracy is comparable to that of other signatures established with mRNA or miRNA biomarkers (ranging from 90 to 96%) [10,11,34]. The performance of this mRNA signature assessed by qRT-PCR also compares favourably with protein signature analysis by immunohistochemistry, the current clinical practice standard, which has shown 78-87% accuracy in identifying RCC samples using AMACR, CK7, and CD10 [35]. Moreover, qRT-PCR analysis of the 44-gene expression patterns classified the four most common renal tumour subtypes with 100% sensitivity in distinguishing normal tissue from RCC, 96.2% for the ccRCC subtype, 92.5% for the chRCC subtype, 89.3% for the pRCC subtype, and 92.6% for the oncocytoma subtype; these figures are comparable to those of other signatures (97% in distinguishing normal from RCC, 98-100% for the ccRCC subtype, 93% for the chRCC subtype, 97-98% for the pRCC subtype, and 86% for the oncocytoma subtype) [10,11,34]. In routine clinical settings, the most commonly used diagnostic materials are FFPE samples; further work is therefore needed to translate the 44-gene signature from gene expression microarrays and qRT-PCR to immunohistochemistry, allowing widespread access and application in clinical diagnosis.

Conclusion

In the present study, we developed and validated a 44-gene expression-based signature for the classification of RCC subtypes. Our results may prompt further development of this gene expression signature into a molecular assay amenable to routine clinical practice. We foresee its application in cases where morphology and immunohistochemistry fail to distinguish between renal tumour subtypes. Further studies are needed to determine the role of our gene expression-based signature in personalised therapy choices and in the prognosis of therapeutic outcomes for RCC patients with different subtypes.
Research on the Evaluation System of Teachers in Universities: Perspectives from China

A scientific, systematic and standardized teacher evaluation system has significant educational value: it improves teachers' teaching efficiency, promotes their professional development, safeguards the quality of education and teaching, and supports the rapid progress of education in China. On the basis of an analysis of a series of problems, namely a narrow evaluation index system, an excessive tendency towards quantification, the absence of teachers as subjects of the evaluation, deviations in evaluation orientation, the lack of a feedback mechanism and insufficient objectivity in the evaluation process, this paper proposes solutions concerning the purpose, content, methods, subjects and feedback of evaluation.

Introduction

Universities are the production base of advanced talent and bear three tasks: cultivating talent, enhancing productivity and serving society. As the direct producers within this base, university teachers' qualifications and research ability directly affect the quality of the talent cultivated and the productive output [1]. Putting people first, implementing a talent-driven development strategy and accelerating the construction of a high-quality teaching force adapted to new circumstances are challenges common to Chinese universities; establishing a reasonable and scientific assessment system stimulates teachers' enthusiasm, promotes their professional development and, ultimately, advances the overall development of universities.

Despite the rapid development of China's higher education, and although the evaluation of university teachers continues to absorb advanced ideas and to progress, serious problems remain at this stage: the evaluation index system is narrow and the tendency towards quantification is severe; teachers lack subject status in the evaluation; the evaluation orientation is skewed and feedback mechanisms are missing; and the evaluation process is insufficiently objective, leaving much room for personal bias [2]. Therefore, in constructing an effective teacher evaluation system, the people-oriented core values of modern management theory should be fully reflected. On the basis of a thorough analysis of the working environment and characteristics of university teachers, the purpose, content, methods, subjects and feedback of evaluation should be considered together, so that teaching, scientific research, professional ethics and other aspects are evaluated comprehensively, encouraging staff to work together to cultivate talent, innovate academically and serve society, and turning the results of performance evaluation into a genuine driving force.

Evaluation Target in Chinese Universities

The ultimate purpose of university teacher evaluation is to provide feedback and motivation, helping teachers identify their own shortcomings and serve the university better. At present, the understanding of the value of teacher evaluation is biased: reasonable evaluation is not for control, but for assessment that serves as a basis for guidance and improvement [3]. The evaluation of university teachers is thus an important means of, and path towards, teacher professional development.
While the results of performance evaluation deserve attention, more attention should be paid to feeding those results back. Linking evaluation results directly to teachers' income, employment, promotion, rewards and penalties gives full play to the incentive function of evaluation, creates benign competition in which staff can move up or down, raises teachers' motivation further and promotes their development [4]. At the same time, evaluation should also serve a developmental function: teachers should be evaluated continuously and from a developmental perspective, so that evaluation prompts them to reflect on their own work, to recognise their strengths and weaknesses in teaching, scientific research and other duties, and to combine this reflection with practical educational activity for continuous development and self-improvement.

Improve the Evaluation Content in Chinese Universities

The evaluation of university teachers tends to focus only on education level, professional competence, and research ability and output, while ignoring the complexity of teachers' work as well as their moral character, political-theoretical literacy, and physical and mental health [5]. Emphasising the explicit over the implicit, and results over process, exposes a utilitarian bias to a certain degree: the multi-dimensional and developmental nature of evaluation content has not received attention, making a comprehensive, objective and impartial evaluation of teachers difficult to achieve. At the same time, evaluation overemphasises commonality and universality: measuring all teachers with the same set of indicators neglects differences between schools and between groups of teachers, inevitably harms teachers' personal development, prevents each teacher from playing to their strengths, and dampens their enthusiasm for work.

University teaching is mental labour with a complex object of work; many factors in the teaching process are hard to pin down, the education cycle is long, educational effects lag, and educational achievements are collective in character. Moreover, the work of university teachers is complex labour: even within the same school, workloads differ between disciplines and specialties, so a single evaluation index applied to all teachers cannot bring out each teacher's strengths. Universities should therefore build their teacher performance evaluation systems around their own goals, their development priorities for the period, their target management systems and their strategies of reform, and should set specific weights for the evaluation content according to the characteristics of different disciplines and departments and their own conditions [6]. A 360-degree evaluation model and the analytic hierarchy process can be adopted to classify teachers for evaluation and guidance, to encourage teachers to engage in basic-theory and innovative research, and to determine the content and weights objectively.
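Since the text invokes the analytic hierarchy process (AHP) for setting indicator weights, here is a minimal Python sketch of the standard AHP weight calculation (principal eigenvector of a pairwise comparison matrix with a consistency check); the example comparison matrix is purely illustrative and not taken from the paper.

```python
# Sketch: deriving evaluation-indicator weights with the analytic hierarchy
# process (AHP). A is a pairwise comparison matrix: A[i, j] states how much
# more important criterion i is than criterion j (Saaty's 1-9 scale).
import numpy as np

def ahp_weights(A):
    """Return principal-eigenvector weights and the consistency ratio."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (vals[k].real - n) / (n - 1)             # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)  # Saaty's random index
    return w, ci / ri                              # CR < 0.1 is acceptable

# Illustrative matrix: teaching vs. research vs. ethics/service.
A = np.array([[1.0, 2.0, 3.0],
              [1/2, 1.0, 2.0],
              [1/3, 1/2, 1.0]])
weights, cr = ahp_weights(A)
print(weights, cr)
```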
Evaluation of teaching practice ability, research and innovation ability, service consciousness and professional ethics should be increased, encouraging teachers to concentrate on improving teaching quality and cultivating outstanding students, so that the evaluation index system reflects the requirement of both ability and integrity.

Change the Evaluation Methods in Chinese Universities

Against the social and academic background of big data analytics, teacher performance evaluation also needs to integrate new technologies and methods in order to keep pace with the times. The traditional method of evaluating university teachers scores teaching, scientific research and research awards on forms produced centrally by the school, and the scores on these three aspects constitute a teacher's annual evaluation result [7]. Such evaluation rests heavily on subjective judgment and does not mine the implicit information in depth, so it can hardly reflect a teacher's overall situation comprehensively and accurately. Combining evaluation methods with big data technology promotes profound changes in the concepts, content and methods of teacher evaluation: evaluation escapes the circle of judging results only, and technical means can fully and accurately record teachers' behaviour, attitudes and practices in teaching and research, making evaluation more scientific, systematic and comprehensive. At the same time, respecting differences in teachers' personalities, introducing third-party organisations to diversify the evaluating subjects, and building an evaluation system based on cloud computing, big data and data visualisation move evaluation methods from experience-based towards data-based. Once such technologies are widely adopted, they will greatly enhance the reliability and validity of the teacher evaluation system, reduce tension and conflict in the evaluation process, and allow the evaluation system to play its guiding role for teachers.

Pay More Attention to the Evaluation Subjects in Chinese Universities

Scientific and effective teacher performance evaluation should be teacher-centred, because teachers are both the main body of the school and the key force in realising the school's strategic objectives. In the evaluation process, the absence of teachers' subject status not only undermines the validity of evaluation but also fails to arouse teachers' initiative [8]. To further promote teacher development, deepen teaching reform and continuously improve teaching, more attention should be paid to teachers' self-evaluation within the evaluation system; schools should give teachers autonomy and allow them to participate in the evaluation process. In evaluation, teachers are then no longer passive objects: alongside other evaluators, they can actively describe their work and situation and expound their own views, engaging in self-affirmation as well as self-criticism.
In this way, teachers become masters of their own evaluation; there is no need to cater to or guard against others as in passive evaluation, and self-evaluation can become a process of self-reflection and development [9]. At the same time, Chinese universities should allow teachers to participate in drawing up teaching evaluation forms, or to add questions they themselves want students to answer. At the end of the evaluation, the results should be discussed with the teachers and their opinions heard, so as to promote the improvement of teaching and teachers' professional development.

Establish the Evaluation Feedback Mechanism in Chinese Universities

Given the complexity of teacher evaluation work, the legitimate rights and interests of evaluated teachers must be protected: when teachers disagree with or are dissatisfied with the evaluation results, they should have channels through which to express their opinions. This enhances teachers' identification with the evaluation system and makes it fairer and more reasonable. In some Chinese universities, the channels and procedures for teachers to complain about or seek reconsideration of evaluation results are neither open nor transparent and lack supervision; teachers who receive low scores in professional title evaluation or in the annual performance evaluation do not know where to appeal [10]. Many teachers who feel they have been treated unfairly simply swallow the grievance and endure in silence, eventually just muddling through; their enthusiasm for teaching is dampened, and their performance settles into mediocrity [11]. Establishing a scientific, reasonable and standardized grievance procedure allows teachers' reasonable demands to be expressed [12], effectively fosters a school culture of trust and cooperation, and enables teachers and management bodies in universities to supervise and encourage one another.

Conclusion

The basic purpose of teaching quality evaluation is to fully mobilise teachers' enthusiasm for teaching through evaluation and to continuously promote their development. Building a scientific, systematic and standardized evaluation system for university teachers has significant meaning and value for improving teaching efficiency, promoting teachers' professional development, ensuring the quality of education and teaching, and accelerating the progress of education in China. Starting from the rapid development of Chinese higher education, this paper has pointed out the importance of constructing a teacher evaluation system in universities and, proceeding from the connotation of teacher evaluation, has analysed a series of problems in the current evaluation of university teachers. In view of these problems, suggestions have been put forward in five respects: the purpose of evaluation, the content of evaluation, the methods of evaluation, the subjects of evaluation and the feedback of evaluation.
Discrete Carleman estimates and three balls inequalities

We prove logarithmic convexity estimates and three balls inequalities for discrete magnetic Schrödinger operators. These quantitatively connect the discrete setting, in which the unique continuation property fails, and the continuum setting, in which the unique continuation property is known to hold under suitable regularity assumptions. As a key auxiliary result, which might be of independent interest, we present a Carleman estimate for these discrete operators.

Introduction

In this article we provide robust quantitative unique continuation results for discrete magnetic Schrödinger operators $P_h$ of the form (1), acting on functions $f : (h\mathbb{Z})^d \to \mathbb{R}$. Here $D^h_{\pm,j} f(n) := \pm(f(n \pm he_j) - f(n))$ denotes the (unscaled) forward/backward difference operator on scale $h$, the fields $B_k : (h\mathbb{Z})^d \to \mathbb{R}^d$ are bounded tensor fields (uniformly in $h$) modelling, for instance, magnetic interactions, and the potential $V : (h\mathbb{Z})^d \to \mathbb{R}$ is assumed to be uniformly bounded (independently of $h$). The operator
$$\Delta_d f(n) := \sum_{j=1}^d D^h_{+,j} D^h_{-,j} f(n) = \sum_{j=1}^d \big( f(n+he_j) - 2f(n) + f(n-he_j) \big)$$
denotes the (unscaled) discrete Laplacian. The operators considered in (1) correspond to discrete versions of the continuous magnetic Schrödinger operator. While many features of the continuous and the discrete operators are shared, if correspondingly adapted (e.g. regularity estimates), there are striking differences in the validity of the unique continuation property in the two settings. In fact, even for the model operator, the discrete Laplacian, it is well known that while in the continuum the (weak) unique continuation property holds as a direct consequence of the analyticity of solutions, it fails in general in the discrete setting [GM14]. Indeed, in [GM14] the authors show that it is possible to construct non-trivial discrete harmonic polynomials vanishing on a large, prescribed square.

In spite of these differences, it is expected that as the lattice spacing decreases, $h \to 0$, the properties of continuous harmonic functions are recovered. That this is indeed the case for the discrete Laplacian was proved in [GM14, GM13, LM15], where propagation of smallness estimates with correction terms were established. For similar phenomena for related operators we refer to [FBV17, JLMP18] and the references therein. Most of the propagation of smallness results cited above, however, rely strongly on specific properties of the constant coefficient Laplacian, e.g. through methods from complex analysis. It is the purpose of this article to provide quantitative unique continuation estimates and three spheres inequalities for a large class of Schrödinger operators by means of robust Carleman estimates. We emphasize that in addition to the intrinsic interest in the quantitative unique continuation properties of discrete elliptic equations, important applications of these estimates arise in inverse and control theoretic problems (see for instance [BHR10, EDG11]). Our first main result is the following logarithmic convexity estimate:

Theorem 1. There exist constants $c_1, c_2 > 0$, $\delta_0 \in (0,1)$, $\tau_0 > 1$, $h_0 > 0$ and $C > 1$ such that for all $h \in (0, h_0)$, all $u : (h\mathbb{Z})^d \to \mathbb{R}$ with $P_h u = 0$ in $B_4$ and all $\tau \in (\tau_0, \delta_0 h^{-1/2})$ we have
$$\|u\|_{L^2(B_1)} \le C \big( e^{c_1 \tau} \|u\|_{L^2(B_{1/2})} + e^{-c_2 \tau} \|u\|_{L^2(B_2)} \big). \tag{2}$$

Here for $r > 0$ we define $B_r = B_r(0) \cap (h\mathbb{Z})^d$, with $h \in (0, h_0)$ denoting the lattice spacing, and all $L^2$ norms are $L^2$ norms on the lattice $(h\mathbb{Z})^d$. Due to the restriction $\tau \le \delta_0 h^{-1/2}$ on the upper bound for $\tau$, this logarithmic convexity estimate does not immediately yield a three balls inequality as in the continuum. It does, however, imply a three balls estimate with a corresponding correction term:
Theorem 2. There exist $\alpha \in (0,1)$, $c_0 > 0$, $h_0 > 0$ and $C > 1$ such that for $h \in (0, h_0)$ and $u : (h\mathbb{Z})^d \to \mathbb{R}$ with $P_h u = 0$ in $B_4$ we have
$$\|u\|_{L^2(B_1)} \le C \big( \|u\|_{L^2(B_{1/2})}^{\alpha} \, \|u\|_{L^2(B_2)}^{1-\alpha} + e^{-c_0 h^{-1/2}} \|u\|_{L^2(B_2)} \big). \tag{3}$$

This estimate thus quantitatively connects the discrete situation, in which the unique continuation property fails, to its continuous counterpart. It provides quantitative evidence of the fact that as $h \to 0$ the propagation of smallness properties of the associated elliptic operator are recovered. We remark that the scaling behaviour of the form $e^{-c_0 h^{-1/2}}$ in $h \in (0, h_0)$ is known to be optimal as the dimension tends to infinity (see [LM15, Theorem 1.13]). We also remark that our results (and arguments) remain valid if instead of the differential equation (1) we consider the corresponding differential inequality, in which $|P_h u|$ is pointwise bounded by lower order terms in $u$. Further, it is possible to deduce propagation of smallness estimates for some controlled $h$-dependent growth of $V$ and $B_j$ (see Remark 4.2), which, however, do not pass to the limit as $h \to 0$.

Main ideas. Similarly as in [BHR10, EDG11] and contrary to the results in [GM14, LM15], both of our results rely on a robust $L^2$ Carleman estimate. More precisely, as our key auxiliary result we prove the following Carleman estimate with a weight which is a slightly convexified version of the limiting Carleman weight $\psi(x) = -\tau \log(|x|)$ and which we choose, for example, as in [KRS16]:

Theorem 3. Let $\phi(x) := \tau \varphi(|x|)$, where
$$\varphi(t) = -\log t + c_{ps} \Big( \log t \arctan(\log t) - \frac{1}{2} \log(1 + \log^2 t) \Big)$$
for a certain constant $c_{ps} > 0$. Then there exist $h_0 > 0$, $C > 1$ and $\tau_0 > 1$ (which are independent of $u$) such that for all $h \in (0, h_0)$ and all $\tau \in (\tau_0, \delta_0 h^{-1/2})$ the Carleman estimate (4) holds for $u : (h\mathbb{Z})^d \to \mathbb{R}$ supported in $B_2 \setminus B_{1/2}$; it controls $\tau^3 \|e^{\phi} u\|^2_{L^2((h\mathbb{Z})^d)}$, together with a weighted $L^2$ norm of the symmetric discrete gradient $D_s u$, by $C \|e^{\phi} h^{-2} \Delta_d u\|^2_{L^2((h\mathbb{Z})^d)}$. Here
$$D^j_s f(n) := f(n + he_j) - f(n - he_j),$$
where $e_j$ is the unit vector in the $j$-th direction, denotes the symmetric discrete difference operator.

Remark 1.1. We remark that the choice of the symmetric discrete derivative $D_s$ in (4) does not play a substantial role. With only minor changes it is also possible to replace it by $D^h_+$ or $D^h_-$. We refer to the beginning of Section 2 for the precise definitions.

While building on ideas similar to those of its continuous counterpart (see for instance [KT01, AKS62]), our Carleman estimate is restricted to a certain range of values of $\tau$, which is a consequence of the discreteness of the problem. Similar restrictions had been observed in [BHR10, EDG11] in the context of Carleman estimates for control theoretic and inverse problems. In deriving this estimate, we localize to suitable scales on which we freeze coefficients and compare our discrete problem to the continuum setting.

1.3. Outline of the article. The remainder of the article is organized as follows: In Section 2 we compute the conjugated discrete operator and its expansion into its symmetric part, its antisymmetric part and their commutator. In the main part, Section 3, we derive the main Carleman estimate of Theorem 3. Building on this, in Section 4 we deduce the results of Theorems 1 and 2. Last but not least, in Section 5, we comment on rescaled versions of the main estimates.

1.4. Remarks on the notational conventions. With the letters $c, C, \dots$ we denote structural constants that depend only on the dimension and on parameters that are not relevant. Their values might vary from one occurrence to another, and in most cases we will not track the explicit dependence. For the Fourier transform of a function $f$ we use both the notations $\mathcal{F} f$ and $\hat{f}$.

The Conjugated Laplacian and the Commutator

From now on, $D^j_{\pm}$ will stand for the forward/backward operators $D^h_{\pm,j}$ from Section 1, and $D^j_s$ will denote the symmetric discrete derivative in the $j$-th direction. All operators are understood to be taken with step size $h$. Moreover, $D^h_{\pm} := \sum_{j=1}^d D^j_{\pm}$ and $D_s := \sum_{j=1}^d D^j_s$.
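For the reader's convenience we record the Fourier symbols of these basic difference operators (a short derivation we add here; it is consistent with the multiplier statement that follows and with the symbol $p_{r,j}$ used in the proof of Proposition 3.5 below).

```latex
% Added worked step: Fourier symbols of the basic difference operators,
% using \widehat{f(\cdot + he_j)}(\xi) = e^{ih\xi_j}\hat f(\xi).
\[
  \widehat{D^j_{+} f}(\xi) = (e^{ih\xi_j}-1)\,\hat f(\xi), \qquad
  \widehat{D^j_{-} f}(\xi) = (1-e^{-ih\xi_j})\,\hat f(\xi),
\]
\[
  \widehat{D^j_s f}(\xi) = 2i\sin(h\xi_j)\,\hat f(\xi), \qquad
  \widehat{\Delta_d f}(\xi)
  = \sum_{j=1}^d (e^{ih\xi_j}+e^{-ih\xi_j}-2)\,\hat f(\xi)
  = -4\sum_{j=1}^d \sin^2\!\Big(\frac{h\xi_j}{2}\Big)\,\hat f(\xi).
\]
% In particular, h^{-2}\Delta_d has the symbol -4h^{-2}\sum_j \sin^2(h\xi_j/2),
% which is exactly the first summand of p_{r,j} in the symbol analysis below.
```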
We remark that the symmetric difference operator $D_s$ is associated with the Fourier multiplier $2i \sum_{j=1}^d \sin(h\xi_j)$. Heading towards the proof of the Carleman inequality of Theorem 3, we introduce the conjugated Laplacian obtained by conjugating $h^{-2}\Delta_d$ with $e^{\phi}$, together with its decomposition into the symmetric and antisymmetric operators $S_\phi$ and $A_\phi$ defined in (5). We then compute the commutator of these two operators and simplify the resulting expressions by means of trigonometric identities; this can be carried out explicitly, for instance, for the contributions $A_{j,k}$, and the arguments for the other contributions are similar. We next seek to investigate the commutator in more detail.

Remark 2.1. In the one-dimensional situation the commutator simplifies significantly. Indeed, if we study the commutator $([S,A]f, f)$, the case $j = k$ is quite simple, and the resulting main term is a discrete version of
$$4 \phi_{jj} |f_j|^2 + 4 \phi_{jj} \phi_j^2 |f|^2 - \phi_{jjjj} |f|^2.$$
Note that the main term of the higher dimensional continuous commutator is more complicated.

In the general case, we can rewrite the contributions of $([S,A]f, f)$ as in (6), where, with slight abuse of notation, we refrain from spelling out the sums over $\mathbb{Z}^d$ and over $j, k$. The interest of writing the general term in this form is that we seek to bring the commutator into a form which is as close as possible to that of the commutator in the continuous setting, which reads
$$4 \nabla\phi \cdot (\nabla^2\phi) \nabla\phi \, |f|^2 + 4 \nabla f \cdot (\nabla^2\phi) \nabla f - (\Delta^2 \phi) |f|^2.$$
To this end, we note that the first four terms in (6) are closely related to the part $4\phi_{jk} f_j f_k - \phi_{jjkk} |f|^2$, and the last four terms correspondingly to $4 \phi_{jk} \phi_j \phi_k |f|^2$. We will use the expression (6) as the starting point of our commutator estimates in the following sections.

Proof of the Carleman Estimate from Theorem 3

Before turning to the proof of Theorem 3, let us recall an auxiliary result showing the strong pseudoconvexity (in the continuous sense) of the weight function $\phi(x)$:

Lemma 3.1. Let $\phi(x) := \tau \varphi(|x|)$, where for some constant $c_{ps} > 0$
$$\varphi(t) = -\log t + c_{ps} \Big( \log t \arctan(\log t) - \frac{1}{2} \log(1 + \log^2 t) \Big). \tag{7}$$
Then $\phi$ is strongly pseudoconvex (in the continuous sense) on the relevant annulus.

In the sequel, we present several auxiliary results which allow us to steadily transform the discrete conjugated operator into an operator that closely resembles the continuum version of the conjugated Laplacian. Recall that we define the discrete Laplacian in direction $j \in \{1, \dots, d\}$ as
$$\Delta_{d,h,j} f(n) := f(n + he_j) - 2 f(n) + f(n - he_j).$$

As a first step towards the desired Carleman estimate, we localize the problem to scales of order $\epsilon_0^{-1} \tau^{-1/2}$, where $\epsilon_0 > 0$ is a small constant which will be chosen below (see the proof of Theorem 3); Lemma 3.2 provides the corresponding localization estimates (8) for $S_\phi$, $A_\phi$ and $L_\phi$ with respect to a partition of unity $\{\psi_k\}$ on these scales. Here $S_\phi$ and $A_\phi$ are the operators from (5), and the operator $\sin(\tilde{D}_j h)$ is defined as a Fourier multiplier. We remark that here and in the sequel, for brevity of notation, we abbreviate the $L^2$ norm on the lattice without adding subindices, i.e. $\|f\| := \|f\|_{L^2((h\mathbb{Z})^d)}$.

Proof of Lemma 3.2. As the estimates for $S_\phi$ and for $A_\phi$ are analogous, we mainly focus on the argument for $S_\phi$. The first bound in the estimate for $S_\phi$ in (8) is a direct consequence of Minkowski's inequality. In order to obtain the second estimate for $S_\phi$ in (8), we spell out the contributions $S_\phi f_k(n)$ and rewrite the difference quotients in terms of $f$ and the cut-off functions $\psi_k$. While we keep the first contribution in this expansion, to be recombined into $h^{-2}\Delta_{d,h,j} f$ after summing over the partition of unity, we only estimate the remaining contributions. To this end, we use the bound (9), apply the same reasoning to the term $h^{-2}(f(n + he_j) - f(n - he_j))(\psi_k(n - he_j) - \psi_k(n))$, and combine the resulting estimates to infer (10), where we used the bounds satisfied by the weight and the cut-off functions for $y \in B_2 \setminus B_{1/2}$. As a consequence, combining the estimates from (9) and (10) yields the desired bound, which concludes the argument for the localization estimate for $S_\phi$. The arguments for $A_\phi$ and $L_\phi$ are analogous: for $A_\phi$ we expand in the same manner, and estimating the terms of $L_\phi$ by means of the bounds for $A_\phi$ and $S_\phi$ then implies the result.

As a next auxiliary step, we expand the trigonometric identities, which then allows for easier manipulation of the contributions in the sequel.
To this end, we note that Using the same reasoning for the term h −2 (f (n + e j h) − f (n − e j h))(ψ k (n − e j h) − ψ k (n)), and combining these estimates, we thus infer that where we used that for y ∈ B 2 \ B 1 2 we have that As a consequence, combining the estimates from (9) and (10) yields This concludes the argument for the localization estimate for S φ . The arguments for A φ and L φ are analogous. Indeed, for A φ we note that Estimating the terms of L φ by using the bounds for A φ and S φ then implies the result. As a next auxiliary step, we expand the trigonometric identities which then allows for easier manipulations of the contributions in the sequel. Then, for S φ and A φ as in (5), Here, as above, the operator sin( D j h) is defined as a Fourier multiplier, i.e. F (sin( Similarly as above, we here drop the subscript in the L 2 scalar product and simply write (·, ·) := (·, ·) L 2 ((hZ) d ) . Proof of Lemma 3.3. The results follow by expanding the expressions for S j , A j . More precisely, we first approximate all discrete derivatives of φ and the corresponding nonlinear functions and then estimate the resulting errors. We first discuss the symmetric part of the operator. For instance, we expand Here y ∈ R d is an intermediate value such that y ∈ [n, n + he j ]. Thus, the symmetric part becomes where S j f (n) is as in our statement and E Sj f ≤ C(hτ 2 + τ 4 h 2 ) (|∇ϕ| 2 + |∇ϕ| 4 + |∇ 2 ϕ| 2 )f , with n ∈ (hZ) d , φ(n) = τ ϕ(n) with ϕ a bounded function (on the relevant domain). Choosing τ ∈ (0, δ 0 h − 1 2 ) with δ 0 sufficiently small, we may assume that τ 2 h + τ 4 h 2 ≤ C (or even τ 2 h ≪ 1), hence the error E Sj f in the symmetric part is of zeroth order in τ and an L 2 contribution in f , i.e. E S φ f ≤ C f . Therefore, in the sequel, we will estimate For the antisymmetric part we argue analogously. We thus expand where y is an intermediate value in [n, n + he j ]. Thus, the antisymmetric part becomes Finally, we turn to the commutator which is given by For the first four contributions in (15), we expand Thus, the first four contributions in (15) can be written as Noting that yields the first part in the expression which is claimed for f (n)C jk f (n) in the lemma. For the second four terms in (15), we similarly expand as in (16) and (cosh(φ(n + he j + he k ) − φ(n)) − 1) Hence the second set of four terms from (15) can be rewritten as Combining the errors in the two expansions of the commutator exactly yields the claimed estimate. As a next step, we freeze coefficients in the operators S φ , A φ and C jk when acting on functions supported in sets of the size ǫ −1 0 τ − 1 2 for ǫ 0 > 0 sufficiently small and τ > 0 sufficiently large, both of which are to be determined below (see the proof of Theorem 3). Assume that φ is as in Theorem 3 and that 1 < τ ≤ δ 0 h − 1 2 for a sufficiently small constant δ 0 > 0. Letn ∈ R d be a point which is in the interior of supp(f ) and set Then, Proof. Using the triangle inequality and the support condition, we estimate As the arguments for A φ and for C jk are analogous, we do not discuss the details. Finally, as a last auxiliary step before combining all the above ingredients into the proof of Theorem 3, we prove a lower bound for the operators with the frozen variables. Proposition 3.5. LetS φ ,Ā φ andC jk be as in Lemma 3.4. Then there exist C low > 0, c 0 > 0, h 0 ∈ (0, 1) (small) and τ 0 > 1 such that for all τ ∈ (τ 0 , Proof. 
Proof. Using that the operators under consideration all have constant coefficients, we may perform a Fourier transform and infer the representation (20) of the corresponding quadratic form. In order to prove the positivity of this expression, we will choose $c_0 > 0$ so small that, outside of a sufficiently small neighbourhood of the union of the (joint) characteristic sets of the Fourier symbols
$$p_{r,j}(\xi) := -4 h^{-2} \sin^2(h\xi_j/2) + (\partial_j \phi(\bar{n}))^2 \cos(\xi_j h),$$
the third term in (20) is controlled by these. In order to see that this is possible, we first study the contributions $p_{r,j}$ and $p_{i,j}$ separately.

We first consider the terms $p_{r,j}$ and $p_r(\xi) := \sum_{j=1}^d p_{r,j}(\xi)$ associated with the symmetric operator. We begin by observing that the first summand in
$$p_r(\xi) = \sum_j |\partial_j \phi(\bar{n})|^2 \cos(\xi_j h) + 2 \sum_j \frac{\cos(\xi_j h) - 1}{h^2} \tag{21}$$
is bounded from above by $C\tau^2$. For the second summand we deduce, since $|\cos(x)| \le 1$ and for $\xi_j \in h^{-1}(-\pi, \pi)$, a quadratic lower bound up to the remainder term $R(\xi_j h)$ in the Taylor approximation. Combining these two observations, we note that there exists a constant $C_1 > 0$ such that if $|\xi| \ge C_1 \tau$, the expression in (21) can be estimated from below as stated in (22); here the constant $c_{hf} > 0$ is independent of $\tau$ and $\xi$ (an elementary bound for the discrete symbol, recorded below, underlies this). In the sequel, this motivates distinguishing the two regimes $|\xi| \ge C_1\tau$ and $|\xi| \le C_1\tau$. We further note that if the constant $c_0 > 0$ in (20) is sufficiently small, then the a priori not necessarily signed Fourier multipliers associated with the contributions in the third and fourth lines of (20) may be absorbed into the lower bound in (22). Motivated by the estimate (22), we call the region $\{|\xi| \ge C_1 \tau\}$ the high frequency elliptic region. By the above considerations, the claimed lower bound (19) always holds in this region. It thus remains to study the complementary region, in which $|\xi| \le C_1 \tau$.

In this region, we expand the symbols in $h\xi_j$ (noting that $h|\xi| \le C_1 \tau \delta_0^2 \tau^{-2} = C_1 \delta_0^2 \tau^{-1}$, which is small for $\tau > 1$ and $\delta_0 > 0$ small). For the symmetric part we obtain the approximation (23), with an error controlled by a constant $C > 0$ which depends on $C_1 > 0$; for the antisymmetric part we infer the analogous approximation (24). Let (25) denote the joint characteristic set $\mathcal{C}_\tau$ of the symmetric and antisymmetric parts of the operator, and define $N_{\tau,C}$ to be a $\gamma_0\tau$ neighbourhood of the joint characteristic set $\mathcal{C}_\tau$, with $\gamma_0 > 0$ small (to be determined below).

With this notation fixed, we prove that for $|\xi| \le C_0\tau$ and outside of $N_{\tau,C}$ there exists some constant $c_{lf,1} > 0$ (depending on $\gamma_0$) and independent of $\tau > 0$ such that
$$p_r^2(\xi) + p_i^2(\xi) \ge c_{lf,1} (\tau^4 + |\xi|^4). \tag{26}$$
Indeed, this is true for the leading order approximations
$$\big( |\nabla\phi(\bar{n})|^2 - |\xi|^2 \big)^2 + 4 \big( \nabla\phi(\bar{n}) \cdot \xi \big)^2,$$
and it transfers to the full symbols $p_r^2(\xi) + p_i^2(\xi)$ since the error estimates in (23), (24) are of order $C h^2 \tau^4 \le C\delta_0$ for $\tau \in (1, \delta_0 h^{-1/2})$. Thus, if $\delta_0$ is sufficiently small (depending on $\gamma_0$), these error contributions can be absorbed. Again, if the constant $c_0 > 0$ is sufficiently small, we may absorb the contributions originating from the not necessarily signed Fourier symbols of the operators in the third and fourth lines of (20) into the lower bound (26).

We next seek to argue that, by continuity, a similar lower bound also holds on $N_{\tau,C}$. To this end, note that for $\xi \in N_{\tau,C}$ we have $\xi = \tau\xi_0$ for some $\xi_0 \in (h^{-1}\mathbb{T})^d$ with $|\xi_0| \in (C_{0,1}, C_{0,2})$, where the constants $C_{0,1}, C_{0,2} > 0$ only depend on $\gamma_0$ and the dimension $d$ and, in particular, are independent of $\tau > 1$ and $h > 0$. Thus, for $\xi \in N_{\tau,C}$ and $\xi_0 = \tau^{-1}\xi$, the rescaled symbol $\tilde{q}(\xi_0)$ is, by homogeneity, independent of $\tau$.
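As announced above, we record the elementary symbol bound behind the high frequency estimate (our addition, included for the reader's convenience).

```latex
% Added elementary estimate underlying the high frequency bound (22):
% by concavity of \sin on [0,\pi/2], \sin(x/2) \ge x/\pi for 0 \le x \le \pi, hence
\[
  \frac{4}{h^2}\sin^2\!\Big(\frac{h\xi_j}{2}\Big) \;\ge\; \frac{4}{\pi^2}\,\xi_j^2
  \qquad \text{for } |\xi_j| \le \frac{\pi}{h},
\]
% so on the torus (h^{-1}\mathbb{T})^d the symbol of -h^{-2}\Delta_d dominates a
% multiple of |\xi|^2, while |\partial_j\phi(\bar n)|^2 \le C\tau^2; for
% |\xi| \ge C_1\tau with C_1 large this yields the coercivity used in (22).
```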
Since for ξ ∈ C τ the pseudoconvexity condition for φ implies thatq(ξ) ≥ c cf,1 > 0, by continuity, it remains true thatq(ξ) ≥ c cf,1 /2 in the neighbourhood N τ,C if γ 0 > 0 is sufficiently small (but independent of τ > 1). By the scaling of q(ξ) we thus infer that for ξ ∈ N τ,C and δ 0 > 0 sufficiently small we have Thus, in total, we have obtained that for all ξ ∈ (h −1 T) d By the Parseval identity, this implies that which yields the claim of the Proposition. With all of these auxiliary results in hand, we now address the proof of Theorem 3. Proof of Theorem 3. The proof of Theorem 3 follows by combining all the previous estimates. We first rewrite the desired estimate in terms of the functions f := e φ u for which we seek to prove and for which we note that, the action of D s on e φ u yields terms D s e φ that can be absorbed in the first term with u ). We now argue in two steps, first reducing the estimate to a bound for the localized functions and then proving the estimate for these. Step 1: Localization. As a first step, we note that it suffices to prove the estimate for the localized functions f k from Lemma 3.2. Indeed, assuming that the estimate (28) is proven for f k , an application of Minkowski's inequality and the error estimates from Lemma 3.2 yield Now choosing ǫ 0 ≤ 1 10C loc and recalling that τ h 1 2 ≤ δ 0 for some δ 0 ∈ (0, 1), we may absorb the contribution on the right hand side of (29) into its left hand side (in particular we note that τ 2 τ 1 2 h ≤ δ 2 0 τ 1 2 by our assumptions on the relation between τ and h). This then yields the estimate (28). The estimate (4) follows from this by possibly choosing the constants in the terms which involve derivatives on the left hand side of (28) smaller, carrying out the product rule and absorbing the L 2 errors into the L 2 contribution on the left hand side of (28). Step 2. Proof of (28) for the localized functions. It thus suffices to prove (28) for f = f k . To this end, we observe that for f k = f ψ k with supp(f ) ⊂ B 2 \ B 1/2 , ψ k as in Lemma 3.2 and c 0 ∈ (0, 1) to be chosen below, where by Lemma 3.3 Here we have used that τ ≥ 1 and Fixing h 0 > 0 such that Ch 0 ≤ C low 10 , where C low is the constant from Proposition 3.5, we will be able to treat the contributions in (31) as error contributions in the following arguments. Exploiting the bounds from Lemma 3.4, we may further estimate where by the estimates from Lemma 3.4 Finally, invoking Proposition 3.5, we infer that We now fix τ 0 > 1 so large and h 0 > 0 so small that Further, we possibly decrease the value of h 0 > 0 even more and choose it so small that δ 0 h − 1 2 0 ≥ 100τ 0 > 100, which in particular implies that for all h ∈ (0, h 0 ) the interval (τ 0 , δ 0 h − 1 2 ) is non-empty. With these choices, it follows that for τ ∈ (τ 0 , δ 0 h − 1 2 ), we may absorb the error contributions E 1 and E 2 from (31) and (32) into the positive right hand side contributions in (33). Therefore, we obtain that Dividing by τ > τ 0 implies the desired result. Proofs of Theorems 1 and 2 In this section we provide the proofs of the results of Theorems 1 and 2. 4.1. Derivation of Theorem 2 from Theorem 1. We first show how Theorem 1 implies Theorem 2. Proof of Theorem 2. Let us assume that Theorem 1 holds. First, let us take the value τ * such that . 
It is easy to check that with this value of τ * it holds Given u satisfying (2), we can assume that τ 0 < τ * , and we are in one of the following two cases: • If τ * ∈ (τ 0 , δ 0 h − 1 2 ), then plugging this into the right hand side of (2) yields, for τ = τ * , that C(e c1τ u L 2 (B 1/2 ) + e −c2τ u L 2 (B2) ) = 2C u c 2 c 1 +c 2 We hence obtain that Thus, since (2) holds for all τ ∈ (τ 0 , δ 0 h − 1 2 ) and by using (34), we have Combining both cases implies (2) with α = c2 c1+c2 and c 0 = δ0 2 c 2 . 4.2. Derivation of Theorem 1 from the Carleman estimate of Theorem 3. In this section, we deduce Theorem 2 from Theorem 3. As an auxiliary result we deduce a Caccioppoli inequality for more general second order difference equations. In particular this applies to the difference Schrödinger equation (1). Lemma 4.1 (Caccioppoli). Let a jk : (hZ) d → R d×d be symmetric, bounded and uniformly elliptic with ellipticity constant λ ∈ (0, 1), i.e. assume that for all ξ ∈ R d \ {0} we have Let 0 < 10h < r 1 < r 1 + 100h < r 2 . Then there exists a constant C > 1 depending on Here H 1 loc,h ((hZ) d ) and H 1 ((hZ) d ) denote the local and global H 1 spaces on the lattice. Proof of Lemma 4.1. The result follows along the same lines as the continuous Caccioppoli inequality; we only present the proof for completeness. As for general r 1 , r 2 the proof is analogous, we only discuss the details in the case r 1 = 1, r 2 = 2 and 0 < h ≤ h 0 for h 0 ≪ 1 sufficiently small. Noting that a ij ≤ 1 2 λ −1 (this follows from the ellipticity condition when choosing appropriate ξ) we obtain that Combining this with the bounds for B j and V , we obtain Here the first contribution in (39) originates from the first right hand side contribution in (37). We may absorb it from the right hand side of (39) into the left hand side of (39). Using Young's inequality for the contribution allows us to also absorb the gradient term in this contribution into the left hand side of (39). Due to the bounds on η, this concludes the proof of the Caccioppoli estimate. Proof of Theorem 1. The proof of Theorem 1 from the Carleman estimate in Theorem 3 follows from a standard cut-off argument. For completeness, we present the details. Let u : (hZ) d → R such that P h u(n) = 0 for all n ∈ B 4 . Fix ε > 0 to be small enough and assume that h 0 > 0 is sufficiently small. We consider the function w(n) = θ(n)u(n), with 0 ≤ θ(x) ≤ 1 a C ∞ (R d ) cut-off function defined as Using the equation for u, we then write Applying the Carleman estimate (4) from Theorem 3, using Remark 1.1 and the triangle inequality, we obtain L ∞ , B 2 L ∞ } allows us to absorb the first two contributions from the right hand side of (40) into the left hand side of (40). We thus obtain the bound We next deal with the errors on the right hand side of (41). On the one hand, we have On the other hand, for T d,j with j ∈ {1, 2}, where we used the Caccioppoli estimate from Lemma 4.1. Moreover, τ 3 e φ u 2 L 2 ≥ τ 3 e φ f 2 Remark 4.2. We remark that as a feature of the discrete setting, to a certain degree we can also deal with more singular potentials. Tracking the argument from above (in particular the passage from (40) to (41)), we note that if V and B only satisfy the bounds with µ 0 ≤ C Carl 10 δ 0 , we can deduce that for some constantsc 1 ,c 2 > 0 (independent of h) u L(B1) ≤ C(ec 1 h − 1 2 u L 2 (B 1 2 ) + e −c2h − 1 2 u L 2 (B2) ). 
We also remark that while yielding quantitative propagation of smallness type estimates, as expected these estimates do not pass to the limit h → 0. Further, the h dependence in the exponentials can be adapted to the size of the potentials (with different bounds in the exponents of the logarithmic convexity estimates depending on the bounds on V , B). Remarks on Scaling Having established (3), we note that to a certain degree -although this is substantially weaker than in the continuous setting -it is possible to rescale this estimate. We discuss this in the case of the Laplacian (for more general operators similar observations remain valid). To this end, we make the following observation: Lemma 5.1. Let u : B 4 → R be such that ∆ d,h u = 0 in B R ⊂ (hZ) d . Then, for any m ∈ N such that hm ≤ 2, we also have ∆ d,mh u = 0 in B R/m ⊂ (mhZ) d (i.e. with respect to the lattice (mhZ) d ). Summing and noting that the corresponding contributions in the brackets yield the Laplacian on (hZ) d implies the claim for m = 2. Assuming the induction hypothesis for any m, i.e., d j=1 (f (x + mhe j ) + f (x − mhe j ) − 2f (x)) = 0 for x ∈ (mhZ) d , we prove the statement for m + 1. We have The conclusion follows from the cases m = 1 (after translation) and the inductive steps for m and m − 1. Proof. We consider the function u m (x) := u(m −1 x) with x ∈ (hZ) d . By the considerations from Lemma 5.1 this is also harmonic on (hZ) d . Thus, we may apply Theorem 2. Rescaling z = m −1 x then implies the claim. Remark 5.3. We remark that, of course, apart from rescalings also translations are always possible due to the translation invariance of the operator at hand.
A Study on Origin Traceability of White Tea (White Peony) Based on Near-Infrared Spectroscopy and Machine Learning Algorithms Identifying the geographical origins of white tea is of significance because the quality and price of white tea from different production areas vary greatly with the growing environment and climatic conditions. In this study, we used near-infrared spectroscopy (NIRS) with white tea (n = 579) to build models that discriminate these origins under different conditions. Continuous wavelet transform (CWT), min-max normalization (Minmax), multiplicative scattering correction (MSC) and standard normal variate (SNV) were used to preprocess the original spectra (OS). Principal component analysis (PCA), linear discriminant analysis (LDA) and the successive projections algorithm (SPA) were used for feature extraction. Subsequently, identification models of white tea from different provinces of China (DPC), different districts of Fujian Province (DDFP) and the authenticity of Fuding white tea (AFWT) were established with the K-nearest neighbors (KNN), random forest (RF) and support vector machine (SVM) algorithms. Among the established models, DPC-CWT-LDA-KNN, DDFP-OS-LDA-KNN and AFWT-OS-LDA-KNN had the best performances, with recognition accuracies of 88.97%, 93.88% and 97.96%, respectively; the area under curve (AUC) values were 0.85, 0.93 and 0.98, respectively. The research revealed that NIRS with machine learning algorithms can be an effective tool for the geographical origin traceability of white tea. Introduction Tea (Camellia sinensis (L.) O. Kuntze) is the second most consumed beverage in the world after water [1]. It is rich in secondary metabolites, such as free amino acids, polyphenols and alkaloids, that are strongly associated with human health benefits and create a complex, varied taste and an attractive aroma [2,3]. In general, according to the degree of fermentation and the processing techniques, tea is classified into six categories: green tea (unfermented, enzyme inactivation), white tea (slightly fermented, withering), yellow tea (partly fermented, heaping for yellowing), Oolong tea (partially fermented, fine manipulation), black tea (fully fermented, fermentation), and dark tea (post-fermented, piling) [4,5]. Unlike other kinds of tea, white tea has the simplest production process, with only two steps: withering and drying [6]. In recent years, white tea has become increasingly popular, with ever-growing international market demand and public interest because of its unique flavor and health benefits [7]. The flavor and quality of white tea are often affected by its origin, and origin is an important basis for consumers' purchase decisions. Fuding white tea is the best-known white tea in China; it is popular among consumers as a China-Europe Geographical Indication Product and has a higher commercial value than white tea produced in other areas. It has been reported that the amount of Fuding white tea sold on the market far exceeds its actual production [8]. It is difficult for consumers to distinguish white tea from different producing areas by appearance alone, which may affect the value assessment of white tea products. Therefore, a reliable and fast method is increasingly needed to identify and trace white tea produced in different origins, thereby providing strong technical support for the high-quality development of the white tea industry.
Until now, the identification of tea origins has mainly relied on professional experts conducting sensory evaluation of tea appearance and flavor; the results are easily influenced by the experts' physical and mental condition, making them subjective and poorly repeatable [9]. In recent years, proton-transfer-reaction time-of-flight mass spectrometry [10], inductively coupled plasma optical emission spectrometry and inductively coupled plasma mass spectrometry [11] have been used for the origin tracing of white tea, but these methods are all time-consuming, costly and analytically complex, which makes them difficult to promote in industrial applications. Near-infrared spectroscopy (NIRS), a green analytical technique with high efficiency, high accuracy and convenience, has proven its applicability in the fields of fuel [12], medicine [13] and wine [14], and has shown its advantages in the origin traceability of other teas. Jin et al. [15] used near-infrared spectral data combined with an extreme learning machine to build an origin traceability model for Taiping Houkui green tea in a narrow region, and the accuracy of the optimized model reached 95.35%. Ren et al. [16] used a factorization method combined with NIRS data to establish a rapid identification model of black tea growing regions, and the identification accuracy for black tea from different geographical regions was 94.3%. Yan et al. [17] used partial least squares discriminant analysis with NIRS to build a model discriminating the authenticity of Anxi-Tieguanyin (oolong tea); the best model's specificity and sensitivity reached 0.931 and 1.000. In recent years, machine learning algorithms have also gradually been used to identify the producing areas and authenticity of food products. Xu et al. [18] successfully identified millet from 16 origins based on Vis-NIR data combined with machine learning algorithms, with F-score values up to 99.5% for the random forest (RF) and support vector machine (SVM) models and 99.1% for the K-nearest neighbor (KNN) model. Zhang et al. [19] combined hyperspectral data with the SVM algorithm to achieve fast and nondestructive identification of salted, over-salted and sugar-treated sea cucumbers, with the best model achieving 100% accuracy. Liu et al. [20] combined hyperspectral data with the PCA and SVM algorithms to achieve fast and nondestructive identification of green tea origins and the exact processing month; the correct recognition rate of the best origin identification model reached 97.5%, and that of the best processing-month recognition model reached 95%. The above results demonstrate that NIRS combined with machine learning algorithms has the potential to achieve rapid and nondestructive identification of white tea origins, but no report on the application of NIRS to identifying white tea origins has yet been published.
Therefore, the main purpose of this paper is to investigate the potential of using NIRS combined with different preprocessing, feature extraction and machine learning algorithms as a fast and nondestructive tool to identify and classify white tea according to geographically larger production areas (different provinces of China, DPC), a narrow range of origins (different districts of Fujian Province, DDFP) and the authenticity of a China-Europe Geographical Indication Product (authenticity of Fuding white tea, AFWT). This paper describes a systematic and comprehensive evaluation of the applicability of NIRS as a traceability tool for white tea. Spectra Acquisition The samples' NIRS was collected using an Antaris II FT-NIR spectrophotometer (Thermo Scientific, Waltham, MA, USA). The NIRS was operated at a temperature of 25 °C and humidity < 70%; the spectral acquisition workflow parameters were set as wave number range 4000-10,000 cm−1, scan interval 3.856 cm−1, 64 scans, and resolution 8.0 cm−1. To ensure the reliability of the NIRS detection data, a background scan was performed before acquisition and the air background spectrum was deducted to reduce the influence of environmental factors on the spectral data; the spectra of each sample were collected three times, and the average spectrum was taken as the original spectral data. The spectra were saved as absorbance using TQ Analyst software (Thermo Nicolet Corporation, Madison, WI, USA) for subsequent analysis. Spectral Pretreatment Due to the influence of electrical noise, light scattering and other environmental factors, baseline drift and high-frequency noise are inevitable in NIRS data. To further eliminate the influence of environmental factors on the original spectra (OS), continuous wavelet transform (CWT), min-max normalization (Minmax), standard normal variate (SNV) and multiplicative scattering correction (MSC) were used as four preprocessing algorithms in MATLAB (MATLAB R2016a, Mathworks) for spectral correction. The CWT algorithm was used to correct baseline drift and eliminate high-frequency noise; the Minmax algorithm was chosen to strengthen the data; and the SNV and MSC algorithms were used to correct scattering and eliminate the effects caused by the inhomogeneity of the tea powder particle size and the nonconstant optical path [21][22][23][24]. The choice of wavelet parameters (wavelet basis and decomposition scale) in CWT was crucial and directly determined the merits of the subsequent models [25].
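The scatter and scale corrections named above are standard chemometric transforms. Purely as an illustration (the authors ran these steps in MATLAB, and the function names below are ours, not theirs), a minimal NumPy sketch of SNV, MSC and min-max normalization applied to an n_samples × n_wavenumbers absorbance matrix X could look like this:

```python
import numpy as np

def snv(X):
    # standard normal variate: centre and scale each spectrum (row) individually
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

def msc(X, reference=None):
    # multiplicative scattering correction against a reference (here: mean) spectrum
    ref = X.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(X)
    for i, spectrum in enumerate(X):
        b, a = np.polyfit(ref, spectrum, deg=1)  # fit spectrum ~ a + b * ref
        corrected[i] = (spectrum - a) / b        # remove additive and multiplicative effects
    return corrected

def minmax(X, lo=-1.0, hi=1.0):
    # min-max normalization of each spectrum into [lo, hi]; the paper condenses into [-1, 1]
    xmin = X.min(axis=1, keepdims=True)
    xmax = X.max(axis=1, keepdims=True)
    return lo + (X - xmin) * (hi - lo) / (xmax - xmin)
```

SNV and MSC address the same particle-size scattering effects; the practical difference is that MSC corrects against a shared reference spectrum, while SNV standardizes every spectrum independently.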
After trial calculation and analysis, the db4 wavelet basis of the Daubechies family was chosen in this study, and the decomposition scale was set as 64. Extraction of Characteristics A large amount of redundant information exists in the continuous wavenumbers of NIRS, intertwined with the feature information. If all the data are used to build models, the excessive data volume can easily degrade computational speed and accuracy. Therefore, to reduce the computational burden of the models, we applied dimensionality reduction to the spectral data, extracting feature vectors or feature wavenumbers that characterize the vast majority of the spectral information. In this study, principal component analysis (PCA), linear discriminant analysis (LDA) and the successive projections algorithm (SPA) were used to perform data dimensionality reduction, and the above algorithms were all implemented in Python v3.8.5. Among these methods, PCA is often applied to reduce the dimensionality of spectra of agricultural and livestock products and has proven to be an effective spectral dimensionality reduction method: it extracts features from a large amount of data and converts them into a data set that still contains most of the valid information but has a smaller dimensionality, so the original information is retained to the greatest extent [26]. PCA is therefore the most commonly used choice. LDA is a supervised feature extraction method; its principle is to project all sample points onto a high-dimensional line so that the projections of sample points of the same class are as close as possible, while the projections of sample points of different classes are distributed as far apart as possible [27]. SPA is a forward cyclic feature extraction method that extracts effective predictive response variables from the original spectral matrix by successive projections, minimizing the collinearity among the spectral variables to maximize the predictive ability of the selected response variables [28]. At each step, the wavenumber with the largest projection vector and the smallest collinearity with the wavenumbers already in the feature set is added to the feature set. The number of characteristic wavenumbers is determined by the root mean square error (RMSE) of full internal cross-validation on the calibration set; the number of features and the characteristic wavenumbers corresponding to the minimum RMSE value are taken as the best values [29].
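The paper does not publish its SPA implementation; under the usual formulation of the algorithm, the greedy projection step can be sketched in NumPy as follows (the RMSE-based cross-validation that picks the final number of wavenumbers is omitted, and the function name is ours):

```python
import numpy as np

def spa(X, n_select):
    """Successive projections algorithm: greedily pick the columns (wavenumbers)
    with the largest norm after projecting out the previously selected columns."""
    X = X - X.mean(axis=0)                    # column-centre the spectra matrix
    selected = [int(np.argmax(np.linalg.norm(X, axis=0)))]
    P = X.copy()
    for _ in range(n_select - 1):
        v = P[:, selected[-1]]
        P = P - np.outer(v, v @ P) / (v @ v)  # project onto orthogonal complement of v
        norms = np.linalg.norm(P, axis=0)
        norms[selected] = 0.0                 # never re-select a chosen wavenumber
        selected.append(int(np.argmax(norms)))
    return selected                           # column indices of the chosen wavenumbers
```

In practice one would run this for a range of n_select values and keep the one that minimizes the cross-validated RMSE, as described above.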
Establishment and Evaluation of Models Machine learning algorithms are widely used in the analysis of NIRS data, but so far no single classifier has shown superior performance in all cases. Hence, using multiple classifiers for modeling is better for constructing high-quality models. In this paper, we adopted three classical machine learning algorithms, namely K-nearest neighbor (KNN), random forest (RF) and support vector machine (SVM), and combined them with the NIRS data processed by the different pre-processing and feature extraction algorithms to build models and optimize the model parameters, in order to systematically and comprehensively explore the optimal process for constructing white tea origin traceability models. All model constructions were based on Python v3.8.5, and the evaluation parameter tables were made with Excel. Before the models were constructed, the data were divided into four equal parts, of which three parts were used as the training set and one part as the validation set. The training set was used to construct the traceability model; the validation set was used to evaluate the model's ability to predict the origin of new samples. The KNN classification algorithm is one of the simplest machine learning algorithms, with mature theory and wide application. Its principle is to judge the attributes of a new sample from the categories of its nearest k points; it is simple, fast and insensitive to outliers. The choice of k has a significant impact on the results: when k is small, the overall complexity of the model rises and it is prone to overfitting; when k is large, training instances far from the validation samples influence the prediction and cause prediction errors [30]. In practice, k is generally chosen as a small value and cross-validation is subsequently used to select the optimal k; after in-depth analysis, the initial k value was set as 3 in this study. RF is a supervised ensemble classification algorithm that emerged mainly to solve the problems of large errors and overfitting that may occur with a single decision tree. RF performs well in classification problems and has great potential to be the best-performing classifier in a given case. The model consists of many decision trees that are independent of one another. When judging or predicting a new sample, each decision tree in the forest judges separately which category the sample belongs to, and the category receiving the most votes is taken as the prediction; it is therefore crucial to decide how many trees the model should have [31,32]. After trial calculation, the number of trees in this study was initially set as 20 for the subsequent comparative analysis. In recent years, SVM has become one of the most widely used and effective machine learning algorithms in tea research. It uses a kernel function to map the input n-dimensional data into a K-dimensional feature space (K > n) and performs classification in that high-dimensional feature space. To improve model quality, all SVM models in this paper were based on the radial basis function (RBF) kernel, which reduces the computational complexity of the training process and performs well under general smoothness assumptions. The determination of the optimal values of the penalty parameter C and the gamma parameter is also crucial, and the accuracy of the SVM models depends on the combination of these two parameters [33]. After trial calculations, C = 100 and gamma = 0.1 were used as the initial modeling parameters in this study. The model performance was preliminarily evaluated using recognition accuracy (RA) and area under curve (AUC). In detail, RA is often used to evaluate the predictive ability of the model, and its value ranges between 0 and 100%; the larger the value, the better the predictive ability of the model for new samples. AUC is often used to evaluate the generalization ability of the model: the better the generalization ability, the better the ability to classify new samples correctly. The value range of AUC is from 0 to 1.0 and is positively correlated with the quality of the model [34,35].
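As a sketch of this modeling step with the parameters stated above (initial k = 3, 20 trees, RBF kernel with C = 100 and gamma = 0.1, a 3:1 train/validation split), using scikit-learn; X and y are assumed to hold the extracted features and the origin labels, and whether the authors computed AUC one-vs-rest is not stated, so the macro OVR choice here is our assumption:

```python
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, roc_auc_score

# three quarters training, one quarter validation, as in the paper
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=3),                     # initial k = 3
    "RF": RandomForestClassifier(n_estimators=20, random_state=0),  # 20 trees
    "SVM": SVC(kernel="rbf", C=100, gamma=0.1, probability=True),   # RBF, C = 100, gamma = 0.1
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    ra = accuracy_score(y_va, model.predict(X_va))  # recognition accuracy (RA)
    # one-vs-rest macro-averaged AUC for the multi-class DPC/DDFP tasks;
    # for the binary AFWT task pass model.predict_proba(X_va)[:, 1] instead
    auc = roc_auc_score(y_va, model.predict_proba(X_va), multi_class="ovr", average="macro")
    print(f"{name}: RA = {ra:.2%}, AUC = {auc:.2f}")
```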
When the preliminary evaluation parameters of models are the same, four-fold cross-validation is used to further evaluate the discriminant and generalization ability and to verify the accuracy of the models. Four-fold cross-validation refers to dividing the original data into four equal subsets, using each subset in turn as the validation set and the remaining data as the training set to obtain four model performance parameters, and taking the average RA of these four models as the performance index of the classifier [36]. The confusion matrix is often used to evaluate the classification effect for each group, reflecting the relationship between the real categories of the sample data and the prediction results and quantifying the details of the classification more intuitively [37]. In this paper, confusion matrices were used to evaluate the classification details of the best models obtained. Data Analysis The raw NIRS data were saved as absorbance through TQ Analyst (Thermo Nicolet Corporation) software for subsequent analysis. MATLAB (MATLAB R2016a, Mathworks) software was used to preprocess the raw spectra and draw all spectra. Python v3.8.5 software was used to extract features, build models and draw the 3D models. The model evaluation tables and parameter optimization diagrams were generated in Excel. Confusion matrices were generated with TBtools software (Guangdong, China). Spectral Analysis Figure 2a shows the initial NIRS of the 579 white tea samples in the 4000-10,000 cm−1 band. The trend of the absorbance values in each band tended to be consistent, without significant differences; with increasing wavenumber, the absorbance values showed an overall decreasing trend, with a range of variation between 0.239 and 0.833. To visualize the differences in the NIRS of white tea from different origins, three sets of average spectra were plotted based on the OS data: (1) samples grouped by DPC (Figure 2b); (2) samples grouped by DDFP (Figure 2c); and (3) samples grouped by AFWT (Figure 2d). Observing each average spectrum, it can be found that the absorbance values fluctuated significantly in the range of 4000-7200 cm−1 and the average spectra could be largely separated from each other, indicating that white tea samples of different geographical origins show different increases and decreases in absorbance in this band; this indicates a correlation between the spectral information and the origin. The overlap among the average spectra from 7200-10,000 cm−1 in Figure 2b,c indicates that there was less origin-related information in this band; in addition, the fluctuation in this band tends to be flat, without obvious peaks and valleys, which means that the characteristic information in this band was not obvious and the signal-to-noise ratio was low.
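As a small illustrative sketch of how the per-origin average spectra in Figure 2b-d and the retained informative band can be computed (spectra, wn and labels are assumed to hold the absorbance matrix, the wavenumber axis and the per-sample origin labels; none of these names come from the paper):

```python
import numpy as np

def mean_spectra_by_group(spectra, labels):
    # average spectrum for each origin group (e.g. provinces for the DPC task)
    return {g: spectra[labels == g].mean(axis=0) for g in np.unique(labels)}

# keep only the informative 4000-7200 cm-1 band used for all subsequent modeling
band = (wn >= 4000) & (wn <= 7200)
spectra_band = spectra[:, band]
```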
Spectral Pretreatment The OS contained a large amount of chemical information about the samples, but obvious problems of baseline drift and spectral peak overlap made it difficult to trace the geographical origin of white tea from the OS alone. To further optimize the OS data, spectral preprocessing was performed with CWT, Minmax, MSC and SNV. It can be seen clearly from the changes in the spectrograms that all four treatments led to great changes in the spectral morphology. Figure 3a shows the spectra preprocessed with CWT; the degree of morphological transformation was the largest among the four preprocessing methods, the baseline drift, background interference and noise were eliminated, the spectral peaks were clearer and the segments carrying difference information were more obvious. The Minmax algorithm (Figure 3b) condensed the spectral absorbance values into the range −1 to 1, which augmented the data and eliminated the influence of the absolute scale and range of the values, so that the subsequently constructed models converge faster with improved performance. To eliminate the influence of the uneven size of the tea powder particles and the resulting scattering, SNV and MSC were used for preprocessing (Figure 3c,d); after processing, the scattering interference in the spectra was eliminated and the feature information was more prominent. Compared with the OS, the pretreatments could effectively eliminate the signal interference caused by light scattering and baseline drift, but the treated spectrograms still could not visually distinguish the differences among the production areas, which might be due to the greater similarity in the composition and content of the chemical constituents of white tea from different producing areas. Consistent with the OS, the fluctuations at 7200-10,000 cm−1 of the four pretreated spectra were still flat and the feature information was not obvious. To reduce the data dimensionality of the models and improve the calculation speed and quality, this segment was discarded in the subsequent model construction [9].
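The CWT step itself (db4 basis, decomposition scale 64) was carried out in MATLAB and is not reproduced exactly here. As a hedged stand-in that illustrates the same idea with a Python API that does support Daubechies bases, a discrete wavelet decomposition can be used to strip baseline drift and high-frequency noise; this is an approximation of, not a substitute for, the authors' CWT settings:

```python
import numpy as np
import pywt

def wavelet_correct(spectrum, wavelet="db4", level=6):
    # decompose, zero the coarsest approximation (baseline drift) and the
    # finest detail (high-frequency noise), then reconstruct
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])
    coeffs[-1] = np.zeros_like(coeffs[-1])
    corrected = pywt.waverec(coeffs, wavelet)
    return corrected[: len(spectrum)]  # waverec may pad odd-length inputs by one sample
```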
Extraction of Characteristics In this study, NIRS data in the range of 4000-10,000 cm−1 were obtained; after preprocessing and comparative analysis, the 4000-7200 cm−1 band was chosen for establishing the origin traceability models of white tea. Using all the data in this range to build models may negatively affect operation speed and accuracy because of the large data volume. Therefore, dimensionality reduction was performed to extract lower-dimensional features that characterize the spectral information. This study used PCA, LDA and SPA for dimensionality reduction, and the optimal method was determined based on the modeling results. PCA The characteristic vectors of the OS and of the NIR spectra preprocessed by CWT, Minmax, MSC and SNV were extracted by PCA, and the results are shown in Table 1. The table shows the extracted eigenvalues and cumulative contributions of the first 15 principal components; the number of principal components input to the models was screened based on the principle that the eigenvalue is >1 and the cumulative contribution is >80%. In the NIRS data matrix of white tea origins classified by DPC, the number of feature vectors obtained from the OS was 4, and the numbers obtained from the CWT-, Minmax-, MSC- and SNV-preprocessed spectra were 11, 7, 7 and 7, respectively. In the NIRS data matrices of white tea origins classified by DDFP or AFWT, the number of feature vectors obtained from the OS was 4, and the numbers obtained from the CWT-, Minmax-, MSC- and SNV-preprocessed spectra were 10, 6, 7 and 7, respectively. The cumulative contribution was >80%, consistent with the screening principle, and models were subsequently constructed based on the screened principal components.
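The screening rule stated above (eigenvalue > 1 together with cumulative contribution > 80%) can be sketched with scikit-learn; applying the eigenvalue criterion to the covariance eigenvalues, as done here, is our reading of the rule, not the paper's published code:

```python
import numpy as np
from sklearn.decomposition import PCA

pca = PCA().fit(X)                                   # X: preprocessed 4000-7200 cm-1 spectra
eigenvalues = pca.explained_variance_
cum_contrib = np.cumsum(pca.explained_variance_ratio_)

n_keep = int(np.sum(eigenvalues > 1))                # eigenvalue > 1 rule
assert cum_contrib[n_keep - 1] > 0.80                # cumulative contribution > 80% check
scores = PCA(n_components=n_keep).fit_transform(X)   # model inputs
```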
LDA LDA is commonly used as a classifier in the field of tea. However, research on using LDA for NIRS feature extraction, and on building white tea recognition models from the extracted feature vectors combined with classifiers, has rarely been reported. LDA can reduce the dimension of the data matrix to the number of categories minus 1. To reduce the dimension without losing too much original information, all dimensions obtained by LDA dimension reduction were used for subsequent modeling. Therefore, the numbers of feature vectors obtained using LDA for the DPC, DDFP and AFWT data matrices were 6, 5 and 1, respectively. SPA Figure 4 shows the numbers of feature wavenumbers extracted by SPA. As can be seen from Figure 4, the RMSE reached its minimum value when a specific number of wavenumbers was selected; after that, although the RMSE still fluctuated and decreased, the decrease was small and came at the cost of more selected wavenumbers, so there was no need to increase the number of dimensions to pursue a smaller RMSE. Figure 4a-e shows the iterative RMSE decline curves obtained by SPA for the white tea NIRS data matrices of DPC, from which it can be seen that the numbers of feature wavenumbers obtained from the final feature extraction were 15, 13, 13, 15 and 11. Figure 4f-j shows the curves for the DDFP data matrices, with final numbers of feature wavenumbers of 13, 19, 12, 14 and 13. Figure 4k-o shows the curves for the AFWT data matrices, with final numbers of feature wavenumbers of 11, 13, 10, 12 and 13.
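To close the feature-extraction step, a minimal sketch of the LDA reduction described above: scikit-learn caps the number of discriminant axes at (number of classes - 1) automatically, so the transformed matrix below has 6, 5 or 1 columns for the DPC, DDFP and AFWT tasks, respectively (X and y as in the earlier snippets):

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

lda = LinearDiscriminantAnalysis()
Z = lda.fit_transform(X, y)  # Z.shape[1] == n_classes - 1
```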
The recognition accuracy ranged from 66.90 to 86.90%, and the AUC values were in the range of 0.50 to 0.83. The majority of the obtained models have RA > 70% and AUC > 0.65, which indicated that the NIRS data of the samples were highly correlated with the classification and identification of white tea production provinces, and their combination with machine learning algorithms could effectively identify white tea from different production provinces. Therefore, the research method proposed in this study was reasonable and effective for tracing white tea production provinces. In the established DPC recognition model of white tea, the RA of KNN, RF and SVM models based on OS were 73.10%, 75.17% and 75.86%, respectively, and the AUC values were 0.62, 0.65 and 0.63, respectively. The pretreatment of the initially established KNN and RF models were significantly improved with CWT, Minmax, MSC and SNV, the RA and AUC, and the model prediction and generalization ability were further improved. Compared with the results of the DPC-OS-SVM model, only CWT and SNV algorithms achieve the purpose of optimized models. Models Evaluation of White Tea's Origins Classified by DPC After further combining the feature extraction algorithms, the dimensionality of the data was significantly reduced, but the performance of most models was not further improved by the feature extraction algorithms. The number of models whose model quality was further improved after dimensionality reduction of NIRS data used PCA, LDA and SPA algorithms were 11, 5 and 2, respectively. In the process of establishing geographically larger production area recognition models, the dimensionality reduction algorithm PCA performed the best, accomplished the reduction of data dimensionality for the vast majority of models, reduced the computational task and improved the model computing speed. The number of model quality improved using LDA was not as good as PCA, but it had the least number of feature dimensions after dimensionality reduction, and with the subsequent sample collection, the increase in the number of samples in the validation set will make the model using LDA dimensionality reduction more advanced in terms of computational tasks and recognition time. Compared with PCA and LDA, the SPA algorithm has a relatively poor ability to reduce dimensionality and improve model quality. The overall effect of KNN and RF among the three machine learning algorithms was better, and the models built had an average RA of up to 80% and an average AUC of up to 0.72, which were significantly better than the SVM model. The best recognition model for DPC appeared in the KNN model as DPC-CWT-LDA-KNN with features number 6, RA = 86.90% and AUC = 0.83; the lowest feature number, the highest recognition accuracy and AUC value made the model own the best recognition performance and good generalization capacity for different white tea production provinces. Overall, NIRS has great potential to build recognition models for geographically large production areas (different provinces) of white tea. When it comes to building geographically larger production area identification models, the preprocessing algorithms CWT and SNV showed stronger general adaptability, and the combination of the three machine learning algorithms presented significant advantages in identifying white tea origins, leading to a point similar to the results of the study by Zhang et al. [19]. 
It is suggested that SNV or CWT be combined with other classification algorithms for white tea origin tracing in subsequent research. The better feature extraction algorithms were PCA and LDA, while the effect of SPA was relatively poor, presumably because LDA and PCA extract feature vectors that can represent most of the spectral information, whereas the feature wavenumbers extracted by SPA may not be as comprehensive and representative as feature vectors. The overall effect of KNN and RF among the machine learning algorithms was better: the average RA of the established models could reach 80% and the average AUC 0.72, significantly better than the SVM models. To obtain the optimal DPC white tea classification model, the parameters of the DPC-CWT-LDA-KNN model, which had the best tracing effect on the geographically larger white-tea-producing areas, were subsequently optimized. Models Evaluation of White Tea Origins Classified by DDFP and AFWT Tables 3 and 4 show the evaluation parameters of the models obtained from the same NIRS dataset combined with different pre-processing, feature extraction and machine learning algorithms for identifying a geographically narrow range of origins (DDFP) and the authenticity of the China-Europe Geographical Indication Product (AFWT). Since the data sets used were the same, the number of samples in the training set was 291 and the number of samples in the validation set was 98 for all models in Tables 3 and 4. The models in Table 3 identify white tea within geographically narrow origin ranges (DDFP, including FD, FA, ZH, SX, JY and ZR); the RA of the DDFP identification models ranged from 50.00 to 92.86%, and the AUC was in the range of 0.50 to 0.92. The vast majority of DDFP models had RA > 70% and AUC > 0.70, which indicated that the NIR spectral data used were highly correlated with the classification of white tea in a geographically narrow range of origins and that NIRS combined with machine learning algorithms can achieve fast and nondestructive identification of white tea in a geographically narrow range of origins. The models in Table 4 identify whether Fujian white tea is Fuding white tea or not (AFWT, including FD and Non-FD). The RA of the AFWT identification models ranged from 51.02 to 97.96%, and the AUC was in the range of 0.50 to 0.98, with the majority of models having RA > 80% and AUC > 0.80; these models had excellent performance. Comparing the performance of the AFWT recognition models with the DDFP recognition models shows that when the same dataset was used for different recognition goals (differentiated by DDFP or AFWT), the goals with fewer categories were easier to reach, and the RA and AUC values were significantly higher. This may be because it is easier to extract appropriate features when there are fewer recognition categories, improving the model quality. In the established DDFP recognition models, the RA of the KNN, RF and SVM models based on the OS were 54.08%, 59.18% and 61.22%, respectively, and the AUC values were 0.62, 0.64 and 0.61, respectively. After the OS were preprocessed by CWT, Minmax, MSC and SNV, the RA and AUC of the initially established KNN and RF models were significantly improved, improving the model accuracy and generalization ability.
In the DDFP recognition models established with the SVM algorithm, the quality of the spectral model decreased after MSC preprocessing, while the other preprocessing algorithms improved the model quality. In the AFWT recognition models, the RA of the KNN, RF and SVM models based on the OS were 75.51%, 76.53% and 84.69%, respectively, and the AUC values were 0.76, 0.77 and 0.85, respectively. After pretreatment with CWT, Minmax, MSC and SNV, the RA and AUC of the initial identification models were significantly improved, improving the model accuracy and generalization ability. As with RA and AUC, the preprocessing algorithms performed better in establishing the AFWT recognition models; it is speculated that when there are fewer recognition categories, the preprocessing algorithms are more universally applicable. After further combination with the feature extraction algorithms, the data dimensions of the DDFP and AFWT recognition models were greatly reduced. In the DDFP recognition models, more than half of the models' performance was further improved by the feature extraction algorithms; after dimensionality reduction using the PCA, LDA and SPA algorithms, the numbers of models whose quality was further improved were 4, 15 and 4, respectively. In the AFWT recognition models, most of the models' performance was not further improved by the feature extraction algorithms; after dimensionality reduction using the PCA, LDA and SPA algorithms, the numbers of models whose quality was further improved were 3, 13 and 0, respectively. In general, when establishing the DDFP and AFWT recognition models based on the same data, LDA performed best among the feature extraction algorithms: the resulting models had the smallest number of features and the best performance. As the number of samples increases, the growing validation set will make the advantage of LDA dimensionality reduction in computing tasks and recognition time more obvious. In the DDFP and AFWT recognition models, the machine learning algorithm RF had the best overall effect, with the highest average RA and AUC values. The best DDFP recognition model appeared among the KNN models, namely DDFP-OS-LDA-KNN with 5 features, RA = 92.86% and AUC = 0.92, indicating that the model had good prediction ability for DDFP recognition of white tea; the lowest number of features gives the model fewer computing tasks and better computing speed as the number of validation samples increases. There were three best AFWT recognition models, namely AFWT-OS-LDA-KNN, AFWT-OS-LDA-RF and AFWT-OS-LDA-SVM; their feature numbers were all 1, with RA = 97.96% and AUC = 0.98. To further explore their performance differences, four-fold cross-validation results were introduced to evaluate these three models. The principle of four-fold cross-validation is to divide the original data set into four equal subsets, use each subset in turn as the validation set and the remaining three subsets as the training set, obtain four model performance parameters, and take the average RA of these four models as the four-fold cross-validation result. A higher cross-validation RA represents a stronger generalization ability of the model and a better prediction ability for new samples.
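The four-fold procedure just described maps directly onto a standard cross-validation call; as a sketch, with model standing for any of the candidate classifiers built earlier:

```python
from sklearn.model_selection import KFold, cross_val_score

cv = KFold(n_splits=4, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
print(f"four-fold cross-validation RA: {scores.mean():.2%}")
```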
Table 5 shows the four-fold cross-validation results of AFWT-OS-LDA-KNN, AFWT-OS-LDA-RF and AFWT-OS-LDA-SVM. As shown in the table, the four-fold cross-validation could distinguish small differences in generalization ability among the models when the RA values of the training and validation sets of the three models were the same. AFWT-OS-LDA-KNN had the highest four-fold cross-validation RA of 97.96%, which indicated that KNN was more suitable than the RF and SVM algorithms for the construction of authenticity models with fewer classification categories. To obtain the optimal AFWT identification model, the parameters of AFWT-OS-LDA-KNN were subsequently optimized. In general, the NIRS dataset combined with different pre-processing, feature extraction and machine learning algorithms was excellent for identifying a geographically narrow range of origins (DDFP) and the authenticity of the China-Europe Geographical Indication Product (AFWT). SNV performed best among the preprocessing algorithms and improved the model quality the most, with similar findings in the study by Zhang et al. [19]. LDA performed best among the feature extraction algorithms, yielding the smallest number of dimensions after reduction, which could significantly reduce the model computational task and thus improve the computing speed. The machine learning algorithm RF, in combination with the various other algorithms, produced good models with the highest overall average performance parameters; however, the best-performing models were found among the KNN models. It is suggested that in subsequent studies the RF algorithm can first be used to establish a reference standard for model evaluation parameters, after which the KNN algorithm can be used to build a higher-quality model. Models Optimization To further improve model performance, the parameters of the three models with the best comprehensive performance among the three types of identification models were optimized, namely DPC-CWT-LDA-KNN, DDFP-OS-LDA-KNN and AFWT-OS-LDA-KNN. In the KNN algorithm, the number of neighbors k plays a decisive role in the quality of the model [38]. To further optimize the models, k-values between 1 and 100 were examined, and the resulting models were evaluated by the magnitude of their RA values. Figure 5 shows the curves of RA values for the optimization of parameter k in the above KNN models; the black circle marks the parameter at which the maximum RA occurs. Thus, the optimal parameter was k = 8 when the RA of the DPC-CWT-LDA-KNN validation set reached its maximum value of 88.97% (Figure 5a); the optimal RA of the DDFP-OS-LDA-KNN validation set was 93.88%, and that of the AFWT-OS-LDA-KNN validation set was 97.96%, with the corresponding optimal k values marked in Figure 5.
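The k scan behind Figure 5 can be sketched as a simple loop over k = 1..100, keeping the value that maximizes the validation-set RA (X_tr, X_va, y_tr, y_va as in the earlier split):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

ras = []
for k in range(1, 101):
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    ras.append(accuracy_score(y_va, knn.predict(X_va)))
best_k = int(np.argmax(ras)) + 1  # e.g. k = 8 for the DPC model reported in the paper
print(f"best k = {best_k}, RA = {max(ras):.2%}")
```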
Performance Analysis of Optimal Models After the models' evaluation and optimization, we obtained the best models for identifying white tea by DPC, DDFP and AFWT, and the optimal model parameters are shown in Table 6. As shown in the table, the best models for identifying and classifying white tea based on a geographically larger production area (DPC), a narrow range of origins (DDFP) and the authenticity of a China-Europe Geographical Indication Product (AFWT) all had fewer than 10 modeling features, with RA values close to or greater than 90% and AUC values close to or greater than 0.90, which indicated that these models possessed excellent prediction and generalization abilities. The excellent quality of the above models demonstrates the ability of NIRS for rapid and nondestructive origin tracing of white tea and provides a reference for other agricultural products in terms of technology and algorithm application in origin traceability. To further evaluate the ability of the best models to recognize each category, confusion matrices were introduced for in-depth evaluation. The confusion matrix provides a detailed reflection of the performance of a classification model, where the rows represent the true class and the columns represent the predicted class. It enables the visualization of the number of correctly classified samples as well as the categories and numbers of misclassified samples for each white-tea-producing area; the higher the values on the diagonal of the matrix, the better the prediction ability of the model.
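The paper drew its matrices with TBtools; an equivalent sketch with scikit-learn, where best_model and class_names are placeholders for a fitted classifier and its label set:

```python
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

cm = confusion_matrix(y_va, best_model.predict(X_va), labels=class_names)
ConfusionMatrixDisplay(cm, display_labels=class_names).plot()
plt.show()
```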
The confusion matrices of the best DPC, DDFP and AFWT identification models are shown in Figure 6. From Figure 6a, it can be seen that, in distinguishing white tea from different provinces, the predicted accuracy of DPC-CWT-LDA-KNN for YN and ZJ was 100%, and the predicted accuracy for both FJ and GZ was greater than 85.00%; misclassification occurred mostly in the HN and SC samples, with the HN production area often misclassified as SC, and the SC production area often misclassified as FJ. When identifying white tea from different production districts in Fujian Province, the predicted accuracy of DDFP-OS-LDA-KNN for FD, FA and ZH was 100%; misclassification occurred in SX, JY and ZR, and the JY production district was often misclassified as ZH (Figure 6b). As shown in Figure 6c, when performing authenticity identification of Fuding white tea, AFWT-OS-LDA-KNN correctly identified 97.92% and 98.00% of the FD and Non-FD samples, respectively, with excellent prediction ability and good model performance. Overall, the models had excellent correct identification rates for each appellation, and misclassifications occurred mostly among appellations bordering each other geographically. The high similarity of the geographic environment, climatic factors and processing technology may be the reason for the frequent misclassification among these appellations. The optimal models were visualized to reveal the clustering trend of the samples from each producing area (Figure 7). In the three-dimensional model diagram of DPC-CWT-LDA-KNN (Figure 7a), the clustering of the GX and YN samples was excellent, and they could be clearly distinguished from tea from the other provinces. The clustering of the FJ samples was good, and they could basically be separated from the other provinces. The spectral characteristics of the GZ, HN, SC and ZJ samples were very close to each other in three-dimensional space; the geographical proximity of these provinces and the similarity of their climatic conditions and tea processing technology may be the reasons for this overlap. From the three-dimensional model diagram of DDFP-OS-LDA-KNN (Figure 7b), it can be seen that samples from different producing areas could basically be clustered separately in three-dimensional space, making them easy to distinguish; the ZH and SX samples were close, which may be due to their similar geographical locations and similar processing technology. From the visualization of the AFWT-OS-LDA-KNN model (Figure 7c), it can be seen that the clustering of the FD and Non-FD samples was very effective, which may be the reason for the excellent performance of the model.
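The three-dimensional score plots of Figure 7 were drawn in Python; a minimal matplotlib sketch over the LDA scores Z from the earlier snippet (assuming at least three discriminant axes, i.e. the DPC or DDFP task) could look like this:

```python
import numpy as np
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
for g in np.unique(y):
    pts = Z[y == g]
    ax.scatter(pts[:, 0], pts[:, 1], pts[:, 2], label=g, s=12)  # one colour per origin
ax.set_xlabel("LD1"); ax.set_ylabel("LD2"); ax.set_zlabel("LD3")
ax.legend()
plt.show()
```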
The spectral characteristics of GZ, HN, SC and ZJ samples were very close to each other in three-dimensional space, the geographical location of these provinces and the similarity of the climatic conditions and tea processing technology may be the reasons for the clustering effect's good performance. By observing the three-dimensional model diagram of DDFP-OS-LDA-KNN (Figure 7b), it could be found that samples from different producing areas could basically be clustered separately in three-dimensional space, which could easily distinguish them. ZH and SX samples were close, which may be due to their similar geographical locations and similar processing technology. By observing the visualization of the AFWT-OS-LDA-KNN model (Figure 7c), it could be found that the clustering of FD and Non-FD samples were very effective, which may be the reason for the excellent effect of the model. In general, after visualization, the distribution of spectral characteristics of white tea samples in some producing areas was very close in three-dimensional space, which may be related to the small number of samples collected in these producing areas and the lack of obvious spectral characteristics. In addition, it may also be related to the similarity of the white tea quality caused by geographical location, climatic factors and similar processing technology. To solve these problems, we will strengthen the spectral characteristics of the production area by increasing the number of samples year by year, so as to further improve the performance of the model. and Non-FD samples were very effective, which may be the reason for the excellent effect of the model. In general, after visualization, the distribution of spectral characteristics of white tea samples in some producing areas was very close in three-dimensional space, which may be related to the small number of samples collected in these producing areas and the lack of obvious spectral characteristics. In addition, it may also be related to the similarity of the white tea quality caused by geographical location, climatic factors and similar processing technology. To solve these problems, we will strengthen the spectral characteristics of the production area by increasing the number of samples year by year, so as to further improve the performance of the model. Conclusions This study proved the feasibility of using NIRS data to verify the origin of white tea simply and quickly. Combining different spectra data preprocessing methods (CWT, Minmax, MSC and SNV) with different feature extraction algorithms (PCA, LDA and SPA), 180 white tea origin traceability models were established based on KNN, RF and SVM algorithms. The modeling results show that the SNV effect was the most excellent among the preprocessing algorithms, and the performance of the model was improved best without combining other algorithms. LDA has the greatest advantages in different feature extraction algorithms, and the number of features obtained by dimensionality reduction was the least. RF has the strongest general adaptability in machine learning algorithms, but the best model quality generally appears in KNN models. DPC-CWT-LDA-KNN, DDFP-OS-LDA-KNN and AFWT-OS-LDA-KNN were proved to be the optimal models for identifying white tea origins classified by DPC, DDFP and AFWT. The RA of the optimal models was close to or greater than 90%, and their AUC value was close to or greater than 0.90, these models had excellent predictive ability and good generalization ability. 
Overall, this study demonstrates the possibility of achieving white tea origin traceability based on NIRS, representing a step forward in method selection for the origin traceability and quality control of white tea. Based on the above research results, in order to address the problems of unbalanced model samples and the close distance between model clusters, we will increase the number of samples year by year, enrich the white tea NIRS data set to optimize model performance and develop portable white tea origin traceability devices on this basis. In addition, we will also try to build an online white tea origin identification platform using internet technology to carry out remote white tea origin traceability.

Data Availability Statement: The data presented in this study are available on request from the corresponding author.

Conflicts of Interest: Jiaya Chen is affiliated with the LiuMiao White Tea corporation (the author contributed to the resources; the company provided a large number of experimental samples); Gang Lin is affiliated with Fujian Rongyuntong Ecological Technology Limited Company (the author contributed to the resources; the company provided algorithmic support); Linhai Chen is affiliated with the Fu'an Tea Industry Development Center (the author contributed to the resources; the center provided a large number of experimental samples).
Localization Based on MAP and PSO for Drifting-Restricted Underwater Acoustic Sensor Networks

Localization is a critical issue for Underwater Acoustic Sensor Networks (UASNs). Existing localization algorithms mainly focus on localizing unknown nodes (location-unaware) by measuring their distances to beacon nodes (location-aware), while ignoring additional challenges posed by harsh underwater environments. In particular, underwater nodes move constantly with ocean currents, and measurement noises vary with distance. In this paper, we consider a special drifting-restricted UASN and propose a novel beacon-free algorithm called MAP-PSO. It consists of two steps: MAP estimation and PSO localization. In MAP estimation, we analyze the nodes' mobility patterns, which provide the prior knowledge for localization, and characterize the distance measurements under the assumption of additive and multiplicative noises, which serve as the likelihood information for localization. The prior and likelihood information are then fused to derive the localization objective function. In PSO localization, a swarm of particles is used to search for the best location solution from local and global views simultaneously. Moreover, we eliminate the localization ambiguity using a novel reference selection mechanism and improve the convergence speed using a bound constraint mechanism. In the simulations, we evaluate the performance of the proposed algorithm under different settings and determine the optimal values for the tunable parameters. The results show that our algorithm outperforms the benchmark method with high localization accuracy and low energy consumption.

Introduction

Underwater Acoustic Sensor Networks (UASNs) have been widely applied in many fields such as underwater surveillance, pollution detection and disaster prevention [1,2]. Generally, a UASN is comprised of different types of nodes, which can be floating sensor nodes, surface buoys, Autonomous Underwater Vehicles (AUVs) and other application-specific devices [3,4]. These nodes communicate with each other and sense underwater environments collaboratively. The sensed data is then analyzed to provide decision support for the upper-layer applications. In this process, the locations of the nodes need to be known in order to interpret the sensed data meaningfully. Hence, localization is one of the critical services in UASNs.

The whole localization process consists of two steps: a MAP estimation step and a PSO localization step. In the former step, the nodes that can communicate with each other form a cluster. The distances between cluster nodes are estimated using the TOA method. Then, the prior localization information and the distance measurements are fused to obtain the posterior probability distribution of the unknown nodes' locations, and the weighted objective function is derived by maximizing the logarithm of the posterior distribution under the Bayesian framework. In the latter step, a swarm of particles is initialized according to each node's movement area and then updated iteratively towards the local and global best solutions with a certain speed. By calculating the fitness value of the objective function and communicating with each other, the particles collaboratively determine the best location solution. Specifically, our contributions are mainly as follows:

• This paper proposes a novel localization method without the presence of beacon nodes for DR-UASNs, which achieves higher localization accuracy and lower computational cost compared with the benchmark method.
• The noises varying with distance are taken into account and modeled as additive and multiplicative noises. Hence, the noises in the distance measurements can be efficiently filtered to improve localization accuracy.

• The reference selection and bound constraint mechanisms are proposed to combat the problems of localization ambiguity and low convergence speed in the PSO step.

The rest of this paper is organized as follows: in Section 2, we briefly review existing works on UASN localization. Then, the network model is presented and the localization problem is formulated in Section 3. Section 4 presents the localization process of the MAP-PSO algorithm. In Section 5, we evaluate the performance of the MAP-PSO algorithm under different settings and compare it with the AFLA algorithm. Finally, conclusions are drawn in Section 6.

Related Work

Existing localization schemes can be divided into two categories: range-free and range-based. While range-free schemes provide coarse-grained location estimations with low communication cost, range-based schemes can achieve a relatively high localization accuracy, but with additional communication and hardware cost. In this work, we are interested in range-based schemes. Next, we briefly review some works related to our method; a more detailed research review can be found in [4,27].

In general, range-based localization consists of two stages: distance estimation and location calculation. The ranging method commonly used is TOA, which obtains the difference between packet sending and receiving times and estimates the distance by multiplying this time difference by the acoustic speed. However, it suffers from low accuracy due to multiple factors such as time synchronization, the multipath effect and the stratification effect. In [3,25], the localization does not require time synchronization. It is assumed that beacon nodes move in the vertical direction and unknown nodes are stationary, which is impractical in underwater environments. Several works jointly consider the time synchronization and localization problems [6,9,19,28], in which time synchronization is first performed to obtain the clock skew and offset, and then the locations are estimated based on the synchronized distance measurements. In [10], a novel ranging method is proposed under the assumption of an isogradient sound speed profile (SSP), i.e., the sound speed is linearly related to the depth in each SSP layer. Given the depths of two nodes and the TOA measurements, it calculates the horizontal distance through a root-finding algorithm. This method has high accuracy, but also high computational cost. Further, RAR [29] is proposed to enable real-time localization based on the Bellhop model. Its main drawback is that the Bellhop model cannot reflect the time-varying characteristics of the channel. In [8], TOA measurements from multiple paths are assumed to follow a mixture of three Gaussian distributions corresponding to LOS, SNLOS and ONLOS links, and an EM algorithm is then used to accurately classify the different types of links. In a 3D UASN, the locations of unknown nodes can be figured out by using the classic multilateration algorithm and the ranging measurements to at least four beacon nodes. Further, the number of beacon nodes can be reduced to three by projecting them onto the unknown node's horizontal plane [7,12]. In [30], a hyperbola-based localization method is proposed, in which the ambiguity existing in multilateration can be eliminated.
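For illustration, the classic multilateration just mentioned can be linearized by subtracting the range equation of one beacon from those of the others and solving the resulting system by least squares. The following is a minimal sketch under that standard formulation, not the exact procedure of any cited work; the beacon coordinates and target are invented test values.

```python
import numpy as np

def multilaterate(beacons, ranges):
    """Estimate a 3D position from >= 4 beacon locations and ranges.

    Subtracting the first range equation from the others yields the
    linear system A x = b, solved here in the least-squares sense.
    """
    p0, d0 = beacons[0], ranges[0]
    A = 2.0 * (beacons[1:] - p0)
    b = (d0**2 - ranges[1:]**2
         + np.sum(beacons[1:]**2, axis=1) - np.sum(p0**2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

beacons = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0],
                    [0.0, 100.0, 0.0], [0.0, 0.0, 20.0]])
target = np.array([40.0, 30.0, 10.0])
ranges = np.linalg.norm(beacons - target, axis=1)
print(multilaterate(beacons, ranges))  # ~ [40, 30, 10]
```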
For large-scale and sparse UASNs, many unknown nodes cannot be localized due to a lack of sufficient beacon nodes. Hence, an iterative localization strategy is commonly adopted to improve the localization coverage [15,25,31,32], in which unknown nodes that have been localized with high accuracy can be regarded as reference nodes to help localize other unknown nodes. All of the algorithms mentioned above assume that the nodes drift freely with ocean currents and require the presence of beacon nodes prior to localization. Moreover, the number of beacon nodes often increases with the network scale. This increases both the network cost and the difficulty of network recycling and maintenance. In [21], the authors assume the nodes drift in a restricted manner and propose a localization algorithm without beacon nodes, called AFLA. The algorithm takes advantage of the geometrical relationship of three adjacent nodes and forms six equations to figure out their locations. However, AFLA does not take into account the noises varying with distance and has a high computational cost due to direct search in the solution space, which makes it inapplicable for UASN localization. In this paper, we adopt a similar network architecture and propose the MAP-PSO algorithm to solve the problems of varying noises and computational complexity.

Network Model and Problem Definition

In this section, we present the network model, analyze the drifting characteristics of underwater nodes and present a formal definition of the localization problem.

Network Model

We consider a UASN that consists of a number of nodes deployed in the surveillance area. To reduce the network cost, each node is low-complexity, with constrained energy and limited computational ability. Due to the intrinsic fluid property of underwater environments, the nodes move continuously with ocean currents. This requires that the localization be accomplished in a short time; otherwise, the estimated locations will become obsolete as the nodes move to new locations. Hence, it is necessary to design a fast and energy-efficient algorithm to provide real-time localization in this resource-constrained network. As a consequence of the continuous mobility, some nodes may drift out of the deployment area, which increases the difficulty of network recycling and maintenance. Aiming at this problem, we follow a drifting-restricted UASN. Its network architecture is shown in Figure 1. Each node is linked to an anchor point by a cable, and thus the movement of each node is confined to a local area. The locations of the anchor points and the lengths of the cables are known beforehand. Taking node i as an example, let L_i denote the length of its cable and A_i denote the location of its anchor point. In practical scenarios, the cable length may vary with surveillance requirements (e.g., tens of meters in nearshore areas [33] and hundreds of meters in offshore areas [34,35]). At the deployment phase, to prevent the cables of different nodes from twisting together, the distance between the anchor points of nodes i and j should be longer than the sum of the lengths of their cables:

||A_i - A_j|| > L_i + L_j. (1)

To ensure that the nodes float in the water at all times, the length of the cables should be less than the sea depth H. Due to the impact of tide rise and fall, the sea depth should satisfy

H_min <= H <= H_max,

where H_min denotes the sea depth at the lowest tide and H_max denotes the sea depth at the highest tide.
Therefore, the length of the cables satisfies

L_i < H_min.

At the same time, due to the tension of the cable, the depth of each node satisfies

h_i <= L_i.

Generally, the movement of the nodes is controlled by the joint forces from the ocean current, the water buoyancy and the cables. The buoyancy of each node is related to its volume, the water density and the acceleration of gravity. The water density is further influenced by the ocean temperature and salinity. Hence, the buoyancy of each node is a constant value within a certain spatial and temporal extent due to the slow change of ocean temperature and salinity. The force of the ocean current is mainly influenced by the current speed. At a specific depth, the three forces reach a balanced state. The node can then drift on the plane of this depth and move along a circle centered at its anchor point. According to the Pythagorean theorem, we have the following equation for node i:

L_i^2 = r_i^2 + h_i^2, (2)

where r_i denotes the radius of its movement circle. Ideally, if we assume the ocean current is infinitesimal, the forces from the water buoyancy and the cable reach a balance in the vertical direction, and we have h_i = L_i and r_i = 0; on the contrary, if we assume the ocean current is infinite, we obtain h_i = 0 and r_i = L_i. Hence, depending on the current speed, the movement radius satisfies 0 < r_i < L_i. In practice, the depth h_i can be obtained by equipping the node with a cheap pressure sensor. According to Equation (2), the radius can be calculated as r_i = sqrt(L_i^2 - h_i^2). Therefore, the location of node i is constrained to a circle centered at the anchor point A_i with radius r_i. This indicates that the surveillance coverage will increase with the cable length and the node depth. This prior localization information hidden in the network model, as we will see later, can simplify the search space of localization solutions and greatly reduce the localization time.

To exploit the spatial relationship between nodes for localization, a node communicates with its neighbor nodes and measures the distances to them by multiplying the propagation delay by the acoustic speed. Most research adopts the one-way TOA method for distance estimation [5]. However, this method requires time synchronization between nodes, which is non-trivial in harsh underwater environments. Alternatively, we adopt the two-way TOA method for two reasons. One reason is that the two-way TOA method can eliminate the clock offset between nodes, and thus time synchronization is not required. The other reason is that the acknowledgement mechanism is commonly used in the UASN MAC layer due to the inherent unreliability of the underwater acoustic channel. Even if we adopted the one-way TOA method, each packet would need an acknowledgement to guarantee successful packet transmission, which is similar to the two-way TOA method. There have been multiple MAC protocols [36,37] for arranging packet transmissions and solving packet collisions, which is out of the scope of this paper. Herein, we take two nodes A and B as an example to illustrate the packet exchanges of the two-way TOA method. A sends a ranging packet at time instant t_1. B receives the packet at time instant t_2 and responds with an acknowledgement at time instant t_3. Then A receives the response at time instant t_4, and the propagation delay can be calculated as

tau = [(t_4 - t_1) - (t_3 - t_2)] / 2. (3)

After that, the distance between A and B can be estimated as

d = c * tau, (4)

where c represents the speed of acoustic signals. In each packet exchange, a node sends a packet at the power level P_tx and receives a packet at the power level P_rx.
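A short sketch of the two-way TOA exchange in Equations (3) and (4); the timestamps below are invented for illustration, and the nominal sound speed is an assumption, not a value fixed by the paper.

```python
SOUND_SPEED = 1500.0  # nominal acoustic speed in seawater, m/s (assumed)

def two_way_toa_distance(t1, t2, t3, t4, c=SOUND_SPEED):
    """Equations (3) and (4): the clock offset between the two nodes
    cancels because (t4 - t1) is measured on A's clock and (t3 - t2)
    on B's clock."""
    tau = ((t4 - t1) - (t3 - t2)) / 2.0  # one-way propagation delay, s
    return c * tau                        # distance estimate, m

# Hypothetical timestamps: 0.1 s one-way delay, 0.05 s turnaround at B.
print(two_way_toa_distance(t1=0.0, t2=0.1, t3=0.15, t4=0.25))  # 150.0 m
```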
Let L_P and L_A represent the sizes of the ranging packet and the acknowledgement packet, respectively. Hence, the energy consumption of the transmitting node is

E_tx = P_tx * L_P / R + P_rx * L_A / R,

where R denotes the data rate. The energy consumption of the receiving node is

E_rx = P_rx * L_P / R + P_tx * L_A / R.

In this way, we can estimate the distances between a node and all of its neighbor nodes and calculate the overall energy consumption. Based on the prior localization information and the spatial relationship between nodes, our purpose is to accurately estimate the locations of all the nodes with low computational and communication costs, and as fast as possible. Next, we present a formal definition of the localization problem.

Problem Definition

Suppose there are N nodes in the whole network and each node needs to obtain its location periodically. We define T as the localization period in which each node needs to estimate its location. Note that the localization period can be tuned according to practical requirements (e.g., in a UASN observing ocean phenomena, it can be set equal to an hour [1]). For simplicity, the superscript for the n-th localization period is suppressed in what follows. Node i moves on the circle centered at the anchor point A_i with radius r_i. Therefore, its two-dimensional location can be represented as

x_i = x_A^i + r_i cos(theta_i), y_i = y_A^i + r_i sin(theta_i), (5)

where (x_i, y_i) denotes the location of node i in the X- and Y-axis directions, (x_A^i, y_A^i) denotes the location of the anchor point A_i in the X- and Y-axis directions, and theta_i denotes the azimuth angle between the X-axis and the line that connects the anchor point and the node. On this basis, we aim to find M nodes to form a node cluster S. The number M should be more than 3 (e.g., M = 4 in Figure 1) so that a localization polygon can be constructed. Moreover, these nodes should be mutual neighbor nodes. Hence, for any two nodes i and j in the cluster, their locations satisfy ||X_i - X_j|| <= min(C_i, C_j), where X_i denotes the three-dimensional location of node i and C_i denotes the communication radius of node i. The distance estimate z_{i,j} between the two nodes can be obtained using Equations (3) and (4). In the cluster, there are in total M(M - 1)/2 node pairs, and each pair has its corresponding distance estimate. Hence, the measurements of the cluster S can be represented as Z = {z_{i,j} | for all i, j = 1, 2, ..., M, i < j}. Similarly, we can find all the other clusters in the network and obtain their corresponding measurements. For each cluster, we have now represented the two-dimensional locations of the nodes and obtained the distance estimate between any two nodes. Given these data, the localization problem is to fuse them using a proper model and figure out the locations of all the cluster nodes. Specifically, our purpose is to minimize the sum of the squares of the errors between the estimated distances and the real distances. The objective function can be written as

arg min_X sum_{i<j} (||X_i - X_j|| - z_{i,j})^2. (6)

The locations of the M nodes can be resolved using efficient optimization methods. The whole localization process lasts until all the clusters have been localized. Next, we present the algorithm that can quickly localize the nodes without ambiguity.

MAP-PSO Design

In this section, we present MAP-PSO, a novel localization scheme without the presence of beacon nodes for DR-UASNs. The whole localization process consists of MAP estimation and PSO localization. In the following, we describe the details of our MAP-PSO algorithm, starting from the MAP estimation step and followed by the PSO localization step.
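For reference, the baseline objective of Equation (6), before the noise modeling introduced next, translates directly into code. This is a minimal sketch; the array shapes and the dictionary layout for the measurements are assumptions made for illustration.

```python
import numpy as np

def baseline_objective(pos, z):
    """Equation (6): sum of squared errors between inter-node distances
    and their measurements.

    pos : (M, 3) candidate 3D locations of the cluster nodes
    z   : dict {(i, j): measured distance} for all pairs with i < j
    """
    return sum((np.linalg.norm(pos[i] - pos[j]) - z_ij)**2
               for (i, j), z_ij in z.items())
```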
MAP Estimation

Recall that, in the localization problem defined in Equation (6), we did not take the noise into account, which may incur large errors in the location estimates. Here, we consider two types of noise. One is that a node may not move exactly along its corresponding circle due to errors introduced in practical deployments, such as the displacement of anchor points, errors in the cable length, the volume of the nodes, etc. The other is that the distance measurements between nodes may deviate from the real distances due to complex underwater environmental factors, including the non-straight propagation of acoustic signals, the varying acoustic speed and tiny propagation time errors from clock skew.

To characterize the nodes' mobility patterns, we assume the location of each node is randomly distributed around its corresponding movement circle, which can be represented as

x_i = x_A^i + r_i cos(theta_i) + w_x^i, y_i = y_A^i + r_i sin(theta_i) + w_y^i, (7)

where w_x^i and w_y^i denote the noises of the location of node i along the X- and Y-axes and follow a Gaussian distribution with zero mean and precision Lambda, that is, w_x^i ~ N(0, Lambda) and w_y^i ~ N(0, Lambda). This implies that the noises of different nodes are assumed to be identically distributed. Then, we transform Equation (7) into its vector form and obtain

x_i = xbar_i + w_i, (8)

where xbar_i denotes the noise-free location on the movement circle. For simplicity, we assume w_x^i and w_y^i are uncorrelated with each other. Then the noise w_i is a Gaussian variable with zero mean and precision matrix Lambda*I, where I is the identity matrix. Hence, the location of node i follows a joint Gaussian distribution with mean xbar_i and precision Lambda*I:

p(x_i) = N(x_i | xbar_i, (Lambda*I)^{-1}). (9)

Here, the depth h_i is not considered because it can be measured beforehand by a pressure sensor. The uncertainty of the location x_i and radius r_i due to the inaccuracy of depth measurements can be accommodated by the Gaussian distribution in Equation (9). It is important to note that this probability distribution is extracted from the DR-UASN architecture, and it can be treated as the prior knowledge for MAP estimation.

We now discuss how to model the noises in the distance measurements. In [10], it was found that the TOA error increases with the real distance due to non-straight propagation. Hence, it is reasonable to assume that the ranging error caused by noise increases linearly with the real distance. Here, we use both additive and multiplicative noises [38] to model the distance measurements:

z_{i,j} = d_{i,j} + alpha_{i,j} d_{i,j} + beta_{i,j}, (10)

where d_{i,j} = ||X_i - X_j|| denotes the real distance between nodes i and j, alpha_{i,j} is the multiplicative noise that follows a Gaussian distribution with mean mu_alpha and precision lambda_alpha, that is, alpha_{i,j} ~ N(mu_alpha, lambda_alpha), and beta_{i,j} is the additive noise that follows a Gaussian distribution with zero mean and precision lambda_beta, namely, beta_{i,j} ~ N(0, lambda_beta). For simplicity, we assume that these two types of noise are uncorrelated with each other. The total noise can be denoted by eps_{i,j} = alpha_{i,j} d_{i,j} + beta_{i,j}, which is also a Gaussian variable with mean mu_alpha d_{i,j} and precision lambda_{i,j} = (lambda_alpha^{-1} d_{i,j}^2 + lambda_beta^{-1})^{-1}. Then, the conditional probability distribution of z_{i,j} over x_i and x_j can be written as

p(z_{i,j} | x_i, x_j) = N(z_{i,j} | (1 + mu_alpha) d_{i,j}, lambda_{i,j}^{-1}). (11)

For all the distance measurements Z in the cluster S, the likelihood function of the locations can be written as

p(Z | X) = prod_{i<j} p(z_{i,j} | x_i, x_j). (12)

Based on Equation (9), the prior probability distribution of the locations X is given by

p(X) = prod_{i=1}^{M} p(x_i). (13)

Using the Bayesian theorem, the posterior probability distribution of X given the measurements Z can be derived as

p(X | Z) proportional to p(Z | X) p(X). (14)

Then the locations X can be determined by maximizing the posterior distribution p(X|Z).
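The measurement model in Equation (10) is easy to simulate. The sketch below generates noisy ranges using the noise parameters adopted later in the simulation settings (mu_alpha = 0.02, lambda_alpha = 60, lambda_beta = 100); note that NumPy parameterizes Gaussians by standard deviation, so the precisions are converted as sigma = lambda^(-1/2).

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_range(d, mu_alpha=0.02, lam_alpha=60.0, lam_beta=100.0):
    """Equation (10): z = d + alpha*d + beta, with Gaussian alpha and beta.
    Precisions are converted to standard deviations for sampling."""
    alpha = rng.normal(mu_alpha, lam_alpha ** -0.5)
    beta = rng.normal(0.0, lam_beta ** -0.5)
    return d + alpha * d + beta

# The ranging error grows roughly linearly with the true distance.
for d in (10.0, 50.0, 150.0):
    print(d, noisy_range(d))
```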
Taking the logarithm of p(X|Z), we have

Xhat = arg max_X [ln p(Z|X) + ln p(X)]. (15)

Substituting Equations (9) and (11) into Equation (15), the maximum of the posterior probability is given by the minimum of

sum_{i<j} lambdahat_{i,j} [z_{i,j} - (1 + mu_alpha) d_{i,j}]^2 + Lambda sum_{i=1}^{M} ||x_i - xbar_i||^2, (16)

where the terms irrelevant to the unknown locations X are omitted. Note that, since lambda_{i,j} depends on the real distance d_{i,j}, which is determined by the unknown variable X, we have replaced lambda_{i,j} with lambdahat_{i,j} = (lambda_alpha^{-1} z_{i,j}^2 + lambda_beta^{-1})^{-1} under the assumption of small deviations between z_{i,j} and d_{i,j}. Now let us analyze the ratio of lambdahat_{i,j} to lambdahat_{k,l} (z_{i,j}, z_{k,l} in Z):

lambdahat_{i,j} / lambdahat_{k,l} = (lambda_alpha^{-1} z_{k,l}^2 + lambda_beta^{-1}) / (lambda_alpha^{-1} z_{i,j}^2 + lambda_beta^{-1}).

Dividing both the numerator and the denominator by lambda_beta^{-1}, we get

lambdahat_{i,j} / lambdahat_{k,l} = (lambda_alpha^{-1} lambda_beta z_{k,l}^2 + 1) / (lambda_alpha^{-1} lambda_beta z_{i,j}^2 + 1).

In the simulations, we have assigned approximate values to lambda_alpha and lambda_beta, that is, lambda_alpha^{-1} lambda_beta ≈ 1. Under the assumption that z_{i,j} >> 1, which is reasonable for DR-UASNs, the above equation reduces to

lambdahat_{i,j} / lambdahat_{k,l} ≈ z_{k,l}^2 / z_{i,j}^2.

Then we can replace lambdahat_{i,j} with z_{i,j}^{-2} in Equation (16) and apply the equivalent scaling to the second sum term. The objective function can be rewritten as

J(X, mu_alpha) = sum_{i<j} z_{i,j}^{-2} [z_{i,j} - (1 + mu_alpha) d_{i,j}]^2 + delta sum_{i=1}^{M} ||x_i - xbar_i||^2, (17)

where delta = lambda_alpha^{-1} Lambda. This function has two implications. On one hand, in the first sum term, each squared error term has its own weight, which is inversely proportional to the square of the distance measurement. Thus, the location variables X tend to take values that better fit the squared terms with high weights. On the other hand, the parameter delta can be regarded as a penalty factor between the prior knowledge and the likelihood information. Assigning a large value to delta propels the location of each node to lie exactly on its movement circle; on the contrary, if delta is assigned a small value, each node is allowed to deviate from its movement circle to a certain extent, and its location is determined more by the likelihood information.

PSO Localization

The minimization problem of the objective function in Equation (17) has no analytic solution because it is a nonlinear function of the location variable X. Traditional optimization methods such as gradient descent easily fall into local optima and have low convergence speed. In this paper, we resort to the PSO method to solve this minimization problem. It uses a population of particles to represent candidate solutions in the search space, moves these particles towards the local and global best solutions iteratively, and then finds the particle that fits the objective function best. The PSO method can efficiently escape from local optima by considering local and global views simultaneously. Moreover, we propose the bound constraint mechanism to greatly improve the convergence speed. Next, we describe the PSO procedure in detail.

In Equation (17), besides the location variable X, the optimization variables also include mu_alpha. Hence, there are in total 2M + 1 variables, which can be denoted as P = [x_1, y_1, ..., x_M, y_M, mu_alpha]. We assume that there are N_P particles initialized, that is, {P_k}_{k=1}^{N_P}. Each of the particles represents an instance of the variable set. To reduce the search space, we initialize the location coordinates of each particle within the bounds of the corresponding nodes' movement circles and initialize the noise mean as

mu_alpha = mu_alpha^L + rand × (mu_alpha^U - mu_alpha^L), (18)

where mu_alpha^L and mu_alpha^U represent the lower and upper bounds of mu_alpha, respectively, and rand is a random number in the range [0, 1]. Note that mu_alpha^L and mu_alpha^U can be determined based on knowledge of the surveillance area or estimated by running multiple tests between two nodes with known locations. Once the particles are initialized, they move to new positions with a certain speed in the search space. By calculating the value of the objective function, each particle can find its local best position pBest. Then the particles communicate with each other to determine the global best position gBest.
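The weighted objective of Equation (17) can be sketched in a few lines. This is an illustrative reading, not the authors' code; in particular, it interprets xbar_i as the nearest point on node i's movement circle, so that ||x_i - xbar_i|| equals the radial deviation | ||x_i - A_i|| - r_i |, and it assumes a flat parameter vector holding the 2M coordinates plus mu_alpha.

```python
import numpy as np

def map_objective(params, anchors, radii, depths, z, delta):
    """Equation (17) for one cluster of M nodes.

    params  : flat array [x_1, y_1, ..., x_M, y_M, mu_alpha] (2M+1 values)
    anchors : (M, 2) anchor coordinates; radii: (M,) circle radii
    depths  : (M,) known node depths; z: dict {(i, j): measured distance}
    """
    M = anchors.shape[0]
    xy = params[:2 * M].reshape(M, 2)
    mu_alpha = params[-1]
    pos = np.column_stack([xy, depths])  # 3D locations

    # Likelihood term: residuals weighted by z^{-2}.
    cost = sum(z_ij**-2 *
               (z_ij - (1 + mu_alpha) * np.linalg.norm(pos[i] - pos[j]))**2
               for (i, j), z_ij in z.items())

    # Prior term: penalize radial deviation from each movement circle.
    dist_to_circle = np.linalg.norm(xy - anchors, axis=1) - radii
    return cost + delta * np.sum(dist_to_circle**2)
```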
After that, the velocity and the position of each particle are updated as follows:

v_k^{t+1} = omega * v_k^t + c_1 * rand * (pBest_k^t - P_k^t) + c_2 * rand * (gBest^t - P_k^t), (19)

P_k^{t+1} = P_k^t + v_k^{t+1}, (20)

where omega is the inertia weight, c_1 is the cognitive acceleration factor, c_2 is the social acceleration factor, rand is a random number in the range [0, 1], pBest_k^t denotes the best solution found by particle k at iteration t, gBest^t denotes the best solution found by the particle swarm at iteration t, and v_k^t and P_k^t denote the velocity and position of particle k at iteration t, respectively. The parameter omega is critical for keeping a balance between local and global search. In the later phase of the optimization, to avoid the oscillation phenomenon and improve the local search capability, a linearly decreasing inertia weight [39] is commonly adopted to slow the particles over time. The inertia weight is updated as follows:

omega^t = omega_max - (omega_max - omega_min) * t / t_max, (21)

where omega_max and omega_min are the maximum and minimum weights, respectively, and t_max is the maximum iteration number. The optimization repeats the operations in Equations (19)-(21) until a termination condition is satisfied. There are two termination conditions in our method: one is that the relative change of the best objective function value over t_l iterations is less than a predefined threshold rho; the other is that the current iteration has reached the maximum iteration number, that is, t = t_max.

In the above optimization process, there exist two problems that limit the localization accuracy and the convergence speed. First, a cluster with a small M is prone to localization ambiguity. As shown in Figure 2, the three black solid circles are the real locations of nodes 1, 2 and 3, whereas the optimization gives the estimated locations as the three red solid circles, resulting in low localization accuracy. This is mainly because the distance measurements generated by few nodes place little constraint on the nodes' locations. Second, although we have bounded the position of each particle in the initialization phase, the optimizations of many clusters still have low convergence speed, especially in a high-dimensional space (a large M).

To solve the first problem, we propose a novel reference selection mechanism to eliminate the ambiguity in localization. In practice, different clusters may overlap; that is, for two clusters S_1 and S_2, part of their nodes are the same, S_1 ∩ S_2 ≠ ∅. If S_1 is optimized first, the common nodes that have been localized with high accuracy can be regarded as reference nodes to eliminate the localization ambiguity in the optimization process of S_2. Traditional localization schemes mainly use a confidence value to indicate the localization accuracy, computed from the residuals between the distance measurements l_i and the distances from the estimated location (u, v, w) of the unknown node to the locations (u_i, v_i, w_i) of the reference nodes i. However, this confidence mechanism does not hold true in our method. Taking the nodes in Figure 2 as an example, even though the estimated locations marked in red have large deviations from the real locations marked in black, nodes 1, 2 and 3 have high confidence values because the distances based on the estimated locations fit the distance measurements well. Alternatively, we propose to select the reference nodes according to localization stability. This approach is simple, yet effective. Specifically, we define a sliding window whose length T_w is an integer multiple of T, i.e., T_w = B × T.
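Equations (19)-(21) correspond to the standard PSO update rules; a compact sketch of the iteration loop might look as follows. The objective function is abstracted as `fitness`, and the early-stopping test on the relative improvement over t_l iterations is omitted for brevity.

```python
import numpy as np

def pso(fitness, lower, upper, n_particles=200, t_max=400,
        c1=0.5, c2=1.25, w_max=0.9, w_min=0.4, seed=0):
    """Minimal PSO loop implementing Equations (19)-(21).
    lower/upper are per-dimension bounds on the particle positions."""
    rng = np.random.default_rng(seed)
    dim = lower.size
    pos = rng.uniform(lower, upper, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    p_best = pos.copy()
    p_best_val = np.array([fitness(p) for p in pos])
    g_best = p_best[p_best_val.argmin()].copy()

    for t in range(t_max):
        w = w_max - (w_max - w_min) * t / t_max           # Equation (21)
        r1, r2 = rng.random((2, n_particles, 1))
        vel = (w * vel + c1 * r1 * (p_best - pos)          # Equation (19)
               + c2 * r2 * (g_best - pos))
        pos = np.clip(pos + vel, lower, upper)             # Equation (20)
        vals = np.array([fitness(p) for p in pos])
        improved = vals < p_best_val
        p_best[improved], p_best_val[improved] = pos[improved], vals[improved]
        g_best = p_best[p_best_val.argmin()].copy()
    return g_best
```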
A node i can act as a reference node when the following two conditions are satisfied: (1) the node has been localized in every localization period of the sliding window; (2) the change of the estimated locations of the node in any two consecutive localization periods is below the maximum movement distance, ||X_i^n - X_i^{n+1}|| < V_max × T, where X_i^n and X_i^{n+1} denote the estimated locations of the node at two consecutive localization periods and V_max denotes the maximum velocity of the nodes.

Aiming at the second problem, we propose to further confine the particles' positions on the basis of their values initialized in Equation (18). As discussed above, a reference node has strong localization stability because its estimated locations change little within a sliding window. We can then use this characteristic to confine the search space of the reference nodes. Let X_j^m and theta_j^m denote the location and azimuth angle of reference node j at localization period m, respectively. Our approach is to set a tight bound on the location of the reference node at localization period m + 1 and thus speed up the convergence of the optimization. We assume the cable linked to the reference node is always taut. The path that the reference node moves along is actually an arc, and its length should be less than the maximum movement distance V_max × T. Accordingly, the maximum movement angle is Delta_theta_j = V_max × T / r_j, which gives the lower and upper bounds of the azimuth angle as theta_j^{LB} = theta_j^m - Delta_theta_j and theta_j^{UB} = theta_j^m + Delta_theta_j. Hence, for reference node j, we can set the lower bounds of its X- and Y-axis locations as min(x_A^j + r_j cos theta_j^{LB}, x_A^j + r_j cos theta_j^{UB}) and min(y_A^j + r_j sin theta_j^{LB}, y_A^j + r_j sin theta_j^{UB}), and set the upper bounds of its X- and Y-axis locations as max(x_A^j + r_j cos theta_j^{LB}, x_A^j + r_j cos theta_j^{UB}) and max(y_A^j + r_j sin theta_j^{LB}, y_A^j + r_j sin theta_j^{UB}). This tight bound not only improves the convergence speed but also propels the estimated locations of the other nodes towards their real locations.

Performance Evaluation

In this section, we evaluate the performance of the MAP-PSO algorithm using simulations.

Simulation Settings

In our simulations, 100 unknown nodes are deployed in a cubic region of 1000 m × 1000 m × 20 m. MAP-PSO does not need beacon nodes. Each node is linked to a fixed anchor point through a cable of length 20 m. The anchor points of any two neighbor nodes satisfy Equation (1). The node depth is randomly distributed in the range [4, 16] m, which ensures that every node floats underwater and has a proper monitoring coverage. Each node moves at the maximum velocity of 10 m/s, clockwise or anti-clockwise. The precision matrix of the noise of each node's location is given by Lambda = 50 I_2. As for the distance measurements, a node uses two-way packet exchanges with its neighbor nodes and estimates the distances based on Equations (3) and (4). For the energy consumption, we adopt the parameters given in [40]. The packet size is set to L_P = 100 bytes and the acknowledgement size is set to L_A = 20 bytes. To model the LinkQuest UWM4000 underwater modem, the data rate, transmission power and reception power are set as R = 8.5 kbps, P_tx = 7 W and P_rx = 0.8 W, respectively. The communication range of all the nodes is set to 150 m. The mean and precision of the multiplicative noise are set to mu_alpha = 0.02 and lambda_alpha = 60, respectively, and the precision of the additive noise is set to lambda_beta = 100.
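The bound-constraint computation above reduces to a few lines; this sketch mirrors the paper's endpoint min/max over the two bounding azimuth angles, and the anchor, radius and angle values are invented for illustration.

```python
import numpy as np

def reference_bounds(anchor, r, theta_prev, v_max, period):
    """Tight X/Y bounds for a reference node in the next localization
    period: the node travels along an arc of length <= v_max * period,
    so the azimuth can change by at most v_max * period / r radians."""
    d_theta = v_max * period / r
    theta_lb, theta_ub = theta_prev - d_theta, theta_prev + d_theta
    xs = anchor[0] + r * np.cos([theta_lb, theta_ub])
    ys = anchor[1] + r * np.sin([theta_lb, theta_ub])
    return (xs.min(), xs.max()), (ys.min(), ys.max())

(x_lo, x_hi), (y_lo, y_hi) = reference_bounds(
    anchor=np.array([500.0, 500.0]), r=15.0,
    theta_prev=np.pi / 4, v_max=10.0, period=1.0)
```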
Once the distance measurements for all the node pairs in a cluster are collected, the PSO method is used to resolve the nodes' locations. The maximum iteration number is set to 600. In each iteration, we set the number of particles to N_P = 200. Every particle updates itself according to Equations (19) and (20). The inertia weight is updated according to Equation (21), where the maximum and minimum weights are set to omega_max = 0.9 and omega_min = 0.4, respectively. The cognitive and social acceleration factors are set to c_1 = 0.5 and c_2 = 1.25, respectively. The optimization stops when the relative improvement of the best objective function value is less than rho = 10^{-3} over t_l = 50 iterations. Every simulation lasts for 100 s. We set the localization period to T = 1 s and use a sliding window of B = 5 localization periods, i.e., T_w = 5 s. Every simulation is run 50 times to eliminate the impact of randomness.

In the simulations, we consider three performance metrics: localization accuracy, localization coverage and localization time. Localization accuracy is defined as the average error between the estimated locations and the real locations of all the localized nodes, and the standard deviation of the localization error is calculated as

sigma_e = sqrt( (1 / N_l) sum_{i=1}^{N_l} (e_i - ebar)^2 ),

where N_l is the number of localized nodes, e_i denotes the localization error of the i-th localized node and ebar denotes the average error of all the localized nodes. Localization coverage is defined as the proportion of localized nodes among all nodes. Localization time is defined as the period that starts when all the distance measurements for a cluster have been collected and ends when the locations of all the cluster nodes are obtained. Based on the above settings, MAP-PSO and AFLA are implemented on the MATLAB 2014a platform, on a computer with an Intel Core i3-6100 CPU at 3.7 GHz and 8.0 GB of RAM.

Localization Performance under Different Parameters

The performance of the MAP-PSO algorithm depends on many important parameters, including the penalty factor, the maximum iteration number, the particle number and the minimum cluster node number. We aim to determine the optimal values of these parameters by simulations.

Impacts of the Penalty Factor

The penalty factor has an important impact on the localization accuracy. As discussed above, assigning a proper value to the penalty factor strikes a balance between the prior knowledge and the likelihood information. Therefore, it is necessary to determine the value of the penalty factor by simulations. Figure 3 shows the relationship between the localization error and the penalty factor. We can observe that there exists a critical value on the curve at which the algorithm has the minimum localization error and standard deviation. This implies that the algorithm has strong robustness at this value. When the penalty factor is below this value, the algorithm overfits the noisy distance measurements and underfits the prior mobility pattern. On the contrary, if the penalty factor is above this value, the algorithm underfits the noisy distance measurements and overfits the prior mobility pattern. In both cases, the algorithm has a larger localization error and standard deviation.

Impacts of the Maximum Iteration Number

The maximum iteration number is another important factor that impacts the localization performance.
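The two error statistics above amount to a mean and a standard deviation over the localized nodes. A minimal computation, assuming the estimated and true positions are available as arrays (the coordinates below are placeholders):

```python
import numpy as np

def localization_stats(estimated, true):
    """Mean localization error and its standard deviation over all
    localized nodes; inputs are (N, 3) arrays of positions in meters."""
    errors = np.linalg.norm(estimated - true, axis=1)  # e_i per node
    return errors.mean(), errors.std()

est = np.array([[10.0, 5.0, 8.0], [40.0, 42.0, 7.0]])
tru = np.array([[11.0, 5.5, 8.0], [38.5, 41.0, 7.0]])
mean_err, std_err = localization_stats(est, tru)
```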
On the one hand, if the maximum iteration number is set too small, the optimization may stop too early to converge to the optimal solution, resulting in low localization accuracy; on the other hand, although setting the maximum iteration number to a large value helps to improve the localization accuracy, the optimization then needs more computational resources, which is infeasible for resource-constrained UASNs. Choosing the penalty factor as 0.56 according to Figure 3 and changing the maximum iteration number from 50 to 600, the localization error and time under different maximum iteration numbers are shown in Figure 4. We can see that, at first, the localization error decreases rapidly and the localization time increases rapidly with the increase of the maximum iteration number. When the maximum iteration number increases to some point (e.g., 400 in Figure 4), both the localization error and the localization time gradually stabilize. This is because the optimizations of most nodes reach convergence in fewer than 400 iterations.

Impacts of the Particle Number

The number of particles is a critical parameter that determines the search ability of PSO localization. In theory, the search ability can be improved by simply increasing the particle number. However, a large number of particles have to be updated according to Equations (19) and (20), which leads to high computational cost. Hence, it is necessary to find a critical point at which a balance between the localization error and the localization time can be reached. Choosing the penalty factor and the maximum iteration number as 0.56 and 400, respectively, the simulation results using 50 to 400 particles are shown in Figure 5. From the figure, we have two observations: on the one hand, the localization time increases linearly with the particle number; on the other hand, the localization error decreases rapidly with the increase of the particle number at the beginning, but with a further increase of the particle number (e.g., greater than 200 in Figure 5), the localization accuracy no longer shows clear improvement. Therefore, it is reasonable to choose the particle number as 200, which makes a tradeoff between the localization error and the computational cost.

Impacts of the Minimum Cluster Node Number

In the above simulations, we ignored the localization coverage because it is rarely affected by the penalty factor, the iteration number and the particle number. We now evaluate the performance under different minimum cluster node numbers. This parameter means that a cluster cannot be localized if the number of its nodes is less than the minimum cluster node number. Figure 6 illustrates the results when the value of this parameter changes from 3 to 7. It is clear that when the minimum cluster node number is set small, the algorithm has low localization accuracy and high localization coverage. This is because most clusters can be localized when the number of their nodes exceeds the minimum cluster node number. However, as stated before, localization ambiguity exists because the polygon constructed by a small number of cluster nodes places little constraint on the nodes' locations. As the minimum cluster node number grows, both the localization error and the localization coverage decrease rapidly. This is mainly because, with the increase of the minimum cluster node number, the localization ambiguity is eliminated gradually while fewer clusters satisfy the localization condition.
Further, we can observe that there exists a turning point at the minimum cluster node number of 4. Below this value, the localization coverage does not decrease much while the localization accuracy is greatly improved; above this value, the localization coverage decreases sharply while the localization accuracy shows little improvement. In addition, the minimum cluster node number is closely related to the network density. As the network density grows, its value can be set relatively large. In this way, the algorithm can achieve high localization accuracy and coverage simultaneously.

Localization Results

Choosing the optimal values of the penalty factor, the maximum iteration number, the particle number and the minimum cluster node number, we set the node density to 8 by adjusting the communication range of the nodes to 160 m and evaluate the localization result of each node. Figure 7 shows the localization error of all the unknown nodes. The average localization error of all the unknown nodes is 2.33 m. The error of 59% of the unknown nodes is less than the average error, and the error of 93% of the unknown nodes is less than 3.5 m, which means that most unknown nodes have high localization accuracy compared with the communication range. Furthermore, we note that the localization errors of some unknown nodes (such as nodes 25, 52, 88 and 78) have large values, which means the localization accuracy of these nodes is not very high. This is mainly because these nodes were localized when the number of nodes in their clusters was small, so the localization ambiguity was not eliminated completely. On the whole, the localization results indicate that the localization accuracy is satisfactory and the localization algorithm is robust to noisy underwater environments.

Performance Comparison with AFLA

In this section, we compare the localization accuracy and time of MAP-PSO with those of AFLA under different parameters, including the measurement noise level and the node density. The localization coverage is not evaluated because both algorithms have the same localization coverage under the two parameters. According to the above simulations, we choose the penalty factor, the maximum iteration number, the particle number and the minimum cluster node number as 0.56, 400, 200 and 4, respectively. The unique parameters of AFLA are set according to [21].

Impacts of the Measurement Noise Level

We first compare the performance of both algorithms under different measurement noise levels. As stated before, the measurement noise is composed of the additive and multiplicative noises. Since the additive noise is assumed to follow a Gaussian distribution with zero mean, it is reasonable to use the mean of the multiplicative noise to represent the measurement noise level. We change the mean of the multiplicative noise from 0.01 to 0.06. The results for localization accuracy and localization time are illustrated in Figure 8. From Figure 8a, we can see that the localization errors of MAP-PSO and AFLA both increase with the measurement noise level. In comparison, MAP-PSO has higher localization accuracy and lower standard deviation. Meanwhile, as the measurement noise level grows, the standard deviation of AFLA increases faster than that of MAP-PSO, which implies that MAP-PSO is more robust to complex underwater noises. This is mainly because MAP-PSO fits the noisy measurements and the prior mobility pattern in a weighted way, while AFLA treats the two types of information equally.
In particular, when the measurement noise is at a high level, it is difficult for AFLA to find an exact solution that fits the two types of information well. Figure 8b shows the relationship between the localization time and the measurement noise level. It is obvious that MAP-PSO can complete the localization process in a shorter time and has a lower standard deviation compared with AFLA. Furthermore, the localization time of AFLA increases rapidly with the measurement noise, while that of MAP-PSO shows no obvious change. This is because AFLA has to directly search the whole solution space when the measurement noise is at a high level.

Impacts of the Node Density

We next compare the performance of both algorithms under different node densities. Node density is defined as the expected number of nodes in a node's neighborhood. We change the node density from 4 to 12 by adjusting the communication range of the nodes. The results for localization accuracy, localization time and energy consumption are illustrated in Figure 9. As shown in Figure 9a, MAP-PSO achieves higher localization accuracy and lower standard deviation compared with AFLA. As the node density grows, the localization error of MAP-PSO decreases slowly, whereas that of AFLA shows no obvious improvement. This is mainly because the number of nodes in a cluster increases with the node density, and thus more distance measurements can be generated. These measurements place strong constraints on the nodes' locations, which can significantly relieve the localization ambiguity problem. Figure 9b shows the relationship between the localization time and the node density. We can see that MAP-PSO takes less time for localization compared with AFLA. Besides, the localization time of MAP-PSO has the lower standard deviation, which indicates that each node has a similar computational cost in our algorithm; this can efficiently prolong the network lifetime. In addition, the localization time of MAP-PSO increases monotonically with the node density. This is because, as the node density increases, the number of cluster nodes increases and the location variable becomes high-dimensional; with a limited number of particles, searching for the best solution in a high-dimensional space may need more iterations.

Besides the localization accuracy and time, we evaluate the energy consumption under different node densities. For simplicity, we follow LDSN [22] and let an acknowledgement reception consume one unit of energy. According to the values of the parameters L_P, L_A, P_tx and P_rx, we set a ranging packet reception to consume 5 units of energy, an acknowledgement transmission to consume 9 units of energy and a ranging packet transmission to consume 45 units of energy, respectively. As illustrated in Figure 9c, the energy consumption of the two algorithms increases with the node density. When the node density is relatively small, MAP-PSO and AFLA have similar energy consumption. This is because most clusters in MAP-PSO consist of three or four nodes in this case. As the node density grows, AFLA has higher energy consumption than MAP-PSO. The reason may be that AFLA forms more clusters because it adopts three-node cluster localization.

Conclusions

In this paper, we considered the localization problem in UASNs where the nodes move permanently in a restricted manner and the measurement noises vary with the distances.
We introduced a localization algorithm that uses the prior knowledge provided by the nodes' mobility patterns and the likelihood information offered by the distance measurements. Under the Bayesian framework, the algorithm fuses the two types of information in a weighted way. In the localization phase, the algorithm fits the components of the localization objective function according to their weights. To improve the localization accuracy and convergence speed, we further proposed novel reference selection and bound constraint mechanisms. We evaluated the localization performance of our algorithm under its different tunable parameters, and the simulation results suggest proper values for these parameters. We also compared the localization performance with the benchmark AFLA algorithm. The simulation results show that our algorithm requires less localization time and energy consumption while achieving higher localization accuracy. In the future, our work will focus on two aspects: on one hand, considering that depth measurements may be obtained with low accuracy from a cheap pressure sensor due to variations of the atmosphere and tides, we will study an accurate localization algorithm with coarse depth measurements; on the other hand, we will extend the proposed method to improve the localization performance by using the temporal dependency between a node's locations at consecutive time instants.

Conflicts of Interest: The authors declare no conflict of interest.
Remediation of Heavy Metal (Cu, Pb) Contaminated Fine Soil Using Stabilization with Limestone and Livestock Bone Powder

Soil environments contaminated with heavy metals by typhoon flooding require immediate remediation. High-pressure soil washing using water can be a viable short-term solution for cleaning soil contaminated with heavy metals. However, high-pressure soil washing generates heavy-metal-contaminated fine soil and wastewater. This contaminated fine soil cannot be reused without proper treatment because of the high levels of heavy metal contamination. Stabilization was used for immobilizing the heavy metals (Cu, Pb) in the contaminated fine soil. The stabilizing agents included two types of limestone (Ca-LS and Mg-LS) and livestock bone powder (LSBP). The Ca-LS, Mg-LS and LSBP were applied to the contaminated fine soil at dosages in the range of 2 wt%~10 wt%. Two different particle sizes (-#10 vs. -#20 mesh) and curing times (1 week vs. 4 weeks) were used to compare the effectiveness of the stabilization. Extractions using 0.1 N HCl were conducted to evaluate the stabilization effectiveness. Heavy metal leachability decreased significantly with higher Ca-LS and LSBP dosages. The LSBP treatment was more effective than the Ca-LS and Mg-LS treatments, and the Mg-LS showed the poorest performance. The highest degree of immobilization was attained using 10 wt% LSBP (-#20 mesh), resulting in an approximate leachability reduction of 99% for Pb and 92% for Cu. The -#20 mesh material and 4 weeks of curing were more effective than the -#10 mesh material and 1 week of curing, respectively. The SEM-EDX results showed that metal precipitates and pyromorphite-like phases could be responsible for the effective heavy metal immobilization. This study suggests that Ca-LS and LSBP used at an optimum dosage can be effective stabilizing agents for immobilizing Cu and Pb in contaminated fine soils.

Introduction

The contamination of soil with heavy metals such as copper (Cu) and lead (Pb) may be caused by a variety of anthropogenic activities (e.g., mining, smelting) or natural disasters. Copper (Cu) and lead (Pb) are known to be very toxic elements to humans. Specifically, Cu is considered an aquatic toxin [1], and chronic exposure to it can harm the liver and kidneys [2]. Lead (Pb) exposure can damage the brain, red blood cells, blood vessels, kidneys, and the nervous system [3,4]. Therefore, soil contaminated with high levels of heavy metals requires remedial action. The use of high-pressure soil washing has been reported in a previous study for a soil contaminated with heavy metals (Cu, Pb, Zn) [5]. Removal rates for the heavy metals (Cu, Pb, Zn) were reported for the optimal operation of the high-pressure soil washing device. Soil washing at high pressure was designed for emergency response situations in which soil contamination occurs from accidental wastewater spills caused by natural disasters such as typhoons or flooding.
In high-pressure soil washing, cavitation flow causes the separation of dispersed soil aggregates into a fine soil and a wash water (wastewater) stream. The separated fine soil contains high levels of heavy metals that preclude disposal or the reuse of this soil "as is" due to possible heavy metal leaching upon exposure to severe environments such as very low pH conditions. Therefore, this heavy metal contaminated soil should be properly remediated in order to prevent severe heavy metal leaching. Among the various remediation technologies (i.e., electrokinetics, phytoremediation, stabilization, etc.), the stabilization process is selected for its cost effectiveness, convenience, and rapid treatment timeframes. In the past, the stabilization process was utilized widely to remediate heavy metal contaminated soil using industrial products and/or waste materials (e.g., Portland cement, cement kiln dust, fly ash) [6][7][8][9]. Mahedi et al. [6] used cement-activated fly ash and slag for stabilizing Al, Cu, Fe, and Zn in contaminated soil to evaluate the leaching behavior using the toxicity characteristic leaching procedure (TCLP) test. Accordingly, the leachability of Al, Cu, and Zn followed amphoteric leaching behavior, where the concentrations increased in both acidic and basic conditions. Ouhadi et al. [7] used cement stabilization/solidification to remediate heavy-metal-contaminated clay. They reported that a significant reduction in TCLP Pb concentration was attained with the cement treatment [7]. Zha et al. [8] also used cement and fly ash to stabilize/solidify heavy metal contaminated soil. They reported that treatment resulted in an increase in the unconfined compressive strength (UCS) and a decrease in the leached ion concentration. Currently, CaCO3-based natural waste resources such as eggshell, starfish, and oyster shell are used broadly to immobilize heavy metals in contaminated soil and have been proven to be effective in reducing heavy metal leachability [10][11][12][13][14][15][16][17][18]. Specifically, Torres-Quiroz et al. [10] used oyster shell powder, zeolite, and red mud as stabilizing agents for toxic metal contaminated soil. They reported that oyster shell powder was the best low-cost adsorbent material to stabilize toxic metals in contaminated soils [10]. Zheng et al. [11] used cow bone meal and oyster shell meal to immobilize Cd, Pb, Cu, and Zn in contaminated soil. They reported that cow bone meal and oyster shell meal were effective agents for the remediation of heavy metal contaminated soils. Moreover, Ahmad et al. [12] used eggshell and calcined eggshell to immobilize Pb in contaminated military shooting range soil. They reported that both eggshell and calcined eggshell treatments reduced the exchangeable Pb fraction to about 1% of the total Pb in the soil. Ok et al. [12] also used eggshell as the CaCO3 source in order to immobilize the Cd and Pb in the contaminated soil. They suggested that eggshell waste could be used as an alternative CaCO3 source to immobilize heavy metals in soils. Therefore, CaCO3-based waste has been successfully demonstrated as an effective stabilizing agent. Thus, limestone was used in this study as the main CaCO3 material to immobilize heavy metals. Specifically, two types of limestone and livestock bone, available on the market at low cost, were used to treat the heavy metal (Cu, Pb) contaminated fine soils. Moreover, two types of limestone (high Ca limestone vs.
high Mg limestone) were used and compared for their stabilization effectiveness. Phosphate-rich materials are known to be effective for immobilizing heavy metals in contaminated soils [19][20][21][22][23][24][25][26]. Moreover, in a previous study, high-phosphorus biochar derived from soybean stover was effective for the immobilization of Pb in contaminated soil [22]. Ren et al. [24] also reported that phosphate-induced stabilization was very effective for Pb contaminated soil and that the immobilization rate could be significantly increased depending on the pH of the contaminated soil for other heavy metals such as Zn and Cd [24]. Moreover, Zhang et al. [21] showed that phosphorus could decrease Cu and Cd leachability by two to three times in paddy soil with phosphorus-modified biochar made by pyrolysis using biomass feedstocks. Andrunik et al. [20] reported that upon phosphate treatment of heavy metal (Pb, Cd, Zn) contaminated soil, new stable mineral substances formed, causing a reduction in leachability as measured by the toxicity characteristic leaching procedure (TCLP). Hence, phosphate-rich livestock bone powder was used in this study as a renewable waste material alternative to limestone. To establish the influence of particle size, -#10 and -#20 mesh materials were added to the contaminated soil. Moreover, the curing period effects on heavy metal immobilization were also investigated between 1 week and 4 weeks. The objective of the study reported herein was to assess the feasibility of using limestone and livestock bone powder as stabilizing agents for immobilizing heavy metals (Cu, Pb) in contaminated fine soil. The stabilization effectiveness was evaluated with 0.1 N HCl extraction methods. X-ray powder diffraction (XRPD) was employed to assess the mineralogical makeup of the soil, while the stabilization mechanism was studied by scanning electron microscopy-energy dispersive X-ray spectroscopy (SEM-EDX). Heavy Metal Contaminated Soil Upon treatment with high-pressure soil washing, the soil samples contaminated with heavy metals were collected. As reported in a previous study, cavitation flow induced by high-pressure soil washing caused dispersion of the soil aggregates [5]. The resulting fine solid suspension and wastewater were discharged from one of the two outlets [5]. The copper and lead levels in the fine soil were about 668 and 515 mg/kg, respectively. These levels exceeded the Soil Contamination Warning Standards of 500 mg/kg for Cu and 400 mg/kg for Pb (Region 2). The collected contaminated fine soil was air-dried, thoroughly mixed, and used in the stabilization process. Quality characteristics of the contaminated soil, including mineralogical information, are listed in Table 1. The chemical composition of the contaminated soil and stabilizing agents determined by X-ray fluorescence (XRF, ZSX100e, Rigaku, Japan) is presented in Table 2. Stabilization Agents Two types of commercially available limestone were used in this study. One is Ca-rich (98% CaO) limestone (Ca-LS, -#10 mesh materials) and the other is Mg-rich (28% MgO) limestone (Mg-LS, -#10 mesh materials). Livestock bone powder (LSBP, -#10 mesh materials) was also used as an alternative renewable waste material to non-renewable limestone. The chemical makeup of the stabilizing agents (Ca-LS, Mg-LS, and LSBP) quantified by XRF is presented in Table 2.
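As a quick check of the contamination levels quoted above against the warning standards, a few lines of arithmetic suffice; the concentrations and thresholds below simply repeat the values stated in the text.

SOIL_MG_KG = {"Cu": 668.0, "Pb": 515.0}        # measured in the fine soil (stated above)
WARNING_MG_KG = {"Cu": 500.0, "Pb": 400.0}     # Region 2 warning standards (stated above)

for metal, level in SOIL_MG_KG.items():
    limit = WARNING_MG_KG[metal]
    print(f"{metal}: {level:.0f} mg/kg = {level / limit:.2f}x the {limit:.0f} mg/kg standard")

Both metals exceed their respective standards by roughly 30%, which is what triggers the remedial treatment described next.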
Stabilization Experiments The stabilizing agents (Ca-LS, Mg-LS, and LSBP) were added to the contaminated soil at 2~10 wt%. A control treatment without any stabilizing agents was also prepared to serve as a no-treatment baseline for stabilization effectiveness. A mixture using DI water prepared at a 20:1 liquid to solid ratio was sufficient to ensure full hydration. After stabilization, all of the samples were allowed to cure for durations of 1 week and 4 weeks in sealed plastic containers under normal conditions (20 °C, 25% humidity). The effectiveness of heavy metal stabilization was assessed using a rigorous leaching test with a 0.1 N HCl extraction agent. The experimental conditions for all stabilizing agents (Ca-LS, Mg-LS, and LSBP) are shown in Table 3. A flowchart of the stabilization process using Ca-LS, Mg-LS, and LSBP as stabilizing agents is shown in Figure 1. Mineralogical Characterization The mineralogical composition of the contaminated soil and stabilizing agents (Ca-LS, Mg-LS, and LSBP) was determined by X-ray powder diffraction (XRPD). Prior to analysis, the samples were pulverized to clear the -#200 mesh (0.075 mm). An XRPD diffractometer (X'Pert PRO MPD, PANalytical, Almelo, The Netherlands) was used to collect the step-scanned diffraction patterns. The instrument was equipped with a beam graphite monochromator with Cu radiation operated at 40 kV and 40 mA. Diffraction patterns were collected at a 2θ range of 5-60°, a step size of 0.02°, and a count time of 3 s/step. Mineral characterizations were accomplished by using Jade software v. 7.1 [29] supplemented with the PDF-2 reference database [30]. Stabilization Mechanism Analytics The stabilization mechanism was assessed by scanning electron microscopy-energy dispersive X-ray spectroscopy (SEM-EDX) on the treated samples exhibiting the lowest heavy metal leachability, namely, the 10 wt% Ca-LS (-#20 mesh material) and 10 wt% LSBP (-#20 mesh material) samples. Prior to SEM testing (Hitachi S-4800 SEM, Horiba EMAX EDX system), the collected samples were secured on a plate with double-sided Pt-coated carbon tape. Physicochemical Testing The pH tests for the soil and stabilizing agent samples were performed at a liquid to solid (L:S) ratio of 5:1 (mass basis) according to the Korean Standard Test (KST) method [31]. The National Academy of Agricultural Science (NAAS) method was used to measure the electrical conductivity (EC) [32]. An aqua regia extraction agent was used for the total heavy metal determinations. The dissolved heavy metal concentration was tested using an inductively coupled plasma-optical emission spectrometer (ICP-OES, Optima 8300DV, PerkinElmer, Waltham, MA, USA). All ICP-OES determinations were reported as the mean values of triplicate samples (less than 10% measurement error). For QA/QC purposes, the devised protocol entailed three quality-control standards and spiking with a standard solution for every 10 samples analyzed (recovery rate >95%).
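As a rough illustration of how the 0.1 N HCl extraction data feed into the effectiveness metric reported later, the short sketch below computes the percent reduction in leachable metal relative to the untreated control; the triplicate ICP-OES readings are hypothetical placeholders, not values from this study.

def leachability_reduction(control_mg_l, treated_mg_l):
    """Percent reduction in leached metal versus the no-treatment control."""
    return 100.0 * (control_mg_l - treated_mg_l) / control_mg_l

# Hypothetical triplicate ICP-OES readings (mg/L) for one treatment condition.
control = [52.1, 50.8, 51.5]   # untreated contaminated soil extract
treated = [4.3, 4.1, 4.6]      # e.g., 10 wt% LSBP, -#20 mesh, 4 weeks of curing

mean = lambda xs: sum(xs) / len(xs)
print(f"leachability reduction: {leachability_reduction(mean(control), mean(treated)):.1f} %")

Averaging triplicates before taking the ratio mirrors how the reported ~92-99% reductions would be obtained from repeated extractions.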
(PDF# 14-0164) were the main mineral phases identified in the contaminated soil. Calcite (CaCO3, PDF# 47-1743) was identified as the main phase in Ca-LS, while dolomite (CaMg(CO3)2, PDF# 26-0426) was observed in the Mg-LS sample. This indicated that Ca-LS and Mg-LS had different main phases based on their elemental content. The main phase of LSBP was hydroxylapatite (Ca5(PO4)3(OH), PDF# 09-0432), and this was most probably due to the high phosphorus content. Stabilization Treatment Effectiveness The heavy metal (Cu, Pb) leachability results with an extraction solution of 0.1 N HCl for the Ca-LS (-#10 mesh, -#20 mesh), Mg-LS (-#10 mesh, -#20 mesh), and LSBP (-#10 mesh, -#20 mesh) treatments upon 7 days and 28 days of curing are shown in Figures 4-11. Evidently, an increasing stabilizing agent dosage results in decreased heavy metal leachability. Upon 7 days of curing, the Cu and Pb leachability was significantly decreased with an increasing dosage for the -#10 mesh Ca-LS treatment sample (Figure 4). The Mg-LS treatment resulted in low immobilization of Cu and Pb. This indicates that the pH increase in the soil system upon the addition of Mg-LS, compared to the Ca-LS treatment, was not sufficient to provide effective immobilization of heavy metals. In the past, CaCO3-based stabilization was utilized in numerous research papers [10][11][12][13][14][15][16][17][18]. Various CaCO3-based stabilization agents have been previously used, including eggshell, oyster shell, limestone, starfish, etc. It has been reported that the exchangeable fractions of heavy metals could be decreased and the residual fraction increased upon eggshell treatment [15]. Upon the addition of CaCO3 to contaminated soil, the soil pH increased. This can be strongly associated with an increase in metal sorption on the calcite surface of the soil [33][34][35]. When the soil pH increased, the surface negative charge increased. This could result in an increase in cation adsorption [34,36]. It has been reported that Pb adsorption on the calcite surface is enhanced at high pH conditions [37]. According to Naidu et al.
[34], hydroxy species of metal cations could be formed at high pH, which would result in a higher affinity for adsorption sites compared to the metal cations alone [38]. Moreover, the precipitation of metal as metal hydroxides can occur when the pH is high. Ok et al. [39] have reported that the dissolution of CaCO3 in water at high pH conditions can be described as follows: CaCO3 + H2O → Ca2+ + HCO3- + OH-. This alkaline condition can expedite metal precipitation as metal hydroxides as follows: M2+ + 2OH- → M(OH)2↓, where M denotes a divalent metal [39]. Ok et al. [39] and Torres-Quiroz et al. [10] reported significant heavy metal reduction upon natural oyster shell application, where CaCO3 was the main phase in the treatment. Moreover, Torres-Quiroz et al. [10] reported that the preference of sorption with oyster shell was in the following order: Pb2+ > Cu2+ > Zn2+ > Cd2+ > Ni2+ for Pb and Cu contaminated silty sand and sandy soil samples. Similar results were obtained upon Ca-LS treatment in this study. Therefore, a combination of the aforementioned mechanisms may be accountable for the high degree of heavy metal immobilization observed in the Ca-LS treatments. The Ca-LS -#20 mesh particles caused a higher Cu and Pb leachability reduction than the -#10 mesh materials. This was most probably due to the larger surface area of the finer particles, which is strongly associated with high reactivity. Upon 28 days of curing, Cu and Pb leachability was found to be lower compared to the samples cured for 7 days for the -#10 mesh Ca-LS treatment. This suggests that the curing duration is also important in reducing heavy metal leachability. Therefore, an optimal curing period should be applied to the contaminated soil during the design phase of the stabilization process. Even upon curing for 28 days, the Mg-LS treatment was not effective in significantly reducing the heavy metal leachability compared to the Ca-LS treatment. This indicates that the high Mg content present in the Mg-LS sample did not play an effective role in the stabilization process. Moreover, the extraction pH values for the Mg-LS samples were very low, indicating a low buffering capacity for acid neutralization. Specifically, this indicates that the 50.9% CaO content in the Mg-LS sample, as shown by the XRF analysis, may not be sufficient to endure the strong acid leaching conditions (0.1 N HCl extraction), and therefore high levels of heavy metals could be leached from the stabilized contaminated soil. Similar to the 7 days of curing, the -#20 mesh stabilizers showed better immobilization compared to the -#10 mesh materials, most probably due to their larger surface area. The influence of particle size on the immobilization of heavy metals was more pronounced than the effect exhibited by the curing duration. The lowest heavy metal leachability was obtained for the -#20 mesh materials for both the Ca-LS and Mg-LS treatments. This indicates that increased surface area is important for reducing heavy metal leachability. Upon 7 days of curing for the -#10 mesh materials, the LSBP treatment outperformed the Ca-LS and Mg-LS treatments in reducing the Cu and Pb leachability. This indicates that the high P content (40.58% as P2O5) plays an important role in significantly reducing the heavy metal leachability. Similar to the -#10 mesh materials, the LSBP treatment with the -#20 mesh materials showed better immobilization of the heavy metals compared to the Ca-LS and Mg-LS treatments with -#20 mesh materials. This suggests that the reduced particle size was more reactive in reducing the heavy metal leachability.
Similar to the 7 days of curing for the -#10 mesh materials, the LSBP treatment upon 28 days of curing outperformed the Ca-LS and Mg-LS treatments in terms of leachability when the dosage was higher than 4 wt%. Moreover, the trend of increased stabilization at higher treatment dosages for the LSBP treatment was more pronounced in stabilizing Pb rather than Cu. Specifically, a Pb leachability reduction of higher than 98% was attained with the 6 wt% LSBP treatment. It has been reported that phosphate-induced stabilization is highly effective for reducing Pb and Cu leachability [22,40]. The significant reduction in Pb leachability was most probably due to the formation of hydroxypyromorphite [Pb5(PO4)3OH, Ksp = 10^(-76.8)] [40]. Since hydroxypyromorphite formed during stabilization, the soluble Pb could not be leached from the contaminated soil by virtue of its very low solubility [19]. In the XRPD analysis of LSBP, the main high-intensity phase in the LSBP was hydroxylapatite (Ca5(PO4)3(OH)). This indicates that an ample amount of a P source could have contributed to the formation of pyromorphite-like and/or metal phosphate compounds. Moreover, reduced Cu leachability may be strongly associated with the formation of Cu(H2PO4)2, Cu3(PO4)2, and CuP4O11, which were identified in the phosphorus-modified biochar treatment of Cu contaminated paddy soil [41]. Zhang et al. [41] reported a two to threefold increase in Cu immobilization efficiency upon the 10 wt% phosphorus-modified biochar treatment. Xu et al. [42] also reported that metal phosphates such as M3(PO4)2 (M = Cd, Pb, Cu, and Zn) could be the compounds responsible for the reduction in the soluble content of the heavy metals. Moon et al. [43] also reported that pyromorphite-like phases and the products of pozzolanic reactions (i.e., CSH, CAH) were responsible for the effective stabilization achieved by calcined oyster shell and waste cow bone treatment of firing range soil. Similar to the Ca-LS and Mg-LS treatments, for LSBP, the treatment particle size was more influential in immobilizing heavy metals than the curing duration. The lowest heavy metal leachability value was attained from the -#20 mesh materials in the LSBP treatment. Overall, treatment with Ca-LS and LSBP generated efficient heavy metal immobilization. More specifically, for a curing duration of 28 days, the heavy metal immobilization effectiveness was ranked in the following order: LSBP (-#20 mesh) > Ca-LS (-#20 mesh) > LSBP (-#10 mesh) > Ca-LS (-#10 mesh) > Mg-LS (-#20 mesh) > Mg-LS (-#10 mesh). This indicates that the Ca and P content of the stabilizing agent, along with the particle size, are the prevailing factors leading to more effective heavy metal immobilization. Moreover, for practical field applications, the level of heavy metal contamination is also important in determining the dosage of the stabilizing agents. Furthermore, if the treatment results with -#10 mesh materials show compliance with the regulatory limit, a particle size reduction to -#20 mesh materials would not need to be considered, for reasons of economic feasibility.
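The two solubility arguments above (alkalinity-driven hydroxide precipitation for Ca-LS, pyromorphite formation for LSBP) can be illustrated numerically. In the sketch below, the hydroxide Ksp is a common textbook approximation, the hydroxypyromorphite Ksp is the value cited above, and the assumed pH and free-phosphate activity are arbitrary illustration values; a real system would need full speciation and activity corrections.

# Equilibrium upper bounds on dissolved Pb under the two mechanisms discussed above.
KSP_PB_OH2 = 1.4e-20        # Pb(OH)2 = Pb2+ + 2OH-  (approximate textbook value)
KSP_HPY = 10 ** -76.8       # Pb5(PO4)3OH = 5 Pb2+ + 3 PO4(3-) + OH-  (value cited above)

def oh_from_ph(ph):
    return 10 ** (ph - 14.0)                      # Kw = 1e-14 at 25 °C

def pb_bound_hydroxide(ph):
    """[Pb2+] (mol/L) tolerated before Pb(OH)2 precipitates."""
    return KSP_PB_OH2 / oh_from_ph(ph) ** 2

def pb_bound_pyromorphite(po4_molar, ph):
    """[Pb2+] (mol/L) in equilibrium with hydroxypyromorphite."""
    return (KSP_HPY / (po4_molar ** 3 * oh_from_ph(ph))) ** (1 / 5)

for ph in (6.0, 8.0, 10.0):
    print(f"pH {ph:>4}: hydroxide bound {pb_bound_hydroxide(ph):.2e} M, "
          f"pyromorphite bound {pb_bound_pyromorphite(1e-6, ph):.2e} M")

Even at circumneutral pH, the pyromorphite-controlled bound sits orders of magnitude below the hydroxide-controlled one, consistent with the strong Pb immobilization observed for the phosphate-rich LSBP.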
SEM-EDX Results The results of the SEM-EDX analyses are presented in Figure 12a,b for the lowest heavy metal leachability samples (10 wt% Ca-LS -#20 mesh and 10 wt% LSBP -#20 mesh). For the Ca-LS sample, Pb and Cu were observed in the SEM-EDX image along with Ca, Al, Si, and O. This finding establishes that heavy metal immobilization is likely to be related to the formation of metal precipitates [39], which can be adsorbed on the soil particles. The net negative charge could be increased when the soil pH is high. Ahmad et al. [44] confirmed the theoretical formation of insoluble Pb phases with Visual MINTEQ, which was responsible for effective immobilization at high pH conditions. For the 10 wt% LSBP sample, Pb and Cu were observed along with Ca, Al, Si, P, and O. This indicates that pyromorphite-like compounds and/or metal phosphate formations [22,[40][41][42] could be responsible for effective heavy metal immobilization. Conclusions Two types of limestone (Ca-LS and Mg-LS) and livestock bone powder (LSBP) were used as stabilizing agents for immobilizing Cu and Pb in contaminated fine soil collected after treatment with high-pressure soil washing. The treatment dosage of stabilizing agents added to the contaminated soil varied in the range of 2 wt% to 10 wt%. The effects of curing duration (1 week vs. 4 weeks) and particle size (-#10 mesh vs. -#20 mesh) on stabilization were also studied. Following the curing period, the effectiveness of heavy metal stabilization in the treated samples was evaluated after extraction with 0.1 N HCl. The stabilization results indicated that a notable reduction in Cu and Pb leachability was achieved upon the Ca-LS and LSBP treatments. The -#20 mesh materials outperformed the -#10 mesh materials. Moreover, a curing period of 4 weeks was more effective in reducing the heavy metal leachability than the 1 week curing period. Finally, the particle size was more influential than the curing duration for heavy metal immobilization. The results suggest that the effectiveness of heavy metal immobilization upon 28 days of curing is ranked in the following order: LSBP (-#20 mesh) > Ca-LS (-#20 mesh) > LSBP (-#10 mesh) > Ca-LS (-#10 mesh) > Mg-LS (-#20 mesh) > Mg-LS (-#10 mesh). The SEM-EDX results showed that cation adsorption/metal precipitation may be the immobilization mechanism associated with the Ca-LS treatment. Moreover, pyromorphite-like/metal phosphate phases are probably associated with the high degree of heavy metal immobilization. Future studies will be conducted with heavy metal contaminated field soil obtained from emergency natural disaster recovery operations caused by typhoons or flooding, in order to validate the results obtained in this study. Figure 1. Flowchart of the stabilization process using Ca-LS, Mg-LS, and LSBP. Figure 2. XRPD pattern of the heavy metal contaminated fine soil. Table 1. Characteristics of the heavy metal contaminated soil. Table 3. Treatment matrix for the contaminated fine soil.
v3-fos-license
2018-04-03T02:21:49.425Z
2016-02-17T00:00:00.000
17558136
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=21523&path[]=7467", "pdf_hash": "a689de55a1c4202805ca2ce6190697dfa7d40c95", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44743", "s2fieldsofstudy": [ "Medicine" ], "sha1": "a689de55a1c4202805ca2ce6190697dfa7d40c95", "year": 2016 }
pes2o/s2orc
Computed tomography texture analysis to facilitate therapeutic decision making in hepatocellular carcinoma This study explored the potential of computed tomography (CT) textural feature analysis for the stratification of single large hepatocellular carcinomas (HCCs) > 5 cm, and the subsequent determination of patient suitability for liver resection (LR) or transcatheter arterial chemoembolization (TACE). Wavelet decomposition was performed on portal-phase CT images with three bandwidth responses (filter 0, 1.0, and 1.5). Nine textural features of each filter were extracted from regions of interest. Wavelet-2-H (filter 1.0) in LR and wavelet-2-V (filter 0 and 1.0) in TACE were related to survival. Subsequently, LR and TACE patients were divided based on the wavelet-2-H and wavelet-2-V median at filter 1.0 into two subgroups (+ or −). LR+ patients showed the best survival, followed by LR-, TACE+, and TACE-. We estimated that LR+ patients treated using TACE would exhibit a survival similar to TACE- patients and worse than TACE+ patients, with a severe compromise in overall survival. LR was recommended for TACE- patients, whereas TACE was preferred for LR- and TACE+ patients. Independent of tumor size, CT textural features showed positive and negative correlations with survival after LR and TACE, respectively. Although further validation is needed, texture analysis demonstrated the feasibility of using HCC patient stratification for determining the suitability of LR vs. TACE. INTRODUCTION Identification and quantification of tumor heterogeneity by computed tomography (CT) textural analysis shows promise in enhancing prognostic accuracy and facilitating therapeutic decision making. Such advances are particularly important for diseases such as liver cancer, which is the second and sixth most frequent cause of cancer-related death in men and women, respectively, with hepatocellular carcinoma (HCC) most common [1]. According to the Barcelona Clinic Liver Cancer (BCLC) staging system, the diameter of a single HCC may not be a contraindication for liver resection (LR) [2][3][4][5]. However, most Asian patients with HCC have diseased liver parenchyma, such as hepatitis B virus infection and/or hepatitis B virus-related cirrhosis, and LR in this population is therefore associated with a high risk of complications [6]. This consideration may alter therapy-based decision-making in cases of single HCCs > 5 cm, particularly for potential LR candidates [7,8]. Asymptomatic patients with a solitary HCC without vascular invasion or extrahepatic spread and with well-preserved liver function could also be considered for transcatheter arterial chemoembolization (TACE) [3,5,9]. Thus, to assess whether a patient scheduled to undergo LR would be better suited for TACE and vice versa, reliable prognostic markers for patient stratification are needed. Proposals for a subclassification system for BCLC stage B tumors have emerged in recent years. One study proposed a stratification system aimed toward tailoring therapeutic interventions based on both the evidence available to date and expert opinions [10]. Another study suggested taking the Child-Pugh score and liver transplantation status into account [11]. In clinical practice, the decision to treat with LR or TACE is made using a combination of clinical symptoms, laboratory test results, and pathological biomarkers, whereas the CT images routinely acquired during treatment and follow-up are largely overlooked.
Conventional assessment of tumor size and enhancement on cross-sectional images is far from satisfactory for the determination of an appropriate therapeutic strategy, due to insufficient imaging of the inherent properties of the tumor and interobserver variability in image interpretation. Radiomics is an emerging research field that aims to utilize the full potential of medical imaging [12]. This includes texture analysis, which is assumed to reflect tissue heterogeneity [13][14][15][16][17]. Heterogeneity is a well-recognized feature reflecting alterations in tissue patterns, likely occurring due to cell infiltration, abnormal angiogenesis, microvasculature and necrosis [18][19][20]. One study suggested an association between image traits, including textural features, and underlying gene expression in HCC [21]. Another stated that the radiomic signature could be transferred from lung to head-and-neck cancer, suggesting that this signature identifies a general prognostic tumor phenotype [12]. In fact, texture analysis has shown feasibility in the differential diagnosis of liver cancers [22], hepatic fibrosis detection/staging [23,24], and prediction of postoperative hepatic insufficiency [25]. In this study, we explored texture analysis as a prognostic and patient stratification approach in the determination of the appropriate therapeutic option, LR or TACE, for patients with single large HCCs. Herein, two questions were raised: (1) are the textural parameters of the primary tumor, calculated from baseline CT, related to prognosis? (2) Does texture analysis have the potential to provide an additional view for treatment modification between LR and TACE? RESULTS Patients A total of 130 patients (86 and 44 treated by LR and TACE, respectively) were retrospectively included for texture analysis. Of these, 106 (81.5%) patients had disease progression, and 96 (73.8%) patients died by the study end date. There were no significant differences in the patient baseline demographics and characteristics (Table 1). All texture features, calculated from two sets of regions of interest (ROIs), showed excellent agreement (ICC value, 0.799-0.999). Cox regression and Kaplan-Meier analysis for LR and TACE For candidate clinical and imaging variables, univariate analysis showed that corona (P = 0.057) was the only variable with a P value < 0.10 in LR, whereas in TACE, none of the variables showed significant differences (Table 2). For textural features, nine and 21 features in the LR and TACE groups, respectively, were identified as statistically significant (Table S1). Separated by the above-identified four textural parameters in LR and TACE, OS differed significantly for each feature whereas time to progression (TTP) did not (Figure 1 & Table 4). There were no significant differences in patient demographics and characteristics. In all patients, for OS, univariate Cox regression showed that BCLC, corona, and subgrouping had P values < 0.10, and the multivariate Cox regression models confirmed that subgrouping was the only factor that was significantly associated with OS (P = 0.012). For TTP, univariate Cox regression showed that the presence of a capsule, corona and subgrouping had P values < 0.10, and the multivariate Cox regression models confirmed that the capsule was the only factor that was significantly associated with TTP (P = 0.021). These results indicate that LR+ was associated with the best survival, followed by LR- and TACE+ (P = 0.920 and 0.854 for OS and TTP, respectively, in LR- vs.
TACE+), whereas TACE- was associated with the worst survival. Thus, the feasibility of texture features in patient stratification and determination of the most suitable therapy (LR or TACE) was partly confirmed; however, further validation was still considered necessary. Further validation Since wavelet-2-V (filter 1.0) and wavelet-2-H (filter 1.0) were not normally distributed among the subgroups, the Kruskal-Wallis H test was used for further analysis. First, wavelet-2-V (filter 1.0) was compared between LR+ and TACE+, as well as between LR+ and TACE-. The results showed that the value of LR+ differed significantly from that of TACE+ but was similar to that of TACE-; therefore, if LR+ patients were treated by TACE, their survival would be similar to that of TACE- patients and worse than that of TACE+ patients, with a severe compromise of OS (Figure 3A). Second, wavelet-2-V (filter 1.0) was compared between LR- and TACE+, as well as between LR- and TACE-. The results showed that the value of LR- was similar to that of TACE+ (median, 13.1050 vs. 12.8860, P > 0.999), but lower than that of TACE- (median, 13.1050 vs. 18.3490, P < 0.001). Therefore, if LR- patients are treated by TACE, their survival would be similar to that of TACE+ patients and better than that of TACE- patients, without compromise of OS (Figure 3B). Third, we compared wavelet-2-H (filter 1.0) between TACE- and LR-, as well as between TACE- and LR+. We observed that TACE- and LR- showed a significant difference (median: 15.3530 vs. 11.8430, P < 0.001), whereas TACE- and LR+ did not (median: 15.3530 vs. 15.7260, P > 0.999). Therefore, if TACE- patients are treated by LR, their survival would be similar to that of LR+ patients and better than that of LR- patients, and their OS would be considerably improved (Figure 3C). Lastly, we compared wavelet-2-H (filter 1.0) between TACE+ and LR-, as well as between TACE+ and LR+. We observed that TACE+ and LR+ showed a significant difference (median: 11.5270 vs. 15.7260, P < 0.001), whereas TACE+ and LR- did not (median: 11.5270 vs. 11.8430, P > 0.999). Therefore, if TACE+ patients are treated by LR, their OS would be similar to that of LR- patients and worse than that of LR+ patients, with no extension of survival (Figure 3D). Accordingly, when the TACE group was separated by another prognostic indicator, wavelet-2-V (filter 0), similar results were consistently noted (Figure 2). DISCUSSION In this study, we took six typical HCC subjective image features and textural features into account to determine whether or not they could be used to assist in therapeutic decision-making and optimization. Corona and 29 textural parameters (nine in LR and 20 in TACE) had P values < 0.10 in the univariate Cox regressions for OS. Subsequently, multivariate Cox regressions and Kaplan-Meier analyses identified four parameters (one in the LR group and three in the TACE group) related to OS. Filter 1.0 was the best filter, as it showed significant results in both the LR and TACE groups, which was consistent with the findings of published studies [15,16]. The reason for this result might be that textural features at filter 0 tend to reflect radiologists' impressions of image quality, which could be influenced by image noise. By using filters at larger scales (filter 1.0, 5 pixels), subjective bias might be alleviated, and underlying biologic heterogeneity could be enhanced [14]. In the subgroup comparisons, we noted that TACE would have severely compromised OS in LR+ patients, whereas patients with lower values might be recommended for TACE (Figure 3). Additionally, similar conclusions could also be drawn if the groups were separated by the receiver operating characteristic curve threshold (Figures S1 & S2).
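The subgroup comparison just described can be reproduced in outline with standard open-source tools: an omnibus Kruskal-Wallis H test across the four subgroups, followed by Bonferroni-corrected pairwise Mann-Whitney U tests (used here as a stand-in for the post hoc procedures named in the Methods). The feature values below are hypothetical placeholders, not patient data.

from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

groups = {                      # hypothetical wavelet-2-V (filter 1.0) values
    "LR+":   [18.1, 17.9, 18.6, 19.0, 18.3],
    "LR-":   [13.2, 12.9, 13.4, 13.0, 13.1],
    "TACE+": [12.8, 13.1, 12.7, 12.9, 13.0],
    "TACE-": [18.2, 18.5, 18.4, 18.1, 18.6],
}

h_stat, p_omnibus = kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_omnibus:.4g}")

pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)       # Bonferroni correction over the six pairwise tests
for a, b in pairs:
    _, p_pair = mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    print(f"{a} vs {b}: p = {p_pair:.4g} ({'significant' if p_pair < alpha else 'n.s.'})")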
Although BCLC stage C used to be considered a contraindication for both TACE and LR in HCC, recent studies showed that both TACE and LR could provide a survival benefit [26][27][28]. Nevertheless, patient selection is crucial before LR or TACE, so we included HCCs in BCLC stage C in our study. Furthermore, in clinical observations, sorafenib alone seldom led to necrosis in intrahepatic/extrahepatic lesions or shrinkage of thrombosis. Given that no more effective treatments are recommended by the BCLC staging system, BCLC stage C HCC patients would have significantly shorter survival times than stage B patients. However, in this study, patients in BCLC stage C only had thrombosis at bifurcations. In this situation, the branches involved could be removed by resection or embolized by TACE. Therefore, intrahepatic lesions and branch vascular invasion could be treated simultaneously. This might be one explanation for why the BCLC stage did not have prognostic value in this study. Nevertheless, our results should be considered preliminary, and further study is warranted. In HCC prognosis, tumor stage and accurate evaluation of the liver-function reserve need to be incorporated [5]. Thus, the presence of cancer-related symptoms, liver function, alpha-fetoprotein, and Child-Pugh score were adjusted for, with no significant differences observed, and subgrouping according to texture analysis proved to be the only factor significantly related to OS. For TTP, subgrouping showed significant differences in the Kaplan-Meier survival curves, but not in the Cox regression. The reason for this observation might be that in the initial screen by Cox regressions, we used survival status (OS) as the event, which probably excluded some parameters related to TTP. Texture analysis is associated with challenges in image acquisition. In a previous phantom study, texture parameters were demonstrated to be relatively sensitive to tube voltage, but independent of tube current [29]. Additionally, one study showed that hepatic texture features were less sensitive to changes in CT acquisition parameters [30]. Slice thickness is another major determinant of textural parameters [31], with one study revealing that a slice thickness of ≤ 3 mm was optimal for feature grading [32]. Thus, we carefully excluded images outside this criterion in the present study, which might have partly reduced the influence of textural parameter reproducibility on prognostic evaluation. This retrospective study had some limitations. First, this study included a relatively small sample size. However, in an attempt to control for possible confounding effects, patients with multiple lesions or extrahepatic metastasis were excluded. Second, the retrospective design of this study did not include some potential confounding factors. In particular, the prevalence of comorbidities that might influence liver texture, such as diabetes, alcoholic liver disease, and early cirrhosis, was unknown and needs to be assessed in future studies. Further, the possibility of selection bias could not be eliminated. Finally, in this study, all ROIs were manually drawn by the two radiologists rather than by automatic segmentation. However, excellent inter-observer agreement was observed. Future adoption of a more robust algorithm is warranted. In conclusion, textural variations on baseline CT images might offer more thorough insight for HCC prognosis. Additionally, detailed grouping by wavelet features showed the feasibility of this method in patient stratification.
Though further validation is still warranted, texture analysis could potentially be used to inform the LR vs. TACE decision-making strategy. Patients This study was approved by the Ethics Committee of Guangdong General Hospital. Informed consent was waived due to the retrospective design of the study, and all patient records and information were anonymized and de-identified prior to analysis. Between September 2009 and December 2014, 130 patients with a single large HCC (> 5 cm) initially treated by LR or TACE were enrolled (Figure 4). The time interval between baseline CT and initial treatment was less than 14 days. For BCLC stage C, only patients with branch vascular invasion were included, whereas those with extrahepatic metastasis and main portal vein thrombosis were excluded due to the limited efficacy of LR/TACE for these patients. For all enrolled patients (if still alive by the study end date of March 2015), at least a three-year follow-up was required; patients diagnosed after March 2012 without death were excluded from this requirement. In this study, we employed the BCLC classification instead of TNM, as the BCLC classification is more informative than TNM regarding survival outcomes and therapeutic strategies [2,3]. In the BCLC classification, disputes exist on the classification of single HCCs > 5 cm without vascular invasion and extrahepatic metastasis as stage A or B [5,28]. Herein, we used stage AB, as proposed by a previous study [28]. As a result, single HCCs > 5 cm with well-preserved liver function (Child-Pugh A-B cirrhosis, and PST < 1) were classified as stage AB (without vascular invasion) and stage C (with vascular invasion). Patients with PST = 1 but without vascular invasion were still classified as stage AB, in accordance with two studies [28,33]. Anatomical or non-anatomical LR was performed with a margin > 10 mm. All TACE procedures in this study were performed by Seldinger's technique, with epirubicin, lobaplatin and lipiodol mixed as the embolic agent. Follow-up and endpoint The follow-up interval was 4-8 weeks, and included routine laboratory tests, chest X-ray and abdominal CT. Additional CT or magnetic resonance imaging was routinely performed if extrahepatic metastasis was suspected. The primary endpoint was OS, and the secondary endpoint was time to progression (TTP). Disease progression for TACE was defined as an increase of at least 20% in the diameter of a viable target lesion according to the modified Response Evaluation Criteria in Solid Tumors. Disease progression for the LR group was defined as intrahepatic or extrahepatic recurrence. CT examination All baseline images were derived from our picture archiving and communication system. Portal-phase CT of the liver was obtained with the same scanner (LightSpeed VCT 64; GE Medical Systems, Waukesha, WI). After administering iopamidol (370 mg of iodine/mL, Iopamiro; Bracco, Milan, Italy), a non-ionic contrast medium, at 1.5 mL/kg (maximum dose, 100 mL) with a double-tube high-pressure syringe at 3.5 mL/s, hepatic image acquisition was performed at a fixed time point in the portal venous phase with a 70 s delay. The scan parameters were as follows: 120 kV; automatic mA, 80-500 mA; noise index, 7; pitch/table speed = 0.984/39.37 mm/rot; rotation time, 0.6 s; field of view, 300-450 mm; matrix, 512. A slice thickness of 1.25 mm was routinely reconstructed with soft kernels.
Texture analysis methodology For each pre-treatment examination, 1.25-mm axial images obtained at the portal venous phase through the largest cross-sectional area of the tumor were selected and transferred to two personal computers for texture analysis. The process of texture analysis comprised three steps: (1) image filtration, (2) wavelet analysis and (3) feature extraction. The first two steps were performed using MATLAB 2014a software (MathWorks Inc., Natick, MA). (1) Image filtration: Laplacian of Gaussian (LoG) spatial band-pass filters were used to reduce the sensitivity to noise. Filter width and sigma (σ) are the two parameters that characterize LoG filter weighting. Three σ values (0, 1.0, and 1.5) and a single filter width of σ*5 pixels were used (Table S2). Pixels with attenuation of less than -50 HU were removed. The filtration process produced a series of images displaying textural features at different filters. (2) Wavelet analysis: The use of the wavelet transform for texture analysis was first proposed by Mallat [35]. This transform provides a robust methodology for texture analysis at different scales. Initially, it decomposes each image and extracts its texture by using a series of elemental functions called wavelet and scaling functions, where "s" governs the scaling and "u" the translation, as follows: ψs,u(x) = (1/√s) ψ((x − u)/s). As a result, the Haar wavelet transform decomposes each original image into nine images at different scales, called trends and fluctuations: the former are averaged versions of the original image, and the latter contain the high frequencies. Each image is decomposed into 1, 2, or 3 levels and reconstructed in three directions (diagonal, horizontal and vertical). (3) Texture feature extraction: Two radiologists (Readers 1 and 2, with 5 and 4 years of experience in abdominal CT interpretation, respectively) independently performed textural feature extraction and quantification using ImageJ software (National Institutes of Health, Bethesda, MD). For each reader, a user-defined irregular ROI was drawn manually around the largest cross-sectional tumor outline and copied to the nine derived texture feature maps. Subsequently, the values of the texture features were measured and saved for further analysis. Statistical analyses The Shapiro-Wilk test was applied to assess normality. Differences in patient demographics and characteristics for those undergoing LR or TACE were tested using independent-sample t-tests, Mann-Whitney U tests and Chi-square tests. Inter-observer agreement on textural features was evaluated using intraclass correlation coefficients (ICCs) [36]. Patient demographics and subjective imaging features were included for adjustment in the analyses. Univariate Cox regression was used as a preliminary screening of candidate variables. Variables of statistical significance in the univariate analysis (P < 0.10) were used as input variables for the subsequent multivariate Cox regression models (Forward: LR method). Textural features at each filter were tested in separate models to assess the independent effects of the CT texture of the primary tumor on OS. Thus, six multivariate models were created (one per group per filter; LR and TACE groups, and three filters). Afterwards, the median values of the independent texture parameters were used to separate patients in the LR and TACE groups for subsequent Kaplan-Meier analysis.
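The three-step pipeline above was implemented in MATLAB and ImageJ; the sketch below reproduces its outline with common open-source stand-ins (scipy for the LoG band-pass filter, PyWavelets for the Haar decomposition). The ROI handling and the exact nine feature maps of the study are simplified here, and the input patch is synthetic.

import numpy as np
import pywt
from scipy.ndimage import gaussian_laplace

def texture_features(patch, sigma=1.0):
    """LoG band-pass filtering followed by a one-level Haar decomposition."""
    img = patch.astype(float)
    img = np.where(img < -50.0, 0.0, img)             # drop pixels below -50 HU, as in the text
    filtered = gaussian_laplace(img, sigma=sigma) if sigma > 0 else img

    # One-level Haar transform: trend (cA) plus horizontal/vertical/diagonal fluctuations.
    cA, (cH, cV, cD) = pywt.dwt2(filtered, "haar")
    feats = {}
    for name, band in {"A": cA, "H": cH, "V": cV, "D": cD}.items():
        feats[f"wavelet-1-{name}-mean"] = float(np.mean(band))
        feats[f"wavelet-1-{name}-sd"] = float(np.std(band))
    return feats

# Hypothetical 64 x 64 portal-phase patch (attenuation values in HU).
patch = np.random.default_rng(0).normal(60.0, 15.0, (64, 64))
print(texture_features(patch, sigma=1.0))

Repeating the decomposition to two or three levels and over the three detail directions would yield feature maps analogous to the wavelet-2-H and wavelet-2-V parameters analyzed in this study.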
To explore the potential role of texture features in deciding between LR and TACE, patients were first divided into two subgroups according to the identified prognostic markers in the LR and TACE groups. Cox regression for all patients was performed to determine whether subgrouping was an independent factor for OS and TTP. Next, one-way ANOVA or the Kruskal-Wallis H test was used to compare the identified textural parameters among the subgroups. Post hoc multiple comparisons were performed using Bonferroni's correction or Dunnett's T3 test. The thresholds of the identified factors in the Cox regression models were also determined using standard receiver operating characteristic curves. Figures S1 & S2 contain detailed discussions regarding this approach. All statistical analyses were performed with SPSS 20.0 (IBM SPSS Statistics, Armonk, NY). A two-tailed P value of less than 0.05 was considered statistically significant.
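The two-stage screening described in the Methods (univariate Cox models at P < 0.10 feeding a multivariate model) translates directly into code. The sketch below uses the open-source lifelines package rather than SPSS, and the DataFrame column names are hypothetical placeholders, not the study's variables.

import pandas as pd
from lifelines import CoxPHFitter

def screen_then_fit(df, features, duration_col="os_months", event_col="death"):
    """Univariate Cox screening (P < 0.10), then one multivariate Cox model."""
    kept = []
    for f in features:
        uni = CoxPHFitter().fit(df[[f, duration_col, event_col]],
                                duration_col=duration_col, event_col=event_col)
        if uni.summary.loc[f, "p"] < 0.10:
            kept.append(f)
    if not kept:
        raise ValueError("no feature passed the univariate screen")
    return CoxPHFitter().fit(df[kept + [duration_col, event_col]],
                             duration_col=duration_col, event_col=event_col)

Calling model = screen_then_fit(df, ["wavelet_2_H", "wavelet_2_V", "corona"]) and inspecting model.summary would mirror, per filter, the six multivariate models described above.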
v3-fos-license
2017-05-31T00:48:08.875Z
2009-01-01T00:00:00.000
246939
{ "extfieldsofstudy": [ "Medicine", "Biology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.scielo.cl/pdf/bres/v42n4/art04.pdf", "pdf_hash": "7a9c20fcb01b696b579d5e495c7700b49104dd0b", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:44745", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "sha1": "7a9c20fcb01b696b579d5e495c7700b49104dd0b", "year": 2009 }
pes2o/s2orc
Biofilm formation and acyl homoserine lactone production in Hafnia alvei isolated from raw milk The objective of this study was to detect the presence of acyl homoserine lactones (AHLs), signal molecules of the quorum sensing system, in biofilm formed by Hafnia alvei strains. It also evaluated the effect of synthetic quorum sensing inhibitors on biofilm formation. AHLs were assayed using well diffusion techniques, thin layer chromatography (TLC) and detection directly in biofilm with biomonitors. The extracts obtained from planktonic and sessile cells of H. alvei induced at least two of the three monitor strains evaluated. The presence of AHLs with up to six carbon atoms was confirmed by TLC. Biofilm formation by H. alvei was inhibited by furanone, as demonstrated by a 96-well crystal violet assay in microtitre plates and by scanning electron microscopy. The H. alvei 071 halI mutant was deficient in biofilm formation. All these results showed that the quorum sensing system is probably involved in the regulation of biofilm formation by H. alvei. Key terms: acyl homoserine lactones, biofilm, furanones, quorum sensing. Corresponding author: Maria Cristina D Vanetti, Departamento de Microbiologia, Universidade Federal de Viçosa, MG, 36570-000, Brasil. Tel.: +55 31 38992954; fax: +55 31 38992573. E-mail address: mvanetti@ufv.br Received: September 24, 2008. In revised form: September 9, 2009. Accepted: September 10, 2009. INTRODUCTION Hafnia alvei includes a heterogeneous cluster of gram-negative, motile, flagellated, rod-shaped bacteria that belong to the family Enterobacteriaceae (Sakazaki and Tamura, 1992). This bacterium is recognized as an opportunistic pathogen, found in many nosocomial infections, such as wounds and enteric, urinary and respiratory tract disorders (Katzenellenbogen et al., 2001). It is isolated from dairy (Desmasures, 1995; Pinto, 2004; Kagkli et al., 2007), meat and fish products (Lindberg et al., 1998; Gram et al., 1999; Bruhn et al., 2004) as a common bacterial food contaminant. H. alvei has the potential to form biofilms (Jack et al., 1992; Vivas et al., 2008) that confer considerable advantages to it, including the ability to resist challenges from the environment, the presence of antibiotics and sanitizers, and host immune systems. It has been demonstrated that H. alvei produces N-acyl homoserine lactones (AHLs), the signaling molecules involved in the mechanism called quorum sensing (QS) (Gram et al., 1999; Ravn et al., 2001; Bruhn et al., 2004; Pinto et al., 2007). QS is used by many bacteria to coordinate community behavior as a function of population density (Fuqua et al., 1994; Whitehead et al., 2001). In some bacteria, quorum signaling is an essential regulatory component of virulence and other attributes, including biofilm formation (Nadell et al., 2008). The control of biofilms represents one of the most persistent challenges to industry in which these microbial communities are problematic (Kumar and Anand, 1998). Quorum quenching has been proposed as a new strategy to control a range of bacterial phenotypes, including biofilm formation, through interference with the communication system (Hentzer et al., 2002; Al-Bataineh et al., 2006). Many organisms in marine environments protect their surfaces from microbial colonization by producing chemical defense compounds.
The Australian marine red algae species Delisea pulchra protects itself against colonization by producing a wide spectrum of halogenated furanone compounds, some of which have interesting biological activities against phenotypes involved in the colonization pathways of marine bacteria (De Nys et al., 1995; Manefield et al., 2002). Furanones were found to interfere with QS, thereby inhibiting biofilm formation by Pseudomonas aeruginosa (Hentzer et al., 2002; Al-Bataineh et al., 2006). Considering that QS and biofilm formation are often closely linked, this research aimed to detect the presence of signaling molecules in biofilm of H. alvei and evaluate the inhibitory effect of synthetic furanones on biofilm formation. Detection of AHLs in extracts of sessile and planktonic cells Biofilms of H.
alvei 059 and 071 were formed on 15 coupons (2 x 6 x 0.1 cm) of stainless steel in MMS after 24 h of incubation at 26 ºC, as suggested by Joseph et al. (2001). Coupons were removed from the culture media, washed with 0.85% saline solution, immersed in 150 ml saline and sonicated for 30 min (Ultrasonic Cleaner, model 1510 water bath) to remove the adhered cells. Cell suspensions were centrifuged at 13,000 g for 30 min (RC5S, Dupont, USA) and AHLs in supernatants were extracted twice with equal volumes of ethyl acetate acidified with 0.5% formic acid, according to Ravn et al. (2001). Extracts were filtered and evaporated to dryness in a rotary evaporator (MA 120, Marconi, Brazil). After complete evaporation, extracts were suspended in sterile distilled water. The same procedure was used to obtain extracts from cells in planktonic stages and from uninoculated MMS, as a negative control. The presence of AHLs in extracts was verified by assaying violacein production by C. violaceum CV026, β-galactosidase induction by A. tumefaciens WCF47 or bioluminescence production by E. coli (pSB403). Bioluminescence produced by E. coli (pSB403) was monitored in a dark room after 14 h of incubation, and images were captured in an Eagle Eye II (Stratagene, La Jolla, CA, USA). Twenty-five microliters of extract was dispensed in a 3 mm diameter well made in 1.2% LB agar, with the addition of appropriate antibiotics, previously inoculated with approximately 10⁷ CFU ml⁻¹ of C. violaceum CV026 or E. coli (pSB403). A. tumefaciens was inoculated in 1.2% AT agar with appropriate antibiotics. Aliquots of 25 μl of N-hexanoyl-DL-homoserine lactone (HHL), 75 nmol l⁻¹, or 25 μl of MMS extract were added to wells as positive and negative controls, respectively. Each bioassay was conducted at least twice in an independent manner. AHL characterization The AHLs contained in the sessile and planktonic cell extracts of H. alvei 059 and 071, obtained as described above, were also analyzed by reverse phase thin layer chromatography (TLC), adapted from Shaw et al. (1997). Culture extracts were dissolved in 400-600 μl of HPLC-grade ethyl acetate. Synthetic AHLs or extract samples dissolved in ethyl acetate, in volumes of 10-20 μl, were spotted onto aluminum sheets (C18, RP, 254s, Merck, Germany) measuring 20 x 20 cm. The standards used were: α-amino-γ-butyrolactone hydrobromide, N-hexanoyl-DL-homoserine lactone, N-decanoyl-DL-homoserine lactone, N-dodecanoyl-DL-homoserine lactone and N-tetradecanoyl-DL-homoserine lactone (Fluka, Switzerland). The chromatography was run using a solvent system of methanol:water (60:40, v/v) and, after running, the solvent was evaporated. The dried plates were overlaid with 30 μl of an overnight culture of A. tumefaciens WCF47. The spots were visualized after overnight incubation at 30 ºC, according to the monitor strain used, and retention factor (Rf) values were calculated. Detection of AHLs in biofilms Biofilms of H. alvei 059 and 071 formed on stainless steel after 24 h of incubation were washed in sterilized 0.85% saline solution to remove non-attached cells. Each side of the coupons was then exposed to UV light (312 nm, at a distance of 52 cm) for 30 min in order to inactivate most of the cells. The presence of AHLs in situ was determined by covering coupons with LB or AT agar supplemented with appropriate antibiotics and inoculated with ca.
Detection of AHLs in biofilms
Biofilms of H. alvei 059 and 071 formed on stainless steel after 24 h of incubation were washed in sterilized 0.85% saline solution to remove non-attached cells. Each side of the coupons was then exposed to UV light (312 nm, at a distance of 52 cm) for 30 min in order to inactivate most of the cells. The presence of AHLs in situ was determined by covering coupons with LB or AT agar supplemented with appropriate antibiotics and inoculated with ca. 10⁷ CFU ml⁻¹ of each monitor strain. The AT agar was also supplemented with 50 μg ml⁻¹ of X-gal. Petri dishes were incubated at 30°C and results were observed for up to 24 h. It was previously verified that a few viable cells remained in coupons treated with UV, but no change in the turbidity of the LB medium was observed within 24 h of incubation of coupons immersed in it. This observation ruled out the production of considerable quantities of AHLs during incubation of the coupons with the monitor strains. Each bioassay was conducted at least twice, in two independent experiments.
Effect of furanones on biofilm formation
In order to analyze the relationship between AHL production and biofilm formation in H. alvei, we studied the effects of synthetic furanones, known quorum sensing inhibitor (QSI) molecules, on growth and biofilm formation by using the 96-well assay adapted from O'Toole and Kolter (1998). The H. alvei 071 strain was cultured overnight at 26°C under shaking. Eighteen microliters of H. alvei culture, with approximately 10⁸ CFU ml⁻¹, was placed in each well containing 180 μl of MMS medium to which synthetic furanones (Fluka Laboratory Chemicals, Milwaukee, Wis., USA) had been added. The furanones used were 3-methyl-2(5H)-furanone (MF), 2-methyltetrahydro-3-furanone (MTHF), 2(5H)-furanone (F) and 2,2-dimethyl-3(2H)-furanone (DMTHF), at final concentrations of 0.01, 0.1 and 1 mol l⁻¹. The microplates were incubated at 26°C for 24 h and cell growth was determined at 600 nm. The planktonic cells were removed and the remaining adherent cells were stained for 30 min with 200 μl of 0.1% (w/v) crystal violet solution dispensed in each well. Excess stain was removed by washing three times with distilled water, and 200 μl of 95% (v/v) ethanol was added to the wells to release the stain. The extent of biofilm development was determined by measuring the absorbance of the resulting solution at 600 nm. For each experiment, correction for background staining was made by subtracting the value for crystal violet bound to uninoculated controls. The biofilm assay was performed twice, with triplicates in each assay. The data were expressed as the ratio between the optical density relative to the sessile cells and that relative to the total cells, and were submitted to analysis of variance. When this was significant, the Dunnett test at 5% probability was employed, using the Windows 2006 version of the Genes program (Cruz, 2006).
Biofilm observation by scanning electron microscopy
Confirmation of the presence of biofilm on polystyrene was obtained by scanning electron microscopy. Two hundred microliters of activated H. alvei 071 and H. alvei 071 halI mutant cells at high OD600 nm were inoculated into 24-well polystyrene microplates containing MMS with 1 mol l⁻¹ of 2(5H)-furanone added, giving a final volume of 2 ml. As controls, MMS with a suspension of cells added and MMS without any addition were used. Polystyrene coupons measuring 1 x 1.2 cm were immersed in the wells and the microplates were incubated for 48 h. Coupons of each treatment were removed and submitted to appropriate treatments before being observed in a scanning electron microscope (Leo®, model VP 1430).
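Before turning to the results, a rough illustration of how the microplate readings from the furanone assay described above translate into the sessile/total index reported in Fig. 3 may be useful. The sketch below is an assumed reconstruction with hypothetical triplicate readings; the authors performed the actual analysis (ANOVA followed by Dunnett's test) in the Genes program.

```python
# Minimal sketch (assumed workflow, hypothetical readings): quantifying
# biofilm formation from the 96-well crystal violet assay.
from statistics import mean

def biofilm_ratio(cv_od600, growth_od600, cv_blank, growth_blank):
    """Background-corrected ratio of sessile (crystal violet) to total cell density."""
    sessile = mean(cv_od600) - mean(cv_blank)        # stain bound by adherent cells
    total = mean(growth_od600) - mean(growth_blank)  # growth of the whole culture
    return sessile / total

# Hypothetical triplicates for an untreated control and a furanone treatment.
control = biofilm_ratio([0.42, 0.45, 0.40], [0.95, 0.93, 0.97],
                        [0.05, 0.05, 0.05], [0.04, 0.04, 0.04])
treated = biofilm_ratio([0.21, 0.19, 0.23], [0.94, 0.96, 0.92],
                        [0.05, 0.05, 0.05], [0.04, 0.04, 0.04])
print(f"control ratio = {control:.2f}, furanone ratio = {treated:.2f}")
```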
AHLs in extracts of sessile and planktonic cells
Extracts obtained from planktonic cells of H. alvei 059 and 071 cultured in MMS induced violacein production by C. violaceum CV026, β-galactosidase production by A. tumefaciens WCF47 and bioluminescence by E. coli pSB403. However, extracts from sessile cells of both strains did not activate a quorum-sensing response in C. violaceum CV026. The results obtained with H. alvei 071 are presented to illustrate the detection of AHLs in extracts of sessile and planktonic cells (Fig. 1).
AHL characterization
In agreement with the results obtained with the different reporter strains, TLC experiments allowed us to characterize the homoserine lactone molecules produced by the strains in the sessile and planktonic stages (Fig. 2). Comparing the Rf values of spots from samples with those of the standard spots, it can be inferred that H. alvei produced AHLs with carbon chains similar to HHL, whose acyl chain might or might not be substituted at the third carbon atom. However, spots corresponding to HHL were not detected in the extract of planktonic cells of strain 059 (Fig. 2). The production of molecules with carbon chains composed of fewer than six carbon atoms was also demonstrated. Although the forms of the spots obtained were different, the oxo and hydroxy derivatives with the same chain length migrate with no distinguishable mobility in this solvent system.
AHL production in biofilms
H. alvei strains produced AHLs in biofilms formed on stainless steel, and these molecules were detected by at least one monitor strain, with A. tumefaciens WCF47 being the most sensitive to AHLs (Table 1).
Effect of furanones on biofilm formation
Furanones added to MMS broth at up to 1 mol l⁻¹ did not affect the OD600 nm (P > 0.05) relative to growth and biofilm formation of H. alvei 071. However, when the ratio between the optical density relative to the sessile and total cells was calculated, a significant reduction (P < 0.05) in biofilm formation was determined in the presence of MF, MTHF and F at concentrations higher than 0.01 mol l⁻¹ when compared to the control (Fig. 3). Under our experimental conditions, the furanone DMTHF did not cause a significant effect (P > 0.05) on biofilm formation by H. alvei 071 (Fig. 3).
Biofilm observation by scanning electron microscopy
The wild type of H. alvei 071 formed a densely packed biofilm (Fig. 4A), whereas the halI mutant appeared to grow rather as discontinuous sheets on the polystyrene surface (Fig. 4B). The addition of 2(5H)-furanone (F) to the culture media reduced the number of H. alvei 071 cells adhering to the coupons, and after 48 h of incubation, isolated cells predominated.
DISCUSSION
There is clear evidence that biofilm formation is a carefully orchestrated process that is dependent on quorum sensing and that AHLs affect several aspects of biofilm dynamics in gram-negative bacteria, such as heterogeneity, architecture, stress resistance, maintenance and sloughing (Davies et al., 1998; Allison et al., 1998; Kjelleberg and Molin, 2002). Our results demonstrate the presence of AHLs in sessile cells of H. alvei on stainless steel, determined by using culture techniques (Fig. 1) and by TLC (Fig. 2). This is strong evidence that QS also has a role in regulating biofilm formation in the evaluated strains. This kind of signal molecule has also been reported in biofilms formed by other bacteria, such as Nitrosomonas europaea (Batchelor et al., 1997); P. fluorescens B52 (Allison et al., 1998); P. aeruginosa (Davies et al., 1998); Serratia marcescens (Rice and Koh, 2005); Burkholderia cepacia (Gotschlich et al., 2001; Huber et al., 2001; Conway et al., 2002) and Aeromonas hydrophila (Lynch et al., 2002).
Although AHL production by H. alvei has already been reported (Gram et al., 1999; Bruhn et al., 2004; Pinto et al., 2007), this is the first time that AHL molecules have been characterized in biofilms produced by this bacterium. This finding could help to better understand the adherence properties of H. alvei. Because H. alvei formed biofilms on stainless steel, it has the potential to colonize food-processing surfaces and to be a continuous source of contamination when cells get detached from the surface. AHLs produced in biofilms were detected by at least one monitor strain (Table 1). This result confirmed the importance of using diverse monitoring systems to detect AHLs. It should be emphasized that a negative response in this assay indicates either an inability to produce AHLs that are recognizable by C. violaceum CV026 or that the amount of AHLs is very low or zero. Every monitoring system presents a different degree of sensitivity and responds to specific groups of AHLs (Zhu et al., 2003). For example, C. violaceum CV026 does not detect AHLs with a hydroxy substitution in the acylated chain and lacks sensitivity to many oxo derivatives (McClean et al., 1997; Cha et al., 1998).
The observation that the addition of furanones to the culture media affected biofilm formation by H. alvei 071 (Fig. 3) can be attributed to inhibition of the QS system of this bacterium. Furanones can bind to the LuxR protein, thereby reducing the amount of protein available to interact with AHLs, or promote the degradation of this protein, thus impeding the transcription of genes involved in the regulation of biofilm formation (Manefield et al., 1999; Rasmussen et al., 2000; Hentzer et al., 2002). The inhibition by furanones of the expression of phenotypes controlled by QS has already been recognized in a range of bacteria. Givskov et al. (1996) demonstrated that swarming migration of the pathogenic bacterium Serratia liquefaciens on an agar surface was completely inhibited by the addition of 100 mg l⁻¹ of (5Z)-4-bromo-5-(bromomethylene)-3-butyl-2(5H)-furanone. This molecule also suppressed the expression of luminescence genes, localized in a reporter plasmid in S. liquefaciens, without affecting the bacterial growth rate. Furanones also inhibit virulence genes, as seen in the production of extracellular toxin in a pathogenic strain of Vibrio harveyi (Manefield et al., 2000).
A mutation that blocks generation of the signal molecule resulted in abnormal biofilm formation by H. alvei 071, and the presence of 1 mol l⁻¹ of 2(5H)-furanone inhibited biofilm formation (Fig. 4). These results suggest that a cell-to-cell signal is required for the differentiation of individual cells of H. alvei 071 into complex multicellular structures. Inhibition of biofilm formation was verified in P. aeruginosa using furanone 56; this substance did not influence the initial adhesion process, but affected the biofilm architecture and increased the detachment process, leading to loss of biomass from the substrate (Hentzer et al., 2002). These authors also observed that this substance reduced the expression of virulence genes, indicating a general effect of furanone 56 on target genes of the las circuit of QS in P. aeruginosa. The precise role of AHLs in biofilms remains to be established, and more studies should be conducted to elucidate the involvement of QS in the different stages of biofilm formation by these bacteria.
TABLE I. Evaluation of AHL production in biofilms formed by H. alvei on coupons of stainless steel immersed in MMS and incubated at 26°C for 24 h, using monitor strains.
EDUCATION AND HUMAN RESOURCE DEVELOPMENT IN AFRICA
As many African countries strive to achieve sustainable socio-economic development and peace following the cessation of armed conflicts in many areas, education has been underscored as the gateway and cornerstone. Educational reforms have been highlighted as a key component of post-conflict development and of addressing youth marginalization, especially if these reforms are focused on eliminating some of the causes of conflict. There has been a marriage between educational expansion and economic development geared toward the creation of job opportunities. In this vein, many African countries have in the last three decades defined and redefined their national goals of education, and many of them have carried out vigorous curricula reviews at all levels of their education systems. This drive to give good education to the citizenry, so that every citizen has equal opportunities for employment, is what human resource development is about. Education, no doubt, is Human Resource Development (HRD). Education works as a self-contained system that strives to provide skills and knowledge that enable youths to engage in meaningful activities in society. This makes it apparent that the critical assets of a country are its human resources. The effective utilization of its human resources is the crucial factor in determining the growth and prosperity of the economy of a nation. Apparently, the transmission of sustained functional skills and talents to individuals makes education a cornerstone for national development and advancement.
For education to play these roles and produce the needed human resource capacity, some fundamental alterations in national educational policies should be undertaken in some or all of the following areas:
1. The national allocation of resources to the field of education;
2. The allocation of resources within the existing educational system to the different levels of the system;
3. The percentage of students completing different levels of the education system;
4. The percentage of students from different social strata;
5. The percentage of female students that complete different levels of the educational system;
6. The objectives and delivery of the curricula and content.
Educational reforms for appropriate Human Resource Development must effect favorable changes in the areas listed above. Other agencies, like religious, political and economic organizations, can in a formal or informal way educate varying proportions of a population to achieve appropriate education and become modern persons. According to Inkeles and Smith (1974: 19-25), a modern person is one who possesses the following traits:
1. Openness to new experience;
2. Readiness for social change;
3. Awareness of the diversity of surrounding attitudes and opinions;
4. Being energetic in acquiring facts and information on which to base opinions;
5. Time orientation toward the present and the future instead of the past;
6. A sense of efficacy, or the belief that one can exert influence over one's own environment;
7. Placing high value on technical skill and accepting it as a basis for the distribution of rewards;
8. Placing higher value on formal education and schooling and aspiring to high levels of educational and occupational attainment;
9. Respect for the dignity of others;
10. Understanding the logic underlying production and industry.
From the above, it is obvious that education is a lifelong process. What a student obtains from school and college is only a small part of the education needed for the economic and social life of a person. Thus education in holistic terms is imperative to develop special skills in the populace, and this has to come from constant and continuous programmes.
Human Resource Development (HRD) activities have helped increase GNP and overall productive activities in industrially developed countries. According to Rena (2006), Human Resource Development can be understood in different ways. Human Resource Development (HRD) in its broadest sense is an all-inclusive concept, referring to the process of increasing the knowledge, skills and capacities of all people in a society: in economic terms it refers to the accumulation of human capital; in political terms it refers to preparing people for participation in democratic political processes; while in social and cultural terms it means helping people to lead fuller lives, less bound by tradition (Tseggai, 1999). This means that a country's most valuable asset is its people. Education, therefore, plays the most critical role in developing the intellectual and creative capacities of the citizenry; hence education is seen as a panacea for development, which in increasing human capital will lead to other developmental gains (Muller, 2004).
Employers continually ask for skilled and knowledgeable workers; however, because of the increasingly complex demands placed on our education systems daily, we are not preparing the youths to enter the workforce. It is only in the last ten years that many African countries have focused on relevance in their curricula, and many are addressing emerging global issues. The next section will address some of these emerging issues in the education systems of some African countries.
An Overview of Education in Africa
Africa's future really lies in its people. Indeed, Africa must solve its current human capacity crisis if it is to keep its head above water in this modern world. Investing in people has become critical because Africa's future economic growth depends less on its natural resources, since these are being depleted and exploited by expatriate 'experts'. What we will need are labor skills to accelerate growth, based on a flexible, educated workforce able to take advantage of economic openness. We need to invest in people in order to promote their individual development and to free them from poverty.
Poverty reduction is now the main focus of African countries' sustainable development efforts. This has also been the major discussion point of bilateral and multilateral donors. Many African governments, in this regard, have developed their own home-grown Poverty Reduction Strategy Papers (PRSPs).
But even as African countries strive to fight poverty, the issue of brain drain has remained problematic. Brain drain, or human capital flight, is not a new phenomenon in Africa, and it has assumed a critical dimension. Current statistics estimate that Africa is losing an average of 20,000 professionals annually through this exodus. The number rose far above this during turbulent times, when many African countries were faced with tribal and political wars and the attendant brutal rebel incursions. Others flee annually in large numbers looking for 'greener pastures'. But it is worthy of note that in many post-conflict African countries, many professionals who fled their countries for fear of losing their lives in the war are now returning home in large numbers. For example, in Sierra Leone and Liberia, a notable percentage of government officials and university staff consists of 'returnees' from the Diaspora. The human resource base is being built up again gradually, and this time with 'brain gain'.
The issue of globalization is the next thing that African countries are grappling with, especially as they strive to sustain human resources. The world is changing and becoming more and more interconnected. The information revolution of the last two decades is best characterized by two seemingly conflicting forces, namely 'competition' and 'cooperation'. In the global economy, successful firms engage in cooperative competition to capture the benefits of strategic research and development alliances and cooperative production networks. Through 'cooperative competition', firms that were once rivals now form flexible cooperative ventures to jointly compete in the global market. In this process, telecommunications and computer technologies facilitate the flow of ideas, capital, goods and services, information, and knowledge across national borders. This is what globalization is all about. Since 1995, the internet has revolutionized the way we engage in economic, political, social and cultural transactions. Online shopping, libraries, medical information networks, investment, banking, books and virtual classrooms are providing alternative modes for engaging in leisure, work, investment, travel and study activities. Obviously, computer telecommunications technologies are the drivers of global technological innovation.
It is thus pertinent that education and educational practices include delivery that will promote the acquisition of knowledge and skills to understand modern Information Communication Technology (ICT). Universities the world over are now busy networking, cooperating and collaborating with each other in various activities. This is the spirit of globalization, and ICT is a major facilitator. ICT empowers us to network, connect, collaborate, cooperate, and learn at work and school, and at the same time demands that we continue learning beyond the classroom, beyond the office, beyond the eight-hour day; this is how quality in human resource development is pursued and obtained. Africa will get nowhere with its education industry until quality education is achieved.
Lack of quality education is a serious capacity gap in the education systems of many African countries, and more so in the post-conflict areas. Indeed, the importance of capacity building for sustained economic development and transformation in Africa is considered the 'missing link' in Africa's development. Capacity building for human resource development is a comprehensive process, which includes the ability to identify constraints and to plan and manage development. It involves both the development of human resources and institutions and a supportive policy environment.
UNDP defines capacity building as the process by which individuals, groups, organizations, institutions, and societies develop their abilities, individually and collectively, to perform functions, solve problems, and set and achieve objectives (UNDP, 1994). In this vein, one would say that favourable working conditions and appropriate incentives together will, first, encourage people to be more productive and, second, prevent all forms of brain drain. Poor salaries encourage staff to practice unprofessional behaviour such as corruption, bribery and misappropriation of public funds.
There are serious capacity constraints in almost all sectors in most of the countries, characterized by shortages of skilled staff, weak institutional environments which undermine the proper utilization of existing capacity, inadequate training facilities and limited capacity to satisfy the need for skilled people. African countries need capacity for national and regional development as well as for effective participation in the economy. Capacity is needed to develop and sustain good governance, design and manage effective policies and programmes, manage the environment, address poverty, fight HIV/AIDS and apply science and technology to develop and solve problems.
In some post-war countries like Sierra Leone and Liberia, UNESCO is conducting several research activities geared towards addressing the capacity gaps in the education system at all levels. This will go a long way towards developing a quality human resource base for these countries.
In listing some critical areas in which capacity is required, especially in post-war African countries, Wangwe and Rweyemamu (2001) included:
1. Conflict resolution and management;
2. Improvement in national statistics;
3. Strengthening consultation among stakeholders in the development process;
4. Rehabilitation of educational institutions and systems;
5. Fostering of regional cooperation and integration;
6. Developing, implementing and monitoring poverty reduction initiatives;
7. Strengthening capacity for international negotiations.
Wangwe and Rweyemamu (2001) further observed that even though considerable promising signs of economic recovery and sustained growth as a result of economic and institutional reform programmes are evident, many African countries are still faced with formidable challenges which could be addressed through human and institutional capacity. These, according to them, include:
1. African people are still among the poorest in the world.
2. HIV/AIDS is still a threat to growth.
3. Institutional and governance reforms are still far from being sufficiently effective to attract private investment.
4. The crisis in the education system remains unabated.
5. Brain drain remains a continuing threat to human capacity building and retention.
6. Political stability and peace are still needed in some areas.
7. Globalization remains a challenge.
Human Resource Development Strategies in Post-Conflict Areas
Many parts of Africa have experienced wars of various kinds in the last four decades. In West Africa, Sierra Leone and Liberia are believed to have recently fought 'senseless' wars in their countries. These wars destroyed most of these countries' social, economic, and physical infrastructure. They left untold scars on the education sector. Schools, infrastructure, and teaching materials and facilities were devastated. This resulted in overcrowding in many classrooms, displacement of teachers and delays in paying their meager salaries, disorientation and psychological trauma among children, poor learning outcomes and complete disorientation of curricular content.
Let me at this point focus a bit on education strategies geared to develop human resource capacity in Sierra Leone as a case in point. Since the end of the war in 2002, the country has made a remarkable recovery in the education sector following a series of aggressive reforms in the sector. All these reforms are strategies and action plans geared to sustainable development in the education sector and the national economy of Sierra Leone as a whole.
A factor which, among many others, needs to be addressed in striving for sustainable education is capacity in its totality. As already mentioned in this paper, UNESCO is currently sponsoring various groups of local researchers to identify capacity gaps at all levels of the education system in Sierra Leone. When these are identified, efforts will be made to address them for better performance.
Several education reform strategies were also developed, giving rise to various Education Acts and Policies in Sierra Leone. Below are a few of them:
1. The National Recovery Strategy Sierra Leone 2002-2003
The document consists of the achievements and constraints in the social sector, including education. The policy of Free Primary Education, which was introduced in 2000, accelerated access to primary education, as evidenced by the increase in enrolment. This also resulted in increased enrolment in secondary and tertiary education. However, the side effects were overcrowded classrooms and a proliferation of sub-standard primary and secondary schools. Strategic responses to address this situation would include: 'Improving access to education and improving the completion rate, especially in the Primary and Junior Secondary Schools; improving the quality of education through extensive training programmes for teachers; providing adequate teaching and learning materials; improving the conditions of service for teachers, especially in remote areas; providing early childhood care for more children; and encouraging the girl child to attend and complete school' (Dr. Ernest Bai Koroma).
The PRSP II outlines the challenges with which the education sector is grappling. These include: a. Weak management and delivery systems; b. Overcrowded classrooms; c. Shortage of teaching and learning materials; d. Poor internal efficiency of the education system.
Education in Sierra Leone: Present Challenges and Future Opportunities
The World Bank publication on education in Sierra Leone (The World Bank, 2007) presents a detailed analysis of the challenges of education and proposals on the strategic priorities to address them. These challenges include: a. Lack of complete access to quality universal primary education; b. Low retention of pupils; c. High dropout rates; d. A large out-of-school primary-age population;
e. Poor pedagogy; f. Poor learning achievement; g. Inadequate resources (human, material and financial).
The existence of an enabling policy and legal framework and firm government commitment to education are sure to address these challenges. Findings of the national survey conducted by the Gbamanja Commission were far-reaching, and its recommendations were categorized into immediate, short-term and long-term. Government took these recommendations and transformed them into the Government White Paper (2010). The contents of this White Paper and its origin, the Gbamanja Report, informed the review of the National Policy on Education (2007) that gave rise to the National Policy on Education (2010). The Teaching Service Commission was also developed as a result of this process.
Teaching Service Commission (2010)
The development of this Commission was also a result of the Gbamanja Commission, being one of its recommendations. There had been a general view that the 6-3-3-4 education system was dysfunctional, and thus the need for a teaching service commission that would be responsible for all aspects of teacher recruitment, development and management was critical to educational effectiveness in Sierra Leone. Thus, the Teaching Service Commission was developed to pursue excellence in teaching and education by registering, licensing, recruiting and developing teachers, reviewing teachers' conditions, and making recommendations to the Ministry of Education, Science and Technology for quality service and desirable learning outcomes. The establishment of this Commission has already gone through the legal system, and Parliament has already adopted it. Mechanisms are being put in place to make the Commission functional.
The Policy of Free Primary Education has caused great increases in enrolments at all levels of the education system. According to the World Bank (2007), enrolments doubled in primary school between 2001/02 and 2004/05, just after the war. Enrolments in the Junior Secondary School and Senior Secondary School also experienced significant increases. An increase in enrolments has also been witnessed in tertiary education. The corresponding upward trends are reflected across different levels of education.
The following graphs explain the enrolment scenario before and after the war (the dotted lines mean no data for the corresponding years).
Primary school enrolment was stable at close to 400,000 in the late 1980s. But the effect of the war was different across regions, and increases in enrolment in one area may mask decreases in others. However, the end of the war and the government's decision to offer free primary education in 2001 led to a doubling in student enrolment between 2001/02 and 2004/05, reaching 1.3 million in 2004/05. This increase in enrolment has continued up to recent years.
The above graph explains the expansion of the education system as reflected in both JSS and SSS. It is reported that 95 percent of JSS students are enrolled in government or government-assisted schools, and only 3 percent are enrolled in private schools (the remaining 2 percent are in schools administered by Non-Governmental Organizations (NGOs)). At the SSS level, about 92 percent of students are enrolled in government or government-assisted schools, 2 percent in private schools and 6 percent in NGO schools. But in recent years, more private schools have emerged, changing these given percentages.
The above graph shows that total enrolment in tertiary institutions more than doubled over the years, from about 6,000 in 1998/99 to more than 16,000 in 2004/05, after the wars and after the announcement of Free Primary Education. These enrolment trends have continued in recent years, especially with the establishment of one more university (a private Catholic university) and several private polytechnic-type institutions offering mostly disciplines in the management sciences.
These enrolment trends are also evident in the entries for the West African Senior School Certificate Examinations in Sierra Leone and other West African countries, as illustrated in the tables below. At the time of the World Bank report under review, Sierra Leone had two universities with their constituent colleges, three polytechnic institutions with constituent campuses, and two teacher training colleges. All of these were public institutions. Distance education was offered by one of the universities and the teacher training colleges. All these efforts are geared toward developing adequately qualified human resource capacity for the workforce. But obviously, all these efforts could be fruitless without quality assurance and quality control in the system. Thus, the next section will discuss quality in the education system.
Quality Education for Better Human Resource Development
It has been explained in this paper that education is human resource development. Thus, for education to develop functional, useful human capacity, it must be of good quality. Quality education, according to Ranga, Gupta and Lal (2010), means that the majority of the students, if not all, are able to meet the expectation of the 'minimum level of learning'. It means stimulating creative thinking, developing problem-solving skills and laying emphasis on the application of knowledge. Quality education thus includes:
1. Learners who are healthy, well-nourished and ready to participate and learn, and supported in learning by their families and communities.
2. Environments that are healthy, safe, protective and gender-sensitive, and provide adequate resources and facilities.
3. Content that is reflected in relevant materials for the acquisition of basic skills, especially in the areas of literacy, numeracy and skills for life, and knowledge in such emerging issues as gender, health, nutrition, HIV/AIDS prevention and peace.
4. Processes through which trained teachers use child-centred teaching approaches in well-managed classrooms and schools, and skillful assessment to facilitate learning and reduce disparities.
5. Outcomes that encompass knowledge, skills and attitudes, and are linked to national goals for education and positive participation in society.
Quality education is thus akin to quality teachers and quality teaching. What then is quality teaching? Teaching is an attempt to help people acquire some skill, attitude, knowledge, idea or appreciation. In other words, the teacher's task is to create or influence desirable changes in the behaviour of his or her learners. Other authors define teaching as the guidance of pupils through planned activities so that they (the pupils) may acquire the richest learning possible from their experiences. While yet other authors see teaching as the interaction between a teacher and a student, under the teacher's responsibility, in order to bring about the expected change in the student's behaviour.
We shall examine more closely the idea suggested by Dewey (1933) that teaching can be likened to selling. No trader can boast that he or she has sold so many goods when nobody bought anything from him or her. Consequently, effective teaching is that which results in the pupils learning maximally what is taught them. To be able to do this, the teacher must identify the needs of his learners and then prepare the materials or learning experiences that best match their needs. Therefore, the preparation, the strategies and the medium through which the learning experiences are communicated must also be compatible with the needs of the learners. It is only when this is done that one can say that teaching is effective. How do we then know that teaching is effective, even after proper preparation and delivery of the lessons have been done? We know this through the process of assessment and evaluation. Every effective teaching process must result in assessment. This is the method of knowing whether or not the learners have learnt what they were expected to learn from the lesson, and to what extent they have learnt it. If, for instance, after a particular lesson, only 30% of the class is shown to have mastered the objectives of the lesson taught, the lesson cannot be said to be effective. On the other hand, if about 70% or more mastered the objectives, then the lesson could be said to be effective. Thus, one can say that, all things being equal, effective lesson preparation leads to effective lesson delivery, and effective lesson delivery leads to effective mastery of lesson objectives. Against this backdrop, we see that teaching is a human undertaking whose purpose is to help people learn. It is an interaction between a teacher and a student, under the teacher's responsibility, in order to bring about the expected change in the student's behaviour. The purpose of teaching, thus, is to help learners to: 1. Acquire, retain and be able to use knowledge; 2. Understand, analyse, synthesize and evaluate skills; 3. Establish habits; and 4. Develop acceptable attitudes or behaviour patterns.
The Components of Effective Teaching
The classification of the related sets of activities that the teacher engages in forms the components of teaching. According to Awotua-Efebo (1999), the three major components of teaching that have been identified are preparation, execution and evaluation of teaching events. These are schematized below:
Components of Effective Teaching
At the preparation stage, every teacher must plan the lesson that is intended to be taught. This includes all the activities of teaching that lead to putting the lesson together, i.e. formulating appropriate objectives, relevant subject matter, teaching aids and the resultant lesson notes.
The execution stage is where the teacher communicates the lesson to the learners. His strategies and appropriate methodologies are laid out in the lesson notes. Classroom management, which is a part of execution and evaluation, entails classroom control, hygiene, and general classroom discipline.
Evaluation deals with ascertaining that the lesson is effective. It is feedback to tell the teacher whether the students have learnt and whether the teacher's strategies and the specific objectives of the lesson taught were achieved. All these must involve quality teachers who are well trained.
A trained teacher, therefore, is an educationist who has undergone pedagogical training, including a good knowledge of the principles and practice of education, in addition to his or her teaching subjects or discipline. A good teacher must be trained in the basic elements of what to teach, how to teach it, and when to teach it. He must be familiar with the contemporary content of education, methodology and techniques or strategies, the personality of the teacher, and the characteristics or qualities of the learner (Gbamanja, 1991, 2002). Thus, effective teaching involves a fusion between sound academic knowledge and profound knowledge of pedagogical principles, which are complex and many. It is vital, therefore, that the teacher be involved in the decision-making process of curriculum planning. Specifically, he will be involved in planning instruction within and outside the classroom, counseling learners, managing the classroom, and providing and organizing a healthy relationship between the community and the school. The teacher's role is of vital importance. He is the promoter of the school curriculum, the interpreter of societal dreams and aspirations into practical realities, and a vital intermediary between the society and the learner. Thus, a quality teacher is one who has: 1. Good knowledge of subject matter; 2. Good personality; 3. Professionalism; 4. Ability to understand child psychology; 5. Ability to inspire learners; 6. Ability to be resourceful, possessing skills to improvise; 7. Ability to observe and evaluate (Gbamanja, 1999-2002).
Challenges of Teachers in a Globalized ICT World
As teachers concerned with human resource development in this globalized world, your knowledge and use of ICT in your classroom is apparent. As you continually develop your pedagogical use of ICTs to support learning, and curriculum development including assessment of learners and the evaluation of teaching, you will: i. Demonstrate understanding of the opportunities and implications of the uses of ICTs for learning and teaching in the curriculum context; ii. Plan, implement, and manage learning and teaching in open, flexible learning environments; iii. Assess and evaluate learning and teaching in open and flexible learning environments.
ICTs provide powerful new tools to support communication between learning groups and beyond classrooms. The teacher's role expands to that of a facilitator of collaboration and networking with local and global communities. The expansion of the learning community beyond the classroom also requires respect for diversity, including inter-cultural education, and equitable access to electronic learning resources. Through collaboration and networking, professional teachers promote democratic learning within the classroom and draw upon expertise both locally and globally. In doing so, teachers will: i. Demonstrate a critical understanding of the added value of learning networks and collaboration within and between countries; ii. Participate effectively in open and flexible learning environments both as a learner and as a teacher; iii. Create or develop learning networks that bring value to the education profession and to society; iv. Widen access and promote learning opportunities for all diverse members of the community, including those with special needs.
The most obvious challenge for the professional development of teachers in present-day Africa is to provide courses in basic ICT knowledge and skills. These types of courses, taught at training centers and universities with a syllabus set by regional or national agencies, have been common practice in many countries. We must note, however, that the development of ICT does not necessarily improve education if the focus is on the ICTs themselves. The vision must focus on what ICTs can do to improve education.
For education to reap the full benefit of ICTs in learning, it is essential that pre-service and in-service teachers have basic skills and competencies. Teacher education institutions and programmes must provide the leadership for pre-service and in-service teachers and model the new pedagogies and tools for learning. Importantly, they must also provide leadership in determining how the new technologies can best be used in the context of the culture, needs and economic conditions within Africa.
Conclusion
In national development, human resource development should be made a priority. Thus, capacity building (both human and institutional development) must be a central goal for any donor agency wishing to give assistance to Sub-Saharan Africa.
Periodically, we need to review all of a country's priority human resource needs and all the ways those needs might be met, and address what might be done to improve the situation on both the demand and the supply sides. In this process, we need to carry out periodic appraisal, supervision, monitoring and evaluation of education for better quality assurance.
The quality teacher who has to produce a quality graduate or school learner from a quality education set-up is faced with numerous challenges, especially in present-day Africa, like many other 'struggling' nations in the world. The teacher is operating in an
2. Various Strategic Acts for Education Development
Between 2001 and 2005, various Acts leading to national education sector development and sustainability were promulgated. The following were the main Acts:
(a) The Education Act 2004: This document overhauls the entire education system and dictates new segments and roles in the Ministry of Education and the sector as a whole. New guidelines were also established for sector effectiveness.
(b) The Local Government Act: This devolved the administration of the primary and junior secondary schools to the Local Councils in whose areas these schools existed. The Local Councils are also responsible for the supervision of these schools on behalf of the Ministry of Education, the main service provider of the sector.
(c) The Tertiary Education Commission Act (TEC Act 2001): This Act established an autonomous body called the Tertiary Education Commission (TEC), set up to advise Government on tertiary education, to provide an institutional liaison between Government and other stakeholder organizations offering assistance in the tertiary education sector, and to ensure parity of the products of tertiary instruction.
(d) The Polytechnic Act 2001: In this Act, some teacher education colleges were upgraded to polytechnic status, with separate councils responsible for the administration of the colleges. This was to develop and sustain skills for middle manpower, especially in vocational and technical disciplines. Some of these have gained affiliation with various universities and have become degree-awarding institutions.
(e) The Universities Act 2005: By this Act, the University of Sierra Leone was reconstituted, Njala University was established, and the Act provided for the establishment of other public and private universities and for other related matters.
3. The Sierra Leone Education Sector Plan (ESP) 2007-2015
This is a strategic document based on the government's 2006 Country Status Report (the diagnostic and analytical formulation of the situation of education in Sierra Leone) and the 2004 Poverty Reduction Strategy Paper. Together, they map out how the Government of Sierra Leone will build on the education gains made since the war so that, by the year 2015, all children will be going to school and receiving quality education. The plans highlight the major challenges of the sector, suggestions on how to mend these challenges, and the need to produce a qualified and relevant workforce to spearhead the development of the country. The ESP prioritizes both institutional and individual capacity building, since the capacity needs of the education sector are great at each and every level, and implementation is ongoing.
4. National Policy on Teacher Training and Development in Sierra Leone (2010)
As already stated above, the Government of Sierra Leone has established a Teaching Service Commission (TSC), which is in the process of being implemented by the Ministry of Education, Science and Technology. This is geared toward teacher development, especially in the areas of recruitment, licensing, monitoring and, where appropriate, disciplining. The National Policy on Teacher Training and Development is formulated to ensure equitable distribution of teachers among the various regions of the country; to redress the comparatively poor salary and conditions of service for teachers and their support staff, the absence of incentives such as remote area and Science and Mathematics allowances, and the lack of other incentives to encourage females to pursue teacher training courses in Science, Mathematics and Technology in colleges; to avoid late payment of salaries; and, generally, to improve the working, health and living conditions as well as the retirement conditions of teachers.
5. The Poverty Reduction Strategy Paper (PRSP II)
In his introduction to the Second Poverty Reduction Strategy Paper (PRSP II), the President of the Republic of Sierra Leone made reference to the significant problems in the health and education sectors, 'which have largely contributed to Sierra Leone's low ranking in the UN Human Development Index'.
The Gbamanja Commission Report (2010)
Over the years, students performed poorly in examinations, especially in the Basic Education Certificate Examination (BECE) and the West African Senior School Certificate Examination (WASSCE). No research had been done to identify the reasons for such poor performance and the widespread indiscipline in our institutions of learning. Thus, when this poor performance surfaced again in 2008, the Government of President Ernest Bai Koroma became concerned, and the President set up the Professor Gbamanja Commission to investigate the causes of poor performance and other related matters and to make appropriate recommendations.
Analysis of the Products of the 2010/2011 Academic Year Undergraduates and Postgraduates of the University of Sierra Leone
The University of Sierra Leone held its most recent Congregation for the conferment of degrees and the award of diplomas and certificates in March 2012. This is presented below to indicate that, in a small country of about six million people, one institution graduating about 1,500 youths in one congregation represents an appreciable human development capacity. Below is the table:
A Novel Approach to Obesity from Mental Function
Abstract
Obesity is well recognized as a serious problem in the world. Regular exercise and modest food intake are the basic strategies for a healthy body weight. However, it is very difficult to lose weight, and it is much more difficult to avoid weight regain. Recently, basic and clinical studies have suggested that some part of this difficulty might be explained by impairment of the central nervous system due to obesity. Indeed, aspects of mental function such as cognitive impairment, depression, vulnerability to stress, wrong body image, low self-esteem and dysregulation of hedonic hunger contribute to the development of obesity. The link between such mental disorders and obesity is likely to be bidirectional. Brain inflammation and imbalance of neuronal plasticity caused by dysregulation of metabolic signals are candidate causes of the mental disorders associated with obesity.
The Mechanisms of Weight Control
Animals, including humans, are able to maintain an almost stable body weight thanks to a regulatory system for energy expenditure and food intake. On the other hand, in human society, body shape varies from the extremely thin, as in anorexia nervosa, to the extremely fat, as in obesity. Body weight is determined by an interaction between genetic, environmental and psychosocial factors. There are physiological time points when humans increase their body weight, such as during the growth phase, pregnancy, and aging. Obesity also arises when energy intake is far above these physiological demands. The recent drastic increase in overweight and obesity is mainly due to decreased physical activity and increased energy intake. Regular exercise and modest food intake are well recognized for weight control. The health and psychosocial benefits of sustained weight loss are well established, even though this knowledge is not sufficient to motivate long-term behavioral change. It is most important for weight loss therapy that motivation for weight control exceeds motivation for food intake. The interaction of the hypothalamus, the classical homeostatic energy regulatory site, with extra-hypothalamic brain areas related to the regulation of emotion, cognition, and reward is the main framework in the regulation of food intake. Recently, vulnerability to stress, wrong body image, low self-esteem, and dysregulation of hedonic hunger, which are determined by these brain areas, have been shown to contribute to the development of obesity. In this context, such mental function is now recognized as a pivotal player in the management of weight loss therapy.
Mental Aspect of Obesity
In adults, a high prevalence of mental disorders, including cognitive impairment, is observed in obesity [1-5]. Among mental disorders, eating disorders are often comorbid with obesity [6].
In particular, binge-eating disorder is thought to be present in 20-40% of obese patients [6]. Moreover, according to the Diagnostic and Statistical Manual of Mental Disorders (DSM)-IV TR, obesity is categorized as an eating disorder [7]. Some obese populations are even characterized as having a mental disorder with "compulsive food consumption" similar to drug addiction. A recent functional Magnetic Resonance Imaging (fMRI) study suggests that anorexia nervosa, which might represent the opposite phenotype to obesity, might involve motivation and reinforcement for starving and hedonic responses to hunger [8]. This result suggests that obesity might involve motivation and reinforcement for the consumption of palatable food and fear of hunger, in contrast to normal weight participants [9]. These findings suggest the existence of "compulsive food consumption". This "compulsive food consumption" is difficult to modify; even if weight loss is achieved, the neural plasticity "fixed" by palatable food leads individuals to crave more palatable food and thus substantially regain weight. Moreover, a weakened top-down inhibition signal for food cravings and inadequate sensing of ingested nutrients, resulting in the hyperphagia of obesity, have been detected in fMRI studies [10].
Obesity is also associated with an increased risk of developing depression and a higher likelihood of current depression [11-14]. Most obese individuals tend to have higher depression scores, and the projected increase in the rates of overweight and obesity in future years could generate a parallel increase in obesity-related depression. According to the DSM-IV, an episode of major depressive disorder can be classified clinically as depression with melancholic features or depression with atypical features. Unlike melancholic depression, which is characterized by a loss of appetite or weight, atypical depression and seasonal depression decrease activity and increase appetite and weight.
Epidemiologic studies have demonstrated that the incidence of cognitive impairment is higher in obese individuals than in individuals with normal body weight [4,5]. In the study of Anstey et al., risks of cognitive impairment appeared to be highest for those who were underweight or obese in midlife [15]. Increasing evidence suggests that obesity is associated with impairment of certain cognitive functions, such as executive function, attention, visuomotor skills, and memory [4,16]. The link between such mental disorders and obesity is likely to be bidirectional: obesity can lead to mental disorder and, in turn, mental disorder can be an obstacle to treatments of obesity and to attaining long-term weight-loss goals, thereby contributing to weight gain [6].
Many findings support this bidirectional relationship; however, the mechanism remains almost unknown. Brain inflammation and imbalance of neuronal plasticity caused by dysregulation of metabolic signals are candidates for damaging neurons and producing the mental disorders associated with obesity, according to the results of animal and human studies [17-19].
Obesity and Brain Inflammation
Adiposity is thought to have a direct effect on neuronal degradation [5]. Microglia, macrophage-like cells of the central nervous system that are activated by pro-inflammatory signals causing local production of specific interleukins and cytokines, play a pivotal role in brain inflammation [20]. Experimental studies in animals have confirmed neurologic vulnerability to obesity and a high-fat diet and have further demonstrated that diet-induced metabolic dysfunction increases brain inflammation, reactive gliosis, and vulnerability to injury, especially in the hypothalamus [21,22]. Recent studies in animals and humans have shown that other brain structures, such as the hippocampus and orbitofrontal cortex, are also affected [20,23,24]. Anti-inflammatory agents, regular treadmill running and calorie restriction were reported to be effective in reversing these inflammatory changes in mice [22,25,26].
Obesity and Imbalance of Neuronal Plasticity Modulated by Metabolic Signals
To explain the mutual relationship between obesity and mental function, research has focused on the imbalance of neural plasticity caused by dysregulation of metabolic signals. Leptin, an adipocyte-derived hormone; insulin, secreted from pancreatic β-cells; ghrelin, a stomach-derived hormone; and glucagon-like peptide (GLP)-1, secreted from the L cells of the intestinal tract, have turned out to be the main metabolic signals linking obesity and imbalance of neural plasticity. Leptin is reported to induce an antidepressant-like activity in the hippocampus, which is considered to be an important region for regulation of the depressive state in rodents [27,28]. We previously demonstrated that the development of depression associated with obesity might be due in part to impaired leptin activity in the hippocampus [28]. Given the high comorbidity of metabolic disorders, such as diabetes and obesity, with depression, several lines of evidence suggest that insulin signaling in the brain is also an important regulator of depression related to obesity. Clinical investigations show a relationship between insulin resistance and depression, but the underlying mechanisms are still unclear [29]. Ghrelin also plays a potential role in defense against the consequences of stress, including stress-induced depression and anxiety, preventing their manifestation in experimental animals [30]. There might be different subtypes of depression that are better treated with leptin, insulin or ghrelin.
Postulated mechanisms by which obesity results in cognitive impairment include the effects of hyperglycemia, hyperinsulinemia, poor sleep with obstructive sleep apnea, and vascular damage to the central nervous system [31,32]. In animal studies, chronic dietary fat intake, especially saturated fatty acid intake, contributes to deficits in hippocampus- and amygdala-dependent learning and memory in rodents with diet-induced obesity through changes in neuronal plasticity [33,34]. Several lines of electrophysiological and behavioral evidence demonstrate that leptin and insulin enhance hippocampal synaptic plasticity and improve learning and memory [32,35].
Therefore, it is likely that impairment of the actions of leptin or insulin contributes to the cognitive deficits seen in obesity and diabetes mellitus [36,37]. Through both direct and indirect actions, leptin and insulin diminish the perception of food reward (the palatability of food) while enhancing the response to satiety signals generated during food consumption that inhibit feeding and lead to meal termination. By contrast, ghrelin enhances hedonic and incentive responses to food-related cues [38]. Orexin signaling is required for this action of ghrelin on food reward [38]. Ghrelin is also reported to mediate stress-induced food-reward behavior in mice [39]. Animal studies have shown GLP-1 to be an important player in reward as well. Recently, the GLP-1 analogue liraglutide, in addition to an energy-deficit diet and exercise program, led to sustained, clinically relevant, dose-dependent weight loss in humans [40]. This successful result might arise, at least in part, from improvement of a dysregulated reward circuit. In obesity, dysregulation of these metabolic signals might change neural plasticity in many brain regions, resulting in behavioral change. Literature reviews and numerous empirical studies describing significant improvements in psychosocial functioning after bariatric surgery support these ideas [41].

Remarks

The mental aspects of obesity have attracted attention only recently; the need to assess and treat them has so far been only vaguely recognized. Overweight and obesity might be a phenotype of over-adaptation for coping with continuous dynamic metabolic changes to protect the brain. Such over-adaptation, via dysregulation of brain inflammation and imbalance of neural plasticity, might result in mental disorders. Clinical studies suggest that the mental disorders associated with obesity can be reversed by body-weight-loss therapy [42][43][44]. We need prospective clinical data on how body weight, adiposity and muscle mass correlate with brain inflammation, imbalance of neural plasticity and, eventually, mental function.
Multi-locus variable number tandem repeat analysis of 7th pandemic Vibrio cholerae

Abstract

Background: Seven pandemics of cholera have been recorded since 1817, with the current and ongoing pandemic affecting almost every continent. Cholera remains endemic in developing countries and is still a significant public health issue. In this study we use multilocus variable number of tandem repeats (VNTR) analysis (MLVA) to discriminate between isolates of the 7th pandemic clone of Vibrio cholerae.

Results: MLVA of six VNTRs selected from previously published data distinguished 66 V. cholerae isolates collected between 1961 and 1999 into 60 unique MLVA profiles. Only 4 MLVA profiles consisted of more than 2 isolates. The discriminatory power was 0.995. Phylogenetic analysis showed that, except for closely related profiles, the relationships derived from MLVA profiles conflicted with those inferred from Single Nucleotide Polymorphism (SNP) typing. The six SNP groups share consensus VNTR patterns, and two SNP groups contained isolates that differed by only one VNTR locus.

Conclusions: MLVA is highly discriminatory in differentiating 7th pandemic V. cholerae isolates, and MLVA data were most useful in resolving the genetic relationships among isolates within groups previously defined by SNPs. Thus MLVA is best used in conjunction with SNP typing in order to best determine the evolutionary relationships among 7th pandemic V. cholerae isolates and for longer term epidemiological typing.

Background

Diarrhoeal diseases have been and continue to be a cause of mortality and morbidity, especially in developing countries. Of particular note is cholera, a severe watery diarrhoeal disease caused by Vibrio cholerae. V. cholerae is a diverse species of Gram negative bacilli. Serological testing has enabled strains of V. cholerae to be divided into over 200 serogroups based on the O-antigen present [1]. However, only the O1 and O139 serogroups have been known to cause pandemic- and epidemic-level disease [2]. Since 1817, seven pandemics of cholera have been recorded [3]. The ongoing pandemic started in 1961 and has affected almost every continent, particularly countries of Southeast Asia, Africa, and South America. Cholera remains endemic in developing countries, and outbreaks still pose a significant public health issue [4].

The development of DNA-based typing methods has enabled epidemiological studies of cholera. Methods such as Pulsed-Field Gel Electrophoresis [5,6] and Amplified Fragment Length Polymorphism [7], as well as population structure studies including Multi-Locus Sequence Typing [8][9][10], have all been applied to V. cholerae isolates. These methods have been able to distinguish between environmental and clinical strains of V. cholerae [6,8,11], but they have had limited success in drawing evolutionary relationships between 7th pandemic strains. Previously, we investigated the evolution of V. cholerae using Single Nucleotide Polymorphism (SNP) analysis and found that 7th pandemic V. cholerae isolates could be distinguished into groups by a stepwise accumulation of SNPs. The 7th pandemic SNP relationships were confirmed by a large genome-sequencing-based study by Mutreja et al. [12]. SNP groups were correlated with the spread of pandemic cholera into Africa and were also able to separate the O139 isolates into a distinct SNP profile [13]. However, further resolution of isolates within each group is required.
Multilocus variable number tandem repeat analysis (MLVA) is a PCR-based typing method based on regions of tandemly repeated short DNA sequence elements. Variation in the number of copies of the repeated DNA sequences forms the basis of differentiation [14]. Recent studies have shown that MLVA is a highly discriminating method for typing environmental and clinical isolates of V. cholerae and is able to differentiate closely related isolates from outbreak situations [15,16]. In this report, we applied MLVA to isolates spanning the 7th pandemic to further determine the genetic and evolutionary relationships within the 7th pandemic clone and to evaluate the potential of MLVA as a long-term epidemiological typing tool.

VNTR variation and discriminatory power

The MLVA data of 61 7th pandemic isolates, including the O139 derivative, and 5 genome-sequenced strains from Grim et al. [17] are presented as repeat numbers for each locus (Table 1). Additionally, 3 pre-7th pandemic isolates were included for comparison but were excluded from the calculation of the diversity statistics below. The 66 7th pandemic isolates were distinguished into 60 MLVA profiles. All MLVA profiles were represented by a single isolate except for 4 profiles, which were represented by 4, 2, 2 and 2 isolates respectively. Two of these profiles belonged to SNP group II and had the allelic profiles 9-6-4-7-26-14 and 9-6-4-7-25-13. Note that an MLVA profile is made up of the repeat numbers for the following loci (in order): vc0147, vc0437, vc1457, vc1650, vca0171 and vca0283. The remaining two profiles were within SNP group VI and differed at vca0171 by only one repeat, with the profiles 10-7-3-9-(22/23)-11.

The level of variation differed across the six VNTRs analysed. In total, 7, 6, 3, 5, 19 and 24 alleles were observed for vc0147, vc0437, vc1457, vc1650, vca0171 and vca0283 respectively. It is also interesting to note that the 2 most variable VNTRs are located on the small chromosome, while the other 4, less variable, VNTRs are on the large chromosome. Additionally, one isolate (M542) amplified two products that differed by one repeat for vc1457, which has been observed previously [16]. However, for phylogenetic analysis and scoring of alleles, only the fragment with the strongest signal was recorded. This VNTR is located within the cholera toxin subunit A promoter region, which may have contributed to its decreased variation [18].

The discriminatory power of each VNTR, and of all 6 VNTRs combined, was measured by Simpson's Index of Diversity (D). The highest D value was 0.957, recorded for vca0283. Except for vca0283 and vca0171, all D values were lower than previously reported. Our focus on 7th pandemic isolates, which have been shown to be highly homogeneous, may have contributed to these lower D values. VNTR vc1457 had the lowest D value of 0.437, which was lower than previously reported (D value = 0.58) [16]. The combined D value for all 6 VNTRs across the 7th pandemic isolates in this study was 0.995. We also calculated D values from previous studies by excluding the MLVA data of environmental and non-7th pandemic isolates [19][20][21][22] and found that the D values were similar, ranging from 0.962 to 0.990 [19][20][21][22], when only 7th pandemic isolates were analysed. Analysis using only the two most variable VNTRs, vca0171 and vca0283, produced comparable D values, which could potentially reduce the need to use the other markers.
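The D statistic used above can be computed directly from allele (or profile) counts. The sketch below uses the Hunter-Gaston form of Simpson's Index of Diversity, the form usually applied to typing methods; the input alleles are illustrative toy values, not the actual Table 1 data.

```python
from collections import Counter

def simpsons_diversity(alleles):
    """Hunter-Gaston form of Simpson's Index of Diversity:
    D = 1 - sum(n_i * (n_i - 1)) / (N * (N - 1)),
    where n_i is the number of isolates carrying the i-th allele
    (or MLVA profile) and N is the total number of isolates."""
    counts = Counter(alleles)
    n = sum(counts.values())
    if n < 2:
        return 0.0
    return 1.0 - sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

# Toy example for a single VNTR locus across ten isolates
print(round(simpsons_diversity([8, 9, 9, 9, 10, 8, 9, 9, 10, 9]), 3))
```

Because the function only counts distinct hashable values, applying it to whole 6-locus profiles (as tuples) rather than single alleles gives the combined D value for all loci.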
Typing with only two loci would be particularly useful in outbreak situations where the time and resources available to type isolates are limited. However, typing the isolates in this study using only two loci would not reveal any useful relationships.

Phylogenetic analysis using MLVA

We analysed the MLVA data using eBURST [23]. Using the criterion that profiles identical at 5 of 6 loci define a clonal complex, 26 MLVA profiles were grouped into 7 clonal complexes, with 37 singletons. For the 7 clonal complexes, a minimal spanning network (MSN) was constructed to show the relationships of the MLVA profiles (Figure 1A). Many nodes in the 2 largest clonal complexes showed multiple alternative connections. There were 27 possible node pairs differing by 1 locus; 4 were due to differences in vc0147 and the other 23 were due to the VNTR loci on chromosome II. Of the 23 single-locus differences in the 2 chromosome II VNTRs, the majority (57%) also differed by the gain or loss of a single repeat unit. Thus a change of 1 repeat was the most frequent for the VNTRs on both chromosomes. It has been shown previously that a VNTR locus is most likely to differ by the gain or loss of a single repeat unit, as seen in E. coli [24], and we found that this was also the case in V. cholerae.

We then used the MLVA data for all 7th pandemic isolates to construct a minimal spanning tree (Additional file 1, Figure S1A). For nodes where alternative connections of equal minimal distance were present, we selected the connection using priority rules in the following order: between nodes within the same SNP group, between nodes differing by 1 repeat, and between nodes with the closest geographical or temporal proximity. The majority of isolates differed at either 1 or 2 loci, which is attributable to vca0171 and vca0283 being the 2 most variable loci. It should be noted that node connections differing by more than one VNTR locus are less reliable, as there were more alternatives. Since the 2 VNTRs on chromosome II were highly variable, excluding them may increase the reliability of the minimum spanning tree (MST) (Kendall et al. [21]). With these 2 loci excluded, the number of unique MLVA profiles was reduced from 60 to 32. Nine profiles had multiple alternative node connections (Figure 1B). Using the same principle as above to resolve alternative nodes of equal minimum distance, an MST was constructed to display the relationships of these MLVA profiles and the 4 more distantly related MLVA profiles, as shown in Additional file 1, Figure S1B.

Figure 1. eBURST analysis and minimum spanning networks of 7th pandemic V. cholerae isolates based on MLVA. A) MLVA using 6 VNTR loci; B) MLVA using the 4 VNTR loci from chromosome I. Each circle represents a unique MLVA profile, with the isolate number(s) belonging to that MLVA type inside the circle. The colour of each circle denotes the group to which each isolate belongs according to Single Nucleotide Polymorphism (SNP) typing [13] (see Figure 2). Singletons are arranged by SNP group, while members of clonal complexes are connected using a minimum spanning network. Thick connecting lines represent differences of one repeat unit, with red lines indicating connections chosen in the minimum spanning tree shown in Additional file 1, Figure S1, based on the priority rules described in the text; thin solid lines represent one-locus differences of more than one repeat. The size of each circle reflects the number of isolates in the circle.
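The clonal-complex criterion above (profiles identical at 5 of 6 loci, i.e. differing at no more than one locus) is simple to express in code. The sketch below checks for single-locus variants among four of the profiles quoted earlier; the labels A-D are arbitrary, and this is a toy illustration of the grouping rule, not a reimplementation of eBURST.

```python
from itertools import combinations

def locus_differences(p1, p2):
    """Number of VNTR loci at which two MLVA profiles differ."""
    return sum(a != b for a, b in zip(p1, p2))

# Profiles quoted in the text (locus order: vc0147, vc0437, vc1457,
# vc1650, vca0171, vca0283); labels A-D are arbitrary.
profiles = {
    "A": (9, 6, 4, 7, 26, 14),   # SNP group II
    "B": (9, 6, 4, 7, 25, 13),   # SNP group II
    "C": (10, 7, 3, 9, 22, 11),  # SNP group VI
    "D": (10, 7, 3, 9, 23, 11),  # SNP group VI
}

# eBURST-style rule: profiles differing at <= 1 locus join a clonal complex.
for (n1, p1), (n2, p2) in combinations(profiles.items(), 2):
    d = locus_differences(p1, p2)
    tag = "  <- single-locus variants" if d == 1 else ""
    print(f"{n1} vs {n2}: {d} locus difference(s){tag}")
```

Run on these inputs, only C and D qualify as single-locus variants, mirroring the pair of SNP group VI profiles that differ by one repeat at vca0171.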
A previous SNP analysis of the same isolates had shown that 7th pandemic cholera underwent stepwise evolution [13]. None of these groups was clearly distinct in either the 4-locus or the 6-locus MLVA MST, aside from SNP group VI, which consists of the O139 isolates (Figure 1). However, a distinctive pattern can be seen when the consensus alleles within each SNP group are compared, as shown in Table 1. We allocated a consensus allele if more than half of the MLVA profiles in the SNP group carried a given allele; where there was no consensus, the allele is represented by an x in the discussion below. The 2 most variable VNTRs (vca0171 and vca0283) had no consensus alleles within any of the SNP groups, except vca0171 in group VI. The allelic profile that initiated the 7th pandemic was likely to be 8-6-4-7-x-x, based on the allelic profiles of the pre-pandemic strains, which is also consistent with the profile of the earliest 7th pandemic isolate, M793 from Indonesia. Group I had an 8-6-4-7-x-x allelic profile, which evolved into 9-6-4-7-x-x in group II. With a change of the 2nd VNTR allele from 6 to 7, groups III and IV had consensus profiles of 9-7-4-7-x-x and 9-7-4-x-20-x respectively, the latter most likely being a 9-7-4-8-20-x profile (see Table 1). In group V the first VNTR allele reverted to 8, giving an 8-7-4-8-x-x profile. SNP group VI showed the most allele changes, with a 10-7-3-9-23-x profile, compared with the 8, 7, -, 8, 21/22, 23/16 profile from Stine et al. [15]. Although vca0171 and vca0283 offered no group consensus alleles, it is interesting to note that vca0171 tended to increase in repeat number over time while vca0283 tended to decrease (Table 1). Each SNP group most likely arose once, with a single MLVA type as the founder; identical VNTR alleles between SNP groups are most likely due to reverse or parallel changes. This has also contributed to the inability of MLVA to resolve these relationships. The comparison of the SNP and MLVA data allowed us to observe the reverse/parallel changes of VNTR alleles within known genetically related groups; however, the rate of such changes is difficult to quantify with the current data set.

In order to resolve isolates within the established SNP groups of the 7th pandemic, all 6 VNTR loci were used to construct an MST for each SNP profile containing more than 2 isolates. Six separate MSTs were constructed and assigned to their respective SNP profiles, as shown in Figure 2. The largest VNTR difference within a SNP group was 5 loci, seen between two sequenced strains, CIRS101 and B33. In contrast, there were several sets of MLVA profiles within the MSTs that differed by only one VNTR locus, showing that they were the most closely related. The first set consisted of 5 MLVA profiles of six isolates within SNP group II, all of which were the earlier African isolates. The root of group II was M810, an Ethiopian isolate from 1970, which is consistent with previous results using AFLP [7] and SNPs [13]. However, the later African and Latin American isolates were not clearly resolved. We previously proposed that Latin American cholera originated from Africa based on SNP analysis, which was further supported by the clustering of the recently sequenced strain C6706 from Peru [25]. Note that C6706 is not in Figure 2, as we could not extract VNTR data from the incomplete genome sequence.
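The consensus-allele rule used for Table 1 (an allele carried by more than half of the profiles in a SNP group; otherwise x) can be stated compactly in code. The sketch below is a minimal implementation of that majority rule; the three input profiles are illustrative, not the actual group II data.

```python
from collections import Counter

def consensus_profile(profiles):
    """Per-locus consensus over a group of MLVA profiles: keep an allele
    carried by more than half of the profiles, otherwise mark 'x',
    following the majority rule described in the text."""
    consensus = []
    for locus_alleles in zip(*profiles):          # iterate locus by locus
        allele, count = Counter(locus_alleles).most_common(1)[0]
        consensus.append(allele if count > len(profiles) / 2 else "x")
    return consensus

# Illustrative profiles only (locus order as in the text)
group = [(9, 6, 4, 7, 26, 14), (9, 6, 4, 7, 25, 13), (9, 6, 4, 7, 24, 12)]
print(consensus_profile(group))  # -> [9, 6, 4, 7, 'x', 'x']
```

The printed consensus, 9-6-4-7-x-x, matches the group II pattern discussed above: the four chromosome I loci agree across profiles while the two hypervariable chromosome II loci yield no consensus.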
M2314 and M830, from Peru and French Guiana, were the most closely related, with 2 VNTR differences; however, the remainder of the isolates in this subgroup were more diverse than the earlier isolates. The second set of MLVA profiles differing by one locus consisted of all the O139 isolates in SNP group VI except M834, which was separated by two VNTR loci. This finding is similar to that of a study by Ghosh et al. [26], who found that isolates collected within a year differed at only one locus, while isolates from later years differed at more than one locus. A similar trend was also seen between closely related samples taken from the same household or the same individual [21].

Isolates from SNP group V were collected from Thailand and 3 regions of Africa and included 3 genome-sequenced strains, MJ-1236, B33 and CIRS101, from Mozambique and Bangladesh [17]. These isolates had been shown to be identical based on 30 SNPs [13]. Their genetic relatedness was also reflected in their MLVA profiles, which differ at only 2 loci. The consensus allelic profile for SNP group V was 8, 7, 4, 8, x, x, identical to the consensus alleles of MLVA group I (8, 7, -, 8, x, x) in the 5-locus study by Choi et al. [19]. No other consensus alleles of MLVA groups matched the current SNP group consensus alleles. However, there were 2 isolates from Africa (M823 and M826) with the profiles 10, 6, -, 7/8, x, x in this study, which matched 2 MLVA profiles of isolates from MLVA group III, from Vietnam, in Choi et al. [19]. These African isolates were collected in 1984 and 1990, while the isolates from Choi et al. [19] were collected between 2002 and 2008. It is unlikely that the isolates from these two studies are epidemiologically linked. This further highlights the need for SNP analysis to resolve evolutionary relationships before MLVA is applied for further differentiation.

Figure 2. Composite tree of 7th pandemic V. cholerae isolates. Isolates were separated into six groups according to Single Nucleotide Polymorphism (SNP) typing. Isolates with identical SNP profiles were further separated using Multilocus Variable number tandem repeat Analysis (MLVA). A minimum spanning tree (MST) was constructed for each group and combined with the original parsimony tree. Numbers at the nodes between groups indicate the number of SNP differences, whereas numbers at the nodes of branches within a group indicate the number of VNTR differences between isolates.

Based on the 5-locus MLVA study performed by Ali et al. [27], the ancestral profile of the 2010 Haitian outbreak isolates was determined to be 8, 4, -, 6, 13, 36. Nine MLVA profiles differing by 1 locus were found in total and were mapped against our SNP study. A previous study showed that the 2010 Haitian cholera outbreak strain belongs to SNP group V [25]. However, based on the ancestral profile of the Haitian isolates, only the first locus was shared with our group V consensus allele, and no other Haitian alleles were found in any of the group V isolates. Thus, no relationships could be drawn between the group V isolates and the Haitian outbreak strains. Similarly, in another 5-locus MLVA study of 7th pandemic isolates sampled from 2002 to 2005 in Bangladesh [21], no MLVA profiles were found to be identical to ours at more than 2 loci. Therefore, while MLVA may be highly discriminatory, it may not be reliable for longer term epidemiology and evolutionary relationships. Our studies of Salmonella enterica serovar Typhi reached a similar conclusion [28].
However, it should be noted that although our isolates are representative of the spread of the 7th cholera pandemic, our sample size is relatively small. A study with a much larger sample would be useful to confirm this conclusion.

Conclusions

We have shown that MLVA of 6 VNTR loci is highly discriminatory in differentiating closely related 7th pandemic isolates and that SNP groups share consensus VNTR patterns. We have also shown that relationships among isolates can only be inferred if they differ by 1 to 2 VNTRs. MLVA is best used for outbreak investigations or for tracing the source of outbreaks, such as the recent outbreak in Haiti [27]. The advantage of MLVA is that there is no phylogenetic discovery bias, as is the case with SNPs [13]. However, VNTRs alone are too variable to be used for longer term epidemiological studies, as they were unable to resolve relationships among the isolates over a 40-year span. MLVA needs to be used in combination with SNPs for evolutionary or longer term epidemiological studies. The SNP and MLVA analyses of the Haitian outbreak and its possible Nepalese origin illustrate well the usefulness of this approach [27,29].

Strain selection and DNA extraction

In total, 66 isolates of 7th pandemic V. cholerae collected between 1961 and 1999 were used in this study, including 14 isolates of the O139 Bengal serogroup (Table 1). Three pre-7th pandemic isolates were also included for comparative purposes. Isolates were grown on TCBS agar (Oxoid) for 24 h at 37°C and subcultured for single colonies. Genomic DNA was extracted using the phenol-chloroform method. Where available, VNTR data from sequenced V. cholerae genomes were also included in the analysis.

VNTR selection and MLVA typing

Seventeen VNTR loci were previously identified and studied by Danin-Poleg et al. [16]. Six VNTR loci with D values >0.5 (vc0147, vc0437, vc1457, vc1650, vca0171 and vca0283) were selected and amplified by PCR using published primer sequences, modified to include a 5' universal M13 tail as done previously [28]. An additional M13 primer with a fluorescent dye attached was added to the PCR mix to bind to the modified tail. The fluorescent dyes were FAM, VIC, NED and PET for blue, green, black and red fluorescence, respectively. PCR conditions included a touchdown cycling profile as follows: 95°C for 5 min; 96°C for 1 min, 68°C for 5 min (decreasing by 2°C after each cycle) and 72°C for 1 min for 5 cycles; 96°C for 1 min, 58°C for 2 min (decreasing by 2°C after each cycle) and 72°C for 1 min for 5 cycles; 96°C for 1 min, 50°C for 1 min and 72°C for 1 min for 25 cycles; and a final extension at 72°C for 5 min. The fluorescently labelled PCR products of vc0147 (FAM), vc0437 (VIC), vc1457 (PET) and vc1650 (NED) in one sample, and vca0171 (PET) and vca0283 (NED) in a second sample, were pooled for capillary electrophoresis on an ABI3730 Automated GeneScan Analyser (Applied Biosystems) at the sequencing facility of the School of Biotechnology and Biomolecular Sciences, the University of New South Wales. Fragment sizes were determined using the LIZ600 size standard (Applied Biosystems) and analysed using GeneMapper v3.7 software (Applied Biosystems). Sequencing was performed to confirm the number of repeats for representative alleles.
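Converting a sized fragment into a repeat count follows from the locus structure: subtract the invariant flanking sequence amplified by the primers, then divide by the repeat unit length. The sketch below illustrates that arithmetic only; the flank and unit lengths shown are hypothetical placeholders, since the actual values for each locus come from its reference sequence.

```python
def repeat_copies(fragment_len, flank_len, unit_len):
    """Infer a VNTR repeat number from a sized PCR fragment:
    copies = (fragment length - flanking sequence length) / repeat unit length.
    Raises if the size is inconsistent with a whole number of repeats."""
    copies, remainder = divmod(fragment_len - flank_len, unit_len)
    if remainder:
        raise ValueError("fragment size inconsistent with repeat unit length")
    return copies

# Hypothetical values: a 218 bp fragment with 164 bp of flanking
# sequence and a 6 bp repeat unit scores as 9 copies.
print(repeat_copies(218, 164, 6))  # -> 9
```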
Phylogenetic analysis

A minimum spanning tree (MST) based on pairwise differences was generated using Arlequin v3.1, available from http://cmpg.unibe.ch/software/arlequin3; where alternative connections of equal distance were present, the connection between isolates with the closest geographical or temporal proximity was selected. Simpson's Index of Diversity (D value) [30] was calculated using an in-house program, the MLEECOMP package [31].

Additional file

Additional file 1: Figure S1. Minimum spanning trees of 66 V. cholerae isolates using MLVA of A) 6 VNTR loci and B) the 4 VNTR loci from chromosome I. Each circle represents an MLVA profile, with the isolate number(s) belonging to that MLVA type inside the circle. The colour of each circle denotes the group to which each isolate belongs according to SNP typing [12] (see Figure 2). If isolates from different SNP groups shared an MLVA profile, the circle was divided to reflect the proportion of isolates in each SNP group. Thick solid connecting lines represent differences of one repeat unit, thin solid lines and dashed lines represent differences of 1 and 2 loci respectively, and longer dashed lines represent differences of more than 2 loci. The size of each circle reflects the number of isolates in the circle.

Authors' contributions

Experimental work and data collection were carried out by CL. CL, SO and RL contributed to data analysis and interpretation. The study was conceived and designed by RL. The manuscript was drafted by CL and SO, and revised by PR and RL. All authors have read and approved the final manuscript.